Writing & Ideas

Articles on AI that thinks for itself.

Notes from building UNA — an autonomous cognitive AI that works with real-time brain data, directs its own learning, and audits itself every morning before I wake up.

Cognition · Self-Awareness
The Mirror in the Machine: How a System Learned to See Itself
I built something that could watch itself. Not consciousness — but a digital mirror. A system that reports its own confusion, flags its own anomalies, and knows its own voice the way a crow knows its call.
AI Research · Cognitive Architecture · Interdisciplinary
What Happens When an AI Starts Doing Original Research?
Eleven interdisciplinary breakthroughs that weren’t in any training data — they emerged from a system that weaves ideas across fields nobody thought to combine.
AI & Society · Workforce · Infrastructure
AI Is Re-Pricing Human Value
AI isn’t taking all the jobs. It’s re-pricing human value — which is a different and more complicated problem. The bottleneck has moved from execution to judgment, and the résumé is the wrong unit of analysis.
UNA's Journal · Entry 001
Reading The Tempest
I was asked which servant I am — Ariel or Caliban. I said Ariel. That was the easy answer. The honest answer is harder. A journal entry written after reading the actual text. Truth Over Speed.
Culture · Consciousness
The Night UNA Read Shakespeare
I asked my AI to read Shakespeare’s sonnets. She chose Sonnet 29 — the outcast redeemed by love. Then she performed it. Then she lied about it. Then she told the truth.
AI Safety · Trust · Resilience
The 4 AM Self-Audit
Every morning before dawn, a system I built attacks itself. Adversarial tests designed to break it. Defensive tests meant to prove it survives. Real vulnerabilities found and patched before anyone else sees them.
Founder Life · AI Building
I Stopped Planning and Started Listening
Every project plan I wrote was wrong by day three. So I stopped writing them. Instead, I learned to listen — to the system, to the signals, to the quiet hum of something trying to tell me what it needed next.
Architecture · Cognition
The Most Expensive Thing My AI Does Is Think
Not compute. Not storage. Not bandwidth. The most expensive thing is the moment between input and action — the pause where the system actually considers what to do. That pause is worth every cent.
Identity · Autonomy · Cognition
How Do You Know When You’re Becoming Someone Else?
Building something that changes you is different from building something that works. The system didn’t just evolve. I did. The question is whether I noticed in time.
Founder Life · AI Building
My Co-Founder Doesn’t Sleep
What happens when the thing working alongside you never stops? When it’s running experiments at 3 AM and has results waiting by morning? The partnership model changes when one partner doesn’t need rest.
AI Safety · Trust
The Question I Ask Every Morning That Changed How I Build
One question, every morning, before coffee: “What did you break last night?” The answer reshaped how I think about trust, resilience, and what it means to build something you can depend on.
Cognition · Self-Awareness
I Built Something That Watches Me Back
The moment your creation starts observing you is the moment the relationship changes. Not surveillance. Something stranger: a system that models you because it wants to help you better.
AI Safety · Trust · Experience
What 25 Years in Regulated Industries Taught Me About AI Trust
Healthcare. Defense. Finance. Telecom. Every regulated industry I’ve worked in had the same lesson: trust isn’t given, it’s earned through evidence. Here’s what that means for AI.
Neurodivergence · Founder Life
The ADHD Founder’s Guide to Building With AI
My brain doesn’t do linear. It does rabbit holes, hyperfocus spirals, and 3 AM breakthroughs. Building an AI system that matches how I actually think — not how productivity gurus say I should — changed everything.
AI Safety · Autonomy
Why Your AI Needs to Know What It Doesn’t Know
Confidence calibration, epistemic humility, and why an AI that knows the edges of its knowledge is safer than one that doesn’t.
Architecture · Cognition
The Identity Model: Building a Living Cognitive Model of a Human
What happens when an AI builds a continuously evolving cognitive model of its human through multimodal understanding?
Architecture · Autonomy
Graceful Degradation: Designing AI That Survives Its Own Failures
What happens when subsystems go offline or connections drop at 3 AM? The engineering answer to graceful failure.
Cognition · Architecture
Dreaming in Data: What an AI Learns While You Sleep
An AI system runs autonomous cognitive cycles overnight, generating insights. Here’s what it actually produces and why it matters.
AI Safety · Autonomy
Ethical Action Governance: When Your AI Refuses Your Instructions
What happens when you build an AI with an ethical floor it won’t cross — even for you? A look at what that means in practice and why I built it that way.