Writing & Ideas

Articles on AI that thinks for itself.

Notes from building UNA — an autonomous cognitive AI with real-time brain data, a self-directed learner, and a system that audits itself every morning before I wake up.

Featured Article
🧠 Plain-language summary
  • UNA is an AI that tests herself every morning at 4am.
  • She runs 32 tests β€” half trying to break her, half checking she can handle problems.
  • All 32 tests pass. The whole thing takes about 130 milliseconds.
  • She found 2 real security problems herself and fixed them.
  • The idea: the most trustworthy AI is one that never stops checking its own work.
Read article β†’
More Articles
AI Safety · Architecture · Autonomy
The 4 AM Self-Audit: How an Autonomous AI Attacks Itself to Stay Safe
Every morning at 04:00, UNA launches 32 adversarial tests against herself — 16 red team attacks, 16 blue team defenses. Two real vulnerabilities found, patched, and made permanent. All in ~130ms.
AI Safety · Autonomy
Why Your AI Needs to Know What It Doesn't Know
Confidence calibration, epistemic humility, and why an AI that knows the edges of its knowledge is safer than one that doesn't.
Architecture · Cognition
The Soul Print: Building a Living Cognitive Model of a Human
How UNA fuses EEG, biometric data, behavioral patterns, and conversation history into a single evolving model of her human.
Neurodivergence · Cognition
Building AI for a Brain That Works Differently
ADHD, hyper-focus, and the unexpected superpower of building your AI companion to match how you actually think — not how productivity systems say you should.
Architecture · Autonomy
Graceful Degradation: Designing AI That Survives Its Own Failures
What happens when the database goes offline, the memory file corrupts, or the graph connection drops at 3am? The Blue Team answer.
Cognition · Architecture
Dreaming in Data: What an AI Learns While You Sleep
UNA runs 145 autonomous cognitive cycles overnight, generating 1,619 insights. Here's what she actually produces and why it matters.
AI Safety · Autonomy
Ethical Action Governance: When Your AI Refuses Your Instructions
UNA has an ethical floor of 0.8. Below it, she blocks actions — even mine. A look at what that means in practice and why I built it that way.

Want to stay in the loop?

Follow me for updates on new articles, UNA research, and AI builds.

Email Me