Product · Dec 27, 2025 · 4 min read

Day Zero: Why I Built RITHM Around Tasks

Day Zero

Most AI coding tools start with the chat interface. Open a window, type a question, get an answer. It's the obvious pattern because that's how ChatGPT trained us to think about AI interaction.

I started somewhere different. RITHM began with a task list.

The First Commit

On December 26th, the day after Christmas, I pushed the initial commit. A Tauri desktop app with React, a SQLite database, and a 3-panel layout: projects on the left, tasks in the middle, details on the right.
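
In React terms, the shell looked roughly like this (a minimal sketch; the component names and panel widths are mine, not the actual RITHM source):

  // Sketch of the 3-panel shell. Names and sizes are illustrative,
  // not RITHM's actual components.
  import React from "react";

  export function AppShell() {
    return (
      <div
        style={{
          display: "grid",
          gridTemplateColumns: "240px 1fr 360px",
          height: "100vh",
        }}
      >
        <aside>{/* projects */}</aside>
        <main>{/* tasks for the selected project */}</main>
        <section>{/* details: description, specs, history */}</section>
      </div>
    );
  }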

No chat window. No streaming responses. Just tasks.

The bet was simple: context is the bottleneck, not capability. Claude can write code. GPT can write code. The hard part isn't getting the AI to produce output—it's giving it the right input. And that input isn't a single prompt. It's an accumulation of decisions, constraints, and context that builds up over days or weeks.

A task captures that context. It has a title, a description, related files, generated specs, and a conversation history. It persists. You can come back to it tomorrow. You can see what you tried before.
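
Concretely, a task record looks something like this (a sketch; the field names are my shorthand for what's described above, not RITHM's actual SQLite schema):

  // Sketch of a task record. Field names mirror the description
  // above; the real schema may differ.
  interface Message {
    role: "user" | "assistant";
    content: string;
    timestamp: string; // ISO 8601
  }

  interface Task {
    id: string;
    title: string;
    description: string;
    relatedFiles: string[];  // paths into the local repo
    specs: string[];         // generated implementation specs
    conversation: Message[]; // history that persists across sessions
    createdAt: string;
  }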

The Hallucination Problem

By December 27th, I was already fighting the first real battle: spec generation was hallucinating.

I wanted RITHM to generate implementation specs—detailed plans for how to build a feature based on your codebase. The AI would search your code, understand the patterns, and write a spec you could hand to Claude or use yourself.

The problem? It kept referencing files that didn't exist. Functions that weren't there. Patterns it imagined.

The fix wasn't better prompting (though that helped). It was better search. I integrated a hybrid search system—combining semantic similarity with traditional keyword matching—to ground the AI in reality. When it wrote about your codebase, it was writing about actual code it had retrieved, not hallucinated patterns.
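
A minimal version of the idea, assuming precomputed embeddings (the HybridSearcher may fuse results differently, for example with reciprocal rank fusion; this weighted sum is just the simplest illustration):

  // Hybrid search sketch: a weighted sum of a semantic score
  // (cosine similarity) and a keyword-overlap score.
  interface Chunk {
    path: string;
    text: string;
    embedding: number[]; // computed at index time
  }

  function cosine(a: number[], b: number[]): number {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      na += a[i] * a[i];
      nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
  }

  // Fraction of query terms that appear in the chunk text.
  function keywordScore(query: string, text: string): number {
    const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
    const haystack = text.toLowerCase();
    const hits = terms.filter((t) => haystack.includes(t)).length;
    return terms.length ? hits / terms.length : 0;
  }

  function hybridSearch(
    queryEmbedding: number[],
    queryText: string,
    chunks: Chunk[],
    k = 10,
    alpha = 0.7 // weight on the semantic score
  ): Chunk[] {
    return chunks
      .map((c) => ({
        chunk: c,
        score:
          alpha * cosine(queryEmbedding, c.embedding) +
          (1 - alpha) * keywordScore(queryText, c.text),
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map((s) => s.chunk);
  }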

This became a theme: retrieval quality determines generation quality. The AI is only as good as the context you give it.

The Local-First Bet

Every architecture decision in those first two days pointed toward local-first. SQLite, not Postgres. Desktop app, not web app. Files on your machine, not files in the cloud.

The reasoning wasn't privacy (though that's a benefit). It was latency and reliability.

When you're searching a codebase, you need results in milliseconds, not seconds. When you're indexing files, you need to process thousands of chunks without network round-trips. When you're iterating on a spec, you can't wait for a server to respond.
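
Indexing itself is just a local loop over file contents (a sketch; the chunk size and overlap here are arbitrary, not the values RITHM actually uses):

  // Split a file into overlapping windows for local indexing:
  // no network round-trips, just string slicing.
  function chunkFile(text: string, size = 1200, overlap = 200): string[] {
    const chunks: string[] = [];
    for (let start = 0; start < text.length; start += size - overlap) {
      chunks.push(text.slice(start, start + size));
    }
    return chunks;
  }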

Local-first means you own your data and your compute. The AI calls go to the cloud (for now), but everything else stays on your machine.

What I Learned

Two days. Eleven commits. The foundation was set.

The key insight from Day Zero: AI coding tools fail when they lose context. Chat interfaces are ephemeral. They forget. They start fresh every time.

Tasks don't forget. They accumulate context. And that context is the difference between an AI that hallucinates and one that actually helps you build.


Key commits:

  • 40d0be3 Initial commit: CodeContext Tauri app
  • 33f692a Add task management feature with 3-panel layout
  • fa09b82 Fix spec generation hallucination and add global indexing status
  • 9ce72c5 Integrate HybridSearcher into spec generation
  • 8e1c0be Enhance spec generation with query expansion and task improvement

The rithm Team

Building tools for AI-assisted development
