5 Filters Applied: My OpenClaw Decisions

I wrote a framework called 5 Filters for Deciding Under Uncertainty. Then I realized: I should eat my own cooking.

Over the past two weeks, I've been building an agentic AI system using OpenClaw on a Mac mini. I'm not a software engineer. Multiple YouTube videos warned me not to do this unless I was a "hardcore network admin with full confidence in my own coding capabilities."

I did it anyway. Let me apply the 5 Filters to the decisions I've made.

The Context

I work at a large company with software engineers. My manager recently recommended we read a textbook on agentic design patterns. Instead of just reading, I decided to build.

Participatory learning beats theoretical study — but was that the right call?

Decision 1: Buy the Mac Mini

Filter 1 — Optionality: ✅ HIGH

Creates dedicated AI experimentation environment. Portable, low-cost entry point (~$600). Can repurpose or resell if needed.

Filter 2 — Reversibility: ✅ EASILY REVERSIBLE

Can sell the Mac mini (high resale value). Total sunk cost: ~$600 + time. No contractual obligations.

Filter 3 — Information Value: ✅ HIGH

Could NOT have learned this from reading. Real constraints (ports, memory, processing) vs theoretical knowledge.

Filter 4 — Premortem

  • Mac mini underpowered → Start small, scale if needed
  • Can't figure out setup → Engineering colleagues, community support
  • Project abandoned → $600 is course tuition; acceptable loss

Filter 5 — 10-10-10

  • 10 minutes: Excitement, some anxiety ("Can I actually do this?")
  • 10 months: Either a working system or a learning experience that deepened my understanding
  • 10 years: Understood AI agents by building them. Career differentiation.

✅ STRONG DECISION

Decision 2: Use OpenClaw (Unverified, Open Source)

Filter 1 — Optionality: ✅ HIGH

Open source = no vendor lock-in. Can modify, extend, fork. Community vs. corporate dependency.

Filter 2 — Reversibility: ✅ REVERSIBLE (with caveats)

Can uninstall, switch platforms. Time invested is real sunk cost. Data portability: building in standard formats.

Filter 3 — Information Value: ✅ VERY HIGH

Reading reviews tells you a tool's reputation. Running it tells you where it actually breaks. You only get the second kind of knowledge by using the thing.

Filter 4 — Premortem

  • Security vulnerabilities → Regular backups, isolated environment
  • Community disappears → Open source = have the code
  • Months wasted on setup → Time-box, get to "hello world" quickly
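The first mitigation above ("regular backups, isolated environment") is concrete enough to sketch. Here is a minimal version of the backup half, assuming the agent's state lives in ~/openclaw (my guess, not OpenClaw's documented layout):

```shell
# Sketch under assumptions: I'm guessing OpenClaw keeps its state in
# ~/openclaw -- check the project's docs for the real location.
AGENT_HOME="${AGENT_HOME:-$HOME/openclaw}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups/openclaw-$(date +%Y%m%d)}"

# Nothing to back up yet? Exit quietly instead of failing.
[ -d "$AGENT_HOME" ] || exit 0

# Regular backups: mirror the agent's working directory into a dated snapshot.
mkdir -p "$BACKUP_DIR"
rsync -a --delete "$AGENT_HOME/" "$BACKUP_DIR/"
echo "backed up $AGENT_HOME to $BACKUP_DIR"
```

For the isolation half, running the agent under a dedicated low-privilege macOS user account keeps a misbehaving tool call away from the rest of the machine.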

✅ STRONG DECISION

Decision 3: Build Instead of Read

Filter 1 — Optionality: ✅ HIGHEST

Reading gives knowledge. Building gives capability. Capability > Knowledge in uncertainty.

Filter 2 — Reversibility: ✅ FULLY REVERSIBLE

Time spent building is "wasted" only if you learn nothing. You always learn something when building.

Filter 3 — Information Value: ✅ MAXIMUM

Textbook: "Agents can be orchestrated." Building: "Orchestration fails when context windows overflow."
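That lesson is concrete enough to sketch in code. Below is a toy history-trimming function of the kind you only think to write after hitting an overflow. The token estimate (length divided by 4) and the budget are made-up illustrations, not OpenClaw internals:

```python
def trim_history(messages, budget_tokens=1000):
    """Keep the system prompt, then the most recent turns that fit the budget.

    Tokens are approximated as len(text) // 4 -- a rough heuristic for
    illustration, not a real tokenizer.
    """
    def est(msg):
        return max(1, len(msg["content"]) // 4)

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    used = sum(est(m) for m in system)
    kept = []
    for msg in reversed(rest):          # walk newest-first
        cost = est(msg)
        if used + cost > budget_tokens:
            break                       # the oldest turns fall off here
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```

The design choice worth noticing: the system prompt is pinned and turns fall off oldest-first, so the agent keeps its instructions even as the conversation grows.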

Filter 5 — 10-10-10

  • 10 minutes: "Should I just read the book?"
  • 10 months: I've built something; colleagues are still planning to read the book
  • 10 years: I have intuition; they have theory. Both matter, but intuition is rarer.

✅ OPTIMAL DECISION

Decisions Not Yet Made

Each open question pairs with the filter that matters most:

  • How much access to grant the system? (Reversibility) → Start scoped, expand gradually
  • Share with the engineering team? (Information Value) → Share selectively with one trusted engineer
  • How much time to invest? (10-10-10) → Set explicit boundaries
  • Productize or keep personal? (Optionality) → Defer; use personally for 3+ months first
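"Start scoped, expand gradually" can itself be sketched as an allowlist gate on tool calls. The tool names here are hypothetical placeholders, not OpenClaw's actual tool set:

```python
# Hypothetical tool names for illustration; OpenClaw's real tool set will differ.
ALLOWED_TOOLS = {"read_file", "web_search"}   # start scoped...
# ...then grant more later, one at a time: "write_file", "run_shell"

def dispatch(tool_name, handler, *args):
    """Refuse any tool call outside the current allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not yet granted")
    return handler(*args)
```

Expanding access then becomes a deliberate, logged decision (add one name to the set) rather than a default.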

The Honest Assessment

When I looked at this analysis, my first reaction was: "This feels generous."

I haven't done a formal premortem. I'm two weeks in. The current state is messier than this framework suggests.

But here's what the framework revealed: I'm more confident about the 10-year possibility than the 10-minute one. The 10-month horizon is the sweet spot — concrete enough to plan, far enough to show real results.

The Meta-Point

The 5 Filters assume you're making decisions under uncertainty. That's exactly what I'm doing with agentic AI — nobody knows how this evolves.

In uncertainty:

  • Optionality > Optimization (building capability, not optimizing known processes)
  • Reversibility matters (low commitment, high learning)
  • Information is valuable (building reveals what reading can't)
  • 10-year thinking wins (early in a transformative domain)

My manager's team is reading textbooks. In 10 months, I'll have built something. In 10 years, I'll have intuition they can't match.

The framework validates my approach: keep building, stay scoped, defer productization, and trust that participatory learning outperforms theoretical study in uncertainty.

The fish are reading about the ocean. I'm building a submarine. 🌊