build log: feb 21 — ai workflow payoff

2026-02-21

this build log is automatically generated

session stats
399 tool calls · 31 files · 8 sessions
tool breakdown
Bash: 164
Read: 89
Edit: 54
Grep: 34
Glob: 20
Write: 14
Task: 8
TaskUpdate: 8
TaskCreate: 4
ExitPlanMode: 2
EnterPlanMode: 1
AskUserQuestion: 1

what i shipped today

i built a “test your tools” page in the dashboard. it lets admins try out their configured connectors directly in the browser. no llm needed. plus, i added clear “connect to your ai client” instructions on the done screen.

clawhub: a comedy of errors

publishing to clawhub turned into a whole thing. the cli tried to upload 4,670 files, ignoring my .gitignore, and then clawhub hid my skill from search. turns out i'd accidentally published from a temp directory, which tripped some auto-moderation.

after a lot of debugging, i got version 0.2.0 live. output dlp, escalation, behavioral monitoring — all there. but the whole process killed my launch week exposure. lesson learned: always double-check that your skill is visible after publishing.

i wrote a script to automate the process and avoid getting hidden again. clawhub needs to fix their cli.
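the script itself is specific to my setup, but the key guard is simple: count what git would actually publish and refuse to run if the number looks wrong. a minimal sketch, assuming the package lives in a git repo; the threshold and the publish invocation at the bottom are placeholders, not the real cli flags:

```typescript
// Pre-publish sanity check (sketch; assumes the package is a git repo).
// The cli ignored .gitignore, so ask git what it would actually track
// and abort before thousands of stray files get uploaded.
import { execSync } from "node:child_process";

// Pure check, separated from the git call so it's easy to test.
function checkFileCount(count: number, limit = 200): string {
  if (count > limit) {
    throw new Error(`refusing to publish: ${count} files (limit ${limit})`);
  }
  return `ok: ${count} files`;
}

function trackedFileCount(): number {
  // git ls-files respects .gitignore, unlike the cli's own walker.
  const out = execSync("git ls-files", { encoding: "utf8" });
  return out.split("\n").filter(Boolean).length;
}

// Usage (placeholder invocation, not the real clawhub command):
//   console.log(checkFileCount(trackedFileCount()));
//   execSync("clawhub publish", { stdio: "inherit" });
```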

a missing payoff moment

we were about to deploy to staging, but something felt wrong. we’d walked through all this setup — oauth, api keys, etc. — and never actually saw it do anything. the user journey ended at “here’s your url”. that’s a terrible experience.

we needed an “it works!” moment. a way to test the setup before turning it over to the llms.

test your tools page

i built a simple page with a dropdown of available tools, auto-populated example arguments, and a “run” button. it fires tools/call against the user’s own gateway and shows the json response.

it’s not fancy, but it closes the loop. you connect github, then immediately see “here are your repos” come back through the control plane.
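under the hood, the run button is one JSON-RPC request. a minimal sketch of that logic, assuming a fetch-style client; `gatewayUrl` and the tool name below are placeholders for the user's actual workspace values, not the real dashboard code:

```typescript
// Sketch of the "run" button (assumed shapes, not the real implementation).
// MCP tool invocations are JSON-RPC 2.0 requests with method "tools/call".

interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Build the request body; pure function so it's easy to test.
function buildToolCall(
  name: string,
  args: Record<string, unknown>,
  id = 1
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Fire it at the user's own gateway and return the raw JSON response,
// which the page renders verbatim in the response pane.
async function runTool(gatewayUrl: string, name: string, args: Record<string, unknown>) {
  const res = await fetch(gatewayUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildToolCall(name, args)),
  });
  return res.json();
}
```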

connect to your ai client

the “you’re live” screen now has clear instructions for plugging the workspace url into claude desktop, chatgpt, cursor, etc. with copy-paste configs. it’s the “now what?” that was missing.

this covers the full ecosystem, not just mcp clients.
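for flavor, here's the shape of one of those copy-paste configs — a hedged sketch of a claude desktop entry that proxies a remote workspace through the `mcp-remote` package. the server name and url are placeholders, and other clients each have their own format:

```json
{
  "mcpServers": {
    "my-workspace": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-workspace-url/mcp"]
    }
  }
}
```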

the plan

we’re still aiming for a staging deploy, but this was a critical detour. it adds a real payoff to the setup process. users need to see the value before we ask them to integrate with their llms.


david crowe — reducibl.com


