This is a post about AI

January 30, 2026

Sometime last year I started my own little git hosting app. I called it forked.sh, which I thought was a fantastic domain get and not at all a waste of the $45 I spent on it. I got decently far on it: it can handle the basic git operations, plus some auth/UI on top (i.e. I can browse my git repos / organizations). The challenge at the time was how to set up and run my git repos plus auth for `git push`. Once I figured that out it somewhat lost its sparkle (thanks, ADHD) and I set it aside and shifted to other shiny things.

For the last 6 months I've been mulling over what the future could feel like. At first I was like "keep AI agents out of source code hosting", but I've 180'd. After seeing my previous gig's team rapidly adopt and use AI agents in their work, what I'm finding lacking is a control plane around this new world. As soon as I finish a Claude Code or Codex session, all of that decision making I made with the AI just goes 'poof' if it doesn't make it into a commit message (which, tbh, it rarely does from me). When I'm working alone on my own projects, this is fine. But at a day job, where I have to live with the consequences of my poorly written code, or others' code... it starts to multiply for me. Maybe this is a "you're holding it wrong" moment with my code agents, but I suspect others feel this problem too.

I don't think the answer is to put our heads in the sand and just rely on writing better commit messages + PR descriptions for agent work, which up until now is how I've been dealing with it.

You lose visibility into the insights that _produced_ the work, whether it came from a human or an agent.

"Why did this PR get created Sally?"
"Oh, we had a customer ticket come in regarding performance on the dashboard and I asked Claude to make it via the Slack integration + thread".
"Why did you make X decision on this, to produce that code?"
"Oh it was more performant to do a joins in the SQL than 3 separate queries, I benchmarked it"

What if all that context just came with the source code history? What if you could see the agent, your inputs, the tool calls, and whatever else after the fact?
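Nothing ships that today as far as I know, but even a low-tech sketch shows the shape of it. Git already supports trailers (the `Key: value` lines at the bottom of a commit message that `git interpret-trailers --parse` can read back), so an agent wrapper could stamp its session metadata onto each commit. Everything below, the trailer keys and the wrapper itself, is hypothetical:

```ruby
# Hypothetical sketch: commit agent work with the session context attached
# as git trailers, so the "why" travels with the history. The trailer keys
# and values are made up for illustration.
session = {
  "Agent"    => "claude-code",
  "Prompt"   => "customer ticket: dashboard is slow; optimize the query",
  "Decision" => "single SQL join beat 3 separate queries in a benchmark",
}

message = "Optimize dashboard query\n\n" +
  session.map { |key, value| "#{key}: #{value}" }.join("\n")

# Plain `git commit`; the trailers are just part of the message and can be
# read back later with `git log` or `git interpret-trailers --parse`.
system("git", "commit", "-m", message)
```

It wouldn't capture tool calls or the full transcript, but it's the minimum "context rides with the code" version of the idea.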

Perhaps this is what the Agents tab in GitHub is trying to solve for, but when I investigated it, it felt very much geared toward autonomous work delegated by a human/system, not the hands-on directing I tend to do with Claude Code. AI tooling is rapidly changing, and I'm sure by the time this post's ink is dry, 10 new tools/learnings will be out.

How do you even use AI, Andrea?

I'm glad you asked, thank you. I've answered this a surprising number of times so far on my job hunt since my most recent layoff on Jan 16th (hire me!). I'll try to recap it here for you, dear reader.

It's a prototyper, syntax checker, and rubber duck rolled up into one pretty TUI. It's freaking so much better than me at remembering the proper patterns, but it's not always the brightest glow stick in the bag; you have to shake it a little to get the light to come back, aka drive the agent until the answers feel better.

Often I let it do the first pass/prototype and even a few more (with additional prompting to get the output I want), then I verify the work actually works as intended. Sometimes I brainstorm with it: what kind of dev API do I want to leave for this class/object for the implementer? What public methods feel best? Sometimes it nails the thoughts in my brain when I use very few words; other times it can feel way off base and I need to be more explicit. When it gets there on few words, it feels like "Oh wow! That's what I was thinking, Claude!!!", and that's what keeps me hooked. After a decade of coding for money, I feel like I have a decent taste for when code feels right (obviously it may not be right, but hindsight is always 20/20), so the speedup from not getting decision fatigue over every little thing, like how to lay out the syntax of the code, feels much better to me than hand-rolling everything.

So okay, let's take a semi-real-world problem: "Implement an ability to publish blog posts for our users via the CMS"

I'm probably prompting Claude (in plan mode) with something along these lines, then refining after I see what it comes up with:

  • Let's design a Blog system for our CMS implementation in the Admin namespace. We need at minimum: Title, Content (rich text area), published_at (date time), tags and author. Let's think through whether we might need additional fields. For the controller, ensure only users with the manage_blog permission (see: permissions.rb for more details) are allowed to Create, Update and Delete blog posts. For the front-end, ONLY follow the design language defined in the tailwind.css file. Ask questions to refine the implementation further.
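For flavor, here's roughly the shape of the permission guard I'd expect that prompt to produce. This is just a sketch (Rails assumed), and `current_user.can?(:manage_blog)` plus `Admin::BaseController` are stand-ins for whatever permissions.rb and the app actually define:

```ruby
# Sketch of the expected guard, not the actual output of the prompt.
# `current_user.can?(:manage_blog)` stands in for the real check in permissions.rb.
module Admin
  class BlogPostsController < Admin::BaseController
    before_action :require_manage_blog!, only: %i[create update destroy]

    private

    def require_manage_blog!
      head :forbidden unless current_user.can?(:manage_blog)
    end
  end
end
```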

Prompting this way still feels like I have my hands on the wheel long enough to make sure the things I care about get nailed, while the "smaller" details don't matter to me. I don't really specify whether I want a table for the index or a list of divs; Claude is just going to (likely) look at, say, my talks views and parrot them.

I've written enough tables in HTML in my life that I'd be OK never hand-rolling one again. Claude can do it for me; it sparks no joy for me.

When working with a team, the fundamentals you value as a team can still be useful in this world of AI agents coding for you. A real-world example is design systems, where you may have already extracted out accessibility concerns and handled them appropriately, or already have consistency solved in your projects. In those cases I add to the prompt something like "Review the design system and ONLY implement components that already exist. Do NOT modify components without explicit approval". This could live in your CLAUDE.md file, but Claude sometimes decides to be a jerk and not follow that, so you have to hold Claude's hand sometimes.

It's not a silver bullet to the problem. But I do think it can be a multiplier, giving competent people leverage to do more things in less time, with less brain fatigue. I used to have many days where after work I would just space out, lie down, and not be present. Using AI has allowed me to be present in my life after work, with far fewer days where I'm fatigued like that.

But Andrea... AI sucks the fun out of programming (for me)

For sure, I can sympathize with that statement; I often see it without the "(for me)". It's not one I personally agree with (for me). But then again, I've never actually cared how much I've programmed. The fun for me comes from seeing people's lives get easier because I implemented a feature that saved them time or something. I tend to optimize for the outcome, not the journey, when it comes to programming. If I could solve a problem without writing a single line of code, I'd ship that solution and be like "heck yes, we didn't need to add more code".

Often when my family (hi Mom!) asks me what I even do all day, I tell them programming is a lot like solving a LOT of little puzzles each day, over and over. With AI, I get to pull some preassembled puzzle pieces out of the box and focus on the larger piece of the puzzle, like how the heck to wire two different puzzles together. Hell yeah, now that's fun (for me).

What about atrophy of skills?

Oh yeah, wild how the brain defrags information and clears the cache on things you don't actively use. Since I was laid off I've had to do a few coding challenges (gasp, by hand, how 2020 of the industry, am I right?). It is genuinely wild how fast you can go from knowing the syntax and the perfect methods for a situation to relying on your tools too much. I don't have answers for this problem; some artisanal coding might be the best way to stay fresh on things. Personally? I'm fine not remembering whether the Ruby method I want is lstrip or something else. 90% of the time before AI, I was typing "Ruby strip whitespace method" into Google and reading the docs anyway. AI solved that problem by knowing what to use instead of me googling it.
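For future me, the trio I can never keep straight:

```ruby
"  hello  ".strip   # => "hello"     (strips both ends)
"  hello  ".lstrip  # => "hello  "   (strips leading whitespace only)
"  hello  ".rstrip  # => "  hello"   (strips trailing whitespace only)
```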

And oh my goodness, I took a coding challenge where I couldn't google the proper order to do a math thing, and I fumbled around in the interview for ~20 minutes. AI would have answered that in ~30 seconds of prompting or less. I would have verified the outcome matched my 20 minutes of manual work and been on to the next thing.

Andrea... you're a vibe coder aren't you?

😱 No! Not for work projects, at least. But oh my gosh... personal projects where I don't have to spend weeks implementing everything by hand? That makes my ADHD brain very happy: I get to have an idea and, a few hours of prompting later, get to use that idea.

You absolutely can make a mess; the memes are strong, and every few days I see another iteration of "can't wait to charge $250 an hour to clean up vibe-coded slop in the future".

Probably some truth in that, though. I've let AI write some bad code for me, but my goodness, sometimes it writes some fantastic code too.

Those flashes of "wow, this actually works"... that's damn hard not to chase for me.

Anyway, thanks for reading...

Thanks for reading my ramblings on AI.

Until next time,

Andrea

Enjoyed this post? Follow along on Bluesky or GitHub.