LLM Neurosis
There was no Skill Issue last week because I was struggling to synthesize my thoughts. I'm not sure I've come to any real conclusions, but I've at least come up with some coherent thoughts.
This week I'm talking about LLMs. I try not to discuss them too much here in Skill Issue, but this week I'm making an exception. Feel free to jump ship here. I won't blame you.
Unfortunately, while my Mastodon feed is filled with people committed to avoiding AI at all costs, I cannot. I'm running a business in the industry (probably) most affected by this technology. Despite my skepticism, we're now seeing it reflected in the expectations of our clients.
There are many valid ethical concerns with LLMs. I agree with many of those concerns. Overall, I think the world would be better off without this technology. It's already done plenty of harm and will do more.
I believe we could solve the environmental impacts, but I doubt we will. There are more pressing environmental concerns, and humankind has been ignoring those for even longer.
I don't believe we can solve the negative social impact of this technology. It is going to continue to cause AI psychosis and spread misinformation. You don't need to think very hard to see what BullshitBench (which measures LLMs' ability to reject nonsense queries) means for people using ChatGPT to learn new skills.
The only ethical concern that people bring up around LLMs that doesn't concern me is intellectual property. Beyond legal obligations, I don't give a shit about intellectual property rights. I don't view ideas as property. (The world around me does, though, so I play by those rules.)
Unfortunately, none of the above is going to change the expectations of most businesses that want to build software. At some point late last year, these technologies got good enough. They reached a point where, despite my skepticism, I was forced to admit that they could make me (and my team) more productive.
That productivity comes at a cost. LLM-based coding tools are expensive. If we lean fully into them, my team could easily spend thousands a day on Claude. We're not a company like Shopify (who is leaning fully into this tech). Our budget is limited.
We're also a consultancy, and if we were to spend $20k/month on the Anthropic API, that would all come out of our profits. For the time being, we trade our time for money. Our clients would love for us to be even more productive than we are, but do we negotiate new rates that reflect AI spend? How do we quantify that?
Alternatively, do we simply pass the AI spend directly on to our clients? That could make sense. It's at least a coherent offering. I could say to our clients, "you set the AI budget, we'll use it as effectively as we can." There are benefits to that approach.
We're in a transition phase. Super Good's value proposition has always been that we bring expertise (in digital commerce, Solidus, Ruby on Rails, et cetera) and that's still something our clients need in the so-called "age of AI". How we best deliver that expertise remains to be seen.1
Psych-sludge legends Neurosis have just dropped a new album. I've only just put it on, so I don't have any real thoughts to share, but if you're going to listen to one new release this week, surely this should be it.
-
All that said, if your business wants some code written by humans, hit us up. We love actually writing code. ↩