I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can’t progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.
I couldn’t agree more. Alexa and I were discussing a WordPress theme I was playing with in Cursor, and I was telling her that the only reason I’m able to spin up some cool internal projects (despite being a strategist and not a developer) is that I understand the logic of front-end languages and what they can and can’t do.
Even if I were to put every CSS property into a training doc, and it could read my mind, and there were no problems with the code – three highly laughable assumptions – you’d still need to understand easings and z-index and the effects these have on other objects to avoid tearing your hair out. Part of this is some pretty dishonest marketing about how “far we’ve come” in generative AI; the other part is accepting the reality that we’ve overestimated the FTE cost savings that AI will deliver.
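To make the z-index point concrete, here’s a sketch of the classic gotcha (class names are hypothetical, just for illustration): z-index is silently ignored on non-positioned elements, and a parent that creates its own stacking context caps every child inside it, no matter how large the child’s z-index is.

```css
.modal {
  z-index: 9999; /* does nothing: the element isn't positioned */
}

.modal--fixed {
  position: fixed;
  z-index: 9999; /* now it actually applies */
}

/* A parent that creates a stacking context (e.g. opacity < 1,
   or a transform) traps its children: */
.header {
  opacity: 0.95; /* creates a stacking context */
  z-index: 1;
}

.header .dropdown {
  position: absolute;
  z-index: 9999; /* still paints below anything that sits above .header */
}
```

An LLM will happily crank z-index values higher and higher without ever noticing the parent’s stacking context is the real problem – which is exactly the kind of thing you only catch if you understand the rules yourself.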
Anyway, good post by Josh. Check it out.