A spiky point of view (SPOV) is a perspective others can disagree with. It’s a belief you feel strongly about and are willing to advocate for. It’s your thesis about topics in your realm of expertise. - Wes Kao
AI is changing software engineering like gunpowder changed warfare. It’s powerful, it’s dangerous, and no-one is sure how to get the best advantage, but someone’s going to get slaughtered.
Personally, I’m on team clanker. I went all-in a year ago: I spent 10 intense weeks at Gauntlet building with AI nearly every waking minute, and I now work with an AI-first team (currently hiring, by the way) where LLM-maximalism is a baseline job expectation. AI now writes 100% of “my” code, and it’s been months since I typed so much as an if.
I love it. I’m never going back.
On my team we’re encouraged to share our “spiky points of view” (see above). Software in the age of AI is a fertile field for SPOVs, since no-one can agree on anything. But I think I have an informed perspective: by my back-of-the-envelope count, I’ve spent around 2,000 hours working with agentic tools like Codex and Claude Code. And I spent twelve years doing things the old-fashioned way, so I’m not some clueless slop-slinger who doesn’t understand what he’s building. I look forward to the next 10,000 hours of agentic engineering, but here are my SPOVs so far:
“Harness engineering” — where the engineer’s job is not to create the application, but to develop and steer the AI harness that creates the application — is the future of software. The coders who crush it will be the ones who figure out the most effective and reliable harness before their competition.
Not going all-in on AI is career suicide. It takes time to get good with these tools — a few years from now, when their use is non-optional, you can have a few years’ experience, or you can be the world’s greatest tradcoder but an AI novice. Which one do you think will get the job? I know what bet I’m making.
Most of your time should be spent researching, not building. Once you know what you want, you’re almost done: just write clear instructions and the AI will execute them. So your job is to figure out what you want in as much detail as possible, using AI as your advisor, teacher and research assistant, until you have the clearest possible understanding; after that it’s plain sailing. The limiting factor isn’t how fast AI can move, it’s how fast your brain can keep up, which requires the deepest expertise you can develop.
The future belongs to generalists. Hyper-specialised knowledge of specific languages and libraries — the type that lets you crank out thousands of lines without stopping to look anything up — used to set seniors apart from juniors, but its value will trend to zero in a world where AI knows all.
Instead, LLMs empower a new type of senior who needs less time for “coding” and so has new time to become an expert in product, design, infrastructure, ops, the business domain and every detail of the full stack. Once-distinct roles will merge together — software engineering will look more like product management and vice versa.
You don’t need a thousand markdown files defining a swarm of architect and developer and designer and reviewer subagents running in loops messaging each other in convoluted ways. Just open Codex and write your prompt; over time you’ll see where the inefficiencies in your workflow are, then you can add tooling to solve specific problems. The people I see flaunting the overcomplex hyper-automated stuff tend to be influencer-bros of dubious accomplishment or posers trying to sell you something.
Reading all the code is a waste of time. That doesn’t mean you should merge things you don’t understand; on the contrary, you always want a thorough and up-to-date mental model of your codebase. But you can build that from the top down, asking AI for an overview then drilling into specifics only when needed. (Yes, AI makes mistakes, but that’s why you need expertise: the more experienced you are the more you’ll know when to trust the model’s judgement and when to push back.)
Memory files like CLAUDE.md should be built reactively, not proactively. Their whole point is to tell the model things it can’t figure out by itself — and you don’t know what those are until you’ve run the experiment and seen what works. (This is the inherent flaw of /init, which I don’t like to use.) Every memory you add is another source of truth that needs to be maintained and kept up-to-date, so use them sparingly, focusing on compound engineering — accumulated wisdom that prevents the AI from making the same mistake twice.
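To make that concrete, here’s a sketch of what a reactively built memory file might look like (the entries are hypothetical examples of mine, not from any real project) — each line exists only because the model actually got that thing wrong once:

```markdown
# CLAUDE.md — add entries reactively, only after the model has made the mistake

- Run tests with `mix test --stale`, not the full suite; the full run takes ~10 minutes.
- Do not edit anything under `lib/legacy/` — it is frozen pending a rewrite.
- All database migrations must be reversible; always implement the down step.
```

The point is that none of these could have been written on day one by `/init` — each one is accumulated wisdom from a mistake you don’t want repeated.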
MCP is a solution looking for a problem. Most of its use cases can be achieved just as well by having the AI directly call a CLI tool without some special protocol. If the AI isn’t familiar with the tool, just have it run tool --help or whatever to figure it out. (I suppose you could create a SKILL.md with the tool’s details, but I’ve never bothered.) MCP isn’t useless, but don’t reach for it until you’ve exhausted all the simpler options.
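The CLI-first pattern is simple enough to sketch. Here’s a minimal Python illustration (the helper name is my own invention, not part of any agent framework) of feeding a tool’s own --help output to the model instead of standing up an MCP server for it:

```python
import subprocess
import sys

def cli_help(tool: str) -> str:
    """Return a CLI tool's --help text, so an agent can learn the
    interface on the fly instead of needing a dedicated MCP server."""
    result = subprocess.run(
        [tool, "--help"], capture_output=True, text=True, timeout=10
    )
    # Some tools print usage to stderr rather than stdout; take whichever has it.
    return result.stdout or result.stderr

# Drop this text into the prompt once; the agent then constructs
# real invocations of the tool by itself.
print(cli_help(sys.executable)[:200])
```

In practice you rarely even need the helper — the agent can just run `tool --help` in its own shell — which is exactly why a special protocol is overkill for most of these cases.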
The times they are a-changin’, but I’m still bullish on Elixir. I’ll write a separate post about this, but you can listen to my recent appearance on the Elixir Mentor podcast to hear some of my reasoning. I also agree with everything José Valim wrote here (although he’s not exactly a neutral source).
I don’t understand people who say that AI sucks the fun out of software development. I’ve never enjoyed my work so much — it’s not even close! AI automates all the boring, repetitive, tedious parts of coding, freeing me to focus on what was always the real appeal: the thinking, the problem-solving, the continual learning. And boy, am I learning fast. To think of all the time I used to waste poring over page after page of tedious documentation and Stack Overflow questions because AI wasn’t available to give the answer instantly. I’m jealous of juniors: you kids don’t know how easy you have it.
If you liked this post, follow me on Twitter for more SPOVs, or check out my free 15-lesson tutorial The LiveView and OTP Crash Course to learn more about what makes Elixir so great.