LLMs are only a threat if you see your job as a code monkey. In that case you're likely already obsoleted by outsourced staff who can do your job much cheaper.
If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.
One of the perks of being a senior is also knowing when not to use an LLM and how LLMs can fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or just chill and enjoy the free time).
I see my job as having many aspects. One of those aspects is coding. It is the aspect that gives me the most joy even if it's not the one I spend the most time on. And if you take that away then the remaining part of the job is just not very appealing anymore.
It used to be I didn't mind going through all the meetings, design discussions, debates with PMs, and such because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager", I want to actually be the one doing the thing.
You won't be prompting AI for the fun stuff (unless laying out boring boilerplate is what you consider "fun"). You'll still be writing the fun part - but you will be able to prompt beforehand to get all the boilerplate in place.
If you’re writing that much boilerplate as part of your day to day work, I daresay you’re Doing Coding Wrong. (Virtue number one of programming: laziness. https://thethreevirtues.com)
Any drudgework you repeat two or three times should be encapsulated or scripted away, deterministically.
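For what it's worth, "scripted away, deterministically" can be tiny. Here's a minimal sketch (the service template and all names are made up for illustration) of a scaffold that emits the same boilerplate every time:

```python
#!/usr/bin/env python3
"""Minimal deterministic scaffolding sketch; template and names are illustrative."""
from string import Template

# The boilerplate you would otherwise copy, paste, and edit by hand.
MODULE_TEMPLATE = Template('''\
"""$name service."""


class ${cls}Service:
    def get(self, item_id):
        raise NotImplementedError

    def create(self, data):
        raise NotImplementedError
''')


def scaffold(name: str) -> str:
    """Render boilerplate for a new service; same input, same output, every run."""
    return MODULE_TEMPLATE.substitute(name=name, cls=name.capitalize())


if __name__ == "__main__":
    print(scaffold("billing"))
```

Unlike an LLM, this produces byte-identical output on every run, so there's nothing new to review the second time.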
Not just laziness of writing scripts, but also laziness of learning what your options are, like inside the framework you use, or what is available off the shelf.
And by the way, AI is also terrible at this, because the models learned from the same code written by the people who make these mistakes all the time. I constantly need to write detailed explanations of how to use tools/frameworks/language features properly, because the majority of examples in their training data are simply a huge pile of technical debt. They could never create anything proper without a step-by-step rulebook and manually written examples.
This is a nice fantasy. In practice, tools that help you scaffold common code patterns take more time to create and maintain than it takes to copy, paste, and edit.
Turns out LLMs are REALLY good at "make me this thing that is 90% the same as another thing I've built / you've seen before, but with this 10% being different"
Also, by your own metrics, laziness is a virtue, and copying, pasting, and editing is much easier and lazier than maintaining boilerplate tools. So it's not even following your 3 commandments.
> Also, by your own metrics, laziness is a virtue, and copying, pasting, and editing is much easier and lazier than maintaining boilerplate tools. So it's not even following your 3 commandments.
I mean, so is paying someone to write the code for you, but you're not really an engineer at that point, are you?
Engineering involves using stable, deterministic abstractions and components and understanding the architecture and ramifications of your design on a deep level. Yes, you can outsource this work. But don't delude yourself into thinking that you're still in the same profession.
(Of course, maybe you always thought of yourself as an entrepreneur and only saw coding as a means to an end. I think a lot of people are coming to that conclusion.)
LLM output sort of “vendors in” smart macros (for lack of a better description) by saving the actual output of the LLM. In that sense, they serve different purposes.
Yes, LLMs are more like offline code generators that can't be reliably re-run. So the very first step of producing the code is "easy", but after that you have lost that ease, and have to read and maintain the larger generated output.
Until they can magically increase context length to such a size that can conveniently fit the whole codebase, we're safe.
It seems like the billions so far mostly go to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.
Models do not need to hold the whole codebase in memory, and neither do you. You both search for what you need. Models can already memorize more than you can!
> Models do not need to hold the whole code base in memory, and neither do you
Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed on one. LLMs don't do that, and until they do they need the entire thing in context.
And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.
Getting up to speed is a human problem. Computers are so fast they can 'get up to speed' from scratch for every session, and we help them with AGENTS files and newer things like memories; e.g., https://code.claude.com/docs/en/memory
It is not perfect yet but the tooling here is improving. I do not see a ceiling here. LSPs + memory solve this problem. I run into issues but this is not a big one for me.
I’ll believe it when coding agents can actually produce concise and reusable code instead of reimplementing ten slightly different versions of the same basic thing on every run. This is not a rant - I would love for agents to stop doing that, and I know how to make them stop (a proper AGENTS.md that serves as a table of contents for where stuff lives) - but my point is that as a human I don’t need that crutch, and for now they still do.
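To be concrete about what I mean by a table-of-contents AGENTS.md (every path and rule below is made up for illustration), something this short already curbs the duplicate-implementation habit:

```markdown
# Repo map - read before writing code

- `src/api/` - HTTP handlers; validation helpers already live in `src/api/validation.py`
- `src/db/models.py` - every ORM model goes here; do not create new model files
- `src/utils/` - shared helpers (dates, money, slugs); grep here before writing a new one

## Rules

- Reuse an existing helper instead of reimplementing it; extend it if it falls short.
- If you add a module, add it to this map.
```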
In my experience they can definitely write concise and reusable code. You just need to say to them “write concise and reusable code.” Works well for Codex, Claude, etc.
I guide the AI. If I see it produce stuff that I think can be done better, I either just do it myself or point it in the right direction.
It definitely doesn't do a good job of spotting areas ripe for abstraction, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.
As far as I can tell, what's not to love about that?
If you’re repeatedly prompting, I will defer to my usual retort when it comes to LLM coding: programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language. It’s generally much faster for me to write the terse language directly than play a game of telephone with an intermediary in the verbose language for it to (maybe) translate my intentions into the terse language.
In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.
Enjoy your sensible use of LLMs! But LLMs are not the silver bullet the billions of dollars of investment desperately want us to believe they are.
> as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with
Your use of the word "should" is pointing to some ideal that doesn't exist anymore.
In current actual reality, you do whatever your employer gives you to do, regardless of your job title.
If you have 40 years of broad development experience but your boss tells you to build more CRUD web apps or start looking for another job in the current ATS hell, then the choice whether to use coding agents seems obvious to me.
I think the point is that if you're building yet-another-CRUD web app, why aren't you abstracting more of it away already? It's not like we don't have the facilities for this in programming languages already.
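As a toy sketch of what that abstraction can look like (the entity and class names here are invented), one generic store covers create/read/update/delete for every entity instead of N copy-pasted per-entity versions:

```python
"""Sketch of a reusable CRUD abstraction; entity names are illustrative."""
from dataclasses import dataclass
from typing import Dict, Generic, TypeVar

T = TypeVar("T")


class CrudStore(Generic[T]):
    """One generic store instead of N copy-pasted per-entity versions."""

    def __init__(self) -> None:
        self._rows: Dict[int, T] = {}
        self._next_id = 1

    def create(self, item: T) -> int:
        item_id = self._next_id
        self._rows[item_id] = item
        self._next_id += 1
        return item_id

    def read(self, item_id: int) -> T:
        return self._rows[item_id]

    def update(self, item_id: int, item: T) -> None:
        if item_id not in self._rows:
            raise KeyError(item_id)
        self._rows[item_id] = item

    def delete(self, item_id: int) -> None:
        del self._rows[item_id]


@dataclass
class User:
    name: str


# Every new entity gets CRUD for free; no per-entity boilerplate.
users = CrudStore[User]()
uid = users.create(User(name="Ada"))
```

A real app swaps the dict for a database table, but the point stands: the repetition lives in one place instead of being restamped per entity.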
The main issue with the current LLM hype crowd is the completely unrealistic scenarios they come up with. When building a CRUD app, the most obvious solution is to use a framework that takes care of the common use cases. And such a framework will have loads of helpers and tools to speed through the boilerplate.
An LLM isn’t (yet?) capable of remembering a long-term representation of the codebase. Neither is it capable of remembering a long-term representation of the business domain. AGENTS.md can help somewhat but even those still need to be maintained by a human.
But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. By the way, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve hit its limits and need an actual human to sanity-check.
Dunning–Kruger is everywhere in the AI grift: people who don't know a field try to deploy some AI bot that solves the easy 10% of the problem, so it looks good on the surface, and assume that just throwing money (which mostly buys hardware) will solve the rest.
They aren't "the smartest minds in the world". They are slick salesmen.
And if you didn't know, Claude Code is actually based on React, and they are struggling to keep it at 60 FPS - whatever that means in the context of a terminal app.
They are writing markup to render monospaced characters in a terminal lol
Agreed. Programming languages are not ambiguous. Human language is very ambiguous, so if I'm writing something with a moderate level of complexity, it's going to take longer to describe what I want to the AI vs writing it myself. Reviewing what an AI writes also takes much longer than reviewing my own code.
AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.
Never mind coding - where is the LLM for legal work? Why are all these programmers working on automating their own jobs away instead of those of the bloodsucking lawyers who charge hundreds of euros per hour?
It’s happening just as fast for them. I literally sit next to our general counsel all day at the office. We work together continually. I show him things happening in engineering, and each time he shows me the analogous things happening in legal.
Domain knowledge and gatekeeping. We don't know what is required in their role fully, but we do know what is required in ours. We also know that we are the target of potentially trillions in capital to disrupt our job and that the best and brightest are being paid well just to disrupt "coding". A perfect storm of factors that make this faster than other professions.
It also doesn't help that some people in this role believe that the SWE career is a sinking ship which creates an incentive to climb over others and profit before it tanks (i.e. build AI tools, automate it and profit). This is the typical "It isn't AI, but the person who automates your job using AI that replaces you".
Why is that safe in the medium to long term? If LLMs can already do the code-monkey work after just 4 years, why assume that in a couple more they can't talk to the seniors' direct reports and get requirements from them? I'm learning carpentry just in case.
But I also have no idea how people are going to think about what code to write when they don't write code. Maybe this is all fine, is ok, but it does make me quite nervous!
That is definitely a problem, but I would say it's a problem of hiring, and of the billions of dollars of potential market cap resting on performative bullshit - which encourages companies not to hire juniors, as a signal to capture some of those billions, regardless of the actual impact on productivity.
LLMs benefit juniors, they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.
When I was a junior my “LLM” was StackOverflow and the senior guy next to me (who no doubt was tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions or those where I wasn’t convinced by the LLM response. Also, at least in my case, I learnt a lot more from reading existing production code than writing it - LLMs don’t change anything there.
I agree that they can be used this way, and it would be less of a problem if they were. However, the current evidence we see from universities is that those who use LLMs to actually learn something are in the minority. The dopamine hit of something working without having had to do anything for it is much stronger.
I see what these can do and I'm already thinking, why would I ever hire a junior developer? I can fire up opencode and tell it to work multiple issues at once myself.
The bottleneck becomes how fast you can write the spec or figure out what the product should actually be, not how quickly you can implement it.
So the future of our profession looks grim indeed. There will be far fewer of us employed.
I also miss writing code. It was fun. Wrangling the robots is interesting in its own way, but it's not the same. Something has been lost.
You hire the junior developer because you can get them to learn your codebase and business domain at a discount, and then reap their productivity as they turn senior. You don’t get that with an LLM since it only operates on whatever is in its context.
(If you prefer to hire seniors that’s fine too - my rates are triple that of a junior and you’re paying full price for the time it takes me learning your codebase, and from experience it takes me at least 3 months to reach full productivity.)
Plenty of places, actually. Maybe not so much in the companies people here tend to be familiar with. It happens all the time where I work (smaller company far from the Bay area).
Because a junior developer doesn't stay a junior developer forever. The value of junior developers has never been the code they write. In fact, in my experience they're initially a net negative, as more senior developers take time to help them learn. But it's an investment, because they will grow into more senior developers.
The question really is what you think the long-term direction of SWE as a profession is. If we need juniors later and seniors become expensive, that's a nice problem to have and can mostly be fixed via training and knowledge transfer. Conversely, people being hired and trained, especially when young, into a sinking industry isn't doing anyone any favors.
While I think both sides have an argument on the eventual viability of the SWE career, there is a problem: the downsides of hiring now (costs, uncertainty of work velocity, dry backlogs, etc.) are certain, while the risk of paying more later is not guaranteed and may not be as big an issue. Also, training juniors doesn't always benefit the person paying for it.
* If you think that long term we will need seniors again (the industry stays the same size or starts growing again), then given the usual high ROI on software, most can afford to defer that decision until later. That goes back to the pre-AI calculus: SWEs were expensive then, and people still paid for them.
* If you think the industry shrinks, then it's better to hold off so you get more out of your current staff and don't "hire to fire". Hopefully the industry on average shrinks in proportion to the natural retirement of staff - I've seen this happen, for example, in local manufacturing, where the plant lives on but slowly winds down over time, and as people retire they aren't replaced.
> The question really is what you think the long-term direction of SWE as a profession is. If we need juniors later and seniors become expensive, that's a nice problem to have and can mostly be fixed via training and knowledge transfer. Conversely, people being hired and trained, especially when young, into a sinking industry isn't doing anyone any favors.
Yes exactly!
What will SWE look like in 1 year? 5 years? 10?
Hiring juniors implies you're building something that's going to last long enough that the cost of training them will pay off. And hiring now implies that there's some useful knowledge/skill you can impart upon them to prepare them.
I think two things are true: there will be way fewer developer type jobs, full stop. And I also think whatever "developers" are / do day to day will be completely alien from what we do now.
If I "zoom out" and put my capitalist hat on, this is the time to stop hiring and figure out who you already have who is capable of adapting. People who don't adapt will not have a role.
> If you think that the industry shrinks then its better to hold off so you get more out of your current staff, and you don't "hire to fire". Hopefully the industry on average shrinks in proportion to natural retirement of staff - I've seen this happen for example in local manufacturing where the plant lives but slowly winds down over time and as people retire they aren't replaced.
You can look even closer than that - look at some legacy techs like mainframe / COBOL / etc. Stuff that basically wound down but lasted long enough to keep seniors gainfully employed as they turned off the lights on the way out.
LLMs are a threat to the quality of code in a similar - but much more dramatic - way to high level languages and Electron. I am slightly worried about keeping a job if there's a downturn, but I'm much more worried about my job shifting into being the project manager for a farm of slop machines with no taste and a complete inability to learn.
I think it’s naive to assume that every part of our jobs won't, worryingly soon, be automated - all the way up to and including CEO. This is not exciting.
If you believe juniors are already not safe, it’s only a question of time before seniors are in the same position. First they came for the socialists, etc etc.