tptacek 4 hours ago

After all, if we lose the joy in our craft, what exactly are we optimizing for?

Solving problems for real people. Isn't the answer here kind of obvious?

Our field has a whole ethos of open-source side projects people do for love and enjoyment. In the same way that you might spend your weekends in a basement woodworking shop without furnishing your entire house by hand, I think the craft of programming will be just fine.

  • JohnFen 2 hours ago

    > Solving problems for real people. Isn't the answer here kind of obvious?

    No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.

    Presumably, the reason for choosing software development as the method of solving problems for people is because software development itself brings joy. Different people find joy in different aspects even of that, though.

    For my part, the stuff that AI is promising to automate away is much of the stuff that I enjoy about software development. If I don't get to do that, that would turn my career into miserable drudgery.

    Perhaps that's the future, though. I hope not, but if it is, then I need to face up to the truth that there is no role for me in the industry anymore. That would pretty much be a life crisis, as I'd have to find and train for something else.

    • simonw an hour ago

      "There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method."

      Software development is almost unique in the scale that it operates at. I can write code once and have it solve problems for dozens, hundreds, thousands or even millions of people.

      If you want your work to solve problems for large numbers of people I have trouble thinking of any other form of work that's this accessible but allows you to help this many others.

      Fields like civil engineering are a lot harder to break into!

    • fragmede an hour ago

      I'm probably just not as smart or creative as you, but say my problem is I have a ski cabin that I want to rent to strangers for money. Never mind a thousand: what are 100 ways I could do something about that without using software, vs. listing it on Airbnb?

  • frollogaston 4 hours ago

    Same as when higher-level languages replaced assembly for a lot of use cases. And btw, at least in places I've worked, better traditional tooling would replace a lot more headcount than AI would.

    • codr7 3 hours ago

      Not even close, those were all deterministic, this is probabilistic.

      • eddd-ddde an hour ago

        Yet the words you chose to use in this comment were entirely modelled inside your brain in a not so different manner.

      • tptacek 3 hours ago

        The output of the LLM is probabilistic. The code you actually commit or merge is not.

        • rozap 3 hours ago

          i'm just vibing though, maybe i merge, maybe i don't, based on the vibes

      • frollogaston 3 hours ago

        So what? I know most compilers are deterministic, but it really only matters for reproducible builds, not that you're actually going to reason about the output. And the language makes few guarantees about the resulting instructions.

  • ToucanLoucan 3 hours ago

    > Solving problems for real people. Isn't the answer here kind of obvious?

    Look at the majority of the tech sector for the last ten years or so and tell me this answer again.

    Like I guess this is kind of true, if "problems for real people" equals "compensating for inefficiencies in our system for people with money" and "solutions" equals "making a poor person do it for them and paying them as little as legally possible."

    • tptacek 3 hours ago

      Those of us who write software professionally are literally in a field premised on automating other people's jobs away. There is no profession with less claim to the moral high ground of worker rights than ours.

      • simonw 3 hours ago

        I often think about the savage job-destroying nature of the open source community: hundreds of thousands of developers working tirelessly to unemploy as many of their peers as possible by giving away the code they've written for free.

        (Interesting how people talk about AI destroying programming jobs all the time, but rarely mention the impact of billions of dollars of code being given away.)

      • JohnFen 2 hours ago

        > Those of us who write software professionally are literally in a field premised on automating other people's jobs away.

        How true that is depends on what sort of software you write. Very little of what I've accomplished in my career can be fairly described as "automating other people's jobs away".

      • Verdex 3 hours ago

        Speak for yourself.

        I've worked in a medical space writing software so that people can automate away the job that their bodies used to do before they broke.

      • smj-edison 3 hours ago

        Bit of a tangent but...

        Haven't we been automating jobs away since the industrial revolution? I know AI may be an exception to this trend, but at least with classical programming, demand goes up, GDP per capita goes up, and new industries are born.

        I mean, there's three ways to get stuff done: do it yourself, get someone else to do it, or get a machine to do it.

        #2 doesn't scale, since someone still has to do it. If we want every person to not be required to do it (washing, growing food, etc), #3 is the only way forward. Automation and specialization have made the unthinkable possible for an average person. We've a long way to go, but I don't see automation as a fundamentally bad thing, as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.

        • ToucanLoucan 3 hours ago

          > as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.

          Somehow everyone who says this misses that never in the history of the United States (and most other countries tbh) has this been true.

          We just consign people to the streets in industrial quantity. More underserved to act as the lubricant for capitalism.

          • smj-edison 3 hours ago

            But... My local library has a job searching program? I have a friend who's learning masonry at a government sponsored training program? It seems the issue is not that resources don't exist, but that these people don't have the time to use them. So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.

            I see capitalism invoked as a "boogey man" a lot, which fair enough, you can make an emotional argument, but it's not specific enough to actually be helpful in coming up with a solution to help these people.

            In fact, capitalism has been the exact thing that has lifted so many out of poverty. Things can be simultaneously bad and also have gotten better over time.

            I would argue that the biggest issue is education, but that's another tangent...

            • ToucanLoucan 3 hours ago

              > So it's unfair to say they don't exist. Rather, it seems they're structured in an unhelpful way for those who are working double jobs, etc.

              I'll be sure to alert the next person I encounter working UberEats for slave wages that the resources exist that they cannot use. I'm sure this difference will impact their lives greatly.

              Edit: My point isn't that UberEats drivers make slave wages (though they do): My point is that from the POV of said people and others who need the aforementioned resources, whether they don't exist or exist and are unusable is fucking irrelevant.

              • smj-edison 2 hours ago

                Slave wages? Like the wages for a factory worker in 1918[1]? $1300 after adjusting for inflation. And that was gruelling work from dawn to dusk, being locked into a building, and nickel and dimed by factory managers. (See the Triangle Shirtwaist Factory.) The average Uber wage is $20/hour[2]. Say they use 2 gallons of gas (60 mph at 30 mpg) at $5/gallon. That comes out to $10/hour, which is not great, but they're not being locked into factories and working from dawn to dusk and being fired when sick. Can you not see that this is progress? It's not great, we have a lot of progress to make, but it sure beats starving to death in a potato famine.

                [1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015022383221&se...

                [2] https://www.indeed.com/cmp/Uber/salaries/Driver (select United States as location)
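
                The wage arithmetic in the comment above can be sanity-checked in a couple of lines (all figures are the commenter's claims and assumptions, not verified data):

                ```python
                # All inputs are the commenter's figures, not verified data.
                gross_hourly = 20.0          # claimed average Uber wage, $/hour
                gallons_per_hour = 60 / 30   # 60 mph at 30 mpg -> 2 gallons per hour
                gas_price = 5.0              # assumed price, $/gallon

                # Net hourly pay after fuel costs.
                net_hourly = gross_hourly - gallons_per_hour * gas_price
                print(net_hourly)  # 10.0, matching the comment's "$10/hour"
                ```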

              • smj-edison 2 hours ago

                Replying to your edit: it is relevant, because it means people are trying but it isn't working. When people aren't trying, you have to get people to start trying. When people are trying but it isn't working, you have to help change the approach. Doubling down on a failing policy (e.g. we just need to create more resources) is failing to learn from the past.

              • tptacek 2 hours ago

                At some point, you've stopped participating in good faith with the thread and are instead trying to push it towards some other topic; in your case, apparently, a moral challenge against Uber. I think we get it; can you stop supplying superficial rebuttals to every point made with "but UberEats employs [contracts] wage slaves"?

      • ToucanLoucan 3 hours ago

        > Those of us who write software professionally are literally in a field premised on automating other people's jobs away.

        Depends what you write. What I work on isn't about eliminating jobs at all, if anything it creates them. And like, actual, good jobs that people would want, not, again, paying someone below the poverty line $5 to deliver an overpriced burrito across town.

        • tptacek 3 hours ago

          I think most of the time when we tell ourselves this, it's cope. Software is automation. "Computers" used to be people! Literally, people.

          • myk9001 3 hours ago

            > "Computers" used to be people! Literally, people.

            Not always. Recruitment budgets have limits, so it's a fixed number of employees either providing services to a larger number of customers thanks to software, or serving fewer customers (or serving them less often) without it.

            • tptacek 3 hours ago
              • myk9001 2 hours ago

                Thank you for the link, the reference you're making slipped past me. That said, I think my point still holds: software doesn't always have to displace workers, it can also help current employees scale their efforts when bringing on more people isn't possible.

          • ToucanLoucan 3 hours ago

            I'm unable and unwilling to shadowbox with what you think I'm actually experiencing.

            • tptacek 3 hours ago

              That's fine; read it as me speaking to the whole thread, not challenging you directly. Technology drives economic productivity; increasing economic productivity generally implies worker displacement. That workers come out ahead in the long run (they have in the past; it's obviously not a guarantee) is beside my point. Software is automating software development away, the same way it automated a huge percentage of (say) law firm billable hours away. We'd better be ready to suck it up!

              • myk9001 3 hours ago

                > That workers come out ahead in the long run (they have in the past...)

                Would you mind naming a few instances of workers coming out ahead?

                • tptacek 2 hours ago

                  Sure. Compare the quality of life of the Computers to that of any stably employed person today who owns a computer.

                  • myk9001 2 hours ago

                    Got it, you're talking about workers getting ahead as a category -- no objections to that.

                    I doubt the displaced computers managed to find a better job on average. Probably even their kids were disadvantaged since the parents had fewer options to support their education.

                    So, who knows if this specific group of people and their descendants ever fully recovered let alone got ahead.

  • EasyMarion 3 hours ago

    solving real problems is the core of it, but for a lot of people the joy and meaning come from how they solve them too. the shift to AI tools might feel like outsourcing the interesting part, even if the outcome is still useful. side projects will stick around for sure, but i think it's fair to ask what the day-to-day feels like when more of it becomes reviewing and prompting rather than building.

iamleppert 4 hours ago

There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing circle of hell that I couldn't care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.

Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.

  • righthand 3 hours ago

    They perhaps haven’t taken away your keyboard but anecdotally, a few friends work at places where their boss is requiring them to use the LLMs. So you may not have to code with them but some people are starting to be chained to them.

    • hermanradtke 3 hours ago

      Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.

  • apothegm 4 hours ago

    So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.

    Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I'd just as soon never deal with the hell that is front-end build/lint/test config again.

  • 2snakes 4 hours ago

    I read one characterization which is that LLMs don't give new information (except to the user learning) but they reorganize old information.

    • docmechanic 4 hours ago

      That’s only true if you tokenize words rather than characters. Character tokenization generates new content outside the training vocabulary.

    • barrenko 4 hours ago

      Custodians of human knowledge.

pdimitar 4 hours ago

I don't know man, maybe prompt most of your work, eyeball it and verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!), run a script to commit and push after 3 hours and then... work on whatever code makes you happy without using an LLM?

Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.

Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from those LLMs I chose to interact with by specifying clear boundaries on what's the priority and what can wait for later and what should be completely ignored due to this or that objective (and a number of other parameters I am giving them). When you do that well, they are extremely useful.

  • only-one1701 3 hours ago

    I see this "prompting is an art" stuff a lot. I gave Claude a list of 10 <Route> objects and asked it to make an adjustment to all of them. It gave me 9 back. When I asked it to try again it gave me 10 but one didn't work. What's "prompt engineering" there, telling it to try again until it gets it right? I'd rather just do it right the first time.

    • codr7 3 hours ago

      We used to make fun of and look down on coders who mindlessly copy paste and mash the compile button until the code runs, for good reasons.

    • tmpz22 3 hours ago

      Prompt engineering is just trying that task on a variety of models and prompt variations until you can better understand the syntax needed to get the desired outcome, if the desired outcome can be gotten.

      Honestly you’re trying to prove AI is ineffective by telling us it didn’t work with your ineffective protocol. That is not a strong argument.

      • only-one1701 3 hours ago

        What should I have done there? Tell it to make sure it gives me back all 10 objects I gave it? Tell it not to put brackets in the wrong place? This is a real question --- what would you have done?

        • simonw 3 hours ago

          How long ago was this? I'd be surprised to see Claude 3.7 Sonnet make a mistake of this nature.

          Either way, when a model starts making dumb mistakes like that these days I start a fresh conversation (to blow away all of the bad tokens in the current one), either with that model or another one.

          I often switch from Claude 3.7 Sonnet to o3 or o4-mini these days. I paste in the most recent "good" version of the thing we're working on and prompt from there.

        • tmpz22 an hour ago

          In no particular order:

          * experiment with multiple models, preferably free, high-quality models like Gemini 2.5. Make sure you're using the right model, usually NOT one of the "mini" varieties even if it's marketed for coding.

          * experiment with different ways of delivering necessary context. I use repomix to compile a codebase to a text file and upload that file. I've found more integrated tooling like cursor, aider, or copilot to be less effective than dumping a text file into the prompt

          * use multi-step workflows like the one described [1] to allow the llm to ask you questions to better understand the task

          * similarly use a back-and-forth one-question-at-a-time conversation to have the llm draft the prompt for you

          * for this prompt I would focus less on specifying 10 results and more on uploading all necessary modules (like with repomix), then verifying all 10 were completed. Sometimes the act of over-specifying results can corrupt the answer.

          [1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

          I'm a pretty vocal AI-hater, partly because I use it day to day and am more familiar with its shortfalls - and I hate the naive zealotry so many pro-AI people bring to AI discussions. BUTTT we can also be a bit more scientific in our assessments before discarding LLMs - or else we become just like those naive pro-AI-everything zealots.

  • hellisothers 4 hours ago

    > Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.

    I don’t think this is the case, if anything the opposite is true. Most of us would like to do the work code but have realized, at some career point, that you’re paid more to abstract yourself away from that and get others to do it either in technical leadership or management.

    • diggan 3 hours ago

      > I don’t think this is the case, if anything the opposite is true

      I'll be a radical and say that I think it depends and is very subjective.

      Author above you seems to enjoy working on code by itself. You seem to have a different motivation. My motivation is solving problems I encounter, code just happen to be one way out of many possible ones. The author of the submission article seems to love the craft of programming in itself, maybe the problem itself doesn't even matter. Some people program just for the money, and so on.

    • pdimitar 3 hours ago

      Well, it doesn't help that a lot of work tasks are meaningless drudgery that we collectively should have trivialized and 100% automated at least 20 years ago. That was kind of the core of my point: a lot of work tasks are just plain BS.

  • codr7 3 hours ago

    I wouldn't, I got into software exactly because I enjoy solving problems and writing code. Verifying shitty, mindless, computer generated code is not something I would consider doing for all the money in the world.

    • pdimitar 3 hours ago

      1. I work on enjoyable problems after I let the LLM do some of the tasks I have to do for money. The LLM frees me bandwidth for the stuff I truly love. I adore solving problems with code and that's not going to change ever.

      2. Some of the modern LLMs generate very impressive code. Variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the LLM code's quality falls below a certain threshold, it's better in every way to just write it yourself instead.

  • simonw 4 hours ago

    > "verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!)"

    100% this.

    • only-one1701 3 hours ago

      I like writing code more than reading it, personally.

      • simonw 3 hours ago

        Yeah, I think that's pretty common. It took me 15+ years of my own career before I got over my aversion to spending significant amounts of time reading through code that I didn't write myself.

    • williamstein 3 hours ago

      Totally. And yet rigorous proof is very difficult. Having done some mathematics involving nontrivial proofs, I respect even more how difficult rigor is.

  • jaredcwhite 3 hours ago

    > most of us would delegate our work code to somebody else or something else if we could

    Not me. I code because I love to code, and I get paid to do what I love. If that's not you…find a different profession?

    • pdimitar 30 minutes ago

      Needlessly polarizing. I've loved coding since I was 12 years old (so more than 30 years at this point), but most work tasks I'm given are fairly boring and uninteresting, and don't move science or knowledge forward in any meaningful way.

      Delegating part of that to an LLM so I can code the stuff I love is a big win for my motivation, and it makes me approach the work tasks with a bit more desire and pleasure.

      Please don't forget that most of us out there can't code for money anything that their heart wants. If you can, I'd be happy for you (and envious) but please understand that's also a fairly privileged life you'd be having in that case.

  • jimbob45 3 hours ago

    The act of coding preserves your skills for that all-important verification step. No coding and the whole system falls apart.

    • codr7 3 hours ago

      Exactly: how are you supposed to verify anything when you don't have any skills left beyond prompting?

    • pdimitar 3 hours ago

      Absolutely. That's why I don't give the LLM the reins for long, nor do I tell it to do the whole thing. I want to keep my mind sharp and my abilities honed.

  • troupo 4 hours ago

    > Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have

    It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.

    Has nothing to do with eloquence

  • dingnuts 3 hours ago

    > work on whatever code makes you happy without using an LLM?

    This isn't how it works, psychologically. The whole time I'm manual coding, I'm wondering if it'd be "easier" to start prompting. I keep thinking about a passage from The Road To Wigan Pier where Orwell addresses this effect as it related to the industrial revolution:

    >Mechanize the world as fully as it might be mechanized, and whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.

    >At a first glance this might not seem to matter. Why should you not get on with your ‘creative work’ and disregard the machines that would do it for you? But it is not so simple as it sounds. Here am I, working eight hours a day in an insurance office; in my spare time I want to do something ‘creative’, so I choose to do a bit of carpentering—to make myself a table, for instance. Notice that from the very start there is a touch of artificiality about the whole business, for the factories can turn me out a far better table than I can make for myself. But even when I get to work on my table, it is not possible for me to feel towards it as the cabinet-maker of a hundred years ago felt towards his table, still less as Robinson Crusoe felt towards his. For before I start, most of the work has already been done for me by machinery. The tools I use demand the minimum of skill. I can get, for instance, planes which will cut out any moulding; the cabinet-maker of a hundred years ago would have had to do the work with chisel and gouge, which demanded real skill of eye and hand. The boards I buy are ready planed and the legs are ready turned by the lathe. I can even go to the wood-shop and buy all the parts of the table ready-made and only needing to be fitted together; my work being reduced to driving in a few pegs and using a piece of sandpaper. And if this is so at present, in the mechanized future it will be enormously more so. With the tools and materials available then, there will be no possibility of mistake, hence no room for skill. Making a table will be easier and duller than peeling a potato. In such circumstances it is nonsense to talk of ‘creative work’. In any case the arts of the hand (which have got to be transmitted by apprenticeship) would long since have disappeared. Some of them have disappeared already, under the competition of the machine. Look round any country churchyard and see whether you can find a decently-cut tombstone later than 1820. The art, or rather the craft, of stonework has died out so completely that it would take centuries to revive it.

    >But it may be said, why not retain the machine and retain ‘creative work’? Why not cultivate anachronisms as a spare-time hobby? Many people have played with this idea; it seems to solve with such beautiful ease the problems set by the machine. The citizen of Utopia, we are told, coming home from his daily two hours of turning a handle in the tomato-canning factory, will deliberately revert to a more primitive way of life and solace his creative instincts with a bit of fretwork, pottery-glazing, or handloom-weaving. And why is this picture an absurdity—as it is, of course? Because of a principle that is not always recognized, though always acted upon: that so long as the machine is there, one is under an obligation to use it. No one draws water from the well when he can turn on the tap. One sees a good illustration of this in the matter of travel. Everyone who has travelled by primitive methods in an undeveloped country knows that the difference between that kind of travel and modern travel in trains, cars, etc., is the difference between life and death. The nomad who walks or rides, with his baggage stowed on a camel or an ox-cart, may suffer every kind of discomfort, but at least he is living while he is travelling; whereas for the passenger in an express train or a luxury liner his journey is an interregnum, a kind of temporary death. And yet so long as the railways exist, one has got to travel by train—or by car or aeroplane. Here am I, forty miles from London. When I want to go up to London why do I not pack my luggage on to a mule and set out on foot, making a two days of it? Because, with the Green Line buses whizzing past me every ten minutes, such a journey would be intolerably irksome. In order that one may enjoy primitive methods of travel, it is necessary that no other method should be available. No human being ever wants to do anything in a more cumbrous way than is necessary. Hence the absurdity of that picture of Utopians saving their souls with fretwork. In a world where everything could be done by machinery, everything would be done by machinery. Deliberately to revert to primitive methods, to use archaic tools, to put silly little difficulties in your own way, would be a piece of dilettantism, of pretty-pretty arty and craftiness. It would be like solemnly sitting down to eat your dinner with stone implements. Revert to handwork in a machine age, and you are back in Ye Olde Tea Shoppe or the Tudor villa with the sham beams tacked to the wall.

    >The tendency of mechanical progress, then, is to frustrate the human need for effort and creation. It makes unnecessary and even impossible the activities of the eye and the hand. The apostle of ‘progress’ will sometimes declare that this does not matter, but you can usually drive him into a comer by pointing out the horrible lengths to which the process can be carried.

    sorry it's so long

iugtmkbdfil834 3 hours ago

I think, based on recent events, that some corporate inefficiencies are very poorly captured. Last year an insane project was thrown at us right before the end of the year because, basically, the company had a tiff with the vendor and would rather have us spend our time in meetings trying to do what the vendor was doing than pay the vendor for that thing. From a simple money-spent perspective, one would think the company's simple, amoral compass would be a boon.

AI coding is similar. We just had a minor issue with AI-generated code that clearly wasn't vetted as closely as it should have been, making the output it produced over a couple of months less accurate than it should have been. Obviously it had to be corrected, then vetted, and so on, because there is always time to correct things...

edit: What I am getting at is the old-fashioned "penny wise, but pound foolish."

ahamilton454 4 hours ago

I’ve been struggling with a very similar feeling. I too am a manager now. Back in the day there was something very fulfilling about fully understanding and comprehending your solution. I find now with AI tools I don’t need to understand a lot. I find the job much less fulfilling.

The funny thing is I agree with other comments, it is just kind of like a really good stack overflow. It can’t automate the whole job, not even close, and yet I find the tasks that it cannot automate are so much more boring (the ones I end up doing).

I envy the people who say that AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with ai, if anything it feels like my competence has decreased due to the tools. I’m fairly certain I know how to use the tools well, I just think that I don’t enjoy how the job has evolved.

kristjank 4 hours ago

When we outsource the parts of programming that used to demand our complete focus and creativity, do we also outsource the opportunity for satisfaction? Can we find the same fulfillment in prompt engineering that we once found in problem-solving through code?

Most of the AI-generated programming content I use consists of comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding, which AI can help you achieve faster (it's read many a book related to the topic, so it can recall information a lot like an experienced colleague might), but it can't confer capital-K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kinda way.

It makes me able to work on the fun parts of my job which possess the qualities the article applauds.

lrvick 3 hours ago

So long as your experience and skill allow you to produce work of higher quality than average for your industry, you will always have a job: reviewing that average-quality work and surgically correcting it when it is wrong.

This has always been true in every craft, and it remains true for programmers in a post-LLM world.

Most training data is open source code written by novice-to-average programmers publishing their first attempts at things, and thus LLMs are heavily biased to replicate naive, slow, insecure code largely uninformed by experience.

Honestly to most programmers early in their career right now, I would suggest spending more time reviewing code, and bugfixes, than writing code. Review is the skillset the industry needs most now.

But you will need to be above average as a software reviewer to be employable. Go out into FOSS-land and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you can review and improve things better than existing automated tools.

Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.

The days of anyone who can rub two HTML tags together having a high-paying job are over.

codr7 3 hours ago

I've tried getting different AIs to say something meaningful about code, never got anything of value back so far. They can't even manage tab-completion well enough to be worth the validation effort for me.

bix6 3 hours ago

“Fast forward to today, and that joy of coding is decreasing rapidly. Well, I’m a manager these days, so there’s that…”

This sounds like a more likely reason for losing your joy, if your passion is coding.

throwaway20174 3 hours ago

The catch is that when AI handles 95% or 99% of a task, people say great, don't need humans. 99% is great.

But when that last 1% breaks and AI can't fix it, that's where you need the humans.

  • codr7 3 hours ago

    By then the price will have increased quite a bit; if you want me to fix your AI crap, you're going to pay until it hurts.

SkyBelow 4 hours ago

So if I'm understanding this, there are two central arguments being made here.

1. AI Coding leads to a lack of flow.

2. A lack of flow leads to a lack of joy.

Personally, I can't find myself agreeing with the first argument. Flow happens for me when I use AI. It wouldn't surprise me if this differed developer to developer. Or maybe it is the size of requests I'm making, as mine tend to be on the smaller side, where I already have an idea of what I want to write but think the AI can spit it out faster. I also don't really view myself as prompt engineering; instead it feels more like a natural back and forth with the AI to refine the output I'm looking for. There are times it gets stubborn and resistant to change, but that is generally a sign that I might want to reconsider using AI for that particular task.

  • simonw 4 hours ago

    One trend I've been finding interesting over the past year is that a lot of engineers I know who moved into engineering management are writing code again - because LLMs mean they can get something productive done in a couple of hours where previously it would have taken them a full day.

    Managers usually can't carve out a full day - but a couple of hours is manageable.

    See also this quote from Gergely Orosz:

      Despite being rusty with coding (I don't code every day
      these days): since starting to use Windsurf / Cursor with
      the recent increasingly capable models: I am SO back to
      being as fast in coding as when I was coding every day
      "in the zone" [...]
    
      When you are driving with a firm grip on the steering
      wheel - because you know exactly where you are going, and
      when to steer hard or gently - it is just SUCH a big
      boost.
    
      I have a bunch of side projects and APIs that I operate -
      but usually don't like to touch it because it's (my)
      legacy code.
    
      Not any more.
    
      I'm making large changes, quickly. These tools really
      feel like a massive multiplier for experienced devs -
      those of us who have it in our head exactly what we want
      to do and now the LLM tooling can move nearly as fast as
      my thoughts!
    
    From https://x.com/GergelyOrosz/status/1914863335457034422
    • smithclay 4 hours ago

      This is also true of (technical) product managers from an engineering background.

      It's been amazing to spin up quick React prototypes during a lunch break of concepts and ideas for quick feedback and reactions.

  • IshKebab 3 hours ago

    Yeah I think flow is more about holding a lot of knowledge about the code and its control flow in your head at a time. I think there's an XKCD or something that illustrates that.

    You still need to do that if you're using AI, otherwise how do you know if it's actually done a good job? Or are people really just vibe coding without even reading the code at all? That seems... unlikely to work.

misiti3780 4 hours ago

I don't know where you're working, but where I work I can't prompt 90% of my job away using Cursor. In fact, I find all of these tools more and more useless as our codebase grows and becomes more complex.

Based on the current state of AI and the progress I'm witnessing on a month-by-month basis, my current prediction is that there is zero chance AI agents will be coding and replacing me in the next few years. If I could short the startups claiming this, I would.

  • simonw 4 hours ago

    Don't get distracted by claims that AI agents "replace programmers". Those are pure hype.

    I'm willing to bet that in a few years most of the developers you know will be using LLMs on a daily basis, and will be more productive because of it (having learned how to use it).

  • earthnail 4 hours ago

    I have the same experience. It's basically a better Stack Overflow, but just like with SO you have to be very careful about the replies, and also just like SO its utility diminishes as you get more proficient.

    As an example, just today I was trying to debug some weird WebSocket behaviour. None of the AI tools could help, not Cursor, not plain old ChatGPT with lots of prompting and careful phrasing of the problem. In fact every LLM I tried (Claude 3.7, GPT o4-mini-high, GPT 4.5) introduced errors into my debugging code.

    I’m not saying it will stay this way, just that it’s been my experience.

    I still love these tools though. It’s just that I really don’t trust the output, but as inspiration they are phenomenal. Most of the time I just use vanilla ChatGPT though; never had that much luck with Cursor.

    • codr7 3 hours ago

      No one was forcing you to use SO, in fact we made fun of people who did copy-paste/compile-coding.

    • UncleEntity 3 hours ago

      Yeah, they're currently horrible at debugging -- there seem to be blind spots they just can't get past, so they end up running in circles.

      A couple days ago I was looking for something to do so gave Claude a paper ("A parsing machine for PEGs") to ask it some questions and instead of answering me it spit out an almost complete implementation. Intrigued, I threw a couple more papers at it ("A Simple Graph-Based Intermediate Representation" && "A Text Pattern-Matching Tool based on Parsing Expression Grammars") where it fleshed out the implementation and, well... color me impressed.

      Now, the struggle begins as the thing has to be debugged. With the help of both Claude and Deepseek we got it compiling and passing 2 out of 3 tests which is where they both got stuck. Round and round we go until I, the human who's supposed to be doing no work, figured out that Claude hard coded some values (instead of coding a general solution for all input) which they both missed. In applying ever more and more complicated solutions (to a well solved problem in compiler design) Claude finally broke all debugging output and I don't understand the algorithms enough to go in and debug it myself.

      Of course, I didn't use any sort of source code management, so there was no reverting to a previous version from before it was broken beyond all fixing...

      Honestly, I don't even consider this a failure. I learned a lot more on what they are capable of and now know that you have to give them problems in smaller sections where they don't have to figure out the complexities of how a few different algorithms interact with each other. With this new knowledge in hand I started on what I originally intended to do before I got distracted with Claude's code solution to a simple question.

      --edit--

      Oh, the irony...

      After typing this out and making an espresso I figured out the problem Claude and Deepseek couldn't see. So much for the "superior" intelligence.

  • tptacek 4 hours ago

    One of the ways these tools are most useful for me is in extremely complex codebases.

    • simonw 4 hours ago

      This has become especially true for me in the past four months. The new long context reasoning models are shockingly good at digging through larger volumes of gnarly code. o3, o4-mini and Claude 3.7 Sonnet "thinking" all have 200,000 token context limits, and Gemini 2.5 Pro and Flash can do 1,000,000. As "reasoning" models they are much better suited to following the chain of a program to figure out the source of an obscure bug.

      Makes me wonder how many of the people who continue to argue that LLMs can't help with large existing codebases are missing that you need to selectively copy the right chunks of that code into the model to get good results.

      • IshKebab 3 hours ago

        But 1 million tokens is like 50k lines of code or something. That's only medium-sized. How does that help with large, complex codebases?
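
        The arithmetic behind that estimate, for what it's worth (assuming a rough average of ~20 tokens per line of code, which obviously varies by language and tokenizer):

```python
# Back-of-envelope: lines of code that fit in a 1M-token context window,
# assuming ~20 tokens per line (an assumption; real ratios vary by language).
context_tokens = 1_000_000
tokens_per_line = 20
print(context_tokens // tokens_per_line)  # 50000
```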

        What tools are you guys using? Are there none that can interactively probe the project in a way that a human would, e.g. use code intelligence to go-to-definition, find all references and so on?

        • tptacek 3 hours ago

          This to me is like every complaint I read when people generate code and the LLM spits out an error, or something stupid. It's a tool. You still have to understand software construction, and how to hold the tool.

          Our Rust fly-proxy tree is about 80k (cloc) lines of code; our Go flyd tree (a Go monorepo) is 300k. Generally, I'll prompt an LLM to deal with them in stages; a first pass, with some hints, on a general question like "find the code that does XYZ"; I'll review and read the code itself, then feed that back to the LLM with questions like "summarize all the functionality of this package and how it relates to other packages" or "trace the flow of an HTTP request through all the layers of this proxy".

          Generally, I'll take the results of those queries and have them saved in .txt files that I can reference in future prompts.

          I think sometimes developers are demanding something close to AGI from their tooling, something that would do exactly what they would do (only, in the span of about 15 seconds). I don't believe in AGI, and so I don't expect it from my tools; I just want them to do a better job of fielding arbitrary questions (or generating arbitrary code) than grep or eglot could.

        • simonw 3 hours ago

          Yeah, 50,000 lines sounds about right for 1m tokens.

          If your codebase is larger than that there are a few tricks.

          The first is to be selective about what you feed into the LLM: if you know the work you are doing is in a particular area of the codebase, just paste that bit in. The LLM can make reasonable guesses about things the code references that it can't see.

          An increasingly effective trick is to arm a tool-using LLM with a tool like ripgrep (effectively the "interactively probe the project in a way that a human would" idea you suggested). Claude Code and OpenAI Codex both use this trick. The smarter models are really good at deciding what to search for and evaluating the results.
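
          A toy stand-in for that tool call (pure Python rather than actual ripgrep, just to show the shape of what the model invokes):

```python
import re
import pathlib

def search_code(pattern, root=".", exts=(".py",)):
    """Grep-like helper an agent loop could expose as a tool:
    returns file:line:text matches for a regex across a source tree."""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1
            ):
                if rx.search(line):
                    hits.append(f"{path}:{lineno}:{line.strip()}")
    return hits
```

          The model decides what to search for, reads the hits, and asks for more context only where it needs it.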

          I've built tools that can run against Python code and extract just the class, function and method signatures and their docstrings - omitting the actual code. If your code is well designed and has reasonable documentation, that could be enough for the LLM to understand it.
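
          A minimal sketch of that signature-extraction idea using Python's stdlib ast module (illustrative only, not the actual symbex implementation):

```python
import ast

def outline(source):
    """Return class and function signatures plus first docstring lines,
    omitting function bodies -- compact context to feed an LLM."""
    out = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            sig = f"class {node.name}:"
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            prefix = "async def" if isinstance(node, ast.AsyncFunctionDef) else "def"
            args = ", ".join(a.arg for a in node.args.args)
            sig = f"{prefix} {node.name}({args}):"
        else:
            continue
        out.append(sig)
        doc = ast.get_docstring(node)
        if doc:
            out.append(f'    """{doc.splitlines()[0]}"""')
    return "\n".join(out)
```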

          https://github.com/simonw/symbex is my CLI tool for that

          https://simonwillison.net/2025/Apr/23/llm-fragment-symbex/ is a tool I released this morning that turns Symbex into a plugin for my LLM tool.

          I use my https://llm.datasette.io/ tool a lot, especially with its new fragments feature: https://simonwillison.net/2025/Apr/7/long-context-llm/

          This means I can feed in the exact code that the model needs in order to solve a problem. Here's a recent example:

            llm -m openai/o3 \
              -f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
              -f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
              -s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
                  number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment'
          
          From https://simonwillison.net/2025/Apr/20/llm-fragments-github/ - I'm populating the context with the exact examples needed to solve the problem.
bitwize 3 hours ago

Earlier this year, a hackernews started quizzing me about the size and scope of the projects I worked on professionally, with the implication that I couldn't really be working on anything large or complex -- that I couldn't really be doing serious development, without using a full-fat IDE like IntelliJ. I wasn't going to dox myself or my professional work just so he could reach a conclusion he's already arrived at. The point is, to this person, beyond a certain complexity threshold -- simple command-line tools, say -- an IDE was a must, otherwise you were just leaving productivity on the table.

https://news.ycombinator.com/item?id=42511441

People are going to be making the same judgements about AI-assisted coding in the near future. Sure, you could code everything yourself for your own personal enrichment, or simply because it's fun. But that will be a pursuit for your own time. In the realm of business, it's a different story: you are either proompting, or you're effectively stealing money from your employer because you're making suboptimal use of the tools available. AI gets you to something working in production so much faster that you'd be remiss not to use it. After all, as Milt and Tim Bryce have shown, the hard work in business software is in requirements analysis and design; programming is just the last translation step.

blueboo 4 hours ago

The old joy may be gone. But the new joy is there, if you're receptive to it.

  • codr7 3 hours ago

    And which joy is that? Short sighted profits?

JeremyMorgan 4 hours ago

One of the things people often overlook in these arguments is the manager's point of view and how it's contributing to the shakeups in this industry.

As a developer I'm bullish on coding agents and GenAI tools, because they can save you time and can augment your abilities. I've experienced it, and I've seen it enough already. I love them, and want to see them continue to be used.

I'm bearish on the idea that "vibe coding" can produce much of value, or that people without any engineering background can become wildly productive at building great software. I know I'm not alone. If you're a good problem solver who doesn't know how to code, this is your gateway. And you'd better learn what's happening with the code while you can, to avoid creating a huge mess later on.

Developers argue about the quality of "vibe coded" stuff. There are good arguments on both sides. I think we all agree that someday AI will be able to generate high-quality software faster than a human. But today is not that day. Many will try to convince you that it is.

Within a few years we'll see massive problems from AI generated code, and it's for one simple reason:

Managers and other Bureaucrats do not care about the quality of the software.

Read it again if you have to. It's an uncomfortable idea, but it's true. They don't care about your flow. They don't care about how much you love to build quality things. They don't care if software is good or bad; they care about closing tickets and shipping features. Most of them don't care, and have never cared, about the "craft".

If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together. A wall is a wall. That's how the majority of managers view software development today. By the time that shoddy wall crumbles they'll be at another company anyway so it's someone else's problem.

When I talk about the software industry collapsing now, and in a few years we're mired with garbage software everywhere, this is why. These people in "leadership" are salivating at the idea of finally getting something for nothing. Paying a few interns to "vibe code" piles of software while they high five each other and laugh.

It will crash. The bubble will pop.

Developers: Keep your skills sharp and weather the storm. In a few years you'll be in high demand once again. When those walls crumble, they will need people who know what they're doing to repair them. Ask for fair compensation to do so.

Even if I'm wrong about all of this I'm keeping my skills sharp. You should too.

This isn't meant to be anti-management, but it's based on what I've seen. Thanks for coming to my TED talk.

* And to the original point: in my experience the tools interrupt the "flow" but don't necessarily take the joy out of it. I can't use suggestion/autocomplete because it breaks my flow, but I love having a chat window with AI nearby for when I get stuck or want to generate some boilerplate.

  • UncleEntity 3 hours ago

    > If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together.

    IDK, there's still a place in society for master masons to work on 100+ year old buildings built by other master masons.

    Same with the robots. They can implement solutions but I'm not sure I've heard of any inventing an algorithmic solution to a problem.