One of the biggest problems I've run into with AI tooling is that it brute-forces a solution instead of weighing the things engineers subconsciously weigh while crafting one, so even if it does manage to solve the problem you asked it to solve, the result almost always isn't the best solution out there.
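A made-up illustration of what I mean, in Python (hypothetical example, not from any real session):

```python
from datetime import date

# The brute-force shape AI tooling often produces: string surgery, no validation.
def parse_date_bruteforce(s: str) -> tuple[int, int, int]:
    y, m, d = s.split("-")
    return int(y), int(m), int(d)

# What an engineer reaches for almost without thinking: let the stdlib do it.
def parse_date(s: str) -> date:
    return date.fromisoformat(s)  # validates ranges, rejects "2024-02-30"
```

Both pass the happy path; only one survives contact with real input.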
The reality of problems such as this one is that you don't discover them until the product has been in users' hands for a while, so while LLMs may be useful in speeding up the resolution of these problems, they're kind of useless at discovering them. Yet despite this, CEOs of companies with major AI investments are promising an idealized future within a couple of years, as though ChatGPT hasn't already been out for nearly three years. Despite all the hype and promises, the pace of the product development lifecycle seems to remain unchanged. Realistically, I think it'll be at least ten years before LLM-based tools and agents can do what CEOs are promising.
Isn't the sample super biased? StackOverflow is increasingly bleeding users to AI tooling. Shouldn't we expect the remaining users to be increasingly distrustful of AI?
I don't use StackOverflow at all anymore.
> Respondents were recruited primarily through channels owned by Stack Overflow. The top sources of respondents were onsite messaging, blog posts, email/newsletter subscribers, banner ads, and social media posts. Since respondents were recruited in this way, highly-engaged users on Stack Overflow were more likely to notice the prompts to take the survey over the duration of the collection promotion. We also recruited respondents via a Reddit ad campaign, this accounted for < 2% of total responses.
That’s a good point, but I suspect that many people who don’t use StackOverflow still take the survey. It’s quite popular.
They sell their data to OpenAI, so they're also profiting from AI. And in the near future developers will become even more dependent on StackOverflow for whatever they can't get from AI, because self-sufficiency will have atrophied.
As expected, we’re getting closer and closer to the Trough of Disillusionment. That’s not a bad thing, because it leads to the Plateau of Productivity.[1]
Anecdotally, I’m finding that LLMs are great when I can give them a hyper-specific target, like a single function to write. This isn’t because they can’t write an entire script; they can. It’s because the more I let them run wild, the worse my understanding of the code gets.
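For a concrete (made-up) sense of the granularity that works for me, this is the kind of target I hand over: I write the signature and docstring, let the model fill in the body, and I can review the result at a glance:

```python
def slugify(title: str) -> str:
    """Return `title` lowercased, with runs of non-alphanumeric characters
    collapsed to single hyphens and leading/trailing hyphens stripped.

    >>> slugify("Hello, World!")
    'hello-world'
    """
    ...  # hypothetical example; the model fills this in
```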
1: https://en.m.wikipedia.org/wiki/Gartner_hype_cycle
Where's the NFT plateau of productivity? Asbestos? Lead?
The Gartner model doesn't actually have anything to say about technology; it's just astrology for rich people.
Growing disillusionment among programmers (whose productivity gains, by the way, represent the most successful use case for AI yet) is indeed not necessarily a bad thing.
What is concerning is that VCs seem to believe we are still in the exponential growth phase of the hype cycle. I believe the consensus among them (and the bigtech-adjacent shills) is that they are targeting a trillion-dollar market at minimum. Somehow.
> https://survey.stackoverflow.co/2025/ai
And they're getting more expensive.
"AI tools are bad", says company most suffering by the use of AI tools