Survey of 2023–2024 data finds that AI created more tasks for 8.4 percent of workers.
And even where time was saved, the study estimates only 3 to 7 percent of those productivity gains translated into higher earnings for workers, raising questions about who benefits from the efficiency.
> In "Large Language Models, Small Labor Market Effects," economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers [...]

Ah, there's the problem: they listened to people who weren't software developers about what's capable of replacing a software developer.
> Ah, there's the problem, they listened to people who weren't software developers about what's capable of replacing a software developer.

Acceptable alternative answer: two software developers.

Because the answer continues to be another software developer.
> This has been my experience as well, and why I don't usually rely on genAI for critical tasks. [...]

Honestly, I've been avoiding AI tools like the plague in anticipation of these exact problems. I'm going to be better off writing functions and macros that will do all of the automation deliberately than I would be trying to coax an AI answer into working.
> This has been my experience as well, and why I don't usually rely on genAI for critical tasks. [...]

Right, it's like debugging someone else's code, i.e. the worst part of programming.
As a lawyer, I've found ChatGPT has created more work for me: if a client sends me a legal document generated (or partially generated) by ChatGPT, I spend more time revising it and fixing the problems than if they'd just asked me to draft the damn thing in the first place.
> As a lawyer, ChatGPT has created more work for me because if a client sends me some legal document generated by (or partially generated by) ChatGPT I spend more time revising it [...]

Just think of all the lawyers caught submitting AI slop, and then think about how much of that is slipping through and becoming legal record that a judge can later cite, or that can be used to influence a major court decision.
This has been my experience as well, and why I don't usually rely on genAI for critical tasks.
Like coding with Visual Studio.
It's very much a sense of "Oh wow... how did it get that right?" when it does predict what I want to do, but far, far more often it's "For the love of God, shut up - that's NOT what I want!" It gets a lot of repetitive tasks right, but then I find it got a few of them wrong somewhere in the bulk, or just ignored some cases, forcing me to review every line of code again - something I don't have to do as much with my own hand-written code (since, you know, I wrote it and was there while it was being written).
I've reached the point where I've turned off most of the predictive coding features entirely. I can code faster and more reliably than it can.
> Honestly, I've been avoiding AI tools like the plague in anticipation of these exact problems. [...]

What I've found is that the useful task for GenAI is to identify the repetitive tasks and make suggestions on how I can automate them. It does a pretty good job at that, and I can immediately look at the response and know whether it's correct. So I can quickly whip up scripts to automate the most repetitive tasks I do, saving time in the future in a reliable manner.

The effort spent developing coding assistants would be better spent on bespoke code-generating functions for common repetitive tasks. You know what's faster than having an LLM spit out a comprehensive switch-case structure for a given enumeration? Turning that into a right-click -> generate-code feature that I don't have to double-check.
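For what it's worth, the kind of deterministic generator described above is only a few lines of script. A minimal sketch in Python (the enum members, variable name, and handler naming scheme here are all invented for illustration):

```python
# Sketch of a deterministic boilerplate generator: given an enum's member
# names, emit a complete Java-style switch statement covering every member
# exactly once. Because the output is produced mechanically, no case is
# ever skipped or hallucinated.

def generate_switch(var_name, members):
    """Emit a switch statement over the given enum member names."""
    lines = [f"switch ({var_name}) {{"]
    for member in members:
        handler = "handle" + member.title().replace("_", "")
        lines += [f"    case {member}:",
                  f"        {handler}();",
                  "        break;"]
    lines += ["    default:",
              f'        throw new IllegalStateException("Unhandled: " + {var_name});',
              "}"]
    return "\n".join(lines)

print(generate_switch("status", ["PENDING", "ACTIVE", "CLOSED"]))
```

The point isn't this particular script; it's that output produced this way never needs line-by-line review, which is exactly the review burden the comments above describe with LLM suggestions.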
> Great, so studies have statistically shown that AI represents all that is soulless and wrong. Now what?

The tech bros push out more vaguely named models that are all slightly different, so the researchers are always a step or two behind the latest and greatest, and they can just say: the research showing only minor improvements was for the previous model; this one is better.
> Keynes thought that the advent of new technologies would make workers so efficient that we'd be working less and less and have more time for leisure and egalitarian pursuits.
>
> But I guess he hadn't anticipated "infinite growth" or "middle managers," and so we just create more work for ourselves. In some cases so much more that we're working more than before and still not getting anything done.
>
> Do you want more meetings? Because this is how you get more meetings.

I like to say "work always expands to fill the time given to it".
> Just think of all the lawyers caught submitting AI slop and then think about how much of that is slipping through and becoming legal record [...]

Not very much of it. You'd need both an incredibly inattentive judge and an incredibly inattentive opposing counsel to let this stuff get through.
> I mean, did we ever think it was supposed to be anyone but the people paying for the chatbots? Companies aren't known for increasing wages when those workers have been more productive.

Anyone with a brain would guess that, but plenty of people are blinded by an almost religious techno-optimism. Having data like this is important.
Not surprising when you consider that the rollout strategy for most corporations has been to license the tools first and try to find the applications for them after the fact.
> Right, it's like debugging someone else's code, i.e. the worst part of programming.

Documentation.
> Documentation.

I raise you: unit tests.
> It was never about a brighter future that frees mankind from the chores of production.
>
> It was always about increasingly concentrating capital in a few hands through the work of alienated serfs.

Creative lawyering.
Wow, if this result holds up in other studies then we might be getting close to the bubble bursting on the LLM industry.
It feels true, certainly. While LLMs have occasionally saved me hours, other times I have had to spend significant time double checking and correcting their outputs. A few times hallucinated answers sent me on a wild goose chase that wasted a lot of time trying to make a false approach work. You have to be careful and selective in using these things and know when to try something else.
I think LLMs and other current generative AI will still find use cases and be somewhat useful, and sometimes very harmful, but they are vastly overvalued. Image generation will probably find more "valuable" use than LLMs. Increasing efficiency and advertising will help tide the industry over as it is forced to sell a flawed product at modest prices, until the "real thing" happens some time between 5 and 20 years from now. But that won't be an LLM-dominated system.
> Wow, if this result holds up in other studies then we might be getting close to the bubble bursting on the LLM industry.

I wish I believed you.
> LLM's are the programmer's dream: doesn't everyone want to spend less time coding so they can spend more time debugging?
>
> \s
>
> *edit: ninja crepuscularbrolly said it better

I wouldn't say they said it better; you certainly were more succinct.
> I like to say "work always expands to fill the time given to it"

As did C. Northcote Parkinson, credited with Parkinson's Law, and several similar dicta besides.
> Ohh, this just happened to me last week. We've deployed GitLab Duo across the firm so I thought I'd give it a try and asked it to implement some Spring function I'd never used before. It spit out something that looked competent, but when I integrated it and tried to run it, it just gave back empty results.
>
> Turned out there were two ways to implement it but you had to stick to one way or the other. The AI conflated the two and mixed up the implementation. It added a good half day of debugging, and I found the answer at Stack Overflow anyway.

Ouch. Yeah, the problem is that LLMs have no understanding. Of anything. They are very clever search and summary engines. That is all that they are.
When we're all using the AI tools and no one's answering questions at Stack Overflow, where will the AIs crib their answers from?
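The "two ways to implement it, but don't mix them" story above is a common failure mode. A generic sketch of it (an entirely hypothetical API, not the actual Spring one): a library offers two mutually exclusive ways to express a query, and mixing them fails silently with empty results instead of raising an error.

```python
# Hypothetical repository with two mutually exclusive query styles:
# (a) pass a ready-made filter dict, or (b) pass predicate functions.
# Mixing the two, as the conflated AI answer did, silently returns nothing.

class Repository:
    def __init__(self, rows):
        self.rows = rows

    def find(self, filter_dict=None, predicates=None):
        if filter_dict is not None and predicates is not None:
            # The two styles are resolved by different code paths; when both
            # are supplied, neither path applies and the result is empty.
            return []
        if filter_dict is not None:
            return [r for r in self.rows
                    if all(r.get(k) == v for k, v in filter_dict.items())]
        if predicates is not None:
            return [r for r in self.rows if all(p(r) for p in predicates)]
        return list(self.rows)

repo = Repository([{"id": 1, "status": "ACTIVE"},
                   {"id": 2, "status": "CLOSED"}])
print(repo.find(filter_dict={"status": "ACTIVE"}))  # one matching row
print(repo.find(filter_dict={"status": "ACTIVE"},
                predicates=[lambda r: r["id"] == 1]))  # mixed styles: []
```

Nothing here raises an exception, which is why the bug above cost half a day: the code looks competent and runs cleanly, it just never returns data.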