> We should prepare for a tidal wave of buggy software.

How's that any different from now?
> How's that any different from now?

Interesting. Now, how does it go from "I have an idea for a program" to "I have a program that works"?
> How's that any different from now?

As Uncle Joe said, "Quantity has a quality of its own."
> Concepts of a plan is the same as a plan. And once something is planned, implementation is usually just an afterthought, amirite?
> (/s and you have no idea how painful this is to type)

I see you have met my boss!
> So… the enormous sums of money needed to sustain LLM development and AI will come from replacing programmers? We should prepare for a tidal wave of buggy software.

There will be plenty of jobs for fixing buggy AI software.
> There will be plenty of jobs for fixing buggy AI software.

Giving the intern class a starting point for climbing the ladder.
...and it's also key that Codex shows its thinking and work every step of the way...
> I don't personally know any programmers who don't use AI at this point. Even at shops that forbid normal AI (like Disney) they have local agents.
>
> It feels like this is a genie that's not going to be stuffed back into the bottle.

True, though I still argue it's leading to far worse code overall. I've noticed our devs (at a major tech company) have become highly reliant on it for some core components of our application, and those have turned into complete disasters. Bugs started trickling in, and now they are flooding in as they vibe-code their way to a completely unworkable implementation. We are already having to start from scratch in a few areas.
> True, though I still argue it's leading to far worse code overall. I've noticed our devs (major tech company) have become highly reliant on it for some core components of our application and those have turned into complete disasters. Bugs started trickling in and now they are flooding in as they vibe code their way to a completely unworkable implementation. We are already having to start from scratch on a few areas.

I am fully convinced that all AI—regardless of how good it is, how cool it is, how capable—is fundamentally leading humanity into being more lazy and using our brains less.
> I don't personally know any programmers who don't use AI at this point. Even at shops that forbid normal AI (like Disney) they have local agents.
>
> It feels like this is a genie that's not going to be stuffed back into the bottle.

I don't think we're waiting for it to be stuffed back in the bottle. This entire time, AI has been presented as a solution looking for problems, and a solution that my testing has shown doesn't meet any of my use cases so far. I do know some people who use it to find issues in code that was validated by the tools their work gives them but still doesn't run. I'm not looking to use AI for anything. What I want is for companies to present a new tool that's better than the old tool; if that tool happens to use AI tech, that's fine, but if they lead with "it's AI," I'm going to look at it skew-eyed.
> I don't personally know any programmers who don't use AI at this point. Even at shops that forbid normal AI (like Disney) they have local agents.
>
> It feels like this is a genie that's not going to be stuffed back into the bottle.

Well, you don’t know me, but I for one am a programmer who doesn’t use AI. Given that I’ve spent the last year using non-AI tools to reduce static-analysis-flagged issues in our code base by 10x(ish), and made the MTBF get boringly large in the process, I don’t think my job security is in any great danger any time soon…
> Criticism of LLMs aside, the release of a model built for one specific task is pretty much the only focused business use case I've seen for AI so far.

But that was a solved problem already, without confabulations, with less space, less power usage, and at lower cost, in literally the intelligence product Microsoft had before LLMs.
> Every single bug that was previously caught in QA/SIT/UAT?

Why wouldn't you still test? Especially when the instructions say to test?
When I need software now, I go to the App Store. If it’s there for a couple bucks, I buy it. If it’s ad-supported or nonexistent, I ask Claude to build it for me. I have a nice little library of programs now. I’m selling a few of them for a buck. It’s great being able to think of exactly what you need and build it in a few hours.
> Bugs started trickling in and now they are flooding in as they vibe code their way to a completely unworkable implementation. We are already having to start from scratch on a few areas.

So true. In my experience, it's been best to know exactly what you want your class or function or whatever to do (and know how to do it by hand if you had to). Then use discrete comments for each section to prompt the tool, e.g. `// Loop over users array, finding those with malformed emails`.
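To illustrate that comment-driven prompting style, here's a minimal TypeScript sketch of what a tool might produce from the `// Loop over users array, finding those with malformed emails` comment above. The `User` type, function name, and the deliberately loose email check are all illustrative assumptions, not anything from a specific codebase:

```typescript
type User = { name: string; email: string };

// Loop over users array, finding those with malformed emails
function findMalformedEmails(users: User[]): User[] {
  // Illustrative, deliberately loose check: a non-empty local part,
  // a single "@", and a dotted domain. Real validation is messier.
  const emailPattern = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;
  return users.filter((u) => !emailPattern.test(u.email));
}
```

The point of the commenter's workflow is that the comment states exactly what the section should do, so you can verify the generated body against your own mental implementation line by line.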
> Well, you don’t know me, but I for one am a programmer who doesn’t use AI. Given that I’ve spent the last year using non-AI tools to reduce static-analysis-flagged issues in our code base by 10x(ish), and made the MTBF get boringly large in the process, I don’t think my job security is in any great danger any time soon…

I mean, I hope that for you, and all my friends.
> Does the author know what "production ready" means? Since when is code "production ready" without any review or testing? Is it even compiled yet?

I didn't say it will produce production-ready code; I said that's how OpenAI is trying to position it.
> I don't personally know any programmers who don't use AI at this point. Even at shops that forbid normal AI (like Disney) they have local agents.
>
> It feels like this is a genie that's not going to be stuffed back into the bottle.

Hi, it's me. I'm the programmer. It's me.
> I'm getting the feeling that these frequent announcements from OpenAI have more to do with keeping the hype up than anything else.

Oh, obviously. As much as Sam Altman may despise Elon Musk, he's clearly seen how Musk's (and Trump's, to an extent) constant over-hype train has led to massive financial success for his companies. Tesla is a meme/hype stock; it doesn't matter how poorly he runs that company. Altman is hoping he can achieve the same reality for OpenAI.
This is a situation where tech reporting falls far short. A lot of these things should be advertisements, not articles; they should be paying you to promote their product (or maybe they are, and you're just not telling us, the reader?).
> If it really doesn't matter whether your code works, if undocumented and undocumentable behavior is not a showstopper, then AI coding sounds great!

Don't forget, also: "if you don't have to support it after you've written it."
> I mean, I hope that for you, and all my friends.

There are also a lot of students who don't understand memory allocation, or have never touched C or asm, and will look like a deer in headlights.

I think job security in programming is probably not going to be great going forward, but how that plays out in practice I couldn't possibly say. The lower you are on the totem pole, the more someone is going to wonder if a better programmer and an agent could replace your whole team.

I'm talking with Jason in Slack right now about this topic, and what we're really wondering is: what does the next generation look like? Because there are going to be a lot of students faking their way through courses with AI and graduating into the workforce with impressive portfolios they couldn't really explain when asked about them.
> I mean, I hope that for you, and all my friends.
>
> I think job security in programming is probably not going to be great going forward, but how that plays out in practice I couldn't possibly say. The lower you are on the totem pole the more someone is going to wonder if a better programmer and an agent could replace your whole team.
>
> I'm talking with Jason in Slack right now about this topic and what we're really wondering is what does the next generation look like? Because there's going to be a lot of students faking their way through courses with AI and graduating into the work force with impressive portfolios they couldn't really explain when asked about.

I do certainly agree that the future looks uncertain, especially for the young’uns as you say! I’m old enough that it doesn’t matter so much… I may become the equivalent of a COBOL programmer in a few years… get laid off, retire for a couple of years, then return or go elsewhere as a consultant at exorbitant rates, fixing slop. I foresee that it will be a target-rich environment…
> So… the enormous sums of money needed to sustain LLM development and AI will come from replacing programmers? We should prepare for a tidal wave of buggy software.

I'd predict a rise in suicides among highly overworked, over-stressed, and underpaid QC personnel first. Once THEY'RE replaced by AI, then, yeah, the buggy software will be everywhere.