Blames "user error, not AI error" for incident in December involving its Kiro tool.
See full article...
See full article...
The evolutionary parade:"I have approximate knowledge of many things."
And stunt their critical thinking skills.that's already happening in the Copilot world.
you can let CP write the code, then you push it to github where a different CP instance will do a code review for you.
it's a system designed to make developers lazy.
This paper's been fascinating, not least because it's doing what it sets out to do, sharpening my rhetoric.
Just an example, when arguing against using AI to replace artists, I will now START by pointing out that it won't save them any money and may in fact cost them more. LLM AI's primary "value" is aimed at highly paid staff, not the small time writers and artists getting paid like... writers and artists. Once that's established, that they may even end up paying MORE to use the service, THEN I can go into how the art and writing it produces isn't very good, or very accurate, and is all derivative opening them up to potential lawsuits, and that that's what they're paying for, what they're INSISTING their staff uses. A more costly inferior product. And, since we're talking about rhetoric, it's worth adding in that with the public at large turning on AI, they'd get more value out of simply not buying into it and making that a selling point of their company. Outside of immediate profit, this also attracts potential artists and writers TO your company, giving you the pick of the litter of those who want job security so you get the best talent, which will then draw in more customers. That last bit is old news and how things worked before this bubble, but it's still worth repeating.
You hear that Condo Nast?
The "solution" the Agent provided was to create a variable that contained the status of docker being available or not. In every test it then checked this variable. If docker wasn't available it would simply skip the test. Hence the problem according to the agent was "solved".
Actually according to Sam Altman AGI will have arrived in 2025, so we've all already been replaced by chatbots who just don't realize they aren't humans em dash scary thought isn't it my fellow human not chatbots?Remember, every single white collar job will be gone within the next year apparently.
I thought it was about who controls The Spice?"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam, Dune
Ahem, it’s about who owns the means of production, kids!
Oil, or I guess rare earth minerals now.I thought it was about who controls The Spice?
I think it's there because otherwise folks might assume this article was about that failure. It was a huge deal and around the same time.Just wondering what the point of this snippet is. Aluding to AI involvement in that incident? Pointing out that humans make worse mistakes? I'm honestly not sure why that line was even in there. The October incident isn't referenced again, and just serves as a point of 'we had an outage' that is unrelated to the other outages.
This is a really interesting risk of AI systems.
When an experimental Uber autonomous vehicle killed Elaine Herzberg in 2018, one of the things NTSB cited was "automation complacency", where the human safety driver sort of checked out after hours of running the test track repeatedly with no previous errors.
This is why I think most high-consequence automated systems are held to a way higher standard than for humans, because if something fails 1 in 1000 times it can actually be more dangerous than failing 1 in 10 times, because the humans get complacent and don't mitigate the failures on the one-in-a-thousand system.
Generative AI is based on randomness and has no guarantees of performance (yet). As generative AI gets better it might ironically lead to more dangerous failures.
Oh yes we will.I love this for them.
I hate that in just a few years this will be the norm, and nobody’ll know how the pipes work anywhere.
Especially since a weirder version of the Standard Model plays into some of those movies, and can be considered a subset of the MCU.Oh, there's a much shorter and simpler argument: sign a contract, pay an artist, and your IP rights are generally ironclad. Can't put Darth Vader's face on something without paying Disney, and George Lucas before that.
It's worth remembering that my entire childhood took place during the period between Return of the Jedi and The Phantom Menace. During that 16 year period during which George Lucas produced zero Star Wars films, how much did he earn in IP royalties?
The difference between artificially generated art and human generated art is going to be the earnings difference between a single summer blockbuster vs a major movie franchise like Star Wars or the Marvel Cinematic Standard Model*. That might not sound so bad at first, some summer blockbusters make a lot of money. But a lot of them go bust and lose money. This is why we have so many sequels and cinematic universes and reboots, because nobody wants to invest in a film without a guaranteed audience.
You need recognizable IP to guarantee the audience, and no artificially generated art will give you that.
*Seriously I think the Standard Model needs a shorter introduction than the MCU at this point