"Why does the archive link go to RT Russian propaganda page?"

Because it does that for some stupid reason. If you copy and paste the link into your address bar instead of just clicking it, it goes to the archive page.
"I'm wondering why anyone would think that a somewhat advanced toaster/blender/mixer could even "speak"."

I had a similar thought reading the other story on this. Grok can't really be relied on to "admit" culpability (issues of legal liability for LLMs aside) because it may simply be generating a sentence in response to a leading prompt, not accurately reporting its own actions. LLMs are notorious for inventing and fabricating events in a comprehensible sequence based on no actual facts. They don't reliably describe their own inner workings, processes, or histories. They just produce statistically expected, syntactically correct strings related to their prompts. An LLM can't admit to committing crimes any more reliably than it can count the number of times R appears in "strawberry."
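For what it's worth, the letter-counting task that famously trips up LLMs is deterministic and trivial in ordinary code, which is the point: a quick Python sketch.

```python
# Counting occurrences of a letter is a plain string operation,
# not a statistical guess over token sequences.
word = "strawberry"
print(word.count("r"))  # prints 3
```

LLMs typically see "strawberry" as one or two tokens rather than ten letters, which is one commonly cited reason they guess at this instead of counting.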
"Why does the archive link go to RT Russian propaganda page?"

Russia seems to skunk such sites every once in a while.
"showing signs of schizophrenia at worst"

This article is great except for this one bit.
"Publicizing an apology from Grok treats this failure as a joke and publicity stunt."

Ars is the one outlet not accepting the non-apology at face value and describing the situation in context. Most other media is reporting “Grok apologized” and then telling people they can see pr0n of their favorite celebs on Grok.
"reports that it generated non-consensual sexual images of minors."

I do not believe you can have legal "consensual" sexual images of minors (minors being literally under the age of consent), so I'm not entirely sure why this was phrased this way.
"Ars is the one outlet not accepting the non-apology at face value and describing the situation in context. Most other media is reporting “Grok apologized” and then telling people they can see pr0n of their favorite celebs on Grok."

I mean... let's be completely honest here. Grok is just regurgitating what any given user wants it to say. It isn't capable of actual self-reflection. Any "non-apology" or "apology" the software gives is absolutely meaningless. Of course, we're getting radio silence from xAI and Musk about this, which is beyond weird because of how much the right loves to point out how much pedophilia is such a problem... but they're surprisingly quiet when the CSAM is coming from within their own house.
"One of the things that makes shutting down USAID particularly contemptible is that USAID is absolutely an "America First" program: the benefits of that program to the US vastly outweighed both the costs to the US and the benefits to the recipients combined."

Poohters, Bonesaw, and anyone else who is tired of US hegemony globally stand to benefit greatly from our abrupt, sudden cruelty and blatant, wanton disregard for human lives achieved by this de-funding.
Literally no one benefits in any meaningful way from shutting down the program except it possibly increasing the likelihood that some people will vote Republican.
"I'm wondering why anyone would think that a somewhat advanced toaster/blender/mixer could even "speak"."

Don't threaten me with a good time.
An AI is a program. It does things it's designed to do. The problem is that those who designed it don't know what their product will do. From a consumer safety point of view, this is a fucking nightmare. People have DIED from using it who would likely not be dead had they not used it.
Trying to shift the blame from the toaster to the user is even worse. Yes, you probably shouldn't be sticking knives into the toaster while it's plugged in to retrieve a locked-in piece of ash that used to be bread because the toaster is defective, but it happens, and the fact that it happens should be baked into the design with the proper circuit breaker to avoid electrocution.
These Silicon Valley fuckwits seem to not care that they have a defective product.
Why? Because these Silicon Valley fuckwits have inked or received maybe as much as a couple of trillion dollars' worth of investment to get their busted-ass products to some level acceptable to the general public, a public that is FINE playing with it for free but almost never willing to PAY FOR IT. And they haven't made a penny in profits. "High revenue" isn't profit if you're not making more money than you've spent.
Knowing that their own house of cards will collapse the instant the bubble bursts, they're DESPERATE to keep the "revenue" flowing in, bringing in a bunch of other Silicon Valley fuckwits from different parts of the tech world to dig a deeper hole into which they'll all fall to some degree, some far deeper than others, praying to their false gods that SOMETHING scores a big hit with the world. Because if they don't get a killer, must-have app out of it, they'll need decades to pay off investors, assuming they get enough revenue to stay in business long enough to do that.
IMHO, this is why we're getting the snark from these assholes. They know they're fucked, and there's not a thing they can do to stop the steamroller. This kind of reaction is PANIC based on desperation. So I'd not be at all surprised if that balloon bursts by the end of the month.
"Why does the article keep referring to non-consensual sexual images of minors? It seems to me the question of consent doesn't, and shouldn't, enter the discussion of sexual images of minors."

I do think that it's a useful intensifier to say that:
1. They could not have legally consented to sharing sexual images, even if they wanted to, but
2. There is additional trauma being inflicted upon them by the fact that even if they could have consented to share sexual images, they wouldn't have.

"Of course, we're getting radio silence from xAI and Musk about this"

When I was a kid, Twitter PR would at least send you a

"many in the media ran with Grok’s remorseful response."

Like Ars Technica?

"I do think that it's a useful intensifier to say that:"

You know, I hadn't thought about it that way, but that makes sense. Thanks.
"Call it training or learning I guess, but "AI" isn't "trained" and doesn't "learn" in the way we understand those actions as human beings. We need a new description that doesn't obfuscate the truth about what LLMs and "AI" are and how they are created."

I'm going with "programmed."
"Call it training or learning I guess, but "AI" isn't "trained" and doesn't "learn" in the way we understand those actions as human beings. We need a new description that doesn't obfuscate the truth about what LLMs and "AI" are and how they are created."

“Seeded”
"I suspect Congress and/or SCOTUS will come to AI's rescue soon enough to give them a "get out of jail free" card on this kind of thing"

As long as they still hold the AI's operators accountable, that's exactly what should be happening ...
If it's illegal to generate the sorts of AI images it's generated, then Grok should be jailed.
"Sorry, but we're using it and don't have a spare."

I am going to build my own timeline with go fish and kittens.
See, this I didn't know. Good to know when my Pro subscription comes up for renewal and I have to weigh the scales. The fact that Kyle came out and said balderdash to this shows someone is paying attention in the writers' room.
"Humans can't give accurate statements about our reasoning processes either; we pretty much always post hoc confabulate. The part of the brain that does the figuring out and the part that constructs the narrative are separate, and so the narrative construction is always a guess."

That seems like an odd take to me. I think most people (to greater and lesser degrees, sure) are able to observe themselves and come to conclusions about their own reasoning and actions that are much more than just guesses or confabulations.