> Torrez’s office then conducted an undercover investigation codenamed “Operation MetaPhile,”

I think the absolute best part of this whole thing is the name of the sting op. Mad props to whoever thought of it, because it’s so damn fitting!
> I've been thinking that it's a mistake to offer encrypted chats inside public discussion apps like Facebook and Instagram. It blurs the lines between what happens in public and what happens in private.
> In real life there is a vast difference between meeting someone on a public terrace and going to their apartment. If we had a similar separation/barrier between public chats and private ones, that might help some kids (and other vulnerable people) avoid being digitally dragged into unhealthy situations.
> Chat apps like WhatsApp and Signal should remain properly encrypted, and we should promote awareness that going there is like stepping into someone's private residence, where there is no public watchdog to keep anyone safe.

This reads a bit too close for my liking to the old arguments that tried to classify RSA as a weapon. Private and encrypted messages are powerful tools with broad utility that most people need in their day-to-day lives. It is not unexpected that child exploitation should rely so heavily on mundane and broadly useful technologies, including cameras, microphones, and automobiles.
> Sticking this here, since it's legal.
> Victory on The High Seas!
> https://thehill.com/regulation/court-battles/5799938-supreme-court-rules-cox-communications/

Do you have any idea how gross I feel reading that Thomas and I agree on something?
> "I knew not putting brakes in the car would lead to some deaths, but I was selling a lot of cars at a great margin. It was a risk I was willing to let others take."

And people are inevitably going to die anyway, so really there's no culpability for not adding brakes.
> “harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases”

We're too big to care, folks. Sorry! Moose out front should have told you!
I'm most bothered by the "Let's turn off Encryption for Kids" idea.
Who is a "Kid" then?
Should a parent be able to snoop on their kid's communications up to 18?
Not all parents are paragons of virtue, so the kid may want/need to have a private channel.
I mean, legally, are they not a "person" until a certain age, so the law does not protect them and the government can search through the Facebook Firehose for bad actors?
Disclaimer: I am not a parent.
> I'm surprised this was their best argument. Does that mean all the less embarrassing arguments, like "we tried to prevent it," weren't credible?

It could be that their other options for arguments would point to, or unearth during depositions, even more dubious or corrupt or criminal behavior. If I were a betting sort, I'd spin on that possibility.
> I'm most bothered by the "Let's turn off Encryption for Kids" idea.
> Who is a "Kid" then?
> Should a parent be able to snoop on their kid's communications up to 18?
> Not all parents are paragons of virtue, so the kid may want/need to have a private channel.
> I mean, legally, are they not a "person" until a certain age, so the law does not protect them and the government can search through the Facebook Firehose for bad actors?
> Disclaimer: I am not a parent.

Disclosure: I am a parent.
> Bonus points for apropos naming.

I wonder how long it would take quantum computing to crack that code.
> Someday the courts will address modern problems with modern fines, but until then, corporations will continue to do business like fucking asshole robber barons.

They can't. This is a problem created by voters who elected people to Congress. The courts are hamstrung here.
> If they knew the exploitation was inevitable and still built the platform and the tools for the criminals to use, then why are they not on the hook for willful ignorance/conspiracy charges?
> I mean, at least a minimum attractive nuisance charge, like people who have pools and no fence or pool cover.

The argument they're making is poorly framed, and probably deliberately framed still more poorly by some media (including this Ars article, I suppose). Their argument essentially tries to say they built a public road (except that it isn't actually public (or a road), but we can leave that asterisk on for now), that it's inevitable someone will use the road while doing bad things, and that they can't be held responsible for that beyond an arbitrarily defined due diligence to act responsibly.
If Torrez gets his way, Facebook, Instagram, and WhatsApp will be effectively age-gated, and child predators will be detected and removed more often in the future. To accomplish that, he thinks kids should also be cut off from sending encrypted messages, which he argued bad actors rely on to evade arrest. The court may agree, as it was revealed during the trial that Meta chose to set chats as encrypted by default despite warnings that the setting would make it harder to investigate child predators on its platforms.
> Yes, people have an opinion on what the fine should be (ask Zuck - he's sure to tell you it's too high).
> But you can only fine people to the extent of what the law says. The GDP of New Mexico is about $110 billion, and the income of Meta is about $200 billion. So yes, the laws of NM probably aren't scaled to fine Meta an appropriate amount. But a legal win is a legal win that gives AGs in other states ammunition to challenge Meta in their own way.
> The only "choice" NM could have made was an after-the-fact exception for Meta in the appropriate laws in order to fine them more. And I'd caution against going down that route, as laws are meant to be equal for all involved.
> (FYI, I also wish the fine was higher, for all the reasons people have previously stated)

In a case like this, the win gives other AGs in states that have similar laws a strategy to follow, but more importantly it gives them a lot of discovery to work with.
> Kids [under whatever legal definition; usually for this type of law it's ~13] aren't "supposed" to have unsupervised accounts, because they legally are NOT a "person who can enter legal agreements by themselves" if they are not legally emancipated - in that sense, they "are not a person".
> So the simple answer would be, quite frankly, yes. Yes, a parent should be able to snoop on their kid's communications. And as controversial as this part will be, probably even up to 18, or whenever the kid otherwise legally becomes their own adult.
> The better overall answer would include a better balance of encryption and keys, especially for shared-access accounts (businesses, supervised accounts, etc.), where flags and message-content fingerprints were generated at the ends and attached, and key-based access was all logged. Then even if you do have a "not paragon of virtue" parent, if the child takes them to court, their access of messages that led to problematic actions is provably logged. Even if there's a "law enforcement access" key tied to an account (which might be required for all "at risk" accounts, such as children's), its use (ideally only after a warrant or direct permission/request) to review messages is logged.
> But all of that would presume platforms like Meta weren't just absolute garbage "least possible effort to turn on the advertising and engagement firehose" monstrosities that don't care about doing anything well so long as the money-printing part keeps rolling.
> Most kids who have parents they wouldn't want seeing their accounts' contents make their own accounts with a faked age. Quite frankly, parents who are such problems that they shouldn't be able to see their kids' messages are far worse problems in other ways. Trying to design a system that preemptively [supposedly] solves for them by stopping EVERYONE from supervising their kids' platform activity would be a ludicrous case of ruining overall functionality for everyone because a few people are terrible - and it STILL wouldn't prevent those parents from simply taking the child's device and forcing access, so it would solve exactly NOTHING.
> You can't fix everything with technology, and sometimes trying in the wrong ways makes things worse in multiple directions.

Meta and/or the state are not a parent. Putting aside the reductionism of how you frame the way things are, the state of affairs of "minors" (however that is defined) being akin to property is not something I think we should continue to speak of as desirable (though I'm sure conservatives will always browbeat about "parental rights" to corporal punishment or isolation or other ills), nor is it the reality, because minors do, in fact, have rights, even if lesser.
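The "fingerprints attached at the ends, key-based access all logged" idea sketched in the quoted comment could look something like this. This is a purely hypothetical illustration, not any real platform's design: every name here (the fingerprint key, the `AccessLog` class, the authority labels) is invented for the sketch.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch: each message carries a keyed content fingerprint
# generated at the endpoint, and every use of a supervisory ("parent" or
# "law enforcement") key is appended to a tamper-evident log.

FINGERPRINT_KEY = b"per-account-fingerprint-key"  # would be device-held in practice

def fingerprint(message: str) -> str:
    """Keyed fingerprint attached to a message at the sending end."""
    return hmac.new(FINGERPRINT_KEY, message.encode(), hashlib.sha256).hexdigest()

class AccessLog:
    """Append-only log; each entry hashes the previous one, so deletions
    or edits break the chain and are detectable after the fact."""
    def __init__(self):
        self.entries = []

    def record(self, key_holder: str, message_fp: str, authority: str) -> dict:
        prev = self.entries[-1]["chain"] if self.entries else "0" * 64
        entry = {
            "ts": time.time(),
            "key_holder": key_holder,   # e.g. "parent", "law-enforcement"
            "message_fp": message_fp,   # which message was read
            "authority": authority,     # e.g. "account-supervision", a warrant id
        }
        entry["chain"] = hashlib.sha256(
            (prev + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

log = AccessLog()
fp = fingerprint("hey, meet me after school")
log.record("parent", fp, "account-supervision")
```

A court (or the child, later) could then recompute the hash chain to prove which messages a supervisory key was actually used to read, which is the "provably logged" property the comment describes.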
> I've been thinking that it's a mistake to offer encrypted chats inside public discussion apps like Facebook and Instagram. It blurs the lines between what happens in public and what happens in private.
> In real life there is a vast difference between meeting someone on a public terrace and going to their apartment. If we had a similar separation/barrier between public chats and private ones, that might help some kids (and other vulnerable people) avoid being digitally dragged into unhealthy situations.
> Chat apps like WhatsApp and Signal should remain properly encrypted, and we should promote awareness that going there is like stepping into someone's private residence, where there is no public watchdog to keep anyone safe.

That's not a bad idea, really. To use reality as an example: kids routinely talk to strangers. Most of that is talking to other kids they just met. That's 100% OK; it should even be encouraged. But it should also take place in public, just like chatting with some random kid at the playground. There's no expectation of privacy; it should be easily and widely visible, with responsible adults (parents, teachers, police, etc.) able to monitor and step in if necessary.
> Disclosure: I am a parent.
> Should parents be able to have visibility into what their kids are doing online? Definitely yes.
> Should they "snoop"? Not as easy.
> Often when news of a tragedy comes out, there is a very loud chorus of "the parents should have known!" People want to defend developers from their products that allow kids to do reprehensible things and say the answer is "parenting."
> At the same time, a kid wants and deserves privacy. In the end, it comes down to trust, which has to be earned by both the parent(s) and the kid.
> For better or worse, my policy is "I trust you, AND I am going to look over your shoulder from time to time. If I get the sense you are hiding something, I am going to look much harder."
> What I will not tolerate, as a parent OR as a citizen of the Nation of Earth, is the idea that companies can be allowed to turn a blind eye to predatory use of their products while claiming the opposite - particularly when their tolerance is a demonstrably calculated choice in their zeal to actively engineer their product for maximum potential exploitation of the vulnerable.

The problem for these platforms is that they want the freedom of being a "dumb pipe" and the profit, power, and control of being an editorial. If Facebook were somehow an open-source, non-profit enterprise that uniformly and blindly operated a service - no political involvement, no targeted ads (if ads at all), etc. - then it would be a different story. It would be like trying to hold the ISP liable for what a customer said online, or the road responsible for the getaway car. I'm not sure something like that would even be possible, but Meta wants to have their cake and eat it too, and to gaslight everyone about what's going on with the cake.
> The fine was too low, a rounding error for these bastards.

Phase 2 of the trial now moves to a bench trial to decide additional penalties, as well as possible judicial orders for Meta to make changes. So it isn't over. If they do NOT comply with the state law, they can face a new lawsuit with likely much higher penalties for willful violation. I am not without hope. Add in that a lawsuit in LA today also found Meta and Google liable for addicting a minor to their social media apps, and there is a breath of a chance of the chickens coming home to roost. This might well become an existential threat to social media.
> Meta and/or the state are not a parent, and putting aside the reductionism of how you frame the way things are, the state of affairs of "minors" (however that is defined) being akin to property is neither something I think we should continue to speak of as desirable (though I'm sure conservatives will always browbeat about "parental rights" to corporal punishment or isolation or other ills) nor the reality, because minors do, in fact, have rights, even if lesser.

It's the wrong language. I do have "rights" as a parent, but if I have to assert them, it's a problem. More important to me, to my kids, AND to society is that I have responsibilities as a parent, which I take seriously. Meta or any of those clowns making it hard for me to exercise that responsibility in a... well... a responsible and respectful manner is upending rational priorities.

All of this is the wrong direction.
> Disclosure: I am a parent.
> Should parents be able to have visibility into what their kids are doing online? Definitely yes.
> Should they "snoop"? Not as easy.
> Often when news of a tragedy comes out, there is a very loud chorus of "the parents should have known!" People want to defend developers from their products that allow kids to do reprehensible things and say the answer is "parenting."
> At the same time, a kid wants and deserves privacy. In the end, it comes down to trust, which has to be earned by both the parent(s) and the kid.
> For better or worse, my policy is "I trust you, AND I am going to look over your shoulder from time to time. If I get the sense you are hiding something, I am going to look much harder."
> What I will not tolerate, as a parent OR as a citizen of the Nation of Earth, is the idea that companies can be allowed to turn a blind eye to predatory use of their products while claiming the opposite - particularly when their tolerance is a demonstrably calculated choice in their zeal to actively engineer their product for maximum potential exploitation of the vulnerable.

Until about 8th grade, all of our son's home online activity - with the exception of the very limited call-and-text-only phone we gave him in 5th grade - was done in the dining room. We didn't watch what he was doing, but he was doing it where we could see him. When he asked if he could move the computer to his bedroom, we felt there was a good basis for trust.
> The problem for these platforms is that they want the freedom of being a "dumb pipe" and the profit, power, and control of being an editorial.

That's a them problem. Perhaps they should shit or get off the pot.
> To be a bit grim, surely physical places like Disneyland/world have a kidnapping rate?

Maybe. I can't seem to find any, though. What I can find seems to indicate "urban legend." After a bunch of search-engine poking, I can't find a single news report or LEA report mentioning any Disney kidnappings. I wouldn't say it has never happened, but I would think it would make BIG headlines if it ever had.
> Melania Trump Touts AI-Powered Robots Teaching Classics To Kids At Bizarre White House Event
> At a bizarre White House event on Wednesday, first lady Melania Trump spelled out a vision of the future where children are given a classical education by Artificial Intelligence-powered robots.
> Meta has an annual revenue of $200b, with ~$9b of profit.

They reported $60B in net income in 2025 - much, much higher than $9B.
> I just want to call out this part. This is one of the things that AI is supposed to help with, since moderation is in fact very hard and can even be harmful to the moderators themselves.
> But even here, AI is doing more harm than good.

I noticed that too. So ostensibly AI is enabling Meta to create monitoring reports, but in reality they simply produce useless garbage that wastes people's time.
> On a related note about child abuse by corporations, I just saw this story (which I hope will be commented on by Ars)
> The robots appeared to be supplied by Figure AI

being available “in the comfort of your home.”
> How often are there kidnappings at an airport? Those likely have less good security than Disney parks.

I don't know the answer to that, but I suspect actual kidnappings are very rare at airports. Anyone being coerced against their will would be noticeable. However, trafficked people regularly pass through airports, because by then the coercion mechanisms are already in place.
> Good to see Meta lose here, but $375M is a drop in the bucket for Zuck, and is only 17% of what the state wanted. It's truly pathetic we can't hold corporations accountable, yet people were spending 20+ years in prison for half a gram of cannabis.

$375M is for one state, and if they don't stop the offending behavior, they can just be sued again. What if it were $375M x 50 states = $18.75B, and if it were annual instead of a one-time thing?
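The 50-state hypothetical above is easy to check with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the "50 states" hypothetical.
fine_per_state = 375_000_000        # the $375M New Mexico judgment
all_states = fine_per_state * 50    # if every state won the same amount
print(f"${all_states / 1e9:.2f}B")  # prints $18.75B
```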
> To be a bit grim, surely physical places like Disneyland/world have a kidnapping rate?

But they have well-established policies and procedures for mitigating the harm as quickly as possible - alerting staff, checking camera footage, IDs, etc.
> Meta has an annual revenue of $200b, with ~$9b of profit.

Just to really hammer home how small this fine is: it is as financially ruinous as giving someone earning $50,000/year a $94 fine. Corporations must be fined proportionally to their revenue if there's any chance of having them be a real threat.
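The salary analogy works out as claimed, using the $200B revenue figure quoted in the comment:

```python
# Scale the $375M fine against ~$200B annual revenue, then apply the same
# ratio to a $50,000 salary to see what the fine "feels like".
fine = 375_000_000
meta_revenue = 200_000_000_000
ratio = fine / meta_revenue              # fraction of annual revenue
equivalent = ratio * 50_000              # same fraction of a $50k salary
print(f"{ratio:.4%} -> ${equivalent:.2f}")  # prints 0.1875% -> $93.75
```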
> On a related note about child abuse by corporations, I just saw this story (which I hope will be commented on by Ars)
> The robots appeared to be supplied by Figure AI

To be fair, I would rather have a bot read to my kid than have Melania get anywhere near them.
> Oh, I thought Trump ordered her from Slovenia.

It wasn’t a mail order... it was more of a retail purchase.
> I just want to call out this part. This is one of the things that AI is supposed to help with, since moderation is in fact very hard and can even be harmful to the moderators themselves.
> But even here, AI is doing more harm than good.

You're assuming they didn't set them up that way intentionally to mask the severity of the problem.
> The argument they're making is poorly framed, and probably deliberately framed still more poorly by some media (including this Ars article, I suppose). Their argument essentially tries to say they built a public road (except that it isn't actually public (or a road), but we can leave that asterisk on for now), that it's inevitable someone will use the road while doing bad things, and that they can't be held responsible for that beyond an arbitrarily defined due diligence to act responsibly.
> Meta-and-Zuck-bad aside, on its face that is not actually a wholly untenable position (though I question the judgment of the person who thought it was viable in this environment and given their history). What's the alternative? Arbitrary liability? No road at all? But indeed, just looking through the implications of the Ars article and many of the comments, that's exactly what they're saying, implicitly or explicitly.
> Of course Meta and Zuck are being disingenuous and made multiple misrepresentations here about how they treated the issues in question. I have no sympathy for them, though the penalty is meaningless to a company of that scale. And it's also difficult in other ways, in that they do exercise discretion (and change that discretion seemingly arbitrarily, such as when a different president comes in) rather than acting as a "dumb pipe/platform," and because what they would like is essentially zero possibility of accountability for anything - because who gets to define the standard? What they really care about is the precedent, and the prospect of an explosion of such litigation across many jurisdictions, which will all apply unpredictable standards and potentially impose controls on their business. All of that goes undiscussed here and in some other reporting I see.
> But there are consequences for entertaining this sort of campaign. Belanger writes:
> A previous commenter questioning this was mostly positively received, but also got significant downvotes, whatever that's worth or means. This is Ars, where articles about attacks on privacy and encryption, legal and media antics generally, and social inclusion and access get positive and serious discussion. Perhaps people aren't making the connection...
> But that's where the plaintiffs, author, and some commenters are sitting: reducing rights, protections, and access for minors - and everyone else - "for the children," but also because... Reflexive anger at the company, its leadership, and its industry more generally may be well grounded in many respects, but I don't think this stuff is. I think this sort of thing very rarely is, anywhere. And I think it's notable that, despite the subject matter and the emotional representations, the jury took a lot of time and clearly had some difficulty. It'll "inevitably" be an unpopular take right now, and I don't even necessarily disagree with this specific outcome, but this is a misguided media and legal bandwagon, and it's disingenuous and dangerous. As alluded to above, it girdles and imposes on those it claims to advocate for, and there is the classic issue of privacy, but there are other effects. What do platforms do out of fear of this sort of thing? Anyone perceived to be "risky" is preemptively excluded from social participation? What could possibly be wrong with that? We do things like that already, and there certainly isn't a wide body of research - to say nothing of common sense - showing it's terrible for those people and counterproductive on the whole. But the thing is, the way things get percolated in cases and media, they're typically popular for the same reasons the algorithms at Facebook/whatever are so geared toward sensationalism.

I'm not sure your parade of horribles and slippery-slope allusions (speaking of sensationalism) contribute much to a serious discussion. This audience gets it, and - by and large - is capable of holding two contradictory thoughts in mind at the same time.
> At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that “harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases,” The Guardian reported.

This so gives off "it's a feature, not a flaw" vibes it makes me sick...