Meta loses trial after arguing child exploitation was “inevitable” on its apps

I've been thinking that it's a mistake to offer encrypted chats inside public discussion apps like Facebook and Instagram. It blurs the line between what happens in public and what happens in private.

In real life there is a vast difference between meeting someone on a public terrace and going to their apartment. If we had a similar separation/barrier between public chats and private ones, that might help some kids (and other vulnerable people) avoid being digitally dragged into unhealthy situations.

Chat apps like WhatsApp and Signal should remain properly encrypted, and we should promote awareness that going there is like stepping into someone's private residence, where there is no public watchdog to keep anyone safe.
This reads a bit too close, for my liking, to the old arguments that tried to classify RSA as a weapon. Private, encrypted messages are powerful tools with broad utility that most people use in their day-to-day lives. It is not unexpected that child exploitation relies so heavily on mundane and broadly useful technologies, including cameras, microphones, and automobiles.

What is unusual about encryption among these other broad technologies is how many people forget that most people genuinely prefer their communications to remain private, even in the face of government and private investigators. As good a man as he might be, I do not want Torrez in my DMs or private communications, nor would I have wanted him peering into them at any point in my childhood.

I'm glad Torrez is winning this case, but I would ask that he respect privacy and security as basic rights and advocate for a better way to combat child exploitation.
 
Upvote
10 (10 / 0)

Torao

Wise, Aged Ars Veteran
100
Subscriptor
Do you have any idea how gross I feel reading that Thomas and I agree on something?
"I knew not putting brakes in the car would lead to some deaths, but I was selling a lot of cars at a great margin. It was a risk I was willing to let others take."
And people are inevitably going to die anyway, so really there's no culpability for not adding brakes.
 
Upvote
1 (1 / 0)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
“harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases”
We're too big to care, folks. Sorry! Moose out front should have told you!
 
Upvote
4 (4 / 0)
I'm most bothered by the "Let's turn off Encryption for Kids"
Who is a "Kid" then?

Should a parent be able to snoop on their kid's communications up to age 18?
Not all parents are paragons of virtue, so the kid may want/need to have a private channel.

I mean, legally, are they not a "person" until a certain age, such that the law does not protect them and the government can search through the Facebook Firehose for bad actors?

Disclaimer: I am not a parent.

Kids [under whatever legal definition; usually for this type of law it's ~13] aren't "supposed" to have unsupervised accounts because, unless legally emancipated, they are NOT a "person who can enter legal agreements by themselves"; it's in that sense that they "are not a person".

So the simple answer would be, quite frankly, yes. Yes, a parent should be able to snoop on their kid's communications. And as controversial as this part will be, probably even up to 18 or whenever the kid otherwise legally becomes an adult.

The better overall answer would include a better balance of encryption and keys, especially for shared-access accounts (businesses, supervised accounts, etc.), where flags and message-content fingerprints were generated at the ends and attached, and key-based access was all logged. Then even if you do have a "not paragon of virtue" parent, if the child takes them to court, their access to the messages that led to problematic actions is provably logged. Even if there's a "law enforcement access" key tied to an account (which might be required for all "at risk" accounts, such as children's), its use (ideally only after a warrant or direct permission/request) to review messages is logged.
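The logged-key-access idea could be sketched roughly like this; this is a purely hypothetical illustration, and the function names, log format, and choice of SHA-256 for the content fingerprint are my assumptions, not any real platform's design:

```python
import hashlib
import time

AUDIT_LOG = []  # in practice this would be an append-only, tamper-evident store


def fingerprint(plaintext: bytes) -> str:
    """Content fingerprint computed at the sending end and attached to the
    message, so any later key-based access can be tied to specific content."""
    return hashlib.sha256(plaintext).hexdigest()


def log_key_access(key_holder: str, message_fp: str, reason: str) -> None:
    """Record every use of a supervisory or law-enforcement key. If a
    'not paragon of virtue' parent reads messages, the access is provable."""
    AUDIT_LOG.append({
        "who": key_holder,
        "message_fingerprint": message_fp,
        "reason": reason,
        "timestamp": time.time(),
    })


fp = fingerprint(b"hey, can we talk after school?")
log_key_access("guardian:parent-1", fp, "supervised account review")
print(len(AUDIT_LOG))  # 1
```

The point of the fingerprint is that the audit trail can reference message content without the log itself storing the plaintext.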

But all of that would presume platforms like Meta weren't just absolute garbage "least possible effort to turn on the advertising and engagement firehose" monstrosities that don't care about doing anything well so long as the money printing part keeps rolling.

Most kids who have parents they wouldn't want seeing their accounts' contents make their own accounts with a faked age. Quite frankly, kids whose parents are such problems that they shouldn't be able to see their messages usually face far worse problems in other ways. Trying to design a system that pre-emptively [supposedly] solves for those cases by stopping EVERYONE from being able to supervise their kids' platform activity would be a ludicrous case of ruining overall functionality for everyone because a few people are terrible, and it STILL wouldn't stop those parents from just taking the child's device and forcing access: it would solve exactly NOTHING.

You can't fix everything with technology, and sometimes trying in the wrong ways makes things worse in multiple directions.
 
Upvote
-4 (4 / -8)
I'm surprised this was their best argument. Does that mean all the less embarrassing arguments, like "we tried to prevent it," weren't credible?
It could be that their other options for arguments would expose, or unearth during depositions, even more dubious or corrupt or criminal behavior. If I were the betting sort, I'd put money on that possibility.
 
Upvote
2 (2 / 0)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
I'm most bothered by the "Let's turn off Encryption for Kids"
Who is a "Kid" then?

Should a parent be able to snoop on their kid's communications up to age 18?
Not all parents are paragons of virtue, so the kid may want/need to have a private channel.

I mean, legally, are they not a "person" until a certain age, such that the law does not protect them and the government can search through the Facebook Firehose for bad actors?

Disclaimer: I am not a parent.
Disclosure: I am a parent.

Should parents be able to have visibility to what their kids are doing online? Definitively yes.
Should they "snoop"? Not as easy.

Often when news of a tragedy comes out, there is a very loud chorus of "the parents should have known!" People want to shield developers from blame for products that allow kids to do reprehensible things, and say the answer is "parenting."

At the same time, a kid wants and deserves privacy. In the end, it comes down to trust, which has to be earned both by the parent(s) and the kid.

For better or worse, my policy is "I trust you, AND I am going to look over your shoulder from time to time. If I get the sense you are hiding something, I am going to look much harder."

What I will not tolerate as a parent OR as a citizen of the Nation of Earth is the idea that companies can be allowed to turn a blind eye to predatory use of their products while claiming the opposite, and particularly when their tolerance is a demonstrably calculated choice in their zeal to actively engineer their product for maximum potential exploitation of the vulnerable.
 
Upvote
12 (12 / 0)
Someday the courts will address modern problems with modern fines, but until then, corporations will continue to do business like fucking asshole robber barons.
They can't. This is a problem created by voters who elected people to Congress. The Courts are hamstrung here.

TLDR version...

Day fines (that is, penalties that scale with wealth or income) were just starting to be experimented with in the late 1980s. And ALL US experimentation with them collapsed basically overnight, because of the "War on Crime" and the fear-of-crime politics popular in the early 90s. Remember three-strikes laws, and all that? Yeah, that. You see, day fines fell under the regulatory category of "sentence reform," and everyone was hot to have statutorily hard-set penalties in order to be "tough on crime"... which sounds "cool" and "bad@ss," except inflation is always happening and updating fines seldom if ever happens.
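For what it's worth, the day-fine idea is simple arithmetic. A quick sketch (the 30-unit severity figure and both income numbers are made up for illustration, not taken from any statute):

```python
def day_fine(annual_income: float, severity_units: int) -> float:
    """A day fine charges N 'days' of income, where N reflects the
    offense's severity; the dollar amount scales with the offender."""
    return (annual_income / 365) * severity_units


# The same 30-unit offense lands very differently by income:
print(round(day_fine(50_000, 30)))          # 4110 for a $50k/year earner
print(round(day_fine(200e9, 30) / 1e9, 1))  # 16.4 (billion) for $200B/year
```

The severity stays fixed by the court; only the income term moves, which is exactly what hard-set statutory dollar amounts can't do.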

If you want a law school's take on the history and what to do, experts write about it a lot:
https://www.law.georgetown.edu/amer...a-and-Reversing-the-Costly-Punitive-Trend.pdf
 
Upvote
2 (2 / 0)
If they knew the exploitation was inevitable and still built the platform and the tools for the criminals to use, then why are they not on the hook for willful ignorance/conspiracy charges?

I mean, at minimum an attractive-nuisance claim, like people who have pools and no fence or pool cover.
The argument they're making is poorly framed, and probably deliberately framed still more poorly by some media (including this Ars article, I suppose). Their argument is essentially that they built a public road (except that it isn't actually public (or a road), but we can leave the asterisk on that for now), that it's inevitable someone will use the road while doing bad things, and that they can't be held responsible for that beyond an arbitrarily defined due diligence to act responsibly.

Meta-and-Zuck-bad aside, on its face that is not a wholly untenable position (though I question the judgement of whoever thought it was viable in this environment, given their history). What's the alternative? Arbitrary liability? No road at all? But indeed, just looking through the implications of the Ars article and many of the comments, that's exactly what they're saying, implicitly or explicitly.

Of course Meta and Zuck are being disingenuous and made multiple misrepresentations here about how they treated the issues in question. I have no sympathy for them, though the penalty is meaningless to a company of that scale. And it's also difficult in other ways, in that they do exercise discretion (and change that discretion seemingly arbitrarily, such as when a different president comes in) rather than acting as a "dumb pipe/platform", and because what they would like is essentially zero possibility for accountability for anything, because who gets to define the standard? What they really care about is the precedent and the prospect of an explosion of such litigation across many jurisdictions which will all apply unpredictable standards and potentially impose controls on their business. All that is not discussed here or in some other reporting I see.

But there are consequences for entertaining this sort of campaign. Belanger writes:
If Torrez gets his way, Facebook, Instagram, and WhatsApp will be effectively age-gated, and child predators will be detected and removed more often in the future. To accomplish that, he thinks kids should also be cut off from sending encrypted messages, which he argued bad actors rely on to evade arrest. The court may agree, as it was revealed during the trial that Meta chose to set chats as encrypted by default despite warnings that the setting would make it harder to investigate child predators on its platforms.

A previous commenter questioning this was mostly positively received but also got significant downvotes, whatever that's worth or means, really. On Ars, where articles about attacks on privacy and encryption, about legal and media antics generally, and about social inclusion and access get positive and serious discussion. Perhaps people aren't making the connection...

But that's where the plaintiffs, author, and some commenters are sitting. Reducing rights and protections and access for minors, and everyone else, "for the children" but also because... Reflexive anger at the company and its leadership and its industry more generally may be well grounded in many respects, but I don't think this stuff is. I think this sort of thing very rarely is anywhere. And I think it's notable that despite the subject matter and emotional representations, the jury took a lot of time and clearly had some difficulty.

It'll "inevitably" be an unpopular take right now, and I don't even necessarily disagree with this specific outcome, but this is a misguided media and legal bandwagon and it's disingenuous and dangerous. As alluded to above, it girdles and imposes on those it claims to advocate for, and there is the classic issue of privacy, but there are other effects. What do platforms do out of fear of this sort of thing? Anyone perceived to be "risky" is preemptively excluded from social participation? What could possibly be wrong with that? We do things like that already, and there certainly isn't a wide body of research, to say nothing of common sense, showing it's terrible for those people and counterproductive on the whole.

But the thing is, the way things get percolated in cases and media, they're typically popular for the same reasons algorithms at Facebook/whatever are so geared toward sensationalism.
 
Upvote
1 (2 / -1)
Yes, people have an opinion on what the fine should be (ask Zuck - he's sure to tell you it's too high).

But you can only fine people to the extent of what the law says. The GDP of New Mexico is about $110 billion, and the income of Meta is about $200 billion. So yes, the laws of NM probably aren't scaled to fine Meta an appropriate amount. But a legal win is a legal win, and it gives AGs in other states ammunition to challenge Meta in their own way.

The only "choice" NM could have made was an after-the-fact exception for Meta in the appropriate laws in order to fine them more. And I'd caution against going down that route, as laws are meant to be equal for all involved.

(FYI I also wish the fine was higher for all the reasons people have previously stated)
In a case like this the win gives other AGs in states that have similar laws a strategy to follow, but more importantly it gives them a lot of discovery to work with.

The win itself probably won't be admitted or acknowledged if other cases go to court.
 
Upvote
4 (4 / 0)
Kids [under whatever legal definition; usually for this type of law it's ~13] aren't "supposed" to have unsupervised accounts because, unless legally emancipated, they are NOT a "person who can enter legal agreements by themselves"; it's in that sense that they "are not a person".

So the simple answer would be, quite frankly, yes. Yes, a parent should be able to snoop on their kid's communications. And as controversial as this part will be, probably even up to 18 or whenever the kid otherwise legally becomes an adult.

The better overall answer would include a better balance of encryption and keys, especially for shared-access accounts (businesses, supervised accounts, etc.), where flags and message-content fingerprints were generated at the ends and attached, and key-based access was all logged. Then even if you do have a "not paragon of virtue" parent, if the child takes them to court, their access to the messages that led to problematic actions is provably logged. Even if there's a "law enforcement access" key tied to an account (which might be required for all "at risk" accounts, such as children's), its use (ideally only after a warrant or direct permission/request) to review messages is logged.

But all of that would presume platforms like Meta weren't just absolute garbage "least possible effort to turn on the advertising and engagement firehose" monstrosities that don't care about doing anything well so long as the money printing part keeps rolling.

Most kids who have parents they wouldn't want seeing their accounts' contents make their own accounts with a faked age. Quite frankly, kids whose parents are such problems that they shouldn't be able to see their messages usually face far worse problems in other ways. Trying to design a system that pre-emptively [supposedly] solves for those cases by stopping EVERYONE from being able to supervise their kids' platform activity would be a ludicrous case of ruining overall functionality for everyone because a few people are terrible, and it STILL wouldn't stop those parents from just taking the child's device and forcing access: it would solve exactly NOTHING.

You can't fix everything with technology, and sometimes trying in the wrong ways makes things worse in multiple directions.
Meta and/or the state are not a parent. And putting aside the reductionism of how you frame the way things are, the state of affairs of "minors" (however that is defined) being akin to property is neither something I think we should continue to speak of as desirable (though I'm sure conservatives will always browbeat about "parental rights" to corporal punishment or isolation or other ills), nor is it the reality, because minors do, in fact, have rights, even if lesser ones.

All of this is the wrong direction.
 
Upvote
6 (6 / 0)

gautier

Ars Praetorian
559
Subscriptor++
It is only about profit. The cost of controlling bad actors on Meta products is greater than any legal fine. And it is not only about child abuse. A copy of my wife's passport is currently being used on Facebook for Airbnb scams. I was able to identify several Facebook accounts of the scammers, accounts using profile photos known to be used by scammers, some years old, with no updates. I gave Meta the profile names, the group names, and various digital evidence. Meta did not flinch; the profiles and groups are still there, and my wife, a well-known executive very visible on business platforms, continues to receive complaints from scammed persons. Meta just doesn't care, because this does not affect its profits, but it does affect my wife's public image.
 
Upvote
7 (7 / 0)

Zitchas

Smack-Fu Master, in training
16
I've been thinking that it's a mistake to offer encrypted chats inside public discussion apps like Facebook and Instagram. It blurs the line between what happens in public and what happens in private.

In real life there is a vast difference between meeting someone on a public terrace and going to their apartment. If we had a similar separation/barrier between public chats and private ones, that might help some kids (and other vulnerable people) avoid being digitally dragged into unhealthy situations.

Chat apps like WhatsApp and Signal should remain properly encrypted, and we should promote awareness that going there is like stepping into someone's private residence, where there is no public watchdog to keep anyone safe.
That's not a bad idea, really. To use reality as an example: kids talk to strangers all the time. Most of that is talking to other kids they just met. That's 100% OK; it should even be encouraged. But it should also take place in public, just like chatting with some random kid in the playground. There's no expectation of privacy; it should be easily and widely visible, with responsible adults (parents, teachers, police, etc.) easily able to monitor and step in if necessary.

But private two-person encrypted chats? That's equivalent to being in someone's bedroom. A kid shouldn't be in the bedroom of random adults, period.

And there's easily a spectrum here. For instance, in a classroom setting it'd be the equivalent of a forum with a class's worth of kids in it, where the only adults with access are the teacher(s), aide(s), and principal. There's some degree of privacy, but it's not complete privacy. Just like a real-life classroom.

Beyond that, there are plenty of situations where an adult may need to have a private chat with a kid. Those should be clearly documented when they happen; there should be clear processes for reviewing them to determine whether they are appropriate, but also restrictions against them being made public or accessed by people who shouldn't have access. For instance, a teacher having a one-on-one conversation with a kid to resolve a problem: the fact that such a discussion happened should be documented, and certain people, such as school administrators and parents, should know that it happened. If someone has a concern, they should be able to request verification, in which case a third party reviews the conversation to ensure that nothing problematic went on and/or provides a summary of what transpired, depending on the situation, or refers it to the police if something criminal happened. Basically, strong privacy protections, but with documentation and verification.

On the other hand, some random adult with no official role having a private chat with a kid should be about as private as a thread in Discord: there's a bit of privacy, mostly just cutting off the noise of the rest of the channel rather than any hard privacy, but anyone who wants to can wander by and see what's being said.


Maybe there could be ways for people to request privacy for such adult-kid conversations, but there should always be an avenue for verification by someone unrelated to the adult, as well as an easy way for the conversation to be referred to the police. True end-to-end encrypted chats between adults and kids (aside from between parents and their own kids) basically should not exist.

But that's all hypothetical. It'd be a huge endeavor to implement, and probably pretty close to impossible. But it would be a good start even if it were only implemented on Meta.
 
Upvote
5 (6 / -1)
Disclosure: I am a parent.

Should parents be able to have visibility to what their kids are doing online? Definitively yes.
Should they "snoop"? Not as easy.

Often when news of a tragedy comes out, there is a very loud chorus of "the parents should have known!" People want to shield developers from blame for products that allow kids to do reprehensible things, and say the answer is "parenting."

At the same time, a kid wants and deserves privacy. In the end, it comes down to trust, which has to be earned both by the parent(s) and the kid.

For better or worse, my policy is "I trust you, AND I am going to look over your shoulder from time to time. If I get the sense you are hiding something, I am going to look much harder."

What I will not tolerate as a parent OR as a citizen of the Nation of Earth is the idea that companies can be allowed to turn a blind eye to predatory use of their products while claiming the opposite, and particularly when their tolerance is a demonstrably calculated choice in their zeal to actively engineer their product for maximum potential exploitation of the vulnerable.
The problem for these platforms is that they want the freedom of being a "dumb pipe" and the profit, power, and control of an editorial outlet. If Facebook were somehow an open-source, non-profit enterprise that uniformly and blindly operated a service (no political involvement, no targeted ads (if ads at all), etc.), then it would be a different story. It would be like trying to hold the ISP liable for what a customer said online, or the road responsible for the getaway car. I'm not sure something like that would even be possible, but Meta wants to have its cake and eat it too, and to gaslight everyone about what's going on with the cake.

But what also increasingly concerns me, even with relatively reasonable comments like yours here, is that many people are, in fact, trying to do things like threaten the ISP with liability to either get money or to get the ISP (or whoever) to take some action. Take, for example, the reporting here about music companies trying to get ISPs to preemptively violate the rights of internet users, or whoever it was trying to get Reddit to unmask decade-old unrelated comments they didn't like, or various governments endlessly trying to undermine encryption or install spyware, and on and on and on...

It's angry, confused, ill-informed, misdirected hysteria, with or without nefarious actors, and it sells well. Always has been and always will be.

edit: literally while I was writing this comment: https://meincmagazine.com/tech-policy...ttempt-to-kick-music-pirates-off-the-internet
 
Upvote
7 (7 / 0)

azazel1024

Ars Legatus Legionis
15,020
Subscriptor
The fine was too low, a rounding error for these bastards

:mad:
Phase 2 of the trial now moves to a bench trial, where a judge will decide additional penalties as well as possible judicial orders for Meta to make changes. So it isn't over. If they do NOT comply with the state law, they can face a new lawsuit with likely much higher penalties for willful violation. I am not without hope. Add in that a lawsuit in LA today also found Meta and Google liable for addicting a minor to their social media apps, and there is a breath of a chance of the chickens coming home to roost. This might well become an existential threat to social media.

"oh no".
 
Upvote
3 (3 / 0)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
Meta and/or the state are not a parent, and putting aside the reductionism of how you frame the way things are, the state of affairs of "minors" (however that is defined) being akin to property is neither something I think we should continue to speak of as desirable (though I'm sure conservatives will always brow beat about "parental rights" to corporal punishment or isolation or other ills) nor the reality because minors do, in fact, have rights, even if lesser.

All of this is the wrong direction.
It's the wrong language. I do have "rights" as a parent, but if I have to assert them, it's a problem. More important to me, to my kids, AND to society is that I have responsibilities as a parent, which I take seriously. Meta or any of those clowns making it hard for me to exercise that responsibility in a... well... a responsible and respectful manner is upending rational priorities.
 
Upvote
4 (4 / 0)
Disclosure: I am a parent.

Should parents be able to have visibility to what their kids are doing online? Definitively yes.
Should they "snoop"? Not as easy.

Often when news of a tragedy comes out, there is a very loud chorus of "the parents should have known!" People want to shield developers from blame for products that allow kids to do reprehensible things, and say the answer is "parenting."

At the same time, a kid wants and deserves privacy. In the end, it comes down to trust, which has to be earned both by the parent(s) and the kid.

For better or worse, my policy is "I trust you, AND I am going to look over your shoulder from time to time. If I get the sense you are hiding something, I am going to look much harder."

What I will not tolerate as a parent OR as a citizen of the Nation of Earth is the idea that companies can be allowed to turn a blind eye to predatory use of their products while claiming the opposite, and particularly when their tolerance is a demonstrably calculated choice in their zeal to actively engineer their product for maximum potential exploitation of the vulnerable.
Until about 8th grade, all of our son's home online activity, with the exception of the very limited call-and-text-only phone we gave him in 5th grade, was done in the dining room. We didn't watch what he was doing, but he was doing it where we could see him. When he asked if he could move the computer to his bedroom, we felt there was a good basis for trust.
 
Upvote
3 (3 / 0)

azazel1024

Ars Legatus Legionis
15,020
Subscriptor
To be a bit grim, surely physical places like Disneyland/world have a kidnapping rate?
Maybe. I can't seem to find any though. The bit I can find seems to indicate "urban legend". I can't find a single news report or any LEA report after a bunch of search engine poking with anything mentioning any Disney kidnappings. I wouldn't say it has never happened, but I would think it would make BIG headlines if it ever had.

It is a huge public place with massive crowds, but consider just how incredibly covered in cameras it is: everyone is videoed and photographed entering the parks, has to go through security screening, etc. How often are there kidnappings at an airport? Those likely have worse security than Disney parks.

Both actively employ significant security, and Disney especially knows it would suffer massive reputational harm if anything like that ever happened, which would significantly impact their business.

Social media companies, if they are doing anything, seem to have a "meh" attitude: "it's the cost of doing business, and cheaper to do little." I don't think they can drive it down to zero, but they could be doing MUCH better. As it is, they are actively fighting ANYTHING.
 
Upvote
3 (3 / 0)

AusPeter

Ars Praefectus
5,086
Subscriptor
On a related note about child abuse by corporations, I just saw this story (which I hope will be commented on by Ars)

Melania Trump Touts AI-Powered Robots Teaching Classics To Kids At Bizarre White House Event


At a bizarre White House event on Wednesday, first lady Melania Trump spelled out a vision of the future where children are given a classical education by Artificial Intelligence-powered robots.

The robots appeared to be supplied by Figure AI
 
Upvote
7 (7 / 0)

MHStrawn

Ars Scholae Palatinae
1,425
Subscriptor
“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew,” Torrez said. “Today the jury joined families, educators, and child safety experts in saying enough is enough.”

So Meta has been found by a court of law to have a harmful product, to have ignored warnings, and to have lied about it to the public.

And the punishment is a fine equal to about 0.55% of their 2025 profits (not revenues...profit). That's nothing more than a cost of doing business for META.

This will only have any impact if META is hit with hundreds of these or a massive class action suit (not sure that can even be done here, IANAL). But until the financial hit to META is in the tens of billions of dollars, this is meaningless.
 
Upvote
1 (1 / 0)

MHStrawn

Ars Scholae Palatinae
1,425
Subscriptor
I just want to call out this part. This is one of the things that AI is supposed to help with, since moderation is in fact very hard and can even be harmful to the moderators themselves.

But even here, AI is doing more harm than good.
I noticed that too. So ostensibly AI is enabling META to create monitoring reports, but in reality those reports are simply useless garbage that wastes people's time.

A perfect encapsulation of AI's contributions to society.
 
Upvote
4 (4 / 0)

Madestjohn

Ars Tribunus Angusticlavius
7,452
On a related note about child abuse by corporations, I just saw this story (which I hope will be commented on by Ars)



The robots appeared to be supplied by Figure AI
being available “in the comfort of your home.”
… emphasis on "home" is not at all surprising, as the rich have traditionally valued the services of indentured private tutors giving an elite, "classical," cloistered education to their privileged offspring.
And it plays to the conservative homeschoolers' drive towards privatization of education as well.
 
Upvote
4 (4 / 0)
How often are there kidnappings at an airport? Those likely have less good security than Disney parks.
I don't know the answer to that, but I suspect actual kidnappings are very rare at airports. Anyone being coerced against their will would be noticeable. However, trafficked people¹ regularly pass through airports, because by then the coercion mechanisms are already in place.

¹ And for the record, people are trafficked for all sorts of reasons. I do not trust organizations that say they exist to fight trafficking but then only focus on legislation that pushes all sex work further underground.
 
Upvote
8 (8 / 0)

Zapitron

Ars Centurion
318
Subscriptor
Good to see Meta lose here, but $375M is a drop in the bucket for Zuck, and is only 17% of what the state wanted. It's truly pathetic that we can't hold corporations accountable, yet people were spending 20+ years in prison for half a gram of cannabis.
$375M is for one state, and if they don't stop the offending behavior, they can just be sued again. What if it were $375M x 50 states = $18.75B, and annual instead of a one-time thing?

And even if the money wouldn't be missed by them, it's more than enough money for us to TEACH EVERY SINGLE KID IN NEW MEXICO HOW TO USE AD BLOCKERS. See how Zuck likes that!
 
Last edited:
Upvote
3 (3 / 0)

_dgc_

Wise, Aged Ars Veteran
119
Subscriptor++
To be a bit grim, surely physical places like Disneyland/world have a kidnapping rate?
But they have well established policies and procedures on how to mitigate the harm as quickly as possible - alerting staff, checking camera footage, IDs, etc.
There is a huge difference between trying to achieve the best outcome, and arguing it's bound to happen, cost of doing business, oh well.
 
Upvote
1 (1 / 0)

bushrat011899

Ars Scholae Palatinae
658
Subscriptor
Meta has an annual revenue of $200b, with ~$9b of profit.
Just to really hammer home how small this fine is, it's about as financially ruinous as giving someone earning $50,000/year a $94 fine. Corporations must be fined in proportion to their revenue if there's any chance of fines being a real deterrent.
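A quick sanity check of that comparison, scaling the fine by the revenue figure quoted above (a rough sketch; the variable names are just for illustration):

```python
# Scale the $375M fine by Meta's quoted ~$200B annual revenue,
# then apply the same fraction to a $50,000/year income.
meta_revenue = 200e9      # ~$200B annual revenue, as quoted in the comment
fine = 375e6              # $375M jury award
fine_share = fine / meta_revenue          # fraction of annual revenue

household_income = 50_000
equivalent_fine = fine_share * household_income

print(f"{fine_share:.4%} of revenue -> ${equivalent_fine:.2f} for a $50k earner")
# -> 0.1875% of revenue -> $93.75 for a $50k earner, i.e. ~$94
```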
 
Upvote
-1 (1 / -2)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
On a related note about child abuse by corporations, I just saw this story (which I hope will be commented by Ars)

The robots appeared to be supplied by Figure AI
To be fair, I would rather have a bot read to my kid than have Melania get anywhere near them.
 
Upvote
4 (4 / 0)

Madestjohn

Ars Tribunus Angusticlavius
7,452
Oh, I thought Trump ordered her from Slovenia.
It wasn’t a mail order .. it was more a retail purchase ..

I think the story was that he was introduced to her at the Kit Kat Club by Zampolli, a 'fashion' agent who would organize these exclusive private parties where rich men would meet Eastern European 'models'.
Kinda like an invitation-only luxury car or yacht show,
where you could see and maybe even try out the models on sale.
 
Upvote
7 (7 / 0)
I just want to call out this part. This is one of the things that AI is supposed to help with, since moderation is in fact very hard and can even be harmful to the moderators themselves.

But even here, AI is doing more harm than good.
You're assuming they didn't set them up that way intentionally to mask the severity of the problem.
 
Upvote
5 (5 / 0)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
The argument they're making is poorly framed, and probably deliberately framed still more poorly by some media (including this Ars article, I suppose): their argument is essentially that they built a public road (except that it isn't actually public, or a road, but we can leave that asterisk on for now), that it's inevitable someone will use the road while doing bad things, and that they can't be held responsible for that beyond an arbitrarily defined due diligence to act responsibly.

Meta-and-Zuck-bad aside, on its face that is not actually a wholly untenable position (though I question the judgment of the person who thought it was viable in this environment, given their history). What's the alternative? Arbitrary liability? No road at all? But indeed, just looking through the implications of the Ars article and many of the comments, that's exactly what they're saying, implicitly or explicitly.

Of course Meta and Zuck are being disingenuous and made multiple misrepresentations here about how they treated the issues in question. I have no sympathy for them, though the penalty is meaningless to a company of that scale. And it's also difficult in other ways, in that they do exercise discretion (and change that discretion seemingly arbitrarily, such as when a different president comes in) rather than acting as a "dumb pipe/platform", and because what they would like is essentially zero possibility for accountability for anything, because who gets to define the standard? What they really care about is the precedent and the prospect of an explosion of such litigation across many jurisdictions which will all apply unpredictable standards and potentially impose controls on their business. All that is not discussed here or in some other reporting I see.

But there are consequences for entertaining this sort of campaign. Belanger writes:


A previous commenter questioning this was mostly positively received but also got significant downvotes, whatever that's worth or means really. On ars, where articles about attacks on privacy and encryption and legal and media antics generally as well as about social inclusion and access get positive and serious discussion. Perhaps people aren't making the connection...

But that's where the plaintiffs, author, and some commenters are sitting. Reducing rights and protections and access for minors, and everyone else, "for the children" but also because... Reflexive anger at the company and its leadership and its industry more generally may be well grounded in many respects, but I don't think this stuff is. I think this sort of thing very rarely is anywhere.

And I think it's notable that despite the subject matter and emotional representations, the jury took a lot of time and clearly had some difficulty. It'll "inevitably" be an unpopular take right now, and I don't even necessarily disagree with this specific outcome, but this is a misguided media and legal bandwagon and it's disingenuous and dangerous. As alluded to above, it girdles and imposes on those it claims to advocate for and there is the classic issue of privacy, but there are other effects.

What do platforms do out of fear of this sort of thing? Anyone perceived to be "risky" is preemptively excluded from social participation? What could possibly be wrong with that? We do things like that already, and there certainly isn't a wide body of research, to say nothing of common sense, showing it's terrible for those people and counterproductive on the whole. But the thing is, the way things get percolated in cases and media, they're typically popular for the same reasons algorithms at Facebook/whatever are so geared toward sensationalism.
I'm not sure your parade of horribles and slippery slope allusions (speaking of sensationalism) contribute much to a serious discussion. This audience gets it, and--by and large--is capable of holding two contradictory thoughts in mind at the same time.

Meta needs to recognize that its manipulation of the information being channeled to individuals comes with an obligation to handle it responsibly. They aren't a dumb pipe.
 
Upvote
0 (1 / -1)

Fatesrider

Ars Legatus Legionis
24,977
Subscriptor
At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that “harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases,” The Guardian reported.
This so gives off "it's a feature, not a flaw" vibes that it makes me sick...
 
Upvote
5 (5 / 0)