Facebook is a popular venue for selling all manner of cybercrime services

Status
You're currently viewing only jdale's posts. Click here to go back to viewing the entire thread.

jdale

Ars Legatus Legionis
18,346
Subscriptor
385,000 people are actively participating in groups advocating and carrying out criminal activities. Why is the response just to shut them down? At least log all the activity and provide it to the authorities first. Many of them no doubt are outside the US and it would be hard to prosecute, but I'm sure many of them live here.
 
Upvote
13 (13 / 0)

jdale

Ars Legatus Legionis
18,346
Subscriptor
I don't think we've ever answered the fundamental question, which is whether Facebook and similar platforms should be responsible for user generated content. If yes, then we can work out a solution. If no, then it's not surprising to me they would be slow to act because it sets a precedent they won't or can't maintain.

I fall in the 'no' camp - I think it's more important to prevent the creeping scourge of censorship from advancing, and that the best disinfectant for toxic ideas is sunlight. And for the purposes of identifying and prosecuting illegal behavior, it's more advantageous to have fewer platforms to monitor, making it easier to trace criminal networks.

If I were Facebook I would be largely hands off in terms of censoring or removing content, but I would be bending over backwards to support legitimate legal investigations (maybe even help initiate them when issues are brought to their attention) and remove content only according to law enforcement requests or judicial orders. That should keep them clear of any liability issues.

The worst outcome for them would be to knowingly ignore illegal activity and take no action - but it needn't be exclusively on them to self police.

The internet is full of publicly available pools of toxic ideas. These groups could easily be found by a simple search for "carding" or "cvv". Reddit is full of, well, Reddit. And there are worse sites. We're wallowing in fake news. "Sunlight" has not done a thing to disinfect any of these. Why is that?

It's because sunlight doesn't reach the internet.

Think about this. These are places where you can participate in discussions without ever showing your face, and very likely without revealing your identity. There's no social feedback telling you what a terrible person you are, usually no negative consequences at all. "Sunlight" is a metaphor for being exposed to the world, but when you participate in these places, you're not exposed. Maybe you're sitting outdoors on a park bench in the sun, but no one in the park can see what a terrible person you are. There is no sunlight. Maybe you're in a dark basement. Doesn't matter. There is no sunlight.

The only way sunlight will disinfect anything is if people are exposed to their friends and families and neighbors and community members for what they are doing. And in many cases to the police.

Even then I have some doubts about how much it will actually help.
 
Upvote
6 (6 / 0)

jdale

Ars Legatus Legionis
18,346
Subscriptor
I don't think we've ever answered the fundamental question, which is whether Facebook and similar platforms should be responsible for user generated content. If yes, then we can work out a solution. If no, then it's not surprising to me they would be slow to act because it sets a precedent they won't or can't maintain.

I fall in the 'no' camp - I think it's more important to prevent the creeping scourge of censorship from advancing, and that the best disinfectant for toxic ideas is sunlight. And for the purposes of identifying and prosecuting illegal behavior, it's more advantageous to have fewer platforms to monitor, making it easier to trace criminal networks.

If I were Facebook I would be largely hands off in terms of censoring or removing content, but I would be bending over backwards to support legitimate legal investigations (maybe even help initiate them when issues are brought to their attention) and remove content only according to law enforcement requests or judicial orders. That should keep them clear of any liability issues.

The worst outcome for them would be to knowingly ignore illegal activity and take no action - but it needn't be exclusively on them to self police.

If Facebook is going to be responsible for the content, then it has to read and edit or reject all content before publishing.
It would become a monitored messaging system: you are allowed to submit communications to your friends, and they then decide whether your communication is acceptable.

How do they know your wedding invitation isn't really a code used by horrible people to communicate horrible things?
Better not let that out...

If Facebook is going to rely on users reporting illegal activity in order to remove it, they need to actually remove it when it gets reported. That was not happening adequately here. Talos tells us they would report offending groups and only a single post would be removed, leaving a group full of illegal content otherwise intact. It's only when they pressed that Facebook started actually cleaning things up.

That said, I don't think it's unreasonable for them to scan public posts for indicator terms like "carding" or "cvv" and have humans check whether those posts are illegal content.
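As a rough illustration of what I mean (the function name and the term list here are made up for the example, not anything Facebook actually runs), term-based flagging for human review could be as simple as:

```python
import re

# Hypothetical indicator terms; a real deployment would curate a
# much longer, evolving list.
INDICATOR_TERMS = {"carding", "cvv"}

def flag_for_review(post_text: str) -> set[str]:
    """Return the indicator terms found in a post, if any.

    A non-empty result means the post gets queued for a human
    moderator -- it is NOT an automatic takedown.
    """
    words = set(re.findall(r"[a-z0-9]+", post_text.lower()))
    return words & INDICATOR_TERMS

hits = flag_for_review("Selling CVV fresh mix, take all 300$")
# hits == {"cvv"} -> route this post to a human reviewer
```

The point is that the cheap automated part only narrows the haystack; the judgment call about whether a match is a crime or a news article stays with a person.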

Checking whether other posts are encoded illegal content is an absurd strawman. You can't block that sort of thing. In general, you won't be able to prevent people who are already working together from communicating with each other. But you can make it substantially harder for new people to find the resources to get started, and you can reduce the value of the site as a marketplace for selling criminal goods and services by making it harder for people who are not already associated to find each other.
 
Upvote
5 (5 / 0)

jdale

Ars Legatus Legionis
18,346
Subscriptor
The real question is where Facebook should draw the line. How responsible are they for content others share, and what actions are appropriate for them to take? Should they call the police when someone posts about smoking marijuana because it's illegal federally? What if they post a picture with a representation of a marijuana leaf? Should they ban your account for that?

How do you distinguish between articles/discussions about hacking and actual criminal intent? It's usually obvious to humans, but as another user posted, they're getting >1M posts a minute in 40 different languages. They've tried automated takedowns in the past and ended up taking down more legitimate groups than illegitimate ones. Sharing a post from a white supremacist to show how awful they are is treated as identical to originating the content.

I tend to side with the EFF on this one. No filtering of content and no sharing with the police or authorities of private information. If the FBI wants to host a sting operation on Facebook, then they can do so with their resources and catch people. At least having it on the public internet brings it into the public eye and federal and local law enforcement can deal with it. The only way Facebook should be sharing information is when presented a valid warrant against a specific user or group.

Look at the lead image from this article, which is a post that says:

Selling CVV fresh
$5

Selling CVV fresh mix 137 CVV
Take all 300$
Or buy minimum
5cvv 25$
10cvv 40$...


This is not a "discussion" about activity that would be criminal; it is the illegal sale of credit card information that was obtained illegally. It's not talk, it's an actual crime. There's really no gray area in a post like this.
 
Upvote
7 (7 / 0)

jdale

Ars Legatus Legionis
18,346
Subscriptor
I don't think we've ever answered the fundamental question, which is whether Facebook and similar platforms should be responsible for user generated content. If yes, then we can work out a solution. If no, then it's not surprising to me they would be slow to act because it sets a precedent they won't or can't maintain.

I fall in the 'no' camp - I think it's more important to prevent the creeping scourge of censorship from advancing, and that the best disinfectant for toxic ideas is sunlight. And for the purposes of identifying and prosecuting illegal behavior, it's more advantageous to have fewer platforms to monitor, making it easier to trace criminal networks.

If I were Facebook I would be largely hands off in terms of censoring or removing content, but I would be bending over backwards to support legitimate legal investigations (maybe even help initiate them when issues are brought to their attention) and remove content only according to law enforcement requests or judicial orders. That should keep them clear of any liability issues.

The worst outcome for them would be to knowingly ignore illegal activity and take no action - but it needn't be exclusively on them to self police.

The internet is full of publicly available pools of toxic ideas. These groups could easily be found by a simple search for "carding" or "cvv". Reddit is full of, well, Reddit. And there are worse sites. We're wallowing in fake news. "Sunlight" has not done a thing to disinfect any of these. Why is that?

It's because sunlight doesn't reach the internet.

Think about this. These are places where you can participate in discussions without ever showing your face, and very likely without revealing your identity. There's no social feedback telling you what a terrible person you are, usually no negative consequences at all. "Sunlight" is a metaphor for being exposed to the world, but when you participate in these places, you're not exposed. Maybe you're sitting outdoors on a park bench in the sun, but no one in the park can see what a terrible person you are. There is no sunlight. Maybe you're in a dark basement. Doesn't matter. There is no sunlight.

The only way sunlight will disinfect anything is if people are exposed to their friends and families and neighbors and community members for what they are doing. And in many cases to the police.

Even then I have some doubts about how much it will actually help.

That sounds more like an argument for removing online anonymity than moderating content.

People have always been free to congregate in private and discuss whatever nonsense they want. The fact it now happens online changes the dynamics, but not the underlying moral questions regarding how much those in power should be moderating the 'public' debate.

These Facebook discussions aren't happening "in private" though. They are happening right out in the open for anyone who searches for "cvv" or "carding". Their ability to operate openly makes it easier for them to sell stolen card info, increasing the value of the crime, and makes it easier to recruit new people to commit the crimes.

The metaphor here is not people congregating in a quiet room and talking. It's not even people meeting in a public square and talking. This is people standing on a street corner selling drugs and training others to do the same.

"Moderating debate" is a strawman. It's about whether actual crime should be permitted on the platform or not.
 
Upvote
5 (5 / 0)

jdale

Ars Legatus Legionis
18,346
Subscriptor
The real question is where Facebook should draw the line. How responsible are they for content others share, and what actions are appropriate for them to take? Should they call the police when someone posts about smoking marijuana because it's illegal federally? What if they post a picture with a representation of a marijuana leaf? Should they ban your account for that?

How do you distinguish between articles/discussions about hacking and actual criminal intent? It's usually obvious to humans, but as another user posted, they're getting >1M posts a minute in 40 different languages. They've tried automated takedowns in the past and ended up taking down more legitimate groups than illegitimate ones. Sharing a post from a white supremacist to show how awful they are is treated as identical to originating the content.

I tend to side with the EFF on this one. No filtering of content and no sharing with the police or authorities of private information. If the FBI wants to host a sting operation on Facebook, then they can do so with their resources and catch people. At least having it on the public internet brings it into the public eye and federal and local law enforcement can deal with it. The only way Facebook should be sharing information is when presented a valid warrant against a specific user or group.

Look at the lead image from this article, which is a post that says:

Selling CVV fresh
$5

Selling CVV fresh mix 137 CVV
Take all 300$
Or buy minimum
5cvv 25$
10cvv 40$...


This is not a "discussion" about activity that would be criminal; it is the illegal sale of credit card information that was obtained illegally. It's not talk, it's an actual crime. There's really no gray area in a post like this.

So is posting a picture of smoking marijuana, even if it's legal in your state (because it's still a crime federally). That's my point. Where do we draw the line and what's their responsibility?

No, pictures of marijuana use are not themselves illegal. They might at best be evidence of illegal acts.

If you posted "I'm selling marijuana, msg me for prices" that would be a crime. And I would expect Facebook to remove that if they became aware of it.

The case from the article is relatively easy, but if we search for CVV specifically, then this article itself would be flagged and removed too, because it reposted the same text. It would also flag a hundred legitimate cybersecurity articles about the threat. My bet is most of them even use the same verbiage.

The next problem is that natural language processing is far from perfect. It's easy to find specific language, but then they'll just change their verbiage after a week, and you end up flagging nothing but the legitimate uses of it. Facebook tried (admittedly poorly) to crack down on alt-right users in 2016. Within a month the alt-right had a whole new list of keywords and images to signal to each other, and it was the leftist pages and articles that were getting pulled down.

Automatic filters should be used to flag potential violations for review. Review has to be performed by humans. The outcome of those reviews can feed back into the training of filters so they get better, but Facebook employs thousands of moderators because they are necessary.
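A minimal sketch of that flag-then-review loop (all class and method names here are invented for illustration, not a description of Facebook's actual tooling):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Automatic filters enqueue posts; humans make the call."""
    pending: list[str] = field(default_factory=list)
    # Each human decision becomes a labeled example that can feed
    # back into retraining the automatic filter.
    labeled: list[tuple[str, bool]] = field(default_factory=list)

    def flag(self, post: str) -> None:
        """Called by the automatic filter; nothing is removed yet."""
        self.pending.append(post)

    def review(self, decide) -> list[str]:
        """A human moderator decides each flagged post."""
        removed = []
        for post in self.pending:
            is_violation = decide(post)
            self.labeled.append((post, is_violation))
            if is_violation:
                removed.append(post)
        self.pending.clear()
        return removed

queue = ReviewQueue()
queue.flag("Selling CVV fresh, 5cvv 25$")       # matched an indicator term
queue.flag("Article: how carding scams work")   # matched, but legitimate
# Stand-in for a human moderator's judgment:
removed = queue.review(lambda p: p.startswith("Selling"))
# removed == ["Selling CVV fresh, 5cvv 25$"]; both posts are now
# labeled examples available for filter retraining
```

The key property is that a false positive from the filter costs a moderator a few seconds of review, not a legitimate group its existence.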
 
Upvote
1 (1 / 0)