Already under scrutiny for spreading hate, social network also helps peddle spam and fraud.
I don't think we've ever answered the fundamental question, which is whether Facebook and similar platforms should be responsible for user generated content. If yes, then we can work out a solution. If no, then it's not surprising to me they would be slow to act because it sets a precedent they won't or can't maintain.
I fall in the 'no' camp - I think it's more important to prevent the creeping scourge of censorship from advancing, and that the best disinfectant for toxic ideas is sunlight. And for the purposes of identifying and prosecuting illegal behavior, it's more advantageous to have fewer platforms to monitor, making it easier to trace criminal networks.
If I were Facebook I would be largely hands off in terms of censoring or removing content, but I would be bending over backwards to support legitimate legal investigations (maybe even help initiate them when issues are brought to their attention) and remove content only according to law enforcement requests or judicial orders. That should keep them clear of any liability issues.
The worst outcome for them would be to knowingly ignore illegal activity and take no action - but it needn't be exclusively on them to self police.
If Facebook is going to be responsible for the content, then it has to read and edit or reject all content before publishing.
It would become a monitored messaging system: you would be allowed to submit communications to your friends, and Facebook would then decide whether your communication is acceptable.
How do they know your wedding invitation isn't really a code used by horrible people to communicate horrible things...
Better not let that out...
The real question is where Facebook should be drawing the line. How responsible are they for content others share, and what actions are appropriate for them to take? Should they call the police when someone posts about smoking marijuana, because it's illegal federally? What if they post a picture with a representation of a marijuana leaf? Should they ban your account for that?
How do you distinguish between articles/discussions about hacking and actual criminal intent? It's usually obvious to humans, but as another user posted, they're getting more than a million posts a minute in 40 different languages. They've tried automated takedowns in the past and ended up taking down more legitimate groups than illegitimate ones. Sharing a post from a white supremacist to show how awful they are looks, to an automated filter, identical to being the originator of the content.
I tend to side with the EFF on this one. No filtering of content and no sharing of private information with the police or other authorities. If the FBI wants to run a sting operation on Facebook, they can do so with their own resources and catch people. At least having it on the public internet brings it into the public eye, where federal and local law enforcement can deal with it. The only way Facebook should be sharing information is when presented with a valid warrant against a specific user or group.
The internet is full of publicly available pools of toxic ideas. These groups could easily be found by a simple search for "carding" or "cvv". Reddit is full of, well, Reddit. And there are worse sites. We're wallowing in fake news. "Sunlight" has not done a thing to disinfect any of these. Why is that?
It's because sunlight doesn't reach the internet.
Think about this. These are places where you can participate in discussions without ever showing your face, and very likely without revealing your identity. There's no social feedback telling you what a terrible person you are, usually no negative consequences at all. "Sunlight" is a metaphor for being exposed to the world, but when you participate in these places, you're not exposed. Maybe you're sitting outdoors on a park bench in the sun, but no one in the park can see what a terrible person you are. There is no sunlight. Maybe you're in a dark basement. Doesn't matter. There is no sunlight.
The only way sunlight will disinfect anything is if people are exposed to their friends and families and neighbors and community members for what they are doing. And in many cases to the police.
Even then I have some doubts about how much it will actually help.
That sounds more like an argument for removing online anonymity than moderating content.
People have always been free to congregate in private and discuss whatever nonsense they want. The fact it now happens online changes the dynamics, but not the underlying moral questions regarding how much those in power should be moderating the 'public' debate.
Look at the lead image from this article, which is a post that says:
Selling CVV fresh
$5
Selling CVV fresh mix 137 CVV
Take all 300$
Or buy minimum
5cvv 25$
10cvv 40$...
This is not a "discussion" about activity that would be criminal; this is the illegal sale of credit card information that was obtained illegally. It's not talk, it's an actual crime. There's really no gray area in a post like this.
So is posting a picture of yourself smoking marijuana, even if it's legal in your state (because it's still a crime federally). That's my point: where do we draw the line, and what is their responsibility?
The case from the article is relatively easy, but if the filter searched for "CVV" specifically, then this article would be flagged and removed too, because it reposts the same text. It would also flag a hundred legitimate cybersecurity articles about the threat. My bet is most of them even use the same verbiage.
The next problem is that natural language processing is far from perfect. It's easy to match specific language, but the sellers will just change their verbiage after a week, and then you're flagging nothing but the legitimate uses of it. Facebook tried (admittedly poorly) to crack down on alt-right users in 2016. Within a month the alt-right had a whole new list of keywords and images that signaled to other people, and it was leftist pages and articles that were getting pulled down.
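To make the over-flagging concrete, here's a minimal sketch of a naive keyword filter in Python. The blocklist and the sample posts are hypothetical illustrations, not anything Facebook actually runs:

    # A minimal sketch (hypothetical blocklist and sample posts, not
    # Facebook's actual system) of why naive keyword matching over-flags:
    # the token "cvv" appears in the scam post, in reporting about it,
    # and in ordinary safety advice alike.

    FLAGGED_KEYWORDS = {"cvv", "carding"}  # assumed blocklist, for illustration only

    def is_flagged(post: str) -> bool:
        """Flag a post if any whitespace-separated token is on the blocklist."""
        tokens = post.lower().split()
        return any(token in FLAGGED_KEYWORDS for token in tokens)

    samples = [
        "Selling CVV fresh mix 137 CVV",                       # the actual crime
        "Report: Facebook groups openly sell CVV data",        # journalism about it
        "PSA: never give out your card's CVV over the phone",  # security advice
    ]

    for post in samples:
        print(is_flagged(post), "-", post)
    # Prints True for all three: the filter sees tokens, not intent, and
    # sellers can dodge it entirely by switching to a new code word next week.

That's the whole dilemma in a dozen lines: tighten the keyword list and you take down the journalism and the safety advice; loosen it and the sellers walk right through.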