Kentucky judges must consult algorithms for cash bail decisions; now more whites get bail.
Read the whole story
The focus on race is not the relevant aspect.
The only relevant aspect is whether the algorithm made the correct prediction: whether its calculated probability of reoffending matched the people who actually reoffended.
If there is racial bias in the algorithm, it should be verifiable by checking post-release behavior.
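For what it's worth, that check is simple to run if you have the scores and the outcomes. Here's a minimal sketch in Python, assuming hypothetical records with a race field, the tool's risk flag, and the observed post-release outcome; none of the field names or numbers come from the article or the Kentucky tool:

```python
# Hypothetical data: (race, flagged_high_risk, reoffended). Purely illustrative.
records = [
    ("white", True, True), ("white", True, False), ("white", False, False),
    ("black", True, True), ("black", True, False), ("black", False, True),
]

def outcome_rates_by_group(rows):
    stats = {}
    for race, flagged, reoffended in rows:
        g = stats.setdefault(race, {"flagged": 0, "flagged_bad": 0,
                                    "released": 0, "released_bad": 0})
        key = "flagged" if flagged else "released"
        g[key] += 1                      # how many got each call
        g[key + "_bad"] += int(reoffended)  # how many of those reoffended
    return stats

for race, g in outcome_rates_by_group(records).items():
    print(race,
          "reoffense rate among flagged:", g["flagged_bad"] / max(g["flagged"], 1),
          "reoffense rate among released:", g["released_bad"] / max(g["released"], 1))
```

If those rates line up across races, the predictions held up; if they don't, that is where a bias claim would have teeth.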
Other way around. Blacks break the law about 16 times more often than whites. That’s fact, not bias.
I sort of assumed that the government would pick up the charges for the trackers in general, but that the wearer would pick up the cost of intentional damage.
"based on questions about their employment status, education, and criminal record ... inputs such as age, offense, and prior convictions"
Isn't it obvious that all or almost all of the parameters being fed into the algorithm are likely to have correlations with disadvantaged minorities? So of course any algorithms based on parameters that correlate with race will produce racially-biased results. The only way to avoid that would be to explicitly hard-code racial equality by comparing whites against the white baseline and blacks against the black baseline etc. But then which baseline do you use for mixed race people? How fine-grained do you make the categories?
Even if you take race completely out of the equation here, why is it apparently okay to discriminate against people with less education or based on age or employment status or past criminal record? Basically this algorithm is designed to discriminate against already disadvantaged people. If we're going to say that the utility of making accurate risk assessments warrants discrimination against disadvantaged people, then inevitably any disadvantaged racial minority will be disproportionately discriminated against by the algorithm.
Not to mention, the inclusion of past criminal records as a major part of the criteria guarantees that any existing biases in the system will be perpetuated even by a completely otherwise unbiased algorithm.
This whole thing seems like a fool's errand--attempting to create a discrimination algorithm that doesn't discriminate. This idea that it would somehow be possible to come up with some kind of objective criteria that are fair game for discrimination but won't correlate with anything else that we don't want to discriminate based on seems very unlikely. All of the inputs will necessarily be biased by whatever inequalities currently exist in society. You can't have it both ways. If you want to maximize the utility of making accurate risk assessments, then you have to let the algorithm discriminate. If you don't want the algorithm to discriminate, then you should just roll some dice as that would be the only way to make it truly fair.
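To make the point about correlated inputs concrete, here is a toy Python simulation: race is never fed to the scoring rule, yet the flag rates still differ by group because one of the inputs is generated with a group difference. Every number and the scoring rule itself are invented for illustration; this is not the Kentucky tool.

```python
import random
random.seed(0)

# Race is never an input to risk_score(); the group difference enters only
# through the distribution of prior arrests, standing in for any real-world
# correlation between the inputs and race.
def simulate_person(group):
    weights = [5, 3, 2, 1] if group == "A" else [3, 3, 3, 2]
    return {
        "prior_arrests": random.choices([0, 1, 2, 3], weights=weights)[0],
        "unemployed": random.random() < (0.20 if group == "A" else 0.35),
    }

def risk_score(person):  # no race input anywhere
    return 2 * person["prior_arrests"] + (1 if person["unemployed"] else 0)

for group in ("A", "B"):
    people = [simulate_person(group) for _ in range(10_000)]
    flagged = sum(risk_score(p) >= 4 for p in people) / len(people)
    print("group", group, "share flagged high-risk:", round(flagged, 3))
```

Group B ends up flagged more often even though the score never sees the group label, which is exactly the proxy problem described above.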
Realistically, we need to come up with better options for bail and the conditions for bail. Any system that looks at flight risk using the factors you mentioned will assign a higher risk to disadvantaged minorities, and any system that leaves out those factors won't work. We have the technology to put trackers on people. Maybe that would be a better option than bail in many cases.
Only if the state picks up the cost of those trackers 100%. Otherwise it's exactly the same shit, except now it's a private company, with far more freedom to harass and garnish wages, that is collecting the money instead of the state.
(snip racist trollbait)
It does appear that many are charged $10 per day for monitoring, and some cities/states are passing that along to those on parole. Regardless of who is paying, that is a gross overcharge. Geofenced cell phone ads are pennies a hit, so I can't believe that it would cost that much to monitor a parolee. It appears that someone has a much higher profit margin than I do.
I don't know why you would think that, considering it doesn't happen now. Most counties outsource the admin of trackers to private companies, who absolutely fuck people over when it comes to charges for administering them.
Arrest records aren't very good; convictions are better but still not perfect. What do you think would be a better source?
Not gonna quote the parent and give them satisfaction. However, it should be pointed out that arrest records are not proof of any group breaking the law more than any other. It's only proof that they are arrested more or less often than another. Whether someone is arrested relies on lots of factors, including the size of the police presence in that neighborhood, how aggressive they are at pursuing arrests, and yes, the personal biases of the officers involved.
Arrest, conviction, or victim?
Why not try homicide records? It's not plausible that racial bias would distort these too much.
Convictions are still not good, as they depend heavily on many other factors than whether or not a crime was committed (arrest rates, ability to afford decent legal representation, etc.).
That is why I asked you for a better source.
I'll agree that the conviction rate is flawed, but it is what we have. Even if you believe that the conviction rate is heavily skewed, we need something to determine if we are improving or declining. Without a metric, all we have are random numbers and baseless opinions.
And the answer is likely that there isn't one. That doesn't mean we should use a poor one.
How did the predictions of the algorithms match reality? That question is the only fair test of their accuracy. How many of the white people predicted to be safe to leave at home committed some infraction? It is certainly possible for artificial intelligence techniques to be racially biased. The algorithms are only as good as the data they are trained on. But, it is also possible for there to be a real correlation between the color of a person's skin and some kind of behavioral outcome. The fact that the results of the algorithm had some correlation with race does not prove that they were wrong.
Let's be clear here: what we're talking about is using conviction rates by demographic to prove that those demographics deserve the conviction rates that they get, while also having other information showing that the conviction rates are not fair. That's not a minor flaw. Instead of simply admitting that we do not have a valid metric here, you're arguing that we should use one that we already know isn't valid because we have other information showing that it is not reliable.
No, no and no. Do you purposely go out of your way to misread something and do so in the most obnoxious manner possible?
You're basically saying that we're better off being wrong on purpose than admitting that we don't know. Even for you, that is some next-level nonsense.
I would also like to see if there is a correlation with single parents, absent father/mother, and married.
The article also only looked at race correlation. It would have been interesting to look at income/poverty correlation, and also break that out by urban and rural. Many of these "race" issues can often be correlated with poverty. Urban blacks being disproportionately poor can make a class issue into a race issue for those who want to find racism everywhere. Urban vs. rural is a different matter: in areas where everyone knows everyone, there is more inherent trust.
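That breakdown would be easy to run if the release data were ever published with income attached. A rough Python illustration of the comparison being suggested, with hypothetical records and field names (nothing here comes from the article's data):

```python
# Hypothetical records; the question is whether the racial gap in release
# rates shrinks once you compare people within the same income band.
records = [
    {"race": "white", "income_band": "low",  "released": True},
    {"race": "black", "income_band": "low",  "released": False},
    {"race": "white", "income_band": "high", "released": True},
    {"race": "black", "income_band": "high", "released": True},
]

def release_rate(rows):
    return sum(r["released"] for r in rows) / len(rows) if rows else float("nan")

white = [r for r in records if r["race"] == "white"]
black = [r for r in records if r["race"] == "black"]
print("overall gap:", release_rate(white) - release_rate(black))

for band in ("low", "high"):
    w = [r for r in white if r["income_band"] == band]
    b = [r for r in black if r["income_band"] == band]
    print(band, "income band gap:", release_rate(w) - release_rate(b))
```

The same split could be repeated for urban vs. rural counties.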
As usual, if the data doesn't add up to what you want it to, blame the data... How about maybe putting some actual effort into why certain communities are having a hard time getting out of the recidivism cycle! They deserve better. If it is never your fault that things keep going wrong, you never feel any incentive to try something new.
Are you talking about police behavior now?
https://slate.com/news-and-politics/201 ... -bias.html
It could also be that judges in rural counties are more likely to actually know the person charged and/or have other very good means to assess their flight/re-offend risk, and give them the benefit of the doubt given their special knowledge. Judges in urban areas may not have this special knowledge and so may apply a higher standard of caution when determining bail status.
Algorithms aren't magic. They carry the biases of whoever coded them, good or bad.
With neural networks (which are now being used in risk assessment tools for courts), those biases don't even need to be coded. They're automatically imported (and possibly magnified) from the training data, in an entirely opaque way.
There's no real way to know what the algorithm is doing, since it's basically a black box, and removing the biases can't be easily done. Even detecting them in the large volumes of historical training data is extremely difficult.
Hidden biases are a huge problem when applying neural networks to business automation tasks in general. The idea of them being used for court proceedings should be horrifying for anyone with even a cursory understanding of the technology.
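You can't read the bias out of the weights, but you can still audit the behavior: score a labeled hold-out set and compare error rates across groups. A rough sketch of that kind of audit, with a stand-in "model" and made-up records; nothing here reflects how the actual court tool is built or evaluated:

```python
# Audit a black-box scoring function by comparing false positive / false
# negative rates across groups on labeled hold-out data.
def audit(model, holdout):
    stats = {}
    for person in holdout:
        g = stats.setdefault(person["race"], {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        predicted_high = model(person["features"])
        if person["reoffended"]:
            g["pos"] += 1
            g["fn"] += int(not predicted_high)
        else:
            g["neg"] += 1
            g["fp"] += int(predicted_high)
    return {race: {"false_positive_rate": g["fp"] / g["neg"] if g["neg"] else None,
                   "false_negative_rate": g["fn"] / g["pos"] if g["pos"] else None}
            for race, g in stats.items()}

# Trivial stand-in model and data, for illustration only.
toy_model = lambda features: features["prior_arrests"] >= 2
holdout = [
    {"race": "white", "features": {"prior_arrests": 1}, "reoffended": False},
    {"race": "white", "features": {"prior_arrests": 3}, "reoffended": True},
    {"race": "black", "features": {"prior_arrests": 2}, "reoffended": False},
    {"race": "black", "features": {"prior_arrests": 2}, "reoffended": True},
]
print(audit(toy_model, holdout))
```

An audit like this only catches disparities in the outputs, of course; it says nothing about why the model behaves that way.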
Acknowledging that the above has been an issue for many similar systems. And we don't really know how this one is trained. But if race isn't part of the demographic data used for training, only prior record and age and such, then this is a different problem.
The "judges in rural counties overturning the algorithm's decision more often" bit sticks out to me. It's the good ol' boy network at work. So what's interesting about this article is that it's not necessarily algorithm bias, but the way the judges use it.
Unless race directly correlates with FTA (failure to appear), it has no business being part of the discussion.
Using "race" to judge the results of an algorithm is a flawed premise. Instead, a sampling system should be used, with the sample containing the proper representation of each ethnic group. You then determine whether the algorithm is working properly for those in the sample, correcting the algorithm as necessary.
If the algorithm still appears to be biased after the sampling has proven it correct, you look for outside factors that may correlate with race, then work out how to resolve those items.
One of the issues is that in many cases we are sending these people back into situations and environments that encouraged the criminal activity. Those situations and environments can correlate with race. The algorithm will likely flag those items in determining risk, making the results appear biased.
The algorithm should be adjusted to be correct without racial adjustments, and the conditions of bail adjusted so the recurrence rate is neutral.
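One way to read the sampling proposal above is: draw an evaluation sample whose group shares match the population, then check whether the tool's calls hold up equally well within each group. A rough Python sketch; the population shares, fields, and data are all invented for illustration:

```python
import random
random.seed(1)

population_share = {"white": 0.85, "black": 0.08, "other": 0.07}  # made-up shares

def stratified_sample(pool, shares, n):
    # Build a sample whose group proportions follow the given shares.
    sample = []
    for group, share in shares.items():
        group_pool = [r for r in pool if r["race"] == group]
        sample += random.sample(group_pool, min(len(group_pool), round(share * n)))
    return sample

def accuracy_by_group(sample):
    out = {}
    for r in sample:
        correct, total = out.get(r["race"], (0, 0))
        out[r["race"]] = (correct + int(r["flagged"] == r["reoffended"]), total + 1)
    return {race: correct / total for race, (correct, total) in out.items()}

# Synthetic pool standing in for historical cases with known outcomes.
pool = [{"race": random.choices(list(population_share),
                                weights=list(population_share.values()))[0],
         "flagged": random.random() < 0.3,
         "reoffended": random.random() < 0.25} for _ in range(20_000)]

print(accuracy_by_group(stratified_sample(pool, population_share, 1000)))
```

Whether "working properly for those in the sample" means equal accuracy, equal false positive rates, or something else is exactly the part people argue about.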
Are you sure that whites aren't 16 times more likely to be let go with a warning because they look like good people?
I'd be amazed at how hard you're finding it to grasp this, but given your posting history it's not a surprise.
I stated that it is all we currently have, flawed as it is. The alternative is to use nothing, which gets us exactly nowhere. At least with this metric we can say we are moving forward or backward. Yes, the conviction rates are inflated due to factors not related to the crime in question. But if we make a change, and those rates drop, we know something made them drop. We may only be 75% sure that the drop is related to the specific change, but that is better than not knowing at all whether a change made any difference.
But you are welcome to present a new solution that doesn't involve pulling numbers out of thin air.
At least you are consistent in your intentional misinterpretation of statements. I literally made an argument that they are needed so that we have some form of a metric, something that you yourself stated. So everything else you said is just garbage meant to get a reaction.
Here's a wild fucking thought: how about we eliminate cash bail? It's bullshit. If someone is a sufficiently low risk to the community to let them out of jail for $25k, they're a sufficiently low risk to let walk for $0. And if someone shouldn't be let go, just keep them in jail.
Cash bail is an alternative tax that's insanely unfair, ripe for abuse, and serves no particular purpose.
Instead of just eliminating it, how about replacing it with ankle tracking bracelets? Have the state pick up the cost of tracking; the accused would pick up the cost of any damage to the bracelet. This would serve several purposes: ensure they stay out of areas they shouldn't be in, ensure they return to court, and make it possible to track them down if they don't show. We could even possibly improve the bracelets to make them less noticeable so they are not a scarlet letter.
As I mentioned in a previous comment, your argument ignores the underlying problem that the label is biased. We never actually observe whether or not "individuals commit an infraction"; we observe that as measured by the interaction of (police * society * judges). Take my state of Iowa, for example. Black people and white people use cannabis at approximately equal rates, but black people are arrested for it around eight times more often per capita. That's a pretty horrifically biased measurement mechanism.
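A toy simulation makes that measurement problem concrete. The only number borrowed from the comment above is the rough 8x arrest disparity; everything else is made up:

```python
import random
random.seed(2)

true_use_rate = 0.15                                     # same for both groups
arrest_prob_given_use = {"white": 0.01, "black": 0.08}   # ~8x enforcement gap

for group, p_arrest in arrest_prob_given_use.items():
    users = arrested = 0
    for _ in range(100_000):
        uses = random.random() < true_use_rate
        users += int(uses)
        arrested += int(uses and random.random() < p_arrest)
    print(group, "true use rate:", round(users / 100_000, 3),
          "measured arrest rate:", round(arrested / 100_000, 4))

# A model trained with "arrested" as its label learns the enforcement gap,
# not the underlying behavior.
```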
What you say is true, BUT, your observation hinges on the assumption that one of the inputs (labels) to a neural network is race. If a neural network is fed data and has no input for race, the neural network can't bias/weight its inputs or middle layers against that race input based on the arrest data. All it can do is bias/weight its inputs against each other.
It could very well be that there are more arrests based on jurisdiction (say, county or city), that the county may have higher arrest rates for cannabis, and that the base population has a higher minority count. That MAY be a sign of race bias, or it may be a sign that the officers in that county simply prosecute more cannabis cases across the board. Mix that across other counties, and it can look like minorities are getting the shaft. This goes back to correlation vs. causation.
Like the debate over equal pay: is the overall average discrepancy because of a systemic bias against women in the workplace, or is the bulk of the pay discrepancy due to the virtual non-representation of women, by choice (as seen in the choices made when going to school), in high-paying professions like the trades (plumbing, roofing, electrical, tool and die, welding, construction, etc.)? In the case of Iowa, is it because of racism, or do we have counties in Iowa with poor populations, a high percentage of minorities, and strained police budgets, so that to survive financially the police arrest more often to generate fines and justify their manpower levels? Basically, is it race, or money? We don't know enough about the nature of this "algorithm" and what it uses as actual data inputs to develop a model.
One general comment: in order for a neural network to even be useful, it must have a series of biases/weights across the various inputs. If neural networks are thought to basically mimic what goes on in a human mind, then it may be impossible to have something that is "intelligent" without bias. If that's the case, we are wasting time, resources, and focus with these systems. Basically, if there is a neural network involved, human or otherwise, there will always be bias, whether direct or indirect.
Why come read news at Ars if all they do is post articles from other news sources?
Complaining about the author of an article that you didn't read is a pretty stupid thing to do.
So even if that training data has zero information related to race, sex, gender, etc, it's still automatically biased?
It could be; there can be other confounding factors as well.
If blacks are charged more often for the same behavior, charged with more severe crimes for the same behavior, and/or convicted more often on the same charges, then any algorithm that decides based upon even theoretically neutral attributes like charge severity and conviction rate could well become biased.
You don't need explicit attribute data to produce results that measure as biased when reviewed with that attribute included.
Yes.
Even if that information isn't specifically included, it might be statistically related to other information that is (address, employment status, type of crime allegedly committed, etc.)
When I was an undergrad, I did volunteer work at a forensics lab, mostly testing for marijuana. I'm pretty sure a significant number of the cases with negative results were "walking while black" charges. As marijuana becomes legalized for medical use (and possibly recreational use) in more areas, I suspect different charges will be filed against the same victims, and as a consequence we'll start seeing studies indicating a link between legalized marijuana use and increases in jaywalking, driving without insurance, driving without a seatbelt, etc. Such a link may or may not actually exist, but we'll certainly see it in the statistics.
I don't understand why the author is presuming that the algorithms are flawed because the outcome was not the same across race. There are a number of issues with this.
It presumes that the status quo regarding no-bail releases before the implementation of the algorithm was fair, or at least more fair. It could be the case, for example, that previously judges who were concerned about being perceived as racist were being harsher on white people accused of crimes when it came to granting no-bail releases. It could also mean what the article itself pointed out: that judges were harsher on more affluent people, and maybe white people are more affluent and thus benefited more from the change. I am not saying that either of these scenarios was the case, but the idea that a different percentage change in results for different races proves that the system put in place is biased is just silly.
And this happens because, instead of reporting the news, the article is full of bias from the author. "What went wrong?" is a premise, not a conclusion as it should be.