How did the predictions of the algorithms match reality? That question is the only fair test of their accuracy. How many of the white people predicted to be safe to leave at home committed some infraction? It is certainly possible for artificial intelligence techniques to be racially biased; the algorithms are only as good as the data they are trained on. But it is also possible for there to be a real correlation between the color of a person's skin and some kind of behavioral outcome. The fact that the algorithm's results correlated with race does not, by itself, prove that they were wrong.
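The comparison proposed above can be sketched in code: for each group, count how many people the algorithm predicted to be safe, and what fraction of them nonetheless committed an infraction. The records below are entirely hypothetical, invented only to illustrate the calculation; the group labels, field names, and numbers are assumptions, not real data.

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_safe, committed_infraction).
# These values are made up for illustration only.
records = [
    ("A", True, False),
    ("A", True, True),
    ("A", False, True),
    ("B", True, False),
    ("B", True, False),
    ("B", False, False),
]

def miss_rates_by_group(records):
    """For each group, compute the fraction of people predicted safe
    who nonetheless committed an infraction (the miss rate)."""
    predicted_safe = defaultdict(int)
    missed = defaultdict(int)
    for group, safe, infraction in records:
        if safe:
            predicted_safe[group] += 1
            if infraction:
                missed[group] += 1
    return {g: missed[g] / predicted_safe[g] for g in predicted_safe}

print(miss_rates_by_group(records))
# → {'A': 0.5, 'B': 0.0}
```

If the miss rates are similar across groups, the predictions matched reality equally well for each group even if the predictions themselves correlated with group membership; if they differ sharply, that is evidence the algorithm erred differently by group.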