Some U.S. jurisdictions are using advanced formulas to evaluate the risks that defendants pose. Judges weigh these evaluations when considering pretrial release.
By Micah Schwartzbach, Attorney
“Will this person break the law or skip town?” This question is at the forefront of a judge’s mind when evaluating bail for a defendant awaiting trial.
Judges must make decisions about whether to release defendants, and under what terms. A judge can consider a report prepared by a bail investigator or similar government employee who has looked into the defendant’s standing in the community. The judge can listen to what people have to say about the defendant. And, of course, the court will look at the current charges and the rap sheet. But some would argue that—even with all this information—decisions about pretrial release are fundamentally subjective.
Numbers Game
That’s where bail algorithms—formulas that use statistics in order to assess risk—come into play. Here’s the basic idea: The algorithm takes selected information about the defendant and produces an objective, scientifically based assessment. Before making a bail decision, the judge gets to consider the algorithm’s result.
A bail algorithm will typically consider several factors, then produce some kind of score or recommendation for or against release. One bail tool, for example, gives separate scores for the risk of breaking the law and the risk of failing to appear in court, and it also flags a heightened chance of violence. (A simplified sketch of this kind of scoring appears after the list below.)
The relevant factors in bail algorithms tend to include the defendant’s:
- age
- current charges
- criminal history, and
- record of failing to appear.
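
To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of how a tool might combine factors like these into separate scores and a recommendation. The factor names, point values, and cutoffs below are invented for illustration; actual tools rely on their own validated weightings, which aren't reproduced here.

```python
from dataclasses import dataclass


@dataclass
class Defendant:
    """Inputs loosely mirroring the factors listed above (all fields hypothetical)."""
    age: int
    current_charge_violent: bool   # whether the pending charge is for a violent offense
    prior_convictions: int         # from the criminal history
    prior_failures_to_appear: int  # record of missed court dates


def score_defendant(d: Defendant) -> dict:
    """Return illustrative scores for new criminal activity and failure to appear,
    plus a violence flag. The weights and cutoffs here are made up."""
    # Risk of new criminal activity: driven here by criminal history and youth.
    nca = min(6, d.prior_convictions) + (1 if d.age < 23 else 0)

    # Risk of failure to appear: driven here by past missed court dates.
    fta = min(6, 2 * d.prior_failures_to_appear)

    # Separate flag for an elevated chance of violence, as some tools provide.
    violence_flag = d.current_charge_violent and d.prior_convictions > 0

    # Translate the scores into a recommendation for the judge to weigh.
    recommendation = "release" if max(nca, fta) <= 2 and not violence_flag else "review"
    return {
        "new_criminal_activity": nca,
        "failure_to_appear": fta,
        "violence_flag": violence_flag,
        "recommendation": recommendation,
    }


if __name__ == "__main__":
    example = Defendant(age=30, current_charge_violent=False,
                        prior_convictions=1, prior_failures_to_appear=0)
    print(score_defendant(example))
```

The takeaway from the sketch is simply the shape of the output: a set of scores and a recommendation, which the judge remains free to accept or reject.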
Algorithm Experiences
Bail algorithms are a response to perceptions of arbitrariness and unfairness in decisions about pretrial release. Critics of traditional methods of evaluating bail argue that they lead to needlessly overcrowded jails and disproportionate outcomes for poor people and minorities.
In 2016, San Francisco became one of a growing number—but still a small minority—of U.S. jurisdictions to implement a bail-algorithm system. Reception in the Bay Area city appeared to be mixed. One complaint was that prosecutors would often ask for high bail even when the algorithm didn’t call for it. Another was that judges would disregard the algorithm’s recommendation.
Bail algorithms have also been criticized for the factors they don’t consider; some don’t weigh employment status or substance abuse history. The counterargument is that the data shows drug abuse, for instance, isn’t nearly the indicator of risk that people have long thought it is.
Another criticism is that algorithms focus on the label attached to the charged offense rather than the underlying facts. In one San Francisco case, for example, the judge determined that a man who allegedly shoplifted and knocked down security guards while trying to run away wasn’t your conventional “robbery” defendant. The potential rebuttal to this complaint, of course, is that judges are free to disagree with the computer.
A core contention is whether bail algorithms are susceptible to racial bias. Some say the math is colorblind. Others argue that racial inequality in arrest patterns means that bail algorithms, which weigh criminal history, discriminate by race.
Despite the controversy, plenty have embraced the concept of data-driven decisions about pretrial release. And many in the criminal justice system have said they want to reform bail algorithm systems to make them better—and more prevalent.