A pair of policy briefs by the Center for Justice Innovation offers recommendations for when AI could be used, and when it should not be, in the criminal legal space. They call for leaders in the field to deploy AI responsibly — to foreground values and commit to mitigating harm. Publication of the briefs preceded the Center’s announcement of the AI and Justice Consortium, which they describe as convening justice practitioners, researchers, tech companies and communities to discuss and develop AI infrastructure in criminal justice. The reports’ focus on prioritizing values and assessing risk is germane to court ADR.
Leading with Values and Lessons Learned
The first brief urges criminal justice leaders to learn from previous efforts to deploy algorithm-based technologies. The authors offer three main recommendations: 1) Prioritize values over technology; 2) Stay active, curious and informed about technology; and 3) Resist the siren song of efficiency long enough to weigh unintended consequences.
Risk assessment systems and electronic monitoring are two examples to learn from. The authors note that early legal adopters positioned these technologies as “evidence-based” and “neutral” tools that could effectively predict recidivism. However, the field did not sufficiently establish a values framework to evaluate and constrain the use of these technologies prior to deployment — ignoring those who warned that risk scores would encode existing biases. The warnings were prescient. These technologies perpetuated race- and class-based inequities and harms present in the criminal justice system; efforts to address these issues came too late and are still catching up.
The authors thus call for a front-end commitment to values such as transparency and the well-being of communities. Through a sustained effort to understand why and how predictive technologies such as AI can and will fail, practitioners can make informed, proactive decisions to mitigate harm.
We need to proceed with caution, the authors contend, to determine what purposes AI tools might serve, for whom and for what intended outcomes. The report outlines potentially positive uses of AI tools, such as surfacing racial biases in charging and sentencing patterns and expediting reviews of applicant case files to potentially get people out of prison sooner. The authors describe these uses as having no direct negative impact on people’s liberties.
Establishing Constraints and Safeguards
The second brief discusses the practical application of the first brief’s values recommendations; it includes recommendations to constrain the use of AI in high-risk scenarios and establish safeguards for its use in low-risk scenarios. This set of recommendations synthesizes the insights of criminal legal and technology leaders who attended a working session at the Center. Most notably, the authors call for a moratorium on AI use in high-stakes contexts to “allow for a thorough assessment of AI’s impact on liberty and safety, and for a proper consideration of whether it should be deployed at all in certain higher-stakes contexts.”
The authors distinguish high-risk scenarios, in which AI should not be used, from low-risk scenarios, in which AI tools could be used with human oversight and decision-making. High-risk scenarios involve the potential for significant harm: spaces where people are intensely vulnerable (such as jails and prisons) and decisions such as detention versus release. In contrast, examples of lower-risk scenarios to which AI tools are potentially better suited include helping case managers disseminate community resources and housing services, summarizing case notes, or analyzing case patterns to match services to client needs. The authors also identify an opportunity for court staff and researchers to use AI to identify disparities in court policies and programs.
The authors call for mandatory comprehensive evaluations before any deployment of AI in justice settings. Conducting trial runs with representative, real-world data sets is the minimum needed to anticipate real-world consequences and identify biases. Lastly, the authors call for a greater push for standards to “put the brakes on hasty experimentation” and safeguard AI implementation in criminal justice. Taken together, the reports urge criminal justice leaders to carefully consider what purposes they use AI for, when to draw firm lines to mitigate harm, and how to be guided by the needs and values of people in the justice system.
The value of these briefs to the ADR community is the emphasis on the risks that seemingly neutral AI uses might entail and the call for real-world evaluation of potential consequences and biases prior to deployment. To ensure that AI is adopted in ADR responsibly, we need to check that guardrails are in place and that the risks to parties are not ignored.