
Just Court ADR

The blog of Resolution Systems Institute

Archive for the ‘Ethics’ Category

How Should AI Be Used in Criminal Justice?

Stephen Sullivan, February 25th, 2026

A pair of policy briefs from the Center for Justice Innovation offers recommendations on where AI could be used in the criminal legal space and where it should not be. The briefs call for leaders in the field to deploy AI responsibly, foregrounding values and committing to mitigating harm. Publication of the briefs preceded the Center’s announcement of the AI and Justice Consortium, which it describes as convening justice practitioners, researchers, tech companies and communities to discuss and develop AI infrastructure in criminal justice. The briefs’ focus on prioritizing values and assessing risk is germane to court ADR.

Leading with Values and Lessons Learned


The first brief urges criminal justice leaders to learn from previous efforts to deploy algorithmic-based technologies. The authors offer three main recommendations: 1) Prioritize values over technology; 2) Stay active, curious and informed about technology; and 3) Resist the siren song of efficiency long enough to weigh unintended consequences.

Risk assessment systems and electronic monitoring are two examples to learn from. The authors note that early legal adopters positioned these technologies as “evidence-based” and “neutral” tools that could effectively predict recidivism. However, the field did not sufficiently establish a values framework to evaluate and constrain the use of these technologies prior to deployment, ignoring those who warned that risk scores would embed existing biases. The warnings were prescient. These technologies perpetuated race- and class-based inequities and harms already present in the criminal justice system; efforts to address those harms came too late and are still catching up.

The authors thus call for a front-end commitment to values such as transparency and the well-being of communities. Through a sustained effort to understand why and how predictive technologies such as AI can and will fail, practitioners can make informed, proactive decisions to mitigate harm.

We need to proceed with caution, the authors contend, to determine what purposes AI tools might serve, for whom and toward what intended outcomes. The report outlines potentially positive uses of AI tools, such as surfacing racial biases in charging and sentencing patterns and expediting reviews of applicant case files so that people might get out of prison sooner. The authors describe these uses as having no direct negative impact on people’s liberties.

Establishing Constraints and Safeguards  

The second brief discusses the practical application of the first brief’s values recommendations; it includes recommendations to constrain the use of AI in high-risk scenarios and establish safeguards for its use in low-risk scenarios. This set of recommendations synthesizes the insights of criminal legal and technology leaders who attended a working session at the Center. Most notably, the authors call for a moratorium on AI use in high-stakes contexts to “allow for a thorough assessment of AI’s impact on liberty and safety, and for a proper consideration of whether it should be deployed at all in certain higher-stakes contexts.”

The authors distinguish high-risk scenarios, in which AI should not be used, from low-risk scenarios, in which AI tools could be used with human oversight and decision-making. High-risk scenarios involve the potential for significant harm: spaces where people are intensely vulnerable (such as jails and prisons) and decisions such as detention versus release. In contrast, lower-risk scenarios to which AI tools are potentially better suited include helping case managers disseminate community resources and housing services, summarizing case notes, and analyzing case patterns to match services to client needs. The authors also identify an opportunity for court staff and researchers to use AI to identify disparities in court policies and programs.

The authors call for mandatory comprehensive evaluations before any deployment of AI in justice settings. Conducting trial runs with representative, real-world data sets is the minimum needed to anticipate real-world consequences and identify biases. Lastly, the authors call for a greater push for standards to “put the brakes on hasty experimentation” and safeguard AI implementation in criminal justice. Taken together, the reports urge criminal justice leaders to carefully consider what purposes they use AI for, when to draw firm lines to mitigate harm, and how to be guided by the needs and values of people in the justice system.

The value of these briefs for the ADR community lies in their emphasis on the risks that seemingly neutral AI uses might entail and in their call for real-world evaluation of potential consequences and biases prior to deployment. To ensure that AI is adopted responsibly in ADR, we need to check that guardrails are in place and that the risks to parties are not ignored.

Should There Be an Ethical Obligation for Mediators to Support Transparency?

Jennifer Shack, March 22nd, 2019

I’m doing something different this month. Instead of summarizing empirical research or an evaluation, I’m discussing an article that argues mediators should be more transparent about the mediations they conduct and that calls for a new standard for compulsory mediation, that is, mediation mandated by a court or required by a contract of adhesion (e.g., a consumer or employment contract). The article is the start of a conversation, with many questions still to be addressed, such as what exactly constitutes measured transparency and how confidentiality and transparency can be balanced.

In her article, “Dispute Resolution Neutrals’ Obligation to Support Measured Transparency” (Oklahoma Law Review, Vol. 71, No. 3, 2019), Nancy Welsh argues that transparency about the use and outcomes of dispute resolution processes is needed in order to protect the public and the integrity of those processes. Further, according to Welsh, the neutrals themselves have an ethical responsibility to support that transparency. This is particularly true when parties have no choice, or only a limited choice, but to participate in these processes.

Since Welsh focuses on mediators in her article, I will as well. First, though, let’s talk about what Welsh means by transparency. Although she doesn’t define the term precisely, it appears from her examples that transparency is the provision of enough information about the use and outcomes of a process that the public can have confidence in that process and parties can make informed decisions. The information should also allow for empirical research and systematic analysis, which can point to best practices and enlighten the public as to the effectiveness of the process.

To illustrate what this information might be, she points to the data released by the Nevada Supreme Court regarding lenders’ compliance with the foreclosure mediation program’s statutory requirements. She also highlights the opportunity that the RSI/ABA Model Mediation Surveys present for gathering standardized participant feedback and mediator reporting.

Noting that courts and arbitration organizations publish more information about the cases they hear than is generally available for mediation, Welsh points to reasons mediators should be more transparent. First, as with arbitration, parties are often compelled to mediate, whether through mandatory mediation in the courts or through contracts of adhesion that include a mediation requirement. When processes are imposed upon parties, there is a greater responsibility to ensure that the processes are fair and effective, particularly when there is limited judicial review of the outcome, as with mediated settlements. Transparency, according to Welsh, helps to ensure this: more information about mediations can help to equalize the knowledge of one-shot users and repeat players, allow for public oversight, and make it less likely that mediators will engage in unethical behavior. It would, therefore, give potential users greater confidence in the usefulness and integrity of the process.

For these reasons, Welsh argues that a new set of standards for compelled mediations is the best option. Because these mediations are the ones that are most in need of transparency, a set of standards specific to them is warranted. As Welsh notes, a customized standard “would acknowledge that mediation occurring pursuant to mandates by courts, legislatures, or contracts of adhesion is different, and that its circumstances require a heightened level of public accountability.”

This article highlights a trend that is coming to the fore in other areas of dispute resolution. As dispute resolution processes, in particular arbitration, have become not only more routine but also more often required, calls for – and requirements for – transparency have followed. Welsh notes that confidentiality has become the hallmark of mediation. For the sake of self-determination and process integrity, she argues that the veil of confidentiality needs to be pierced, in a measured way, to make more information available to users, researchers and courts.

Big News in Court ADR — A Look Back at 2014

Just Court ADR, December 18th, 2014

Our monthly e-newsletter Court ADR Connection has updates on RSI’s activities, cutting-edge ADR research, and the latest court ADR news from across the country. As we wind down 2014, I thought it might be fun to take a look at a few of the most significant news stories we reported on this year.

Detroit Bankruptcy Mediated in “Grand Bargain”

The most-watched court ADR news story of 2014 may have been the mediated settlement that resolved the City of Detroit’s municipal bankruptcy. Without doubt, this riveting drama of competing interests coming together to form a “Grand Bargain” will be studied and discussed for years to come. We reported on facets of this story several times, both here on our blog and in our newsletter.

Grievance Procedures and Mediation Policy Goals

Jennifer Shack, August 6th, 2014

Here’s something I wrote for RSI’s e-newsletter this month that I thought would interest our blog readers as well:

Parties to court mediation in Florida have the opportunity to submit complaints about a mediator to a robust grievance process. The process includes four stages: committee review to determine whether a complaint is facially sufficient; a preliminary review of the rules that may have been violated and the mediator’s response to the complaint, which together are used to determine probable cause; a meeting between the mediator and the complainant; and a formal hearing. In “Mediator Ethical Breaches: Implications for Public Policy” (Penn State Yearbook on Arbitration and Mediation, Vol. 6, p. 107 (2014)), Sharon Press examines this grievance process and finds that the burden of proof required at the formal hearing stage has the potential to undermine the policy goals of mediation programs.
