
Just Court ADR

The blog of Resolution Systems Institute

Archive for the ‘Research’ Category

How Well Can an AI Facilitator Recognize Emotions During Dispute Resolution?

Jennifer Shack, December 1st, 2025

Recent research suggests that large language models (LLMs) acting as facilitators in text-based dispute resolution can be trained to accurately identify human emotions and to intervene to change the trajectory of a dispute when emotions might otherwise lead to an impasse.

The authors[1] of the August 2025 paper “Emotionally-Aware Agents for Dispute Resolution” recruited students to act as disputants regarding the sale of a basketball jersey. The final dataset included 2,025 disputes, with an average of 10.7 messages per dispute.

To allow for comparison with prior research, the researchers initially categorized emotions as others had done to assess LLM capacity to identify emotions in negotiations.[2] The emotions tracked were joy, sadness, fear, love, anger and surprise. The study also used a self-reported frustration scale as a “ground truth” benchmark against which to compare the LLMs’ identification of emotions. The dispute participants rated their own level of frustration during the exchange, as well as their perception of the other party’s level of frustration.

To set a baseline against prior emotion models, the researchers first ran the disputants’ text exchanges through T5-Twitter, a large fine-tuned model adapted for recognizing emotions.[3] They found that T5-Twitter (T5) failed to recognize anger in conversations that participants had reported as frustrating. The researchers hypothesized that this was because T5 classified each dialogue turn in isolation, rather than within the context of the entire interaction. They also noted a concern that the emotion categories they had adopted from negotiation research were more relevant to negotiation than to dispute resolution.
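For readers curious what this kind of per-turn baseline looks like in practice, here is a minimal sketch of classifying each message in isolation with a fine-tuned seq2seq emotion model, assuming a Hugging Face-style checkpoint. The model path, the example dialogue and the prompt format are placeholders for illustration, not the researchers’ actual setup.

```python
from transformers import pipeline

# Placeholder checkpoint path; the paper's T5-Twitter weights are not identified here.
emotion = pipeline("text2text-generation", model="path/to/t5-emotion-checkpoint")

dialogue = [
    "You said the jersey was authentic, but the tag looks fake.",
    "I listed it exactly as I received it, so that's on you.",
]

# Each turn is labeled in isolation -- no memory of earlier turns -- which is
# the limitation the researchers identify in the T5 baseline.
for turn in dialogue:
    label = emotion(f"emotion: {turn}")[0]["generated_text"]
    print(f"{label:>10}  {turn}")
```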

Testing Other LLMs

The next phase of the study was to test the researchers’ hypothesis that general LLMs could identify emotions better than T5 had. The researchers prompted a variety of LLMs to analyze the same dialogues, using different prompting strategies and a slightly modified set of emotion categories. They changed the emotion “love” to “compassion” and added a “neutral” category so the LLMs were not forced to choose an emotion when none was apparent. They also prompted the LLMs to consider each dialogue turn within the context of previous turns. Finally, they enabled in-context learning by including in the prompt several sample dialogue turns with hand-annotated emotions.
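As an illustration of this prompting strategy, the sketch below labels one turn using the full prior context and a few hand-annotated examples in the prompt. The label set, example turns and exact wording are assumptions for illustration and are not the paper’s prompts.

```python
from openai import OpenAI

client = OpenAI()

# Modified label set described above: "love" replaced with "compassion",
# plus a "neutral" option so the model is not forced to pick an emotion.
LABELS = "joy, sadness, fear, compassion, anger, surprise, neutral"

# Hand-annotated sample turns for in-context learning (invented for illustration).
FEW_SHOT = (
    'Turn: "I already told you the jersey is authentic!" -> anger\n'
    'Turn: "I understand why you are upset; let\'s sort this out." -> compassion\n'
    'Turn: "The package arrived on Tuesday." -> neutral\n'
)

def label_turn(prior_turns, current_turn):
    """Classify the current turn in the context of the conversation so far."""
    context = "\n".join(prior_turns) if prior_turns else "(start of conversation)"
    prompt = (
        "You label emotions in a dispute over the sale of a basketball jersey.\n"
        f"Possible labels: {LABELS}.\n\n"
        f"Examples:\n{FEW_SHOT}\n"
        f"Conversation so far:\n{context}\n\n"
        f"Label this turn with exactly one emotion:\n{current_turn}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```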

Again comparing self-reported frustration with each LLM’s classification of emotions, the researchers found that GPT-4o outperformed T5 (as well as other LLMs). T5 skewed toward annotating utterances as joy or anger, while GPT-4o was more diverse in its assessments and used “neutral” as a dampener by not assigning emotions to unemotional statements. GPT-4o also recognized compassion where T5 did not recognize love.

The researchers then used multiple linear regression[4] to predict participants’ subjective feelings about the result of the dispute resolution effort (as measured by the Subjective Value Inventory) based upon the emotions that T5 and GPT-4o assigned to each dialogue turn. They found that GPT-4o provided the biggest improvement in predicting participants’ feelings about the result, even when accounting for changes in prompts to T5. They also found that buyers were more straightforward to predict than sellers.
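To make the regression step concrete, here is a minimal sketch of regressing a subjective outcome score on per-emotion counts. The feature names and numbers are invented for illustration and are not the study’s data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical table: one row per dispute, with counts of the emotion labels
# assigned across the dialogue and the participant's Subjective Value
# Inventory (SVI) score as the outcome. All values are invented.
df = pd.DataFrame({
    "anger":      [4, 1, 0, 2, 3],
    "compassion": [0, 3, 2, 1, 0],
    "joy":        [1, 2, 3, 0, 1],
    "neutral":    [5, 4, 6, 7, 5],
    "svi":        [2.1, 5.4, 4.8, 3.0, 2.6],
})

# Multiple linear regression: several emotion-count predictors, one outcome.
X = sm.add_constant(df[["anger", "compassion", "joy", "neutral"]])
fit = sm.OLS(df["svi"], X).fit()
print(fit.summary())
```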

Preventing Impasse

The researchers then examined whether GPT-4o could determine when to intervene to de-escalate anger before it led to impasse. This would require GPT-4o to identify a pattern of escalation. GPT-4o’s automatic emotion identification showed that when sellers in these dialogues responded to buyers’ anger with anger, the anger spiraled and impasse resulted. The researchers found a parallel pattern with compassion: when sellers began with compassion, buyers responded with compassion, and the dialogue more often ended in agreement.
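A toy heuristic, not the paper’s method, illustrates the kind of pattern such an intervention trigger could watch for: consecutive angry turns from opposite parties.

```python
def should_intervene(labeled_turns):
    """labeled_turns: list of (speaker, emotion) tuples in dialogue order.

    Flags the dispute for de-escalation when two consecutive turns from
    different parties are both labeled "anger" -- the spiral described above.
    """
    for (s1, e1), (s2, e2) in zip(labeled_turns, labeled_turns[1:]):
        if e1 == "anger" and e2 == "anger" and s1 != s2:
            return True
    return False

turns = [("buyer", "anger"), ("seller", "anger"), ("buyer", "anger")]
print(should_intervene(turns))  # True -> a facilitator agent could step in here
```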

In sum, the researchers demonstrated that properly prompted LLMs with in-context learning can accurately assign emotions to text. Additionally, they found that they could predict subjective dispute outcomes from emotional expressions alone, without knowing the actual content of a conversation.

Researchers can use GPT-4o’s emotion assignments to reveal how emotions shape disputes over time: anger spirals, but so does compassion when it appears early in the dispute. This indicates that LLMs can be trained to know when to intervene in order to change the trajectory of a dispute. Future work will look at how they can do this.


[1] The authors are Sushrita Rakshit, James Hale, Kushal Chawla, Jeanne M. Brett and Jonathan Gratch.

[2] They characterize negotiations as a coming together to create a new relationship (e.g., car salesman and customer), while disputes involve an existing relationship that has gone badly.

[3] I’m extrapolating here, based on the context and what I could find about fine-tuned LLMs.

[4] Multiple linear regression uses several independent variables to predict a specific outcome.

New Resources Help Address Barriers to Diversifying Organization’s Mediator Roster

Stephen Sullivan, October 24th, 2025

RSI recently completed our evaluation of an equity audit implementation project by the Center for Conflict Resolution (CCR) in Chicago. CCR staff, board members and volunteers worked with a consulting partner to uncover barriers preventing their volunteer mediator roster from fully reflecting the diversity of the communities CCR serves. After identifying barriers, they made major changes to how CCR recruits and screens applicants to its Mediator Mentorship Program (MMP), which onboards mediators to volunteer at CCR. RSI evaluated the efficacy of CCR’s implementation and examined initial outcomes of the revamped process. 

RSI Researcher Stephen Sullivan will join CCR Volunteer Director Israel Putnam and former CCR Executive Director Cassie Lively to discuss this research at a 9 a.m. session Nov. 6 at the Association for Conflict Resolution conference in Philadelphia. Get conference details and register here.

We are excited to share that our evaluation report, Fostering Equity in a Volunteer Mediator Roster: An Evaluation of the Center for Conflict Resolution’s Equity Audit Implementation, is now available on RSI’s website. The report includes our findings from surveying, interviewing and observing staff, board members and volunteers who participated in the project and facilitated CCR’s new applicant screening and recruiting processes. 

In addition to the evaluation, we created a guide for community mediation centers to help them learn from CCR’s efforts. In A Guide for Enhancing Mediator Roster Equity from Concept to Implementation, we document the strategies CCR staff, board members and volunteers used to address barriers to equity in the MMP. We describe which approaches were most effective and which were less effective, and we provide recommendations for staff at other community mediation centers (CMCs).

A Guide for Community Mediation Centers

The guide contains step-by-step instructions to help mediation centers adapt CCR’s approaches to addressing barriers that could keep people from a variety of backgrounds from applying and participating fully as CMC mediators. It advises CMCs on how to build alignment among staff and volunteers on a set of equity-related goals; retool application materials to collect more accurate and relevant information about applicants to their programs; and create more effective screening processes to assess applicants’ mediation-related skill sets. 

CCR staff found that their experience with the equity audit and its implementation challenged previously held assumptions about how to best enhance diversity. For example, did you know that using predominantly written application materials might hamper efforts at diversifying mediator rosters? Or that activity-based group interviews might provide more relevant and useful information about applicants’ capacities to be successful mediators than traditional one-on-one interviews? 

In the guide, we explain what CCR staff learned about these issues and describe the creative solutions they devised to address them. One major solution is the Matching Event, CCR’s innovative new format for screening applicants to the MMP.

During a Matching Event, applicants participate in a series of stations involving activities designed to assess specific skills, such as being empathetic and being comfortable with conflict. Stations are facilitated by two CCR “Station Runners” (staff or volunteer mentors), with activities that range from describing the emotions of characters in a movie clip to role playing as parties in conflict. Station Runners use CCR’s newly crafted Matching Event Scorecard to rate the extent to which applicants meet these criteria.

CCR generously permitted RSI to include its Matching Event materials in the guide, so that others can understand how they work in greater detail. We also wrote step-by-step instructions to help CMCs craft their own Matching Events, should that fit their applicant assessment needs. 

Takeaways for CMCs 

RSI had two overarching aims with the evaluation: The first was to assess the successes and challenges involved with the process of implementing the audit recommendations; the second, to evaluate the effectiveness and results of implementation activities, such as staff training sessions and the Matching Events. While the evaluation’s findings and recommendations are geared toward CCR, they have broader implications for other CMCs interested in doing similar work. 

Below is a set of key takeaways for CMCs interested in making the role of community mediator accessible to more of the people with the skills to participate. These takeaways are based on what we learned from conducting the evaluation as well as working with CCR staff, board members and volunteers to create the guide.

A successful audit and implementation project requires collaboration, time and consistent communication. CCR staff, board members and volunteers needed plenty of time to review and reflect on the findings of the audit before they could take action. Collaboration helped to make the process more effective; by bringing different stakeholders together during workshops and meetings, CCR was able to build buy-in and ensure different aspects of the program were addressed. Staff and volunteers also benefited most when they were updated on the project’s progress. 

Meaningful change requires an open mind and flexibility. CCR leadership gave staff and volunteers wide latitude to make changes to program processes. As a result, staff and volunteers felt empowered to address barriers creatively and maintained investment in the project. Many of the barriers were long-standing mindsets and processes; permission to make major changes was critical to the project’s success.

Making processes more flexible does not reduce program rigor. One of the most noteworthy lessons for CCR was that a one-size-fits-all approach to participation in the MMP is not a prerequisite for maintaining quality program standards. By introducing flexibility to MMP processes and expanding outreach, CCR was able to create opportunities for volunteer mediators from diverse backgrounds to contribute to the organization while keeping rigorous requirements in place.

Enhancing pathways to program participation is an ongoing dialogue and process. From the outset, CCR recognized that any changes made to the MMP as a result of this project would need to be revisited as their outcomes became clear. Building broader access to the program is a process; CCR has planned time for staff and volunteers to further reflect and make changes as needed. 

Tools Help Courts Explain ODR to the Public

Stephen Sullivan, May 12th, 2025

RSI has completed the second phase of the ODR Party Engagement (OPEN) Project! We are thrilled to share that our new communication tools to help courts educate self-represented litigants (SRLs) about ODR more effectively are now available. The tools can be accessed on our OPEN Project website.

The tools include RSI’s Model Notice to Defendant of Mandatory ODR, our Model ODR Explainer Video, and desktop and mobile website prototypes that contain our ODR Home Page, our Model ODR Self-Help Guide for Defendants, and our Model Account Registration Webpages.

Our Toolkit for Making ODR Make Sense to the Public provides step-by-step instructions for adapting our models or designing each model type based on our focus group and usability testing research.

OPEN Launch Party Recording
Learn all about RSI’s newest tools for improving court communications in this recording of our OPEN Launch Party webinar.

Designing New Court Communication Models

We partnered with an inclusive designer and an accessibility evaluator to ensure the models were easy to use and understand, and accessible to individuals with disabilities. We structured the models around a simple workflow that provides a clear path for parties to follow to learn about and prepare for ODR. Importantly, we also scaffolded information about ODR across the models — we designed them to gradually introduce details about how ODR works, so parties do not feel overwhelmed.

To obtain feedback on the models from individuals similar to those most likely to use them, we conducted usability tests across the U.S. with a diverse set of participants whose backgrounds resembled those of SRLs with low literacy and low digital literacy. The final models reflect this collaborative approach among RSI, our design partner, an accessibility expert and 20 real users.

Usability Testing our Models

Overall, usability test participants found RSI’s OPEN Communication Models to be visually engaging, intuitive to navigate and, importantly, easy to read and understand. We asked participants to rate each of the models for how easy they were to understand; the final versions of the models received an average 4.8/5 rating.

Below are key findings from usability testing:

  • A mobile-first design is essential
    Overwhelmingly, our usability testers shared that they primarily access the internet using their smartphones. It is critical to create materials that are not just mobile-friendly but mobile-first in their design. This finding was further supported by participants’ enthusiasm for mobile-first features, such as the inclusion of a QR code on the Notice to simplify navigation to the website.
  • Testers’ confidence grew
    We found that as participants successfully navigated each model, their expressed confidence, understanding of ODR and sense of ease grew. Participants also demonstrated an interest in learning more about ODR, suggesting that our approach of scaffolding information was effective at boosting participants’ engagement with the process.
  • Data privacy and security are top of mind
    Usability test participants responded very positively to our dedicated data privacy and confidentiality section on the model ODR Home Page. Providing concise and specific information about how ODR platforms address data privacy concerns can help alleviate users’ anxieties over these issues, even for those who are most hesitant about using the internet.
  • Simple materials enhance excitement for ODR
    Most of our usability test participants did not have any prior knowledge about ODR and were learning about it for the first time. After going through our materials, testers were not only able to accurately answer our questions about how ODR works, but also expressed their excitement for the prospect of ODR being available in their communities.

Recommendations to Courts

Feedback from our usability testers demonstrates that simple, easy-to-understand communication materials can positively impact parties’ understanding of and interest in ODR. Based on what we learned from usability testers and our work with an inclusive designer and an accessibility evaluator to design effective models, we developed a set of recommendations for courts to ensure that their communication materials can effectively be understood by SRLs. Check out our report, Designing a New Way to Communicate about ODR: Usability Testing Insights, to learn more about these recommendations and our usability test findings. 

Next Up: Support for Using Our Models

RSI is pleased to share that we have begun offering a technical assistance service to help courts and ADR organizations enhance their communication materials about ADR programs. Contact us to learn more about the different ways we can help you communicate more effectively.

We are extremely grateful for the American Arbitration Association-International Centre for Dispute Resolution Foundation’s support for the OPEN Project and the dissemination of its findings.

Join RSI at an Online Demonstration of our New OPEN Project Communication Tools

Just Court ADR, March 11th, 2025

We’re rolling out RSI’s newest tools to support courts’ communication with parties, and you’re invited! Join us for an online demonstration, a Question & Answer session, and a chance to win a free one-hour consultation! Participation is free; registration is required.

What: RSI’s OPEN Project Model Tools Launch Party!
When: Thursday, April 3, 2025; 12 p.m. Central
Where: Zoom; please register here

Background:

You might have read about Phase 1 of our ODR Party Engagement (OPEN) Project. For Phase 2, RSI has developed model materials — a webpage, a notice document, an informational video and an interactive guide — to help courts communicate more effectively with self-represented litigants (SRLs) about online dispute resolution (ODR). We developed these models with the support of an inclusive designer and an accessibility expert, then user-tested them with a diverse set of individuals around the United States. Although focused on ODR, these materials offer innovative solutions to communicating with SRLs about any court program.

You can learn more about the OPEN Project, and download Communicating Effectively About ODR: A Guide for Courts and our Document Preparation Worksheet and Checklist, on the OPEN Project section of our website. You’ll also find updates on the project’s progress on our blog, Just Court ADR.

RSI is excited to share these new resources, and we hope to see you at the launch!
