This post illustrates my recent discussion of the template-based online conflict resolution system Facebook has implemented for user disputes. The system asks users a series of questions and suggests possible resolutions based on their answers. In some cases, Facebook offers the user a pre-written message, generated from a template, as a starting point for addressing the conflict.
Here are some examples of the conflict resolution templates currently available to users who object to other users’ photos on Facebook. (Click any image to enlarge.) To get these screencaps, I used RSI’s Facebook account (“RSI”) to visit my personal Facebook account (“Mary”). In this simulation, RSI is upset by Mary’s photo of a cherry tree in bloom and decides to report it. While looking at the photo, RSI clicks “Options” and then “Report Photo,” and a discussion box pops up. Facebook auto-fills the name “Mary” into the messages it uses to talk to RSI about the problematic photo. The goal of these messages is to help RSI articulate its feelings about Mary’s post, decide what action to take, and craft a message that Mary may respond to positively.
Example 1: Mild annoyance; no company intervention
The process begins by asking why the user doesn’t want to see the photo. The answer sends the user down one of three branches of the system.
The results differ considerably depending on the user’s problem. “It’s annoying” is mainly for mild complaints; “I’m in the photo and I don’t like it” leads to a more personal series of questions; and “I think it shouldn’t be on Facebook” leads to a list of issues that may violate Facebook’s policies.
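The branching described above amounts to a small decision tree: a first answer selects a follow-up set of actions. Here is a minimal sketch in Python; the branch and action names paraphrase the screenshots, and the exact wording Facebook uses may differ.

```python
# A sketch of the three-way branching flow described above.
# Branch and action labels are paraphrased from the screenshots, not
# an exact transcription of Facebook's interface.
FLOW = {
    "It's annoying": [
        "Hide all posts from Mary",
        "Message Mary",
    ],
    "I'm in the photo and I don't like it": [
        "Message Mary",
        "Block Mary",
    ],
    "I think it shouldn't be on Facebook": [
        "Message Mary",
        "Block Mary",
        "Submit to Facebook for review",
    ],
}

def options_for(answer: str) -> list[str]:
    """Return the follow-up actions offered for a given first answer."""
    return FLOW[answer]

print(options_for("It's annoying"))
# ['Hide all posts from Mary', 'Message Mary']
```

The point is the structure, not the labels: each first-level answer narrows the user toward a small, curated set of next steps.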
In this example, RSI’s annoyance is treated as a mild complaint, and Facebook will not intervene directly. Instead, RSI is offered two choices: “Hide all posts from Mary” or “Message Mary.” RSI chooses to send a message.
Example 2: Conflict Resolution Guided by Template
Here is an example of a message filled out by template. This example begins with the “Why don’t you want to see this photo?” screen in Fig. 1. This time, RSI selects “I think it shouldn’t be on Facebook.” The options under this choice cover more serious issues than in the first example. To help users develop these ideas, Facebook provides RSI with examples of each choice. The complaints cover a broad range: the offending photo could be pornographic, annoying, insulting, or show the user in an unpleasant way.
When RSI selects “This photo is of me or my family,” it is given different options than for “this photo is annoying or not interesting.” Once again, Facebook does not offer to intervene directly with Mary. However, RSI’s choices are stronger: “Message Mary” or “Block Mary,” the latter a stronger step than “Hide all posts from Mary” since it cuts off all communication.
In this case, the message box provides a template for communication.
Facebook pre-fills the message box with “Hey Mary, this photo is personal and I would prefer to keep it private. Would you please take it down?”
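A pre-filled message like this can be produced by substituting the recipient’s name into fixed template text. Here is a minimal sketch, assuming the name is the only substituted field (the template name `PERSONAL_PHOTO` and helper `prefill` are my own illustrations, not Facebook’s internals):

```python
# A sketch of template-based message pre-filling, assuming the
# recipient's first name is the only field Facebook substitutes.
from string import Template

# Hypothetical template; the wording matches the example in the post.
PERSONAL_PHOTO = Template(
    "Hey $name, this photo is personal and I would prefer to keep it "
    "private. Would you please take it down?"
)

def prefill(template: Template, name: str) -> str:
    """Substitute the recipient's name into the message template."""
    return template.substitute(name=name)

message = prefill(PERSONAL_PHOTO, "Mary")
print(message)
```

Because the template is fixed and only the name varies, the suggested wording stays consistent no matter whose photo is being reported, while still reading as a personal message.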
Example 3: Helping Users Address Very Serious Issues
Many of the message templates use the same text as Fig. 5, but some are highly tailored to specific situations, including potentially life-threatening problems. If RSI clicks “Something else” in Fig. 1, an expanded list of issues appears, with more examples. One striking example is “This displays someone harming themselves or planning to harm themselves,” illustrated with examples such as holding a gun to one’s head or promoting an eating disorder.
If RSI selects the self-harm issue, Facebook presents several choices that acknowledge the gravity and sensitivity of the problem. First, Facebook offers advice on what to do if a friend is in danger. Second, the wording of every option RSI can take has been softened, with a tone that assumes RSI wants to help a friend, not start a conflict. Rather than saying “Message Mary,” Facebook suggests RSI “offer help or support.” Facebook also suggests RSI may want to “reach out to a friend” other than Mary to talk through the problem and decide what to do. Lastly, since Facebook does have rules about this type of image, RSI is invited to “submit to Facebook for review,” which could lead Facebook to take its own action.
The messages are also highly tailored depending on RSI’s choices. The “Offer Support” template suggests a message of concern that would go directly to Mary. The message begins “Hey Mary, this post makes me feel worried about you. Are you OK?” and concludes with a helpline number.
Facebook also guides RSI’s actions by helping RSI discuss the issue with a third friend rather than going to Mary directly. That template reads, “Hey, this post makes me feel worried about Mary. Do you have any idea why Mary would have written this? Do you think there’s something we can do to help?”
Though it looks simple at first glance, Facebook’s online conflict resolution system reveals considerable complexity as one follows each branch of the decision tree. The tool doesn’t just help users articulate their feelings; it also guides them through choosing an action to take. This is necessary for online dispute resolution, but could it have a place in court ADR as well? In British Columbia, an online system is currently being tested for small claims cases. Would it benefit self-represented parties to have a tool that helps them articulate their feelings, or is the court system simply too complex for this? Let us know what you think in the comments.