FAILFaire is a gathering of technology non-profits to share stories of failure and give an award to the worst one. The purpose is to learn from each other's failures rather than repeat them. According to a New York Times article on FAILFaire, non-profits are leery of revealing failures because doing so may turn off donors and thus harm the people they are trying to help. This thinking has led many organizations to avoid examining the reasons behind their failures, focusing instead on what has worked. Yet there is a school of thought that says we can learn more from our mistakes – and from others' – than we can from our successes.
So, in the interest of improving monitoring and evaluation practices, I'm going to share RSI's worst failure. RSI was working with an Illinois court to develop a monitoring system for its family mediation program. The system we came up with combined case management data collected by the Clerk's Office with data collected by the court through post-mediation reports and questionnaires. In Illinois, the Clerk's Office is an entirely separate governmental entity from the court system, with an elected Clerk in each county. We thought the best approach, then, would be to create software that would download data from the Clerk's database into the program's database. This would eliminate both the need to re-enter the Clerk's data and the need for the Clerk to continually send that data to court staff. In essence, the Clerk's database and our database would be connected, but not integrated. It seemed like a good way to reduce the amount of work staff would have to do to monitor the program.
What happened instead was that years were spent trying to get the two systems to communicate properly so that our database could report basic information, such as how long cases were taking to move through the mediation process, or even how many cases were mediated and what their outcomes were. After several years of effort, our database still could not do so.
The major reason it took so long was that debugging the software so the two systems could communicate required the cooperation of the Clerk's IT staff, who were extremely busy attending to other issues. While the domestic relations court valued the information, the database was not a priority in an overworked Clerk's Office. Then, just when it appeared we were finally going to get things right, the Clerk's Office changed the software it used to enter and track case information, and our database could no longer communicate with theirs. After a few meetings with the court and the Clerk's Office, it became clear that reconnecting the two databases would take more time than they had. The project ended after years of work and thousands of dollars spent, with no good data from the mediation program, making it our worst failure.
What we learned from the project is that connecting two databases may appear to decrease the court's workload when in fact it can increase it because of the technological issues involved. We also learned that a project requiring any significant amount of staff time needs the support of all participating entities, and that leadership needs to communicate this priority to the staff.
A final lesson was that the court needs to believe that the information it gains will be worth the effort. RSI spends a lot of time educating courts about the importance of discovering problems with their programs so that they can improve them, but we are often working against a long-standing culture that either worries about the implications of self-examination or does not value it. We are still learning how best to overcome this obstacle.
So, anyone want to share their own failures?
Tags: court programs, lessons learned
What great lemonade! Others facing the same situation (which is virtually always, whenever the entire evaluative "system" is not fully integrated into the court's administrative software/hardware setup) should take note and not repeat the "mistake," which costs resources and risks eliminating (if not "just" delaying) the possibility of getting evaluation results that allow mid-course correction of ongoing programs.