In 2009, Michael Dougherty and I created Mass Enrollment, an interactive simulation of the process of signing up illegal immigrants for an amnesty, should Congress pass a law granting them immigration benefits. Mass Enrollment had a user interface that let the user explore different sign-up processes, different temporary staffing strategies for the sign-up, and different strategies for application fees. Mass Enrollment simulated the dynamics of adoption: did immigrants apply at the first opportunity, or did they wait to see what happened when others applied? Mass Enrollment simulated both legitimate applications and fraudulent applications by people trying to obtain immigration benefits with forged documents. And Mass Enrollment simulated the backlogs in the process, showing which tasks in the selected process built up long delays.
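The adoption dynamics described above can be sketched in miniature. This is a toy illustration, not the Mass Enrollment model: the population size, the fraction of eager applicants, and the threshold rule are all invented for the sketch. Each person has a personal threshold and applies only once the fraction of others who have already applied reaches it; eager people (threshold zero) apply at the first opportunity, and the cautious wait and watch.

```python
import random

def simulate_adoption(population=1000, weeks=12, eager=0.05, seed=1):
    """Toy adoption cascade: each person applies once the fraction
    already applied meets their personal threshold. 'eager' is the
    fraction who apply at the first opportunity (threshold 0);
    everyone else draws a uniform threshold and waits to see what
    happens when others apply. Returns cumulative weekly counts."""
    rng = random.Random(seed)
    thresholds = [0.0 if rng.random() < eager else rng.random()
                  for _ in range(population)]
    applied = [False] * population
    weekly = []
    for _ in range(weeks):
        done = sum(applied) / population  # fraction seen to have applied
        for i in range(population):
            if not applied[i] and thresholds[i] <= done:
                applied[i] = True
        weekly.append(sum(applied))
    return weekly

print(simulate_adoption())
```

Run it and the weekly counts snowball: the eager few apply first, their success pulls in the moderately cautious, and so on. Even this toy shows why a sign-up surge arrives in waves rather than all at once.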
We built Mass Enrollment to engage with US Citizenship and Immigration Services, the federal agency responsible for administering permanent residency and other immigration benefits. Agency personnel used the simulation to explore options for handling the huge new workload they expected if immigration reform became law. And we used the simulation to get closer to that agency, to better understand how they saw their challenges.
When I demonstrated Mass Enrollment to some simulation colleagues, one colleague suggested we add optimization. She suggested we add a single button to the user interface to automatically explore all the alternatives, and find the best combination. Rather than the user exploring alternatives one by one and evaluating the results himself, we could automate and just return the answer. Much faster.
Why not an optimize button? Technically it is not difficult to harness a model for optimization, but doing so destroys the value of the model. The value of a model lies in how it changes the user’s understanding of the system being modeled. In Mass Enrollment, the user can choose alternative processes for signing up illegal immigrants, processes he might never have considered before. He can see the effects of these processes, and learn how the choice of process affects the immigrants’ decision-making. By exploring alternatives one by one, the user acquires an appreciation for how the whole works, a kind of kinesthetic appreciation that is otherwise difficult to acquire. Users learn from a simulation. But optimization undermines that learning, handing the user a quick answer instead of letting him find one.
Further, optimization requires a commitment to what quantity is being optimized. Wait times and backlogs are important. Maybe the immigration sign-up process should be optimized to minimize task backlogs, to minimize waits by the immigrants? One approach to minimizing waits is to perform no screening for fraud. But fraud is also important: without fraud screening, many violent criminals will be given permanent residency. So maybe the immigration sign-up process should be optimized to minimize fraud. One approach to minimizing fraud is to thoroughly check the validity of every document. But that approach is expensive, and cost is important as well.
Of course some combined measure can be constructed, some weighting of wait times, fraud, and cost, and that combination can be optimized. But what about users who have a different idea about how to trade off wait times, fraud, and cost? And other results matter too, e.g., the agency’s reputation with the immigrant communities. Are we to create one huge combined measure, some combination of everything that could be important?
It’s better to let a user explore, and come to his own conclusion about what is important. It’s better to let him try stupid things, to see what happens. It’s better to give him a chance to rewire his neurons, rather than giving him an answer.
And in the context of business development, the true and best value of the model is the collaboration, creativity, joint discovery of new alternatives, and trust (in summary, the relationship and shared profound understanding of the problem) that result from using the model together to explore alternatives. The same is true in the context of consensus building among disputing but interested parties: they discover all the aspects of the problem and its solutions that they can agree on, and jointly discover the effects of the alternatives for the aspects they disagree about (as with Future Border).