Megan MacGarvie, Associate Professor of Markets, Public Policy, and Law, offers insight into the use of simulations to teach students about risk and uncertainty in forecasting, sharing lessons learned from election predictions that highlight the importance of understanding probabilities and assumptions in modeling.
I teach an undergraduate course in business analytics at Boston University’s Questrom School of Business. In this course, in which students dream up a new product and forecast its profitability, we spend a lot of time talking about risk and uncertainty. We ask students to estimate not just the expected profit associated with their new product idea, but also the range of potential profits and the probability that the product will be profitable at all.
We teach simple simulation methods using Microsoft Excel. We make assumptions about the distributions of relevant variables and their parameters, and we teach students to think of their estimates in terms of ranges, distributions and probabilities rather than single fixed numbers.
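In class we do this with Excel, but the same idea can be sketched in a few lines of Python. The block below is only an illustration: the product, its price, and every distribution and parameter are invented for the example, not drawn from the course or any student project.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_trials = 10_000

# Invented assumptions for a hypothetical product: demand is normally
# distributed, unit cost follows a triangular distribution, and the
# price and fixed costs are treated as known constants.
price = 25.0
fixed_cost = 150_000.0
demand = rng.normal(loc=12_000, scale=3_000, size=n_trials).clip(min=0)
unit_cost = rng.triangular(left=8.0, mode=10.0, right=14.0, size=n_trials)

# Profit in each simulated scenario.
profit = demand * (price - unit_cost) - fixed_cost

# Report an expected value, a range, and a probability of profitability.
print(f"Expected profit:      ${profit.mean():,.0f}")
print(f"5th-95th percentile:  ${np.percentile(profit, 5):,.0f} to "
      f"${np.percentile(profit, 95):,.0f}")
print(f"P(profit > 0):        {(profit > 0).mean():.1%}")
```

The three printed numbers mirror what we ask students to report: an expected profit, a range of potential profits, and the probability of being profitable at all.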
This part of the course is typically taught in late October and early November. I have taught this course during the last three presidential elections. Back in 2016, as polls predicted Hillary Clinton would easily defeat Donald Trump, I thought it would be a fun exercise to adapt a simplified version of one of the forecasting models discussed in the news to show students how forecasters were using simulation to predict who would become president.
It turned out to be relatively straightforward: using the latest state-level polls, correlations in voting patterns across states, a few assumptions, and the @Risk simulation add-in for Excel, I could generate predictions that looked (at a high level) similar to those published by Nate Silver’s FiveThirtyEight blog and other election forecasts. My simplified simulation gave Clinton a 73.3% probability of winning the presidency (with 291 electoral college votes). Pleased with myself, I showed the students this cool example of the power of simulation to help managers make plans in uncertain environments.
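My actual model lived in @Risk and Excel, but the general recipe (start from state polling margins, add correlated polling errors, tally electoral votes, and count how often each candidate wins a majority) can be sketched roughly as follows. Everything here, including the fictional states, margins, electoral votes, and error standard deviations, is made up for illustration and is far simpler than a real forecast.

```python
import numpy as np

rng = np.random.default_rng(seed=2016)
n_sims = 20_000

# Toy inputs, invented for illustration: five fictional states, their
# electoral votes, and the Democrat's polling margin in percentage points.
electoral_votes = np.array([29, 20, 18, 16, 10])
poll_margin = np.array([3.0, 1.5, 4.0, -0.5, 6.0])

# Correlated polling errors: a shared national error plus an independent
# state-level error (both standard deviations are assumptions).
national_error = rng.normal(0.0, 2.5, size=(n_sims, 1))
state_error = rng.normal(0.0, 3.0, size=(n_sims, len(poll_margin)))
simulated_margin = poll_margin + national_error + state_error

# Award each state's electoral votes to the winner of that state, then
# count how often the Democrat wins a majority of this toy map's votes
# (the real threshold is of course 270 of 538).
dem_ev = (simulated_margin > 0).astype(int) @ electoral_votes
majority = electoral_votes.sum() // 2 + 1
print(f"P(Democratic win):     {(dem_ev >= majority).mean():.1%}")
print(f"Mean electoral votes:  {dem_ev.mean():.0f}")
```

The shared national error is what keeps the win probability well below 100% even for a candidate who leads in most states: if the polls are off, they tend to be off everywhere at once.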
Since I teach at 8 AM, I went to bed early on the night of the election. When I woke up the next morning, I had the experience of many Americans trying to wrap their minds around the election results. I also had to answer the question: “WHAT am I going to say to my students?”
I was plagued with self-recrimination. How could I have been so overconfident? Had I just lost my students’ trust, after spending most of a semester trying to convince them of the power of quantitative modeling to improve business decision-making? How would I show my face in the classroom?
After talking to friends and colleagues, I came up with a plan for the first class after the election. I showed a slide titled: “Was our model wrong? Is simulation actually useful?” There were three main points:
- Our model predicted Clinton would win…BUT it gave Trump an almost 30% chance. The election results show that business decision makers can’t ignore potential events with probabilities that high.
- There may have been non-response bias in the polls (systematic variation in the types of voters who don’t pick up the phone). When the underlying data on which a model is built are biased, even the most sophisticated model will be wrong. In other words, “garbage in, garbage out.”
- Maybe our assumption that undecided voters would break equally for Trump and Clinton was wrong. This shows the importance of thinking VERY carefully about assumptions, parameters, and distributions in business plans. Changes in assumptions can have very big effects on predicted outcomes (see the sketch after this list).
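To see how much that third point matters, here is a small illustration, again with invented numbers for a single hypothetical close state, of how the assumed split of undecided voters moves a simulated win probability.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_sims = 50_000

# One hypothetical close state (numbers invented): 46% decided for the
# Democrat, 44% for the Republican, 10% undecided, plus a normally
# distributed polling error on the final margin.
dem_decided, rep_decided, undecided = 46.0, 44.0, 10.0
poll_error_sd = 3.0

# Sweep the assumed share of undecided voters who break for the Democrat.
for dem_share in (0.50, 0.40, 0.30):
    # With share s going to the Democrat, the margin shifts by u * (2s - 1).
    margin = dem_decided - rep_decided + undecided * (2 * dem_share - 1)
    simulated = margin + rng.normal(0.0, poll_error_sd, size=n_sims)
    print(f"Undecideds {dem_share:.0%} to the Democrat: "
          f"P(win) = {(simulated > 0).mean():.1%}")
```

With made-up numbers like these, moving the assumed split from 50/50 to 70/30 takes the same poll from a roughly three-in-four favorite to a clear underdog, which is exactly why that assumption deserves so much scrutiny.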
Instead of a moment of humiliation, it turned out to be one of the better lectures of the semester. Students were eager to understand what had gone wrong with, let’s be honest, not just my prediction but those of the experts who do this for a living. I hope that admitting I had been wrong, and then working to understand WHY the model was wrong, set a positive example they could take with them into their careers in business.
Fast-forward to 2020. I was teaching on Zoom to students all over the world, and I thought it would be fun to revisit the election forecast. With the help of a fantastic student, I updated the model with polling data for 2020, and once again it predicted a Democratic victory. I showed the forecast in class, but this time with a caveat about its failure to predict the previous election. In response to that failure, I had adjusted the model to assume that undecided voters were more likely to end up voting for Trump (as had been the case in 2016).
On the morning after the election, ballots were still being counted, but it looked like a victory for Trump. My pandemic-purchased smart watch registered an all-time high resting heart rate. I was an idiot…how could I have done this AGAIN? My former department chair was kind enough to point out the value of being willing to take risks in the classroom and “teach without a net.” That made me feel a bit better, but not as good as I felt when the final results were in and my forecast of 308 electoral college votes for Biden lined up with the actual tally of 306 votes.
Fast-forward again to today. I’m preparing to dig out my four-year-old Excel files and rerun the simulations with updated data. I plan to incorporate the finding, reported in The Economist, that overestimates of Democrats’ chances seem to be increasing over time. No doubt there will be surprises, and my model will turn out to be wrong in ways I don’t currently anticipate. This time I’m less focused on whether the model will predict the correct outcome of the election. Instead, I’m thinking about how to make sure students absorb the deeper lessons of using quantitative models to predict uncertain events like elections. I give myself a 78.9% chance of succeeding.