What I learned working with the US Military and others after the 2010 earthquake in Haiti

The earthquake that hit Haiti on 12 January 2010 unleashed almost unimaginable misery and chaos in the span of minutes. In the days and months that followed, many more suffered its grim repercussions. Along with the rest of the world, the United States was quick to respond, and soon the US military was leading the way. Within 48 hours of the earthquake, the US Southern Command had established Joint Task Force-Haiti, whose leadership team coordinated a vast and complex effort. For the next nineteen weeks, JTF-H led Operation Unified Response, our military’s longest and largest disaster relief effort ever conducted on foreign shores. At its peak, 22,000 service members, 58 aircraft, and 23 ships were involved, along with vast quantities of supplies and equipment, not to mention support from many quarters.

Many were moved to contribute. Shortly after the earthquake, an MIT colleague emailed to ask whether anyone would step in to help his defense-contractor contacts working on the response. Hoping to contribute in some small way, I volunteered—my desire to help certainly outweighed my expertise. Before explaining how much I learned about working efficiently amid uncertainty and urgency, not to mention about the almost impossibly hard work of post-disaster efforts, I’ll set the stage by describing the collaboration, its goals, and its operational methods.

In early February 2010 my small MIT “away” team started its collaboration with military and humanitarian experts working with the Joint Task Force. We aimed to contribute to a larger effort led by MIT Lincoln Laboratory, and soon we were working directly with its staff, MIT humanitarian logistics experts, military personnel, and the Haitian arm of the Boston-based non-profit Partners In Health.

Our team aimed to bring varied methods, ideas, connections, and research efforts to the practical challenges facing the humanitarian response in Haiti. In February, shortfalls in communications, electricity, fuel, food, water, and other essentials persisted across the shake zone and beyond. These very constraints stymied the responders’ efforts, yet without informed action the problems would go unrelieved. A key question was: how could we collect and share real-time information on current needs, so that the JTF-H leaders and their colleagues could supply what was most needed? Over two million Haitians had been left without shelter. Many had no phones, and there were few ways of knowing who was where. People moved from place to place as conditions changed.

Our collaboration lasted just three intense months, from February to April. It provided me with one of the most vivid learning experiences of my life. Imagining the potential cost of a misstep drove home the need for every action to be effective: what we did in a conference room in Cambridge, Massachusetts, mattered, because there was no time to waste. Responding well to a large-scale disaster, I learned, poses the starkest of challenges—the situation changes constantly, stakes are high, and information is hard to come by. Prioritizing actions is obviously crucial, yet the past and future must factor in at every step. To do right by the Haitian people, we needed to appreciate the pre-disaster situation and its historical context. We also needed to think ahead to what could follow the emergency response, knowing that every disaster relief operation has the potential to set the stage for subsequent recovery or instead to create new problems that become evident only later. Combining it all in the right mix seemed, to me as a neophyte, a near-impossible task, and I quickly grew to appreciate that the people who can carry out this work warrant our gratitude and admiration. In those early weeks, thousands worked heroically on the ground in Haiti.

Our MIT-based team gleaned all we could from phone calls (often rushed, interrupted, or garbled), video conferences, our own site visits to Port-au-Prince and environs, interviews with knowledgeable informants, and of course much research. We consulted a vast range of existing sources, talked to experts in disciplines from child nutrition to data mining, downloaded datasets, analyzed spreadsheets, and shared updates and work products on a private website. Our aim was to contribute to the evolving plans for collecting and making sense of the data most needed to serve the millions in need.

The specific results of the overall effort and the data collection project are documented in the US Army’s Center for Army Lessons Learned archives and in other reports, including a paper published later that year in Military Review; together these present the team’s innovations in humanitarian assessment along with recommendations for future disaster response. Here I’ll focus on a few things I learned as a team member: new disciplines and practices that could help all kinds of project teams.

In those early post-earthquake weeks, with our loose team of on-the-ground military personnel, defense-contractor experts, leaders of Haiti-based organizations, and personnel from multiple universities interacting around the clock, there were many emails. At the suggestion of the team’s mentor, an experienced military leader, there was one main daily email. Each included a paragraph that reminded the team of two crucial elements: the goal of the overall mission, which had been dubbed Operation Unified Response, and the specific aims of our team. These were listed crisply using simple formatting: each key idea got its own brief line and was indented by tabs to indicate where it fit in the plan. It was a quick verbal and visual guide to what we were all focusing on, and it even managed to telegraph the hierarchy of steps and results. The formatting was simple enough to work in any email reader, and the entire paragraph was short enough to be read quickly. The indentation drew attention to the causal logic behind the entire project: steps were shown clearly, then the phrase “so that…” flagged specific objectives. The language was brief, direct, and jargon-free.
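
To give a sense of the format (what follows is an illustrative reconstruction, not the actual wording), the paragraph looked something like this:

    Overall mission: support Operation Unified Response in relieving suffering in Haiti
        Team aim: gather timely information on needs in camps and neighborhoods
            so that JTF-H leaders can direct supplies to where they are most needed
        Team aim: analyze and share what we learn with partners on the ground
            so that responders can act on a shared, current picture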

This simple technique made our mission salient. In the early weeks, the part of the paragraph that presented the team’s goals and means-ends hypotheses was refined a few times as our focus developed. Because it was easy to find at any moment, and easy to read and remember, we could actually use it to check our work. Sometimes we would invoke it several times in a single day when discussing the task at hand. Were we focusing on something that would contribute to the goals? Were the results we were seeing in line with the cause-and-effect linkages laid out in the daily email? For us, one remote component of a large, fast-moving team, this method prevented wasted effort and helped us identify the most important findings to share with others.

A second technique facilitated this sharing. Every Monday evening, the Commanding General of Operation Unified Response was briefed. A short spotlight briefing would follow the main presentation, and our broader team put these briefings together every week. The aim was to make the 20 minutes as useful as possible for the Commanding General, or CG, who made daily and weekly decisions. So we followed a standard format that I imagine is common across military settings.

The cover page listed the date, the status (“unclassified” in our case), the title, and the names—usually at least a dozen—of the key team members and their affiliations, making follow-up easy. Page two was called the BLUF slide: Bottom Line Up Front. It listed, in a few bullet points, the conclusions that would provide the basis for the CG to make decisions. The rest of the presentation explained and provided specific evidence for the BLUF points.

Rigor and logic were the watchwords in preparing the briefing deck. Since the goal was to inform the key decision-maker, every point needed to be supported with the strongest possible analysis: one that made the supporting data vivid, offered some basis for comparison or assessment (for instance, by mapping trends over time or comparing camps), and cogently accounted for limitations and open issues. Graphs, schematics, photographs, and quotes all backed up the specific points, making for as well-rounded a presentation as possible.
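
In outline, then (a schematic sketch, not an actual deck), a weekly briefing ran something like this:

    Slide 1 (cover): date, status, title, team members and affiliations
    Slide 2 (BLUF): a few bullet points stating decision-ready conclusions
    Remaining slides: evidence for each BLUF point, with comparisons, trends, and caveats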

Knowing that we needed to create actionable points for the BLUF slide gave the team focus for the entire week. We were all motivated to show week-by-week progress, for one thing. Second, discovering something that would not help the CG make decisions for the coming week was of no value. Whether we were reading books on the history of Haiti, digging into our datasets, or examining how to plan for latrines, this requirement provided a sharp focus—no small feat for academics!

A second aspect of the BLUF page fascinated me. The rule was that the CG could call a stop to the presentation once that second page was shown. He could do so for any of three reasons: something more urgent demanded attention; the key points were already accepted (perhaps because they were obvious?) and there was no need to delve into the background then and there; or the points were sufficiently irrelevant or off base that it would be a waste of time to go further. Understanding this norm provided further focus for our work. We didn’t want anything we did to be too obvious or irrelevant. We also appreciated that there could be times when the audience in the room had more important things to do, so the presentation could be cut short without sacrificing the punchline. It was also motivating to know that whenever the entire deck was shown, it was because the CG had chosen to spend the remaining 19 minutes considering our work. We all knew that time and attention were at a premium in all activities, and that the same principles should guide even the most formal and routine events: cut anything that is not a good use of time or that does not contribute to the mission and the objectives. Applying these principles consistently helped the entire team stay motivated and focused throughout the loosely organized and often chaotic effort. It also supported our humility as part of something much larger.

Taken together, the briefings tell the story of the project. After the operation’s stand-down on 1 June, the team mined their experiences to draw lessons learned. One way we did this was via an all-day, multi-stakeholder after action review that took a no-holds-barred approach to identifying what we had learned. Key insights were distilled in written reports, and they continue to inform efforts to better prepare for the next disaster. I now use the BLUF approach, the focused mission statement, and after action reviews whenever it makes sense.


Photo source: http://humanitarian.mit.edu/projects/haiti-needs

Hacking the hype: Why hackathons don’t work and what leaders can do to spur real innovation within their organizations

As buzz-worthy business trends go, hackathons—where people from different backgrounds come together to work on a project for a few intense, caffeine-fueled days—are a top contender.

They’re most common in Silicon Valley: at Facebook and Google, hackathons are hallowed traditions. Even old-economy companies like GM and GE use them. They’re popular in research and education, too: this year, MIT hosted numerous hackathons, including Hacking Arts, Hacking Rehabilitation, and, of course, the second annual breast pump hackathon: Make the Breast Pump Not Suck!

We get it. Hackathons are fun. There’s all-you-can-eat pizza, and the Red Bull flows freely. For participants, they’re quasi-social opportunities to work on something real with smart, passionate people. For companies and universities, they represent quick, relatively inexpensive ways to encourage collaboration, produce new ideas, and generate publicity.


But there’s a downside to the hackathon hype, and our research on designing workplace projects for innovation and learning reveals why: innovation is usually a lurching journey of discovery and problem solving, an iterative and often slow-moving process that requires patience and discipline. Hackathons, with their feverish pace, lack of parameters, and winner-take-all culture, discourage this process. We could find few examples of hackathons that have directly led to market success.


The biggest disadvantage of hackathons is in many ways their draw: they are divorced from reality. The hackathon formula is pretty standard: throw a bunch of diverse teams together in a novel setting. Provide them with more playful materials than they’d normally encounter. And then put them to work on a worthy challenge where, at least at first, no ideas are rejected. These attributes can be positive: exposing people to different perspectives is a surefire way to get them to look at problems in a new light. New spaces and unusual materials can stimulate creativity.

“Solving” a problem in a vacuum is, however, a waste of time and money. When hackathon participants lack necessary contextual knowledge and technical expertise, the result is often ideas that are neither feasible nor inventive. Worse yet, these flaws tend to go unrecognized, owing to the limited time for the event.

Hackathons rely on a pared-down framing of the challenge at hand. Exploration is confined to what can be done in the room or online. It’s difficult to do serious market research, use-case studies, and financial modeling, let alone to investigate potential unintended effects. Long after the hackathon is over, due diligence may reveal that several competitors in the market are doing something similar; clients already rejected the idea years ago; or the company can’t manufacture a prototype that meets the specs.

Such stories hint at an insidious side effect of hackathons: once they become synonymous with innovation, everything else is cast as plodding downstream work, demeaned as “mere execution.” But the study of innovation shows that everything hinges on the hard work of taking a promising idea and making it viable—technically, legally, financially, culturally, ecologically. Constraints are great enablers of innovation.

Another drawback of hackathons is that they create a false sense of success. Every hackathon proclaims its winners and awards prizes. What if none of the ideas are any good? Doesn’t matter. The top team still gets a check, and the very fact that the organization hosted a hackathon ticks the innovation box.

If not hackathons, then what? How can leaders embed innovation capabilities within their organizations while still tapping into some of that coolness and excitement?

Every team project could benefit from well-placed injections of energy. Let’s move beyond the belief that open-ended exploration matters only in the initial ideation phase. Fluid discussions are needed in the middle and at the end, too.

Leaders must seek out people from other divisions and disciplines to challenge the project team’s thinking with both a critical eye and a creative spirit. A mid-project meeting ought to include participants with varied expertise to explore interim findings and rework plans—radically, if needed. At a project’s end, fresh perspectives could energize and add context to the process of reviewing results, pinpointing lessons learned, and sharing the best discoveries.

Managers must also cultivate new approaches to failure. It’s well known that many organizations have difficulty exiting projects. Bosses feel on the hook, so they punt while the team limps along; team members are punished for sharing bad news, so they bury it. What if projects were instead designed to combine a hacking mindset with rigorous examination of the data and experience they glean? This would reward smart failures that reveal new insights and equip leaders with the information needed to rescale, pivot, or axe their projects.

Hackathons trigger blips of great energy. But to sustain energy and deliver real impact, leaders must enable all the steps needed to innovate effectively. Hacking our workaday projects to challenge assumptions, test ideas, and fuel data-driven creativity might turn out to be the ultimate innovation.

A version of this post appeared on Fast Company, December 1, 2015, under the title “Why Hackathons Are Bad for Innovation.”

Anjali Sastry, a senior lecturer at MIT Sloan School of Management, and Kara Penn, the cofounder and principal consultant at Mission Spark, are the authors of FAIL BETTER: Design Smart Mistakes and Succeed Sooner (Harvard Business Review Press, 2014; see www.failbetternow.com).


Photo sources: https://flic.kr/p/bmchXM and https://www.flickr.com/photos/zitec-com/5718483135/