What is Expertise? Let's ask the experts.

By: Dan Spokojny | March 20, 2024

This article was originally published on fp21’s Substack, Foreign Policy Expertise.

In my doctoral studies, I learned that great research begins by surveying what other scholars have said about one’s topic of interest. This article lays a foundation for understanding expertise by exploring academic literature from a variety of fields. This is what academics call a literature review. The point of this article is not to provide a final answer to “What is foreign policy expertise?” Instead, my goal is to review others’ attempts at explaining expertise.

This post is a bit dense, but I think it’s vital (and fascinating) background. And for the skeptics in the room, next week’s article will flesh out what the “art of foreign policy” looks like to balance out this more scientific approach.

A casual reading of history finds celebrated U.S. foreign policy leaders in every era. John Quincy Adams helped articulate the Monroe Doctrine. William Seward kept France and Britain from recognizing the Confederacy during the Civil War. George Marshall designed the eponymous plan for rebuilding post-war Europe. Dean Acheson created NATO. George Kennan authored the containment policy of the Soviet Union. Henry Kissinger negotiated an opening to China. Brent Scowcroft managed the peaceful disintegration of the Soviet Union. Madeleine Albright helped negotiate the end of the war in the former Yugoslavia.

Yet surprisingly little is said about what qualifies our leaders as experts in foreign policy. They can be brilliant, creative, hard-working, and sometimes spectacularly successful in achieving their goals. But for every one of these historic successes, each of these leaders presided over countless failures.

The luminary diplomats Ambassadors William Burns and Linda Thomas-Greenfield describe diplomacy’s “fundamentals” as “smart policy judgment” and a “feel for foreign countries.” [1] Policymakers must possess a “nuanced grasp of history and culture, a hard-nosed facility in negotiations, and the capacity to translate U.S. interests.” Certainly, this is all generally true, but Burns and Thomas-Greenfield (like the Department of State’s promotion process [2]) offer only ambiguous and highly subjective guidance on what good judgment actually looks like. There’s simply no framework for identifying real expertise.

So, what does expertise look like? How is expertise conceptualized and measured in other fields? How should we think about expertise in the conduct of foreign policy?

My favorite definition from psychology describes expertise as “consistently superior performance on a specified set of representative tasks for a domain.” [3]

Let’s review some ways that scholars have discussed expertise. Expertise is often used interchangeably with skill, capability, and experience — so let’s pick apart these terms.

Expertise as Superior Information

Experts, we might all agree, know a lot. Many of our foreign policy elite are lauded for possessing an encyclopedic knowledge about the world.

Expertise is often synonymous with the possession of information. [4] According to this view, superior information is the key ingredient that enables a decision-maker to design an effective policy.

How might information affect policy outcomes? The most straightforward logic suggests that the more facts one acquires, the better one understands the world. Such facts might include an opinion poll of voters before an election, or covert intelligence about an opponent’s military capabilities and intentions.

Yet, I find this conception of expertise too simplistic. Possessing many facts does not mean you can achieve consistently superior performance in shaping the future. Expertise is more than simply a collection of facts. Wikipedia can’t design our foreign policy.

 

There is more to foreign policy expertise than simply memorizing many facts about the world.

 

A more sophisticated view focuses on a specific kind of information. An expert’s informational resources may include a library of descriptions, interpretations, and causal explanations about how the world behaves.[5] This sort of knowledge helps reduce uncertainty about the likely outcomes of various policy ideas and facilitates the selection of the best policy.

According to one scholar, information is the “ex ante investment in understanding how different policy instruments map into likely outcomes in his policy area.”[6] Perfect expertise would mean knowing exactly how the world would change due to a policy intervention, or exactly which policy intervention would produce a desired goal.

It’s important to recognize that gathering information is costly. Investments in information – expertise – can be made through experience, training, commissioned research, or similar efforts. Such an investment also unfolds chronologically: one must first acquire information before it can be used.

This conception of expertise is quite satisfying to me. But let’s make things more complicated:

Expertise as Superior Skill

If two policymakers are given access to the exact same information, the more expert of the pair will be more capable of using that information to achieve a higher-quality result. Same ingredients; better meal! Proponents of this conception of expertise suggest there is more to it than merely obtaining the right set of facts.

A useful way of thinking about this is by imagining information in the hands of a novice. If a novice could wield the information just as capably as the expert – if, in effect, by receiving the information, the novice becomes an expert – then skill is not required.[7] If, on the other hand, the information is less valuable (or even worthless) in the hands of a novice, skill is required to convert information into policy impact.

Herbert Simon offered a great description of this type of expertise: “The situation has provided a cue: This cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”[8]

In foreign policy, I believe some tasks are more dependent on skill, while others are more dependent on information. A face-to-face negotiation with a stubborn opponent, for instance, requires a great deal of skill. But where does this skill come from?

Under What Conditions Does Skill Develop?

Research in cognitive psychology, sports science, and elsewhere suggests that as a person gains more experience in a domain, their skill tends to increase.[9]

Some of my favorite research comes from Gary Klein, who studies chess players, nurses, athletes, and others to understand the secrets to elite performance. His study of firefighters, for instance, found that “when faced with a complex situation, the commanders could see it as familiar and know how to react. The commanders’ secret was that their experience let them see a situation, even a nonroutine one, as an example of a prototype, so they knew the typical course of action right away. They were being skillful.”[10]

Klein’s research subjects often explained they had an intuition, or a “sixth sense,” about the right course of action. Klein termed this recognition-primed decision-making.

But not every recreational tennis player becomes Serena Williams. Two conditions emerge from Klein’s research as the basis for expertise to flourish: 1) one must receive clear feedback about the success of their actions, and 2) one must practice and improve based on this feedback.

Other researchers have validated these findings. Anders Ericsson’s research inspired the popular idea that it takes 10,000 hours of “deliberate practice” to become an expert. This sort of training requires effortful attention to converting one’s weaknesses into strengths.[11] Ericsson and many other researchers have demonstrated that deliberate practice seems more responsible for high performance than innate ability.

Simply playing a lot of tennis does not make you Serena Williams. Expertise requires deliberate practice. Photo by Boss Tweed.

Here’s the vital takeaway: Not all experience is equal. In the absence of deliberate practice, expertise will not readily form.[12] When we accumulate experience but don't receive objective feedback, our brains trick us into thinking we are much better at a task than we really are. We develop confidence, but not expertise.

This research presents foreign policy practitioners with a big flashing ‘proceed with caution’ warning sign. Both of Klein’s conditions are almost completely absent in our institutions of foreign policy. Most institutions of US foreign policy are not designed to provide clear feedback about success and failure, nor are most American diplomats trained on their strengths and weaknesses. I suspect overconfidence is rampant in the hallways of Foggy Bottom and the White House.

Beware: Here Lie Cognitive Biases!

Overconfidence is a common example of a cognitive bias: “systematic cognitive dispositions or inclinations in human thinking and reasoning that often do not comply with the tenets of logic, probability reasoning, and plausibility.”[13]

“One of the essential insights from psychological research,” write Oeberst and Imhoff, “is that people’s information processing is often biased.”[14] For example, confirmation bias is the tendency to ignore information inconsistent with one’s prior beliefs.[15] Intergroup bias is the propensity to apply different standards to the evaluation of the behavior of one’s in-group compared to one’s out-group.[16] The disproportionate weight given to the first piece of information one receives is called the anchoring bias.[17] Groupthink is the psychological phenomenon in which the desire for conformity within a team produces irrational decision-making;[18] it was famously implicated as a cause of President John F. Kennedy’s failed Bay of Pigs invasion.

The Nobel Prize-winning scholar of human judgment, Daniel Kahneman, named intuition “system one” thinking: it is fast, almost instinctual, but prone to failure in predictable ways.[19] The ingrained response to spring away from a snake at one’s feet serves humans well in nature, but the same cognitive wiring for snap judgment misleads humans in more complex settings in predictable ways.

Cognitive bias is such a pernicious threat to expertise because it is almost impossible to detect in oneself.[20] Many experienced actors falsely believe they have developed real expertise when they are no better than dart-throwing monkeys. This is why overconfidence, or the “illusion of validity,” is so dangerous.

Worse yet, research demonstrates that in many situations, as one’s confidence increases, their demonstrated abilities often get worse![21]

Elite policymakers appear quite susceptible to overconfidence,[22] in part because their biases tend to harden over time.[23] Overconfidence born from the successful attainment of power may lead one to underestimate possible risks, the difficulty of a task, or one’s true capabilities.[24] Indeed, overconfidence has been blamed for a number of infamous wars, including World War I,[25] Vietnam,[26] and the second Iraq War.[27] “Beware of intuition,” warns one scholar of human judgment.[28]

Some of the most illuminating studies of human judgment in foreign policy settings come from the psychology scholar Philip Tetlock. He designs tournaments for evaluating the accuracy of predictions about the near future, such as “Will Russia occupy Kyiv before the end of 2025?” It turns out that some people are consistently quite good at predicting the future, while many others are no better than random guesses. Surprisingly, analysts with the deepest subject-matter expertise were, on average, worse at predicting the future than generalists, even within their area of specialty.[29] Tetlock found that subject-matter experts tend to “toil devotedly within one tradition,” fitting new evidence within an overly confining theory of the world. This occasionally led to their bold predictions being correct, but more often, negative evidence was minimized, harming the expert’s accuracy. The difference, it turns out, may come down to those who are able to short-circuit their intuitive snap judgments.

Techniques to Improve Judgment

Tetlock’s work in forecasting demonstrates that the most accurate analysts employ a suite of tools to improve their accuracy, such as using base rate statistics and testing for scope sensitivity. As little as an hour of training in these techniques has been shown to improve analytical accuracy.[30] Unsurprisingly, the most talented forecasters regularly conduct post-mortem analyses of their successes and failures to continually improve.
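Forecast accuracy of the kind these tournaments measure is typically graded with the Brier score: the mean squared difference between the probabilities a forecaster stated and what actually happened. Here is a minimal sketch; the forecasts and outcomes are invented purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on five yes/no questions (1 = the event occurred)
outcomes  = [1, 0, 0, 1, 0]
cautious  = [0.6, 0.4, 0.3, 0.7, 0.4]   # hedges toward 50/50
confident = [0.9, 0.1, 0.2, 0.8, 0.3]   # bolder and, here, mostly right

print(round(brier_score(cautious, outcomes), 3))   # 0.132
print(round(brier_score(confident, outcomes), 3))  # 0.038
```

The scoring rule rewards well-calibrated boldness and punishes confident misses, which is why a post-mortem over one’s scored track record gives exactly the objective feedback that deliberate practice requires.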

Here’s where it gets scary: when one of Tetlock’s forecasting tournaments was sponsored by the Director of National Intelligence, Tetlock’s “superforecasters” outperformed intelligence community professionals in a three-year competition by more than 50%.[31] Unfortunately, follow-up research found that neither the policy nor the intelligence community adopted the improved forecasting techniques.[32]

Daniel Kahneman would label the techniques used by Tetlock’s superforecasters as “system two” thinking, in contrast to the snap judgment system one approach. System two is time-consuming, effortful, and systematic. It literally activates a different part of the brain and can help one overcome biases and make complicated decisions more effectively.[33]

In sum, while intuitive judgment can mislead decision-makers, there are ways to hone system two processing and sharpen one’s analytical accuracy.[35] If we define expertise as “consistently superior performance,” we must take these findings seriously if we hope to improve foreign policy expertise.

The Complications of Organizations

While some research has linked experience with effectiveness in foreign policy,[34] and other scholars have demonstrated that embassies led by career diplomats tend to be more effective than those led by inexperienced political appointees,[35] expertise is rarely studied within foreign policy organizations themselves.

This may be driven, at least in part, by the belief among leading foreign policy practitioners that expertise cannot be measured in foreign policy. Victoria Nuland, the just-retired Under Secretary of State for Political Affairs, stated that diplomacy “is not a science” during the State Department’s launch of its Learning Agenda in 2022.[36] Ambassador Barbara Bodine, former Dean of the School of Professional Studies at the Foreign Service Institute, explained that “it’s almost impossible to quantify what we do, and in fact, I think that there’s a great danger in trying to quantify it.” The highest-ranked living diplomat and now CIA director, William Burns, suggests “recovering the lost art of American diplomacy.”[37]

 

Virtually no training is offered for “good judgment” at the State Department despite this being a core precept for promotion.

 

Some scholars argue there is little room for expertise in the study of international affairs. Expertise is a socially constructed phenomenon, some suggest, dependent more on subjective reputation than on an objective evaluation of capabilities.[37] Peter Haas’ descriptions of epistemic communities, for instance, rest on “shared notions of validity – that is, intersubjective, internally defined criteria for weighing and validating knowledge in the domain of their expertise.”[38] In this understanding of expertise, actors may be experienced, possess more relevant knowledge, and hold more authority, but such characteristics are not associated with some sort of objectively greater capability. The difficulty of even defining what “success” looks like in foreign policy may lend credence to this view.[39]

Another strong challenge to a theory of foreign policy expertise is the ‘garbage can model’ of bureaucratic decision-making by Cohen, March, and Olsen.[40] They suggest that organizations — especially government ones — are “organized anarchies” characterized by 1) problematic preferences, 2) unclear decision-making technologies, and 3) fluid participation of actors in the process. Problems will thus consistently fail to connect with the best available solutions. Instead, “choices are made only when the shifting combinations of problems, solutions, and decision-makers happen to make action possible.”

A notable application of the model is Martha Feldman’s penetrating study of the Department of Energy, which found that analysts rarely offer clear solutions to well-specified problems.[41] Instead, the analyst’s work simply provides another interpretation of a problem or solution that might one day influence a policymaker.

The garbage can model dovetails with a swath of other literature that finds that decisions within organizations are often made carelessly.[42] Satisficing describes a decision-making model in which an actor or organization chooses the first acceptable policy option that arises rather than searching for higher-quality options.[43] Uncertainty avoidance may also be common in bureaucracies: when people are reluctant to face risk, pressing problems are prioritized over long-term strategic choices. Further, competition among bureaucrats focused primarily on protecting their own turf distorts decision-making.[44]

Other authors study the ways in which organizational learning may fail, depriving the leadership of the best possible advice and risking policy failure.[45] High turnover of political officials leads to organizational dysfunction [46], and inexperienced leadership can contribute to disastrous mistakes.[47] Efforts to learn from failures through programs focused on creating institutional memory often fall short.[48] Amy Zegart’s study of the creation of the modern US national security apparatus suggests organizations were “literally created by actors who are out for themselves, who put their own interests above national ones.”[49]

Other scholars suggest expertise is undervalued in our society today. In “The Death of Expertise,” Tom Nichols observes that few professional settings have serious accountability or external review mechanisms.[50] He worries about the “illusion of expertise provided by a limitless supply of facts” in the age of Google. Research shows U.S. policymakers working in national security extensively use flawed historical analogies in deciding policies [51] and are distrustful or downright dismissive of modern scientific methods that might help mitigate their biases.[52] The historians Neustadt and May explore how failures to learn lessons from the past contributed to a situation in which “so many results diverged so far from policy intentions.”[53]

Conclusion

This is an important time to study the nature of expertise in foreign policy. The rapid and ubiquitous advances of technology have brought unprecedented and virtually infinite amounts of information into the hands of citizens and leaders alike. The depth of knowledge one can acquire about events in a foreign country is extraordinary. Advances in the scientific study of the tools of foreign policy are improving our collective understanding. This is a golden age for expertise. Never have so many been so empowered to share their insight and knowledge.

It would be very poor research design to assume that every success was caused by expertise while every policy failure must have been dreamed up by a dope. Yet a lot of foreign policy writing seems to fall into this trap. One must be careful to remember that experts will fail too – just perhaps a little less often than amateurs.

Instead, advancing expertise requires us to think more like scientists. We need carefully calibrated tools to study the effects of different approaches on success and failure in foreign policy.

I’m sure I missed a lot in this article — please let me know! — but I hope this helps lay a stronger foundation for understanding expertise. Ultimately, a more clear-eyed understanding of expertise in foreign policy will help our government better capitalize on the capacity of its most important resource: human capital.

Next week: I will focus on the “Art of Foreign Policy,” and the limits of science and expertise.


Bibliography

[1] Burns, William J., and Linda Thomas-Greenfield. "The Transformation of Diplomacy." Foreign Affairs 99.6 (2020): 100-111.

[2] Path To Foreign Service, “The new 2022 Foreign Service Core Precepts,” Feb 2022. https://pathtoforeignservice.com/foreign-service-core-precepts/

[3] Ericsson KA, Lehmann AC. Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annu Rev Psychol. 1996;47:273-305.

[4] Prominent examples include: Aghion, Philippe, and Jean Tirole. 1997. “Formal and Real Authority in Organizations.” Journal of political economy 105(1): 1–29; Gailmard, Sean, and John W Patty. 2012a. “Formal Models of Bureaucracy.” Annual Review of Political Science 15: 353–77; Hirsch, Alexander V, and Kenneth W Shotts. 2012. “Policy‐Specific Information and Informal Agenda Power.” American Journal of Political Science 56(1): 67–83; De Mesquita, Ethan Bueno, and Matthew C Stephenson. 2007. “Regulatory Quality under Imperfect Oversight.” American Political Science Review 101(3): 605–20; Stephenson, Matthew C. “Bureaucratic Decision Costs and Endogenous Agency Expertise.” Journal of Law, Economics, & Organization 23, no. 2 (2007): 469–98. http://www.jstor.org/stable/40058187; and, Ting, Michael M. 2009. “Organizational Capacity.” The Journal of Law, Economics, & Organization 27(2): 245–71.

[5] Feldman, Martha S. 1989. Order without Design: Information Production and Policy Making. Stanford University Press.

[6] Turner, Ian R. 2017. “Political Agency, Oversight, and Bias: The Instrumental Value of Politicized Policymaking.” Working paper, February 8, 2017.

[7] Callander, Steven. 2011. “Searching for Good Policies.” American Political Science Review 105(4): 643–62.

[8] Simon, Herbert A. 1992. “What Is an ‘Explanation’ of Behavior?” Psychological science 3(3): 150–61.

[9] Ericsson, K Anders. 2006. “The Influence of Experience and Deliberate Practice on the Development of Superior Expert Performance.” The Cambridge Handbook of Expertise and Expert Performance 38: 685–705.

[10] Klein, Gary A. 2017. Sources of Power: How People Make Decisions. MIT press.

[11] Ericsson, K Anders, Ralf T Krampe, and Clemens Tesch-Römer. 1993. “The Role of Deliberate Practice in the Acquisition of Expert Performance.” Psychological Review 100(3): 363.

[12] Ericsson, K Anders. 2006. “The Influence of Experience and Deliberate Practice on the Development of Superior Expert Performance.” The Cambridge Handbook of Expertise and Expert Performance 38: 685–705.

[13] Korteling, J.E., and Alexander Toet. 2022. “Cognitive Biases.” In Encyclopedia of Behavioral Neuroscience, 2nd edition.

[14] Oeberst, A., & Imhoff, R. (2023). Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases. Perspectives on Psychological Science, 0(0).

[15] Nickerson R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

[16] Hewstone, Miles, Mark Rubin, and Hazel Willis. "Intergroup bias." Annual review of psychology 53, no. 1 (2002): 575-604.

[17] Tversky, Amos, and Daniel Kahneman. "Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty." science 185, no. 4157 (1974): 1124-1131.

[18] Janis, Irving Lester. Groupthink. Boston: Houghton Mifflin, 1983.

[19] Kahneman, Daniel. Thinking, fast and slow. MacMillan, 2011.

[20] Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” science 185(4157): 1124–31.

[21] Kahneman, Daniel, and Amos Tversky. 1973. “On the Psychology of Prediction.” Psychological Review 80(4): 237.

[22] Hafner-Burton, Emilie M, D Alex Hughes, and David G Victor. 2013. “The Cognitive Revolution and the Political Psychology of Elite Decision Making.” Perspectives on Politics 11(2): 368–86.

[23] Saunders, Elizabeth N. 2017. “No Substitute for Experience: Presidents, Advisers, and Information in Group Decision Making.” International Organization 71(S1): S219–47.

[24] Johnson, Dominic D P, and James H Fowler. 2011. “The Evolution of Overconfidence.” Nature 477(7364): 317.

[25] Tuchman, Barbara W. 2011. The March of Folly: From Troy to Vietnam. Random House.

[26] Johnson, Dominic DP, and James H. Fowler. "The evolution of overconfidence." Nature 477, no. 7364 (2011): 317-320.

[27] Johnson, Dominic D P. 2009. Overconfidence and War. Harvard University Press.

[28] Wilcox, John. "How Can We Get More Accurate: Recommendations About Human Judgment." In Human Judgment: How Accurate Is It, and How Can It Get Better?, pp. 113-133. Cham: Springer International Publishing, 2023.

[29] Tetlock, Philip E. 2006. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.

[30] Tetlock, Philip E., and Dan Gardner. Superforecasting: The art and science of prediction. Random House, 2016.

[31] Ibid.

[32] Samotin, Laura; Jeffrey Friedman, and Michael Horowitz. “Obstacles to Harnessing Analytic Innovation in Foreign Policy Analysis: A Case Study of Crowdsourcing in the U.S. Intelligence Community.” October 25, 2022. Pre-print. https://cpb-us-e1.wpmucdn.com/sites.dartmouth.edu/dist/0/433/files/2022/10/Samotinl-Friedman-Horowitz-Crowdsourcing-INS.pdf

[33] Kahneman, Daniel. Thinking, fast and slow. MacMillan, 2011.

[34] Saunders, Elizabeth N. 2017. “No Substitute for Experience: Presidents, Advisers, and Information in Group Decision Making.” International Organization 71(S1): 19–47.

[35] Haglund, Evan T. 2015. “Striped Pants versus Fat Cats: Ambassadorial Performance of Career Diplomats and Political Appointees.” Presidential Studies Quarterly 45(4): 653–78; and, Scoville, Ryan M. "Unqualified ambassadors." Duke Law Journal 69 (2019): 71.

[36] Harvard Kennedy School Belfer Center, “US Department of State Learning Agenda Launch.” June 30, 2022. Recorded on Zoom: https://harvard.zoom.us/rec/play/T6HmY1ezCYw3nfKbHTJfNwNFHXOi7VXjVUJk8WV-oH0YofAvrlt5KH8NpjPTZJ977R-Ffd91NaxLAVCT.9e22n1gbf54CD0Zu?continueMode=true&_x_zm_rtaid=RbAFYOVeQQ-_wv4XpKrrww.1657286119620.74ff8dd79f08e069fa88d3c6ba8f4f61&_x_zm_rhtaid=958

[37] Shanteau, James. 1992. “Competence in Experts: The Role of Task Characteristics.” Organizational behavior and human decision processes 53(2): 252–66.

[38] Haas, Peter M. 1992. “Introduction: Epistemic Communities and International Policy Coordination.” International Organization 46(1): 1–35.

[39] Baldwin, David A. "Success and failure in foreign policy." Annual Review of Political Science 3, no. 1 (2000): 167-182.

[40] Cohen, Michael D., James G. March, and Johan P. Olsen. "A garbage can model of organizational choice." Administrative science quarterly (1972): 1-25.

[41] Feldman, Martha S. 1989. Order without Design: Information Production and Policy Making. Stanford University Press.

[42] Such as via poor analogies or the misuse of history: Khong, Yuen Foong. 1992. Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965. Princeton University Press; and Neustadt, Richard E, and Ernest R May. 1986. Thinking in Time: The Uses of History for Decision Makers. Simon and Schuster.

[43] Simon, Herbert A. 1955. “A Behavioral Model of Rational Choice.” The Quarterly Journal of Economics 69(1): 99–118. https://doi.org/10.2307/1884852.

[44] Halperin, Morton H, and Priscilla Clapp. 2007. Bureaucratic Politics and Foreign Policy. Brookings Institution Press.

[45] Haas, Ernst B. 1990. When Knowledge Is Power: Three Models of Change in International Organizations. Univ of California Press; and, Sagan, Scott Douglas. 1995. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton University Press.

[46] Malis, Matt. “Conflict, Cooperation, and Delegated Diplomacy.” International Organization 75, no. 4 (2021)

[47] Saunders, Elizabeth N. 2017. “No Substitute for Experience: Presidents, Advisers, and Information in Group Decision Making.” International Organization 71(S1): S219–47.

[48] Hardt, Heidi. 2018. NATO’s Lessons in Crisis: Institutional Memory in International Organizations. Oxford University Press.

[49] Zegart, Amy B. 2000. Flawed by Design: The Evolution of the CIA, JCS, and NSC. Stanford University Press.

[50] Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. Oxford University Press.

[51] Khong, Yuen Foong. 1992. Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965. Princeton University Press.

[52] Avey, Paul C, Michael C. Desch, “What Do Policymakers Want From Us? Results of a Survey of Current and Former Senior National Security Decision Makers,” International Studies Quarterly, Volume 58, Issue 2, June 2014, Pages 227–246.

[53] Neustadt, Richard E, and Ernest R May. 1986. Thinking in Time: The Uses of History for Decision Makers. Simon and Schuster.
