A Critique of “Evidence-Based Decision Making”
Jeffrey L. Mayer
Abstract: Today, policy analysts of every ideological stripe proclaim their commitment to “evidence-based decision making.” The words feel good to say; they affirm our claim to authority and put us theoretically at the center of the policy-making action. Nonetheless, this essay offers four reasons for using them more mindfully. I argue: (i) that “evidence-based decision making,” as invoked by analysts across the ideological spectrum, is less a description of practice than a reformist ideology; (ii) that nonetheless, and for progressives ironically, it is also fundamentally conservative; (iii) that it expresses an elitist arrogance that is at odds with its humane objectives; and (iv) that, in the policy development process, it is operationally weak. To address these problems, I encourage: (i) a professional humility that includes greater tolerance for the disorder of democratic politics; (ii) an increased willingness to study ourselves and thereby to foster a sense of obligation to the past and future of our profession; and (iii), especially in this era of political polarization, a determination to cultivate and practice an ethics of conversation that teaches a willingness and ability not only to understand but to feel the sources of the arguments we reject.
I. Introduction
This essay has two purposes. The first is to help colleagues think more critically about whether their invocation of “evidence-based decision making” [1] really announces a commitment to higher standards of truth, or is merely a ritual incantation—used almost unconsciously, as a preemptive authority claim for their enterprise and status in the policy development process. The second is to engage colleagues in a conversation we often ignore: In a world where expertise is discounted, what’s an expert to do?
To begin that conversation, I encourage analysts to cultivate a sense of professional humility that includes greater tolerance for the disorder of democratic politics; to study ourselves and thereby to foster a sense of obligation to the past and future of our profession; and, especially in this era of political polarization, to nurture and practice an ethics of conversation that teaches a willingness and ability not only to understand but to feel the sources of the arguments we reject.
II. Why policy analysts like to say our projects are “evidence-based”
In America, the idea that social science should inform public policy has a long and disappointing history (Karl 1969). Through the 1950s at least, its motivating spirit was neo-liberal (Lasswell 1951). But the idea’s most recent expression is distinctly bipartisan: tracing its roots, on one side, to the neo-conservative critique of Great Society social programs and the demand that policy decisions should henceforth reflect a “calculus of efficiency”; and, on the other, to liberal remorse over the wreckage of the Johnson administration’s War on Poverty (Lynn 1999) [2], and a self-protective reaction to a Reagan-era anti-intellectualism that expressed itself for a time as an almost complete erasure of the word “policy” from the nomenclature of the federal executive (Mayer 2003).
Today, analysts of every ideological stripe proclaim their commitment to “evidence-based decision making.” The words feel good to say; they have self-affirming appeal. “Evidence-based decision making” is, after all, the province of those specially trained to develop the evidence. Right and left, the phrase has become axiomatic because it affirms our claim to authority and puts us theoretically at the center of the action. However, there are good reasons to reference the axiom more self-consciously than we do.
III. Four problems with “evidence-based decision making” as a principle of action
I argue here: (i) that “evidence-based decision making” is less a description of practice than a reformist ideology; (ii) that, nonetheless and for progressives ironically, it is also fundamentally conservative; (iii) that it expresses an elitist arrogance that is at odds with its humane objectives; and (iv) that, in the policy development process, it is operationally weak.
1. “Evidence-based decision making” is a reformist ideology.
One reason to consider “evidence-based decision making” an ideology rather than a description or even an aspiration is that the assertion itself is not evidence-based, at least in the sense that we policy analysts spend little time examining the history of our profession, the effects of our expertise on ourselves and, closer to our daily business, the ways we choose questions, identify evidence, and conduct analyses. Four decades ago, Aaron Wildavsky (1979, 214) called on us to maintain a critical perspective on ourselves, our organizations, and—for those of us in government—our political masters. We’re still not listening.
In America, the impulse to base decisions on a systematic collection and consideration of evidence first appeared as an activist ideology—a belief, embodied in the marriage of social science and social reform, that a marshalling of evidence about social problems could point the way and summon the will to invest in public solutions. The ideology had its first official expression in Herbert Hoover’s President’s Research Committee on Social Trends. Hoover expected his Committee to identify problems and propose solutions across the entire range of social ills. He lost interest in its work in 1931, however, when it failed to produce evidence that the national crime wave had been vastly exaggerated. By then, of course, Mr. Hoover had other things on his mind (Karl 1969, 383).
In time, neo-conservative reaction and progressive disillusionment squeezed the reformist zeal out of “evidence-based decision making.” More worrisome, they made clear that the link between “evidence-based” and “decision making” was more problematic than today’s casual use of the connected phrase suggests. Viewed coldly, the “evidence” is almost always insufficiently compelling to justify bold “decision making” [3]—in part because it is usually varied enough to support alternative causal stories predicated on competing value frames and assumptions about how people behave [4]. Thus, for example, in the case of poverty research, a predilection for stories that feature personal responsibility may lead analysts to examine links between poverty and the dissolution of nuclear families or illegitimacy, or to explore the possibility that public aid programs create dependent populations. Alternatively, an inclination to understand poverty as a structural phenomenon—an intractable consequence of historic inequality, say, or an inevitable by-product of capitalist economic growth—might encourage research on the breadth and perniciousness of the problem itself (e.g., its damaging effects on women and children), or the inadequacy of piecemeal or voluntarist solutions.
Similarly, in the continuing debate on trade liberalization, free-market economists are likely to adduce evidence linking domestic industrial decline to cyclical factors, or imbalances in saving and investment, or long-term shifts in comparative advantage. Meanwhile, economists who are less confident in market discipline or less discomfited by activist government often focus on the symptom itself (e.g., a precipitous loss of U.S. market share) as evidence of foreign predation, or explore the notion that increasing returns to scale confer permanent advantages on national winners in global competition. In each of these and many other cases, the stories analysts choose to tell depend heavily on their entering biases, and rarely point decision makers in a single direction.
Moreover, in government practice, evidence-based policy guidance that departs substantially from the preferences of political leadership is generally ignored. Actual decision makers regularly disrespect the evidence. Like President Hoover, they don’t like waiting for it, and are most likely to use it to justify predetermined positions [5]. On these terms, the only truth test they require is plausibility. Still, adherents of “evidence-based decision making” continue to hold as articles of faith that thought should precede action, that analysis can realistically aspire to objectivity, and that, in a better world, its conclusions would transcend politics. The futility of our faith is revealed on a daily basis—which is, I suppose, what makes it faith or, in the context of this argument, ideology.
2. “Evidence-based decision making” is fundamentally conservative.
Despite its reformist origins, “evidence-based decision making” is structurally conservative, in both theory and practice. The tendency inheres in the apparently sensible notion that people should think before they act. Thought delays action, enables indecision, and leads too often to ambiguous outcomes. One irony of Lyndon Johnson’s War on Poverty, however, is that public action not grounded in careful thought may lead to roughly the same place. In Henry Aaron’s account of that War (Aaron 1978, 159), “[at first] a sense of urgent need to act … commanded analysts and others to suggest policies best calculated, given available information, to achieve desired ends.” But “[w]hen the passion waned,” partly “because of frustration at the apparently mixed results of actual policies, the imperatives of the analytical process won out” [6]. Arguably, one overhang of this seminal disappointment is that policy analysts today are unwilling or unable to consider big solutions to big problems [7].
In the 1970s, in a nation that seemed obsessed by its limits, the conservative potential of policy science became actual in the form of a shifting disciplinary culture and an evolving choice of analytic methodologies. Both of these developments were foreshadowed, if not foreseen, when, in August 1965, President Johnson ordered a replication of the Defense Department’s Planning-Programming-Budgeting System across the civilian wing of the executive bureaucracy. The cultural significance of Johnson’s decision was that it increased the likelihood that the offices created to implement the new approach would be staffed eventually by economists rather than sociologists (Guillemin and Horowitz 1983) [8]. The economists I know are probably less susceptible than sociologists to reformist enthusiasm.
This cultural change coincided with a methodological shift away from program design driven by “a sense of urgent need to act” and toward evaluation research and cost-benefit analysis. Proponents of evaluation research, animated by a spirit of prosecutorial inquiry, looked backward to the indifferent success of Great Society programs and declared that “[t]he role of social science lies not in the formulation of social policy, but in the measurement of its results” (Moynihan 1970, 193). Advocates of cost-benefit analysis looked forward, intending in the name of reason to discourage any further experiment with large-scale social engineering—which, indeed, they did.
By the beginning of Ronald Reagan’s presidency, cost-benefit analysis had become a conservative mantra [9]—partly because it sought to limit analysis to monetizable costs and benefits; partly because costs are almost always easier to measure than benefits (as in the case of balancing the cost of increasing the CDC’s budget against the benefit of a pandemic that doesn’t happen); and partly because estimates of costs and benefits are often matters of analytic choice. This means that they are almost always manipulable on one side and assailable on the other. Advanced in the name of precision, therefore, the results of cost-benefit analysis are almost never dispositive—especially where the issue is a social good rather than a battleship.
3. “Evidence-based decision making” is unavoidably elitist.
Adherence to “evidence-based decision making” implies belief in the scientific and political superiority of expertise over other forms of knowing and deciding. After all, the ability to identify and analyze evidence requires specialized training and the mastery of sophisticated testing methodologies. Not everyone is qualified to support “evidence-based decision making.” Moreover, proponents seem to suggest that opposing policy views are not evidence-based, that their advocates confuse facts and values, embrace fragmentary evidence, or apply inferior truth tests. By implication at least, they also anoint a scientific priesthood that is divorced from, and even contemptuous of, the people “evidence-based decision making” is supposed to serve. To the extent that policy professionals assume that mantle uncritically, wrapping themselves comfortably in a kind of scientistic privilege, we become complicit in our own anti-elitist vilification. And we deserve what we get.
Moreover, for policy analysts in a democracy, professionalism raises a moral issue. The profession’s scientific aspirations and analytic methods cultivate resistance to its own basic purposes—which must be to dignify people and improve the quality of their lives. Critics explain this lack of sympathy partly as a by-product of scientific posture, the dedication to objectivity, which is also implicitly an ambition to transcend moral concern in the determination of fact; and partly as a result of an appetite for explanatory power, for general rather than particular descriptions and explanations. In effect, they say, the training and experience of policy analysts encourage them to discount popular expressions of non-general suffering both as authoritative knowledge and as predicates for public initiative.
The critics also point to other factors: first, to the principle of skepticism that underlies all modern science and which biases the policy sciences in particular toward cautious and incremental prescriptions; and second, to the social sciences’ increasing emphasis on quantification—the creation and use of statistics. The need to quantify, they contend, cloisters analysts in a reality of abstraction divorced from the intensity of direct experience, and discourages their interest in problems and solutions that are hard to measure.
4. “Evidence-based decision making” is an appeal from dialectical and political weakness.
In practice, “evidence-based decision making” fails doubly—as a principle of inquiry and as a predicate for action. It fails, first, as a claim to authority in ongoing conversations about public problems and solutions because, as noted above, policy experts working in good faith, starting from different or even the same value positions, regularly disagree. And where the evidence supports different causal stories—as it usually does—people, including public decision makers, have broad freedom to believe what they will. In a more cynical vein, it is also clear that some experts do not work in good faith, but only hire out to politicians looking for credentialed voices willing to say what they want the public to hear. And that too subverts the authority claims of expertise.
As a predicate for public action, “evidence-based decision making” fails again, partly for the same reason—because the sciences marshalled to support it almost never produce compelling guidance. Experts invoking the principle seem to assume, despite Paul Cairney’s doubt that “the evidence could ever speak for itself” (Cairney 2016, 24), that it can and does. The failure of this assumption stems partly from its embrace of a decision-making model that excludes important variables from the decision function. Public decision makers likely bring all of themselves to the act of decision—their knowledge, their ignorance, their personal and political interests, and especially their emotions: their fear, faith, greed, vengefulness, pride, and loyalty. In contrast, expert analysts operate in the single dimension of reason and fact. Whole people practice politics; fragments of people practice analysis. As a result, in head-to-head competition, analysis is hopelessly overmatched. And the appeal for “evidence-based decision making” is regularly revealed as a feeble maneuver for status and influence in a process that analysis cannot control.
IV. So, what’s an expert to do?
Having conceded the point that “evidence-based” policy analysis is never dispositive, how should policy analysts understand what they do? Some students of the profession, reckoning with the limits of their science as an instrument of inquiry and basis of professional authority, find themselves demoted to an auxiliary status in an egalitarian and participatory system of democratic decision making. See, for example, Deborah Stone’s critique of “the rationalist project” and embrace of practical reason or “political reasoning” (Stone 2002, 9); and Mary Jo Bane’s 2001 Association for Public Policy Analysis and Management presidential address. “The most important thing we [policy analysts] should do in thinking about our roles in policy making,” Bane advised, “is to shift our perception from seeing ourselves mostly as expert problem solvers to seeing ourselves mostly as participants in democratic deliberation” (Bane 2001, 194-195).
A second line of reinvention concedes the weakness of policy analysis at the point of decision, but argues, in Carol Weiss’s telling, that it works powerfully over time to shape the operative perception of problems and solutions (Weiss 1977). Adding to this argument, Charles Lindblom and Edward Woodhouse (1993, 137) declare that policy professionals “have a significant role to play in helping hundreds of millions of humans to think more clearly and press more assertively for effective social problem solving”—i.e., in improving the quality of democracy.
Yet these idealized descriptions of the role of policy professionals in democratic systems—as shapers of perception and enrichers of public conversation—fall short as accounts of what we do every day. And because our methods do not often produce compelling conclusions, what we do is argue. In the classic formulation of our role, we argue to senior decision makers for one or another point of view or policy choice. We may also argue directly to the public—e.g., through opinion essays and journal articles, as TV talking heads, and over all manner of more recently established electronic media. But increasingly, our most important interlocutors are other policy analysts. “Analysts, not their clients,” Beryl Radin argues, “[have become] the first-line conduit for policy bargaining” (Radin 2000, 36).
In this regard, Giandomenico Majone argues that policy analysts are more like lawyers than engineers or scientists, and their basic skills are argumentative (Majone 1989). But where a court proceeding is at base a dialogue between the defense and the prosecution, policy analysts engage in a multi-logue with similarly trained analysts arrayed across the levels, branches, and separate agencies of government, and in think tanks, universities, professional associations, and public interest groups—in what Walter Williams (1992, 117) has described as a flowering of “multiple advocacy.”
This flowering yields at least two kinds of fruit. In describing the first, Deborah Stone argues that the most important policy discussions define problematic reality—that is, they establish the existence of public problems, develop operative notions of cause, suggest how costs of remediation should be assessed, and say who should be responsible for implementing solutions. The process may reflect high-quality analysis, but it is also inescapably competitive and political. And the “ultimate test of [its] political success is whether it becomes the dominant belief and guiding assumption for policy makers” (Stone 2002, 203); or in Aaron Wildavsky’s terms, whether it achieves “conceptual hegemony” (Wildavsky 1979, 13).
A second, less Hobbesian potential of policy multi-logue is that adversarial conversation among similarly skilled advocates might lead, if not to consensus, then at least to mutual understanding. “One promise of policy analysis,” Wildavsky wrote, “is that through repeated interactions, common understandings (though not necessarily, of course, common positions) will grow, so that action will be better informed” (Wildavsky 1985, 33). Today, in a more polarized era, when conversation across political boundaries often seems futile, it may still be reasonable to hope that competing elites, shaped by the same disciplinary norms, and examining similar bodies of evidence with similar analytic tools might, in the right circumstances (e.g., somewhere out of public view), be able to hear one another.
Even if it does not produce better policies, the resulting conversation would be desirable for the same reasons that democracy is desirable: because of the values it embodies and its effects on the minds and hearts of politically interested citizens. In Deborah Stone’s rendering, the experience of communicating reasons for policy choices affirms and cultivates political community. “In the process of articulating reasons, we show each other how we see the world. We may not see eye to eye, yet there is a world of difference between a political process in which people honestly try to understand how the world looks from different vantage points, and one in which people claim from the start that their vantage point is the right one” (Stone 2002, 380). Stone’s observations invite a more general discussion of how policy analysts should conduct ourselves as professionals, of how we can reach out to other analysts, especially those who do not share our own world views and causal stories, and of what we should teach our students. These are the subjects of this essay’s addendum on curriculum.
V. Addendum on Curriculum
In graduate schools of public policy, but of course not there alone, discussions of curriculum are necessarily bounded, first, by the limited demand for courses that do not help students compete successfully for future jobs, and, second, by the professoriate’s resistance to incursions on its academic turf. Nonetheless, the following paragraphs assume that curricula are always works in progress. And they are offered as part of that process, for my colleagues to consider as possible enrichments of current course material.
A. Teaching humility
Policy analysts are uncommonly vulnerable to the sin of pride. Our technocratic inclinations, reinforced by specialized training and cultural isolation, always tend to dominate our more abstract allegiances to relieving human misery and upholding democratic ideals. To address this challenge, Wildavsky prescribes “a real but unknown intensity of self-criticism” (Wildavsky 1979, 214). As a principle of thought and feeling, “self-criticism” implies not only an active skepticism about expert knowledge claims—scientific method alone would require as much—but also respect for other knowledge claims and, indeed, for non-cognitive objectives in the democratic decision process (e.g., liberty, community, equality). It should teach us tolerance for the disorder of democratic politics and induce, among those of us working in government agencies, a powerful predilection toward political control of the career bureaucracy.
B. Studying ourselves
Policy professionals suffer from a structural authority deficit. Unlike law or medicine, the profession has no core body of knowledge, no comparable focus of common study. As a consequence, we do not always sense our community or consider our experience in general terms. This weakness of self-awareness undercuts our authority as professionals and limits our influence in the public decision process. Alasdair MacIntyre’s Burkean description of a “practice” captures some of the idea of community we should be striving to build. “To enter into a practice,” MacIntyre observes, “is to enter into a relationship not only with its contemporary practitioners, but also with those who have preceded us in the practice, particularly those whose achievements extended the reach of the practice to its present point” (MacIntyre 1984, 194). On these terms, we policy analysts should take time—not only at the beginning of our careers—to study ourselves, and thereby to maintain a sense of direction, of our obligation to the past and future of our profession, and of the respect we owe one another.
C. Creating an ethics of conversation
Students of the profession often argue that a healthy competition of policy ideas requires honest conversation among policy adversaries. In 1993, Lindblom and Woodhouse challenged analysts to think about how they could “best serve to make partisan interactions more thoughtful and effective” (Lindblom and Woodhouse 1993, 127). In 2001, Mary Jo Bane renewed the challenge, urging her colleagues “to engage as a profession with issues around the rules of the game,” including not only rules of “professional integrity in the standard disciplinary sense, but also [rules] about how we present ourselves and ask to be recognized as participants in discussion” (Bane 2001, 196).
Bane’s ethics of conversation includes some elements that are beyond criticism—e.g., protecting open discussion except in the case of those who would use information to degrade the process or “stereotype or close off debate.” It also includes elements that appeal in principle, but may be problematic in practice—e.g., putting our “whole thinking process on the table for discussion and examination,” including “background assumptions, value choices, and assessments of what value outcomes are more and less important” (Bane 2001, 195). But my purpose here is not to offer a complete list of prescriptions; it is rather to focus on what I take to be a core requirement of successful communication between ideological camps—feeling opponents’ arguments.
The best expression of this idea I know is John Stuart Mill’s treatment of it in On Liberty (1968, 97) as a requirement of “forensic success”: “He who knows only his own side of the case,” says Mill, “knows little of that…. Nor is it enough that he should hear the arguments of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations.” Mill continues:
That is not the way to do justice to the arguments, or bring them into real contact with his own mind. He must be able to hear them from persons who actually believe them; who defend them in earnest, and do their very utmost for them. He must know them in their most plausible and persuasive form; he must feel the whole force of the difficulty which the true view of the subject has to encounter and dispose of; else he will never really possess himself of the portion of truth which meets and removes that difficulty.
I see three ways to encourage frames of mind that make Mill’s ideal approachable. The first, and perhaps the least appealing, is two-memo assignments asking students to argue for and against a given policy and, on each side, requiring them to refute the likely arguments of the other. In practice, however, especially on the side not favored by the writer, such memos rarely plumb adversaries’ minds and hearts. A second approach might involve faculty recruitment. Stuart Butler, when he taught the McCourt School ethics course, used his considerable intellectual charm to get students to see seriousness in the Heritage Foundation’s side of things. It’s unclear if his effort made a lasting impression, but his course was always well-attended. Of the three possibilities, however, the third, role playing in the context of a policy game, seems the most promising. If the infamous Stanford prison experiment could make students playing guards and prisoners feel like the real things, a policy game might help policy students feel the perspectives of people they dislike and discount.
Author Biography
Jeff Mayer co-directs the McCourt School Writing Center. He retired from the federal government in 2013 after nearly four decades of service, much of it as Director of Policy Development in the Commerce Department’s Economics and Statistics Administration. He is a graduate of Amherst College and the London School of Economics, and received his Ph.D. in political philosophy from Columbia University.
Endnotes
[1] I understand the term “evidence-based” to mean statements resulting from the rigorous interrogation of systematically gathered facts of experience—statements that can withstand the kinds of truth tests taught in graduate schools of public policy. In the public sphere, linking the terms “evidence-based” and “decision making” expresses the serious, if insufficiently examined, hope that the former can influence the latter.
[2] Lynn (1999, 415) recalls the like-mindedness of early liberal advocates of “evidence-based policy advice”—among them Alice Rivlin, Charles Schultze, Aaron Wildavsky, Hugh Heclo, and Arnold Meltsner.
[3] On the practical disjuncture of analysis and policy from a neo-conservative perspective, see Edward Banfield (1980) and Alasdair MacIntyre (1984). Banfield warned that “policy scientists” might come to dominate the public decision process so that “policy makers…find the bureaucracy more resistant than ever to control” (3). MacIntyre argued that policy experts simply use social science to disguise their arbitrary preferences, such that the “most effective bureaucrat is the best actor” (107). At about the same time, observers at the other end of the political spectrum were lodging virtually the same criticism. See for example, Henry Aaron’s debrief on the War on Poverty (Aaron 1978). A cardinal weakness of social science as a guide to policy and source of authority, Aaron wrote, was its inability to generate compelling conclusions. Inadequate data, the need to compartmentalize problems and to abstract complex realities into a few key variables, and the complexity of human behavior itself virtually guaranteed the opportunistic use and extended criticism of social science research and its conclusions.
[4] Deborah Stone (2002) offers a fine development of this idea. See especially page 197: “[P]olicy politics,” she writes, “involves strategically portraying issues so that they fit one causal idea or another. The different sides in an issue act as if they are trying to find the ‘true’ cause, but they are always struggling to influence which idea is selected to guide policy.”
[5] My first exposure to this prevailing feature of decision making in the executive bureaucracy occurred in the late summer of 1978. With the White House facing pressure for financial assistance from then-ailing American Motors Corporation (AMC) and its congressional sponsors, the President’s chief domestic policy advisor, Stuart Eizenstat, asked the Secretary of Commerce to consider the circumstances in which it might be appropriate for the federal government to provide financial aid to a major corporation. As a member of the Secretary’s policy staff, and with considerable help from my colleagues, through August and September, I plumbed the historical precedents, analyzed AMC’s business prospects, gauged the employment effects of a possible bankruptcy, pondered the politics of alternative solutions, and prepared a memorandum evaluating in detail five graduated options for federal intervention. The White House response was swift and disdainful, and went something like: “Look, we’re going to get them $50 million or $100 million. Just tell us which it should be and why.” As I recall, the second memorandum was easier to write than the first.
[6] Bruce MacLaury’s Foreword (Aaron 1978) summarizes Aaron’s argument that “research tends to be a conservative force because it fosters skepticism and caution by shifting attention from moral commitment to analytic problems that rarely have clear cut or simple solutions.”
[7] I do not mean to say here that American governments can no longer undertake great projects—only that the conventional tools of policy analysis can’t point the way. As I write this note, the Biden administration has secured enactment of perhaps the largest economic stimulus program in the nation’s history. Analytic documents undergirding the Biden plan, if they exist, are not publicly available. Based on press reports, however, the most telling argument supporting the President’s decision to press forward—referencing the Obama administration’s attenuated response to the Great Recession of 2008 and the current administration’s potentially narrow window of opportunity—seems to have been to “go big or go home.”
[8] Guillemin and Horowitz (1983, 192) mark the cross-over year as 1972.
[9] Neo-conservative cost-benefit analysis has focused especially on federal regulations. In the last year of the Ford administration, when I started work as a policy staffer at the Commerce Department, the departmental secretariat included an office dedicated to developing a regulatory budget. As far as I could tell, the benefits of interest in that budget were estimated not as outcomes like cleaner water or fairer markets, but as reductions in burdens on American business. Cost-benefit analysis has remained the tool of choice for regulatory reform, especially in Republican administrations. One of the Trump administration’s first acts, Executive Order 13771 (January 30, 2017), mandated departmental regulatory budgets that limited the net cost of all regulatory changes to zero or less.
References
Aaron, Henry J. 1978. Politics and the Professors. Washington, DC: The Brookings Institution.
Bane, Mary Jo. 2001. “Presidential Address—Expertise, Advocacy and Deliberation: Lessons from Welfare Reform.” Journal of Policy Analysis and Management 20, no. 2 (Spring): 191-197.
Banfield, Edward. 1980. “Policy Science as Metaphysical Madness.” In Bureaucrats, Policy Analysts, Statesmen: Who Leads?, edited by Robert A. Goldwin, 1-19. Washington, DC: American Enterprise Institute.
Cairney, Paul. 2016. The Politics of Evidence-Based Policy Making. London: Palgrave Macmillan.
Guillemin, Jeanne and Irving Lewis Horowitz. 1983. “Social Research and Political Advocacy.” In Ethics, the Social Sciences, and Policy Analysis, edited by Daniel Callahan and Bruce Jennings. New York: Plenum Press.
Karl, Barry. 1969. “Presidential Planning and Social Science Research: Mr. Hoover’s Experts.” In Perspectives in American History III, 347-409. Cambridge, MA: Charles Warren Center for Studies in American History.
Lasswell, Harold. 1951. “The Policy Orientation.” In The Policy Sciences, edited by Daniel Lerner and Harold Lasswell, 3-15. Stanford, CA: Stanford University Press.
Lindblom, Charles and Edward Woodhouse. 1993. The Policy-Making Process. 3rd ed. Englewood Cliffs, NJ: Prentice Hall.
Lynn, Laurence E. 1999. “A Place at the Table: Policy Analysis, Its Postpositive Critics, and the Future of the Practice.” Journal of Policy Analysis and Management 18, no. 3 (Summer): 411-424.
MacIntyre, Alasdair. 1984. After Virtue. 2nd ed. Notre Dame, IN: University of Notre Dame Press.
Majone, Giandomenico. 1989. Evidence, Argument, and Persuasion in the Policy Process. New Haven: Yale University Press.
Mayer, Jeffrey L. 2003. “Present at the Revision.” Journal of Policy Analysis and Management 22, no. 2 (Spring): 310-12.
Mill, John Stuart. 1859. Utilitarianism, Liberty, Representative Government. Reprint, 1968. London: Dent, Everyman’s Library.
Moynihan, Daniel Patrick. 1969. Maximum Feasible Misunderstanding. Reprint, 1970. New York: Macmillan.
Radin, Beryl. 2000. Beyond Machiavelli: Policy Analysis Comes of Age. Washington, DC: Georgetown University Press.
Stone, Deborah. 2002. Policy Paradox: The Art of Political Decision Making. Rev. ed. New York: W.W. Norton.
Weiss, Carol. 1977. “Research for Policy’s Sake: The Enlightenment Function of Social Research.” Policy Analysis 3, no. 4 (Fall): 531-545.
Wildavsky, Aaron. 1979. Speaking Truth to Power. Boston: Little, Brown.
Wildavsky, Aaron. 1985. “The Once and Future School of Public Policy.” The Public Interest 79 (Spring): 25-41.
Williams, Walter. 1992. “White House Domestic Policy Analysis.” In Organizations for Policy Analysis: Helping Government Think, edited by Carol Weiss, 101-121. London: Sage.