Policy Analysis

Nudges, Sludges, and the Ethics of Choice Architecture

From Thaler and Sunstein's 'Nudge' to the UK Behavioural Insights Team — how default effects, friction, and libertarian paternalism reshaped policy, and why critics worry about manipulation.

Reckonomics Editorial

The Book That Launched a Policy Movement

In 2008, the economist Richard Thaler and the legal scholar Cass Sunstein published Nudge: Improving Decisions About Health, Wealth, and Happiness. The book was not a work of original research — it synthesized decades of findings from behavioral economics and psychology — but it achieved something that academic papers rarely do: it changed how governments thought about policy design. Within a few years of its publication, “nudge units” had been established in the United Kingdom, the United States, and dozens of other countries. The word “nudge” entered the policy vocabulary alongside “tax,” “regulate,” and “ban” as a distinct tool of governance.

The core argument was straightforward. People make decisions within architectures — physical, institutional, digital — that shape their choices in ways they often do not notice. The order of items on a menu, the layout of a form, the default option on a software installation, the placement of healthy food in a cafeteria — all of these are elements of choice architecture. Someone has to decide how to arrange these elements, and that someone is a choice architect. The question is not whether choice architecture exists but whether it is designed well or badly.

Thaler and Sunstein argued that choice architects should design environments to help people make better decisions as judged by people’s own stated preferences, while preserving their freedom to choose otherwise. They called this philosophy “libertarian paternalism” — libertarian because it preserves freedom of choice, paternalistic because it steers people toward outcomes they would, on reflection, prefer. A nudge, in their definition, is “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”

The definition was deliberately narrow. A nudge is not a ban, not a tax, not a mandate. Moving fruit to eye level in a cafeteria is a nudge; banning candy is not. Auto-enrolling employees in a retirement savings plan is a nudge; requiring them to save is not. The narrowness was strategic: it allowed Thaler and Sunstein to argue that nudges were a minimal, freedom-preserving intervention that both liberals and libertarians could support — a “third way” between command-and-control regulation and laissez-faire.

The Power of Defaults

The single most important finding behind the nudge agenda is the power of default options. When a choice has a default — a pre-selected option that takes effect unless the person actively opts out — a large fraction of people stick with it. This is true even for decisions with significant consequences and even when opting out is easy.

The most dramatic demonstration comes from organ donation. Countries that use an opt-in system (you must actively register to be a donor) have donation rates of 4% to 27%. Countries that use an opt-out system (you are presumed to be a donor unless you actively decline) have rates of 85% to almost 100%. The difference is not explained by cultural attitudes toward donation — surveys show similar levels of support across countries — but by the friction of opting in versus the inertia of the default. Johnson and Goldstein’s 2003 study of European donation rates made this case with striking clarity, and it has become one of the most cited examples in the behavioral policy literature.

Retirement savings tell a similar story. Before auto-enrollment, participation rates in employer-sponsored retirement plans in the United States hovered around 60% to 70%, even when employers offered matching contributions. After auto-enrollment became widespread — prompted in part by the Pension Protection Act of 2006, which was directly influenced by behavioral research — participation rates rose to over 90% in many firms. The change was not in the plan’s generosity or in workers’ financial literacy; it was in the default.

Thaler and his colleague Shlomo Benartzi developed “Save More Tomorrow,” a program that asked employees to commit in advance to allocating a portion of future raises to retirement savings. Because the increase came out of money workers had not yet received, it did not feel like a loss from current income. The program increased savings rates dramatically — from an average of 3.5% to 13.6% over the course of four pay raises in the original study. It worked because it aligned with, rather than fought against, the psychological forces identified by prospect theory: loss aversion, present bias, and inertia.
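The escalation mechanism can be sketched as a simple schedule. The numbers below (escalation step per raise, contribution cap) are illustrative assumptions, not parameters reported by the study; the point is only the shape of the commitment device:

```python
# Illustrative sketch of a Save More Tomorrow-style escalation schedule.
# The step size and cap are hypothetical; only the starting rate (3.5%)
# comes from the original study's reported average.

def smart_schedule(start_rate=0.035, step=0.03, cap=0.15, raises=4):
    """Return the savings rate before and after each pay raise,
    assuming a fixed escalation step per raise, capped at `cap`."""
    rate = start_rate
    history = [rate]
    for _ in range(raises):
        # Each raise triggers a pre-committed increase in the savings rate,
        # so the extra contribution never feels like a cut to take-home pay.
        rate = min(rate + step, cap)
        history.append(rate)
    return history

rates = smart_schedule()
print([round(r, 3) for r in rates])
```

Because each increase is tied to a future raise, the worker's nominal take-home pay never falls, which is what lets the design sidestep loss aversion.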

Sludges: Friction That Works Against You

If a nudge is a design choice that makes it easier to do something beneficial, a sludge is a design choice that makes it harder. Thaler introduced the term in a 2018 paper to describe excessive or unjustified friction in processes — paperwork, confusing forms, unnecessary steps, buried opt-out procedures — that prevent people from accessing benefits they are entitled to or making choices they want to make.

Sludges are everywhere. Claiming a tax rebate that requires filling out a 20-page form is sludge. Canceling a gym membership that requires a certified letter sent to a specific address is sludge. Applying for government benefits through a byzantine online portal that crashes on mobile devices is sludge. Filing an insurance claim that requires three phone calls, each with a 40-minute wait, is sludge.

The concept of sludge clarified something important about choice architecture: it is not neutral. The status quo is never a “no-design” option. Every process has friction or smoothness built into it, and those design choices have distributional consequences. Sludges disproportionately burden people with less time, less education, less access to technology, and less ability to navigate bureaucracy — which is to say, they disproportionately burden the poor. Herd and Moynihan’s 2019 book Administrative Burden documented how the complexity of government benefit programs functions as a de facto eligibility screen, excluding many people who technically qualify but cannot navigate the application process.

Sludge auditing — systematically reviewing processes for unjustified friction — has become a distinct practice within behavioral policy. The idea is simple: if you want people to exercise a right or access a benefit, you should make it easy. If a process is hard, there should be a good reason (preventing fraud, ensuring informed consent), and the friction should be proportional to the justification.

The UK Behavioural Insights Team: Nudge Goes to Government

The most prominent institutional expression of the nudge agenda was the UK’s Behavioural Insights Team (BIT), established in 2010 within the Cabinet Office under Prime Minister David Cameron. Initially a small team led by David Halpern, it was given a two-year mandate: demonstrate measurable policy improvements using behavioral insights or be shut down.

BIT survived. Its early projects became case studies in behavioral policy. It redesigned tax collection letters, adding a single sentence — “Most people in your area have already paid their tax” — that increased timely payment rates by several percentage points, generating millions of pounds in accelerated revenue. It simplified enrollment processes for government programs. It tested different approaches to job-seeker support, finding that small changes in how commitments were framed could significantly increase reemployment rates.

The team’s methodology was distinctive: it insisted on randomized controlled trials (RCTs) for every intervention. This gave its findings a credibility that traditional policy analysis, which relies heavily on observational data and modeling, often lacks. By 2014, BIT had spun out of the Cabinet Office as a partly privatized “social purpose company,” and its model had been replicated in the United States (the Obama administration’s Social and Behavioral Sciences Team), Australia, Canada, Singapore, and many other countries.
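The basic statistics behind such trials are simple: each letter variant is randomly assigned, and payment rates are compared between arms. A minimal sketch of the standard two-proportion z-test, using invented counts (these are not BIT's trial figures):

```python
# Two-proportion z-test for a hypothetical tax-letter RCT.
# The counts below are made up for illustration, not BIT's data.
from math import sqrt, erf

def two_proportion_z(paid_a, n_a, paid_b, n_b):
    """z statistic and two-sided p-value for H0: equal payment rates
    in treatment arm A and control arm B."""
    p_a, p_b = paid_a / n_a, paid_b / n_b
    pooled = (paid_a + paid_b) / (n_a + n_b)          # pooled payment rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical arms: social-norm letter vs. standard letter
z, p = two_proportion_z(paid_a=6_800, n_a=10_000, paid_b=6_500, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.6f}")
```

With samples this large, even a three-point difference in payment rates is easily distinguishable from noise — which is why letter trials, cheap to run at national scale, were such a natural fit for the RCT approach.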

The success of BIT and its imitators demonstrated that behavioral interventions could be implemented at scale, at low cost, and with rigorous evaluation. But the success also raised questions that the early nudge literature had not fully addressed.

The Critique: Manipulation, Autonomy, and Who Decides

From the beginning, libertarian paternalism attracted criticism from multiple directions.

From the libertarian side, the objection was that “libertarian paternalism” is an oxymoron. If the government is choosing defaults to steer behavior, it is exercising paternalistic judgment about what is good for people, even if it technically preserves the option to choose otherwise. The power to set defaults is the power to determine outcomes for the large majority who never change them. Calling this “liberty” stretches the word beyond recognition. Glen Whitman and Mario Rizzo, in Escaping Paternalism (2020), argued that the slippery slope from nudges to harder forms of paternalism is real, not hypothetical — once the principle that government knows better is accepted, the pressure to move from opt-out to mandate is difficult to resist.

From the left, the objection was different: nudges are too small, too individualistic, and too deferential to existing structures of power. If people are poor, unhealthy, or underinsured, the cause is usually not a badly designed form or a suboptimal default — it is low wages, expensive healthcare, inadequate public investment, and structural inequality. Nudging people to save more in their 401(k) does not help people who do not have a 401(k), who earn too little to save, or who face predatory financial products. The risk, critics argued, is that nudges become a substitute for redistributive policy — a cheap, technocratic alternative to the hard work of changing power relations.

From moral philosophy, the concern was about manipulation. If a nudge works by exploiting cognitive biases — loss aversion, inertia, social proof — rather than by providing information and reasons, is it compatible with respect for autonomy? The philosophical literature on manipulation is complex, but the basic worry is that nudges bypass deliberation rather than informing it. A person who saves more because of auto-enrollment has not decided to save more; they have simply failed to decide not to. Whether this counts as a “choice” in any meaningful sense is debatable.

Cass Sunstein has responded to these criticisms extensively, most fully in The Ethics of Influence (2016). His core argument is that choice architecture is inevitable — there is no neutral baseline, no “un-nudged” environment — and that the relevant comparison is not between nudging and not nudging but between good nudges and bad ones. If a cafeteria must put some food at eye level, it might as well be the healthier option. If an enrollment form must have a default, it might as well be the one most people would choose if they thought carefully about it. The alternative — deliberately randomizing defaults or choosing them arbitrarily — is not more respectful of autonomy; it is merely indifferent to welfare.

This response has force, but it does not fully address the harder questions. Who decides what people “would choose if they thought carefully”? In practice, the answer is: whoever controls the choice architecture, which is usually a government agency, an employer, or a technology company. The assumption that these actors can reliably identify people’s “true” preferences — as opposed to their own institutional interests — is optimistic. When Facebook or Google designs a default privacy setting, it is not engaging in benevolent paternalism; it is maximizing data collection. When a health insurer designs a complex claims process, it is not nudging people toward better health decisions; it is reducing payouts. Choice architecture is a tool, and like all tools, it serves whoever wields it.

The Scalability Question

A subtler limitation of the nudge approach concerns scalability and durability. Many of the most celebrated nudge successes involve one-time changes to defaults or form designs. These are valuable but finite. Once you have auto-enrolled everyone in a retirement plan and redesigned the tax letter, the low-hanging fruit has been picked. The question is whether behavioral insights can address larger, more complex policy challenges — climate change, healthcare cost growth, educational inequality — or whether they are inherently suited to marginal improvements in well-defined, narrow domains.

The evidence so far suggests both possibilities. Behavioral insights have informed energy conservation programs (social comparison reports showing neighbors’ energy use reduce consumption by 1% to 3%), health behavior (text message reminders improve medication adherence), and financial regulation (simplified mortgage disclosures improve borrower comprehension). But the effect sizes are typically modest — a few percentage points here, a small increase there. For the deep structural problems of modern economies, nudges are complements to, not substitutes for, conventional policy instruments like taxes, spending, and regulation.

Thaler himself has acknowledged this. In a 2018 essay, he wrote: “Nudges are not a panacea. They are a tool. And like all tools, they work better for some tasks than for others.” The danger is not that anyone claims nudges can solve everything, but that the appeal of cheap, non-coercive, evidence-based interventions distracts attention from more expensive but more powerful policy levers. A government that invests heavily in nudge units while cutting public services is not optimizing welfare; it is cutting costs.

When Nudges Aren’t Enough

The starkest limitation of the nudge paradigm appears when you consider the populations that need the most help. The people who respond least to nudges are often the people in the worst circumstances: those dealing with poverty, addiction, chronic illness, housing instability, or cognitive overload from managing too many urgent problems simultaneously.

Sendhil Mullainathan and Eldar Shafir’s Scarcity (2013) showed that poverty itself imposes a “bandwidth tax” — the cognitive load of managing insufficient resources reduces the mental capacity available for other decisions. People in scarcity are not making bad choices because they lack information or because the default is wrong; they are making bad choices because they are exhausted, stressed, and cognitively depleted by the demands of getting through the day. No amount of choice architecture redesign addresses this. What addresses it is money — income support, affordable housing, accessible healthcare, reduced economic precarity.

This is not an argument against nudges. It is an argument against treating nudges as sufficient. The most effective policy toolkit combines structural interventions (adequate income, universal services, anti-poverty programs) with behavioral interventions (well-designed defaults, simplified processes, reduced sludge). The two approaches are complementary, not competing. But in a political environment where structural interventions are expensive and contentious, there is a real risk that nudges become the consolation prize — the thing governments do because it is cheap, even when the problem calls for something more.

The Ongoing Experiment

Fifteen years after Nudge, the behavioral policy movement has matured. The early optimism about “nudging for good” has been tempered by experience, criticism, and a clearer understanding of the limits of small-scale interventions. The sludge concept has added an important dimension, shifting attention from designing beneficial nudges to identifying and removing harmful friction. The ethical debates about manipulation and autonomy have become more sophisticated, if not fully resolved.

What remains valuable in the nudge framework is its insistence on empirical testing, its attention to implementation details, and its recognition that policy design is never neutral. Every regulation, every form, every enrollment process embodies assumptions about human behavior. If those assumptions are wrong — if they presume that people are perfectly informed, perfectly rational, and perfectly motivated — the policy will underperform. Behavioral insights do not replace political judgment, but they can make political judgment better informed.

The final lesson may be the most uncomfortable one for nudge enthusiasts: the same behavioral tendencies that make nudges work — inertia, loss aversion, present bias, sensitivity to framing — also make it hard to change the political and institutional structures that generate many of the problems nudges are designed to address. People are loss-averse about their tax rates, their property values, their existing benefits. They discount the future and overweight vivid, present costs. They anchor on the status quo. These are the very forces that make structural reform politically difficult and that make nudges attractive as a less confrontational alternative. The challenge for the next generation of behavioral policy is to use these insights not just to design better defaults but to understand — and perhaps overcome — the behavioral barriers to deeper change.