Friedman’s “Long and Variable Lags” and the Art of Central Banking
Milton Friedman warned that monetary policy operates with unpredictable delays, making fine-tuning the economy a fool’s errand. That warning still haunts every central banker alive.
The Plumber’s Problem
Imagine you are trying to fill a bathtub to the perfect temperature, but the faucet has a delay. You turn the hot handle, and nothing happens for thirty seconds. You turn it more. Still nothing. Then, all at once, scalding water pours in. You panic, crank the cold handle, and wait. Nothing changes for another half-minute. Then the water goes freezing. You have overcorrected, and the oscillations continue until you give up or the tub overflows.
This is, in essence, Milton Friedman’s case against discretionary monetary policy. The economy is the bathtub. The central bank is the person at the faucet. And the delay between turning the handle and feeling the temperature change is what Friedman called the “long and variable lags” of monetary policy. It is one of the most consequential ideas in macroeconomics, and central bankers — whether they admit it or not — grapple with it every time they set interest rates.
The Argument Takes Shape
Friedman did not arrive at this insight overnight. It grew out of his massive empirical project with Anna Schwartz, “A Monetary History of the United States, 1867-1960,” published in 1963 but rooted in work that stretched back to the late 1940s. In that book, Friedman and Schwartz documented the relationship between money supply changes and subsequent movements in output and prices across nearly a century of American economic history. The patterns they found were striking: changes in monetary growth preceded changes in economic activity, but the time between cause and effect varied enormously — sometimes six months, sometimes eighteen months, sometimes longer.
This was not merely an academic curiosity. It was a direct challenge to the Keynesian consensus that dominated postwar economic policy. If the Federal Reserve could identify economic weakness, quickly adjust monetary policy, and see results within a predictable timeframe, then active management of the business cycle made sense. But if the lags were long and unpredictable, then the central bank was flying blind, and its attempts to stabilize the economy might actually make things worse.
Friedman made this case most forcefully in his 1960 monograph “A Program for Monetary Stability,” based on lectures delivered at Fordham University. He proposed a simple rule: the Federal Reserve should increase the money supply at a fixed rate — perhaps 3 to 5 percent per year — regardless of current economic conditions. No discretion. No fine-tuning. No emergency interventions. Just a steady, predictable growth path for the money supply that would anchor expectations and allow the private economy to do what it does best: adjust.
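The k-percent rule is simple enough to state in a few lines of code. The sketch below is purely illustrative: the 4 percent rate is just one value inside Friedman’s suggested 3-to-5-percent range, and the starting money stock of 100 is arbitrary.

```python
# Sketch of Friedman's k-percent rule: the money supply grows at a fixed
# annual rate regardless of economic conditions. Figures are illustrative.

def k_percent_path(m0: float, k: float, years: int) -> list[float]:
    """Money supply path under a fixed growth rule of k percent per year."""
    path = [m0]
    for _ in range(years):
        path.append(path[-1] * (1 + k / 100))
    return path

# A 4 percent rule roughly doubles the money stock in 18 years
# (the "rule of 72": 72 / 4 = 18).
path = k_percent_path(100.0, 4.0, 18)
print(round(path[-1], 1))  # → 202.6
```

The whole point, in Friedman’s telling, is that this path depends on nothing but the calendar: anyone can verify compliance, and no committee judgment enters.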
Inside Lags and Outside Lags
To understand why Friedman considered the lag problem so devastating, it helps to break the concept into its two components: the inside lag and the outside lag.
The inside lag is the time between an economic shock and the central bank’s response. It includes the recognition lag — how long it takes to realize the economy has changed — and the decision lag — how long it takes to agree on and implement a policy change. Economic data arrives with a delay and is frequently revised. The initial estimate of GDP growth in any given quarter is often wrong, sometimes dramatically so. Central bankers must also build political consensus, communicate their intentions, and navigate institutional constraints. The Federal Open Market Committee meets eight times a year; emergency meetings are rare and signal panic.
The outside lag is the time between a policy action and its full effect on the economy. When the Fed cuts interest rates, the immediate impact is on short-term borrowing costs. But the transmission to the broader economy — through mortgage rates, business investment, consumer spending, asset prices, exchange rates, and expectations — unfolds over months and years, and the speed varies depending on the state of the financial system, the level of household debt, the degree of uncertainty, and a dozen other factors that are impossible to calibrate in advance.
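One standard way to formalize the outside lag is a distributed-lag model: a policy impulse today affects output over many subsequent quarters, with most of the effect arriving well after the action. The weights below are hypothetical, chosen only to illustrate the shape of a delayed response, not estimated from data.

```python
# Toy distributed-lag view of the outside lag: a one-time policy impulse
# (say, a rate cut in quarter 0) affects output over many later quarters.

def policy_effect(impulse: list[float], weights: list[float]) -> list[float]:
    """Convolve a policy impulse with distributed-lag weights."""
    effect = [0.0] * (len(impulse) + len(weights) - 1)
    for t, shock in enumerate(impulse):
        for j, w in enumerate(weights):
            effect[t + j] += shock * w
    return effect

# Hypothetical weights: almost nothing for two quarters, a peak around
# quarters 4-5, fading out by quarter 8.
weights = [0.0, 0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
effect = policy_effect([1.0, 0.0, 0.0, 0.0], weights)
print(effect.index(max(effect)))  # → 4: the peak effect arrives with a delay
```

Friedman’s claim, in these terms, is that the weights themselves shift unpredictably from episode to episode, so no fixed profile like this one can be relied on for calibration.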
Friedman’s point was that even if inside lags could be shortened — with better data, faster analysis, more decisive leadership — the outside lag was fundamentally unpredictable. The central bank might cut rates in response to a recession that was already ending, and the stimulus would arrive just as the economy was overheating. Or it might tighten in response to inflation that was already subsiding, tipping the economy into an unnecessary downturn. The policy was not just imprecise; it was potentially destabilizing.
The Thermostat Analogy and Its Limits
Defenders of discretionary policy often invoke the thermostat analogy. A thermostat senses temperature, compares it to a target, and adjusts heating or cooling to close the gap. Surely a central bank can do the same?
Friedman had a sharp rejoinder. A thermostat works because the lag between action and result is short and predictable. Turn on the furnace, and the room warms within minutes. The feedback loop is tight. Now imagine a thermostat where the furnace takes somewhere between two and eighteen months to kick in, and you don’t know which. The thermostat would produce wild temperature swings, not stability. That, Friedman argued, was the reality of monetary policy.
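The thermostat critique can be made concrete with a toy simulation. A simple proportional controller (all numbers here are illustrative: the gain, the lag lengths, and the target were chosen only to demonstrate the mechanism) stabilizes the system when feedback arrives quickly, but the identical rule produces widening oscillations when its adjustments take effect only after a long delay.

```python
# Toy version of the thermostat with delayed feedback. A proportional
# controller reacts to the current gap, but each adjustment only takes
# effect `lag` periods later. All parameters are illustrative.

def simulate(lag: int, gain: float = 0.5, periods: int = 60) -> list[float]:
    """Path of the controlled variable when adjustments act with a delay."""
    temp, target = 0.0, 10.0
    pending = [0.0] * lag          # adjustments still in the pipeline
    path = []
    for _ in range(periods):
        pending.append(gain * (target - temp))  # react to today's gap
        temp += pending.pop(0)                  # an old adjustment lands now
        path.append(temp)
    return path

short = simulate(lag=1)   # tight feedback loop
long_ = simulate(lag=8)   # long, Friedman-style lag

print(abs(short[-1] - 10.0) < 0.1)    # True: short lag settles at the target
print(max(long_) - min(long_) > 100)  # True: long lag swings ever wider
```

With a one-period lag the path settles at the target. With an eight-period lag the same rule keeps adding stimulus long after enough is in the pipeline, overshoots, overcorrects, and swings with growing amplitude: the bathtub dynamic in miniature.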
This analogy also reveals a deeper philosophical divide. The Keynesian tradition — and later the New Keynesian synthesis — assumes that economic models can capture enough of the economy’s structure to predict the effects of policy with reasonable accuracy. Friedman was more skeptical. He believed the economy was too complex, too dynamic, and too poorly understood for that kind of confident calibration. His preference for rules over discretion was not just a technical judgment; it reflected a broader epistemological humility about what economists and policymakers can actually know.
Rules Versus Discretion: The Deeper Debate
The rules-versus-discretion debate did not begin with Friedman. Henry Simons, a University of Chicago economist and one of Friedman’s intellectual mentors, made the case for rules in a famous 1936 article. Simons worried about the concentration of power in unelected officials and the temptation to use monetary policy for short-term political gain. Rules, in his view, were a check on both incompetence and corruption.
Friedman absorbed this tradition and gave it empirical teeth. His money-growth rule was designed to be simple enough that anyone could verify compliance and rigid enough that no central banker could rationalize departures. The point was not that the rule was optimal in every state of the world — Friedman acknowledged it was not — but that it was better on average than the alternative. Discretion, he argued, inevitably led to mistakes because of the lag problem, and those mistakes compounded because policymakers were slow to recognize and correct them.
The case for discretion rests on the opposite intuition: that the economy is hit by diverse and unpredictable shocks, and a rigid rule cannot accommodate them all. A fixed money-growth rule, for example, would not allow the central bank to act as lender of last resort during a financial crisis. It would not permit the kind of aggressive intervention that the Federal Reserve undertook in 2008-09, which most economists credit with preventing a second Great Depression. Discretion allows the central bank to respond to circumstances that no rule could anticipate.
This debate has never been definitively resolved because both sides are partly right. Rules provide credibility and predictability, which anchor expectations and reduce uncertainty. Discretion provides flexibility and responsiveness, which are valuable when the world surprises you. The challenge — and it is the central challenge of modern central banking — is to combine the virtues of both.
The Volcker Experiment
The most dramatic test of Friedman’s ideas came in October 1979, when Federal Reserve Chairman Paul Volcker announced that the Fed would shift from targeting interest rates to targeting the growth rate of the money supply. This was, in a sense, an application of Friedman’s framework: focus on controlling the quantity of money and let interest rates go wherever the market takes them.
The results were extraordinary — and extraordinarily painful. Interest rates spiked above 20 percent. The economy plunged into the deepest recession since the 1930s. Unemployment reached 10.8 percent in late 1982. Farmers drove tractors to the Federal Reserve building in Washington to protest. Auto dealers mailed coffin-shaped boxes containing car keys to the Fed. The political pressure to reverse course was immense.
But inflation broke. From a peak of 14.8 percent in March 1980, consumer price inflation fell to 3.2 percent by 1983. The disinflation was faster and more complete than almost anyone had predicted.
Friedman’s framework helps explain both the success and the cost. The Volcker disinflation worked because it represented a credible regime change: the Fed demonstrated that it would tolerate severe economic pain to bring prices under control, and that credibility eventually brought inflation expectations down. But the lag problem was on full display. The tightening began in late 1979, and the full effect on inflation did not materialize for more than two years. During the intervening period, the economy suffered enormously.
The Volcker experiment also exposed the limits of strict monetarism. In practice, targeting money supply growth proved operationally difficult. Financial innovation — money market mutual funds, new deposit instruments, changing patterns of money demand — made the relationship between the monetary aggregates and the broader economy unstable. The Fed quietly abandoned strict money supply targeting by 1982, though it continued to cite monetary data for years afterward. Friedman, characteristically, blamed the Fed for not sticking to the rule rather than acknowledging that the rule itself might be flawed.
Forward Guidance and the Modern Response
Today’s central banks have, in a sense, internalized Friedman’s critique while rejecting his solution. They acknowledge that monetary policy operates with long and variable lags. They recognize that past attempts at fine-tuning produced mixed results. But instead of adopting a mechanical rule, they have developed a suite of tools and practices designed to manage the lag problem within a discretionary framework.
Inflation targeting is the most important of these. Pioneered by New Zealand in 1990 and adopted by most major central banks by the 2000s, inflation targeting combines a clear numerical objective (usually 2 percent) with discretion in how to achieve it. The target provides the credibility that Friedman sought from a rule — it anchors expectations and makes policy predictable — while leaving the central bank free to respond to unexpected developments.
Forward guidance is another response to the lag problem. If the outside lag makes it impossible to affect the economy quickly through interest rate changes alone, perhaps the central bank can speed up the transmission mechanism by communicating its future intentions. By promising to keep rates low for an extended period, or until specific economic conditions are met, the central bank can influence long-term rates and expectations immediately, even before the policy action itself takes effect. The Federal Reserve embraced forward guidance aggressively after 2008, and it has become a standard tool.
Data dependence — the practice of conditioning future policy moves on incoming economic data rather than committing to a predetermined path — is yet another attempt to navigate the lag problem. By remaining flexible and transparent about what would trigger a change in policy, central banks try to avoid the overcorrection problem that Friedman warned about.
These innovations are clever, but they do not fully solve the problem Friedman identified. Forward guidance can lose credibility if the central bank frequently changes its signals. Data dependence can become a source of uncertainty if markets cannot predict how the central bank will interpret incoming information. And inflation targeting, while effective in normal times, can become a straitjacket when supply shocks push inflation above target even as the economy weakens — exactly the situation many central banks faced in 2021-2023.
The Post-Pandemic Stress Test
The inflationary episode that began in 2021 was, among other things, a vivid illustration of Friedman’s lag argument. Central banks — especially the Federal Reserve — maintained extremely accommodative monetary policy through 2021 and into early 2022, on the theory that the inflation was “transitory” and driven by supply-chain disruptions that would resolve on their own. When it became clear that inflation was more persistent than expected, the Fed began raising rates aggressively in March 2022.
But the lag problem meant that the effects of this tightening would not be felt for many months. Inflation peaked in mid-2022 and declined through 2023 and into 2024, but the path was uneven and punctuated by periods of renewed concern. Throughout, there was genuine uncertainty about whether the Fed had done enough or too much — precisely the kind of uncertainty Friedman predicted would plague discretionary policy.
The episode also revived interest in the inside lag. The Fed was slow to recognize the inflationary threat, partly because its models and frameworks — conditioned by decades of low inflation — were poorly suited to identifying a regime change. The recognition lag was not months but arguably more than a year. Friedman would have nodded.
Why the Debate Endures
Friedman’s specific prescription — a fixed money-growth rule — is no longer taken seriously by most economists or central bankers. The instability of money demand, the proliferation of near-money financial instruments, and the increasing complexity of the financial system have made monetary aggregates unreliable guides to policy. In that narrow sense, Friedman lost the argument.
But his deeper insight — that monetary policy operates with long and variable lags, and that this uncertainty should make policymakers humble about their ability to manage the business cycle — has won. It is embedded in the practice of modern central banking, even if it is honored imperfectly. Every inflation-targeting framework, every piece of forward guidance, every dot plot and press conference is, in part, an attempt to deal with the problem Friedman identified.
The rules-versus-discretion debate endures because it maps onto a tension that cannot be permanently resolved: the tension between predictability and flexibility, between humility and responsiveness, between the recognition that we do not know enough and the necessity of acting anyway. Friedman stood firmly on the side of humility and predictability. His opponents stood for flexibility and responsiveness. Both positions are defensible. Both have costs.
Perhaps the most honest assessment is that central banking is, as the title of this article suggests, an art. It involves judgment under uncertainty, and no rule — however well-designed — can substitute for good judgment. But Friedman’s contribution was to insist that good judgment includes knowing the limits of your knowledge. When you are operating a faucet with a two-year delay, the wisest course is often to move slowly, communicate clearly, and resist the urge to keep turning the handle.
That advice sounds simple. It is, in practice, extraordinarily difficult to follow. Which is why, more than six decades after Friedman first articulated the problem, central bankers are still grappling with it — and probably always will be.