Theory

What Is a “Model”? (And Why Every Article About the Economy Has One)

Economists talk about “models” constantly. Here’s what they actually mean, why models are both essential and dangerous, and how to read economic claims without being bamboozled.

Reckonomics Editorial

The Invisible Machinery Behind Every Headline

When a newspaper reports that “economists expect the Federal Reserve’s rate hike to slow growth by 0.3 percent,” something large and complicated is hiding behind the bland confidence of that sentence. Somewhere, someone has built a model—a simplified, abstract representation of how the economy works—and fed it some numbers, and the model has produced a prediction. The journalist has reported the prediction. The model itself, its assumptions, its limitations, and the fierce disagreements among economists about whether it is any good, remain invisible.

This invisibility matters. Models are the basic tools of economic reasoning. Every claim about the economy—whether it comes from a government agency, a central bank, a think tank, or an opinion columnist—rests on a model, whether or not anyone says so. Sometimes the model is explicit and mathematical. Sometimes it is implicit and verbal: a set of assumptions about how people behave, how markets work, and what causes what. But it is always there. Learning to see the model behind the claim is one of the most useful skills a citizen can develop, and it does not require a PhD in economics.

What follows is a guide to what economists mean by “model,” what kinds of models exist, why they are both indispensable and dangerous, and how to read economic claims with appropriate skepticism.

What Economists Mean by “Model”

A model, in economics, is a deliberate simplification of reality designed to illuminate some aspect of how the economy works. The key word is “simplification.” The real economy has billions of people, millions of firms, thousands of products, and an effectively infinite number of interactions, decisions, feedbacks, and contingencies. No human mind—and no computer—can comprehend this totality. A model selects a few elements that the modeler believes are important for a particular question, specifies the relationships among them, and ignores everything else.

This is not a deficiency. It is the point. A map that included every feature of the terrain at full scale would be useless—it would be the terrain. A model that included every feature of the economy would be the economy, and we would be no closer to understanding it than before. The purpose of a model is not to replicate reality but to clarify it, to strip away the noise so that the signal—the causal mechanism, the relationship, the dynamic—becomes visible.

The economist Dani Rodrik, in his book Economics Rules: The Rights and Wrongs of the Dismal Science (2015), offers a useful analogy. Economic models, he suggests, are like fables. A fable is not a factual account of the world. No one believes that a fox actually spoke to a crow, or that a tortoise raced a hare. But fables convey truths about human behavior—truths that are made more vivid and more portable by their very simplification. “Slow and steady wins the race” is not literally true in all circumstances, but it captures something real about the relationship between persistence and success. Economic models work the same way: they tell simplified stories that illuminate specific mechanisms.

The analogy is imperfect—models are typically more rigorous than fables, and their claims are supposed to be testable—but it captures an essential feature of economic reasoning that non-economists often miss. When an economist says “assume a two-good, two-country world,” they are not claiming that the world has only two goods and two countries. They are saying: let’s isolate the logic of comparative advantage by stripping away everything else, and then see whether the logic, once understood, helps us interpret the far messier real world.

Types of Models

Economic models come in several varieties, and the choice of type matters because it shapes what you can and cannot see.

Mathematical models are the dominant form in academic economics. They express assumptions as equations and derive implications through algebra, calculus, or other mathematical tools. The great advantage of mathematical models is precision: the assumptions are explicit, the reasoning is transparent (at least to those who can read the math), and the conclusions follow logically from the premises. The great disadvantage is that the requirement of mathematical tractability limits what can be modeled. Some of the most important features of the economy—radical uncertainty, institutional change, the influence of culture and politics—are difficult or impossible to express in a system of equations.

Verbal models are older and more flexible. Adam Smith’s account of the division of labor in a pin factory is a model: it identifies a mechanism (specialization increases productivity), specifies the conditions under which it operates (a large enough market to support specialized workers), and draws implications (the extent of the market limits the division of labor). Smith did not write down equations, but his reasoning was rigorous, and his model has been remarkably durable. Verbal models sacrifice precision for breadth—they can incorporate complexity, nuance, and context that mathematical models cannot—but they are also more vulnerable to ambiguity and logical error, because the discipline of mathematics is absent.

Statistical and econometric models use data to estimate relationships. A regression model might estimate the relationship between education and earnings, or between interest rates and investment, or between trade openness and growth. These models are empirical: they tell you what the data say about correlations and, under certain assumptions, about causal relationships. They are enormously useful for policy analysis—central banks, finance ministries, and international institutions rely on them daily—but they carry a fundamental limitation: they are estimated on historical data, and there is no guarantee that relationships observed in the past will hold in the future, especially if the policy environment changes.

Computational models use computers to simulate economies that are too complex for closed-form mathematical solutions. Within this category, agent-based models (ABMs) have attracted growing interest. In an ABM, the economy is populated by many individual “agents”—consumers, firms, banks—each following simple behavioral rules. The agents interact, and the model tracks the emergent patterns that arise from these interactions. ABMs can capture phenomena that are difficult or impossible to model with equations: herding behavior, network effects, cascading failures, the emergence of institutions. They are increasingly used in financial regulation, epidemiology, and the study of inequality. Their limitation is that they are difficult to validate—with enough parameters, you can generate almost any pattern, and it is hard to know whether the model is capturing genuine mechanisms or merely producing plausible-looking noise.
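The flavor of an ABM can be conveyed with a toy example. The model below is a deliberately stylized sketch (all rules and parameters are invented for illustration): agents hold a binary buy/sell opinion and mostly imitate randomly chosen peers. Comparing a run with imitation to a run without it shows the emergent pattern — the aggregate wanders into long near-consensus swings that no individual agent’s rule mentions.

```python
import random
import statistics

def simulate(n_agents, p_imitate, steps, seed):
    """Toy binary-opinion market: each step, one agent either copies a
    randomly chosen agent (imitation) or resets to a random opinion (noise).
    Returns the time series of the aggregate 'buy' fraction."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if rng.random() < p_imitate:
            opinions[i] = opinions[rng.randrange(n_agents)]  # herding
        else:
            opinions[i] = rng.randint(0, 1)                  # independent noise
        history.append(sum(opinions) / n_agents)
    return history

# With strong imitation, the aggregate fraction drifts into persistent
# near-consensus episodes; with no imitation, it hovers near 0.5.
herd = simulate(n_agents=100, p_imitate=0.99, steps=50_000, seed=1)
noise = simulate(n_agents=100, p_imitate=0.0, steps=50_000, seed=1)
print(f"std of aggregate, with herding:  {statistics.stdev(herd):.3f}")
print(f"std of aggregate, no imitation: {statistics.stdev(noise):.3f}")
```

The validation problem mentioned above is visible even here: the same qualitative pattern could be produced by many different rule-and-parameter combinations, so matching an observed pattern does not establish that the mechanism is the right one.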

Dynamic stochastic general equilibrium (DSGE) models deserve special mention because they have dominated macroeconomic policy analysis since the 1990s. DSGE models are highly mathematical, built on explicit microfoundations (the behavior of optimizing households and firms), and incorporate random shocks to capture uncertainty. They are the workhorses of central bank forecasting. They were also spectacularly unhelpful during the 2008 financial crisis, because their structure—representative agents, rational expectations, efficient financial markets—excluded precisely the phenomena (bank runs, contagion, fire sales, panic) that drove the crisis. This failure prompted a wave of critical reflection within the profession about the limitations of DSGE models, though they remain widely used.

The Lucas Critique: Why Models Break

One of the most important ideas in economic methodology is the Lucas critique, articulated by Robert Lucas in a 1976 paper that reshaped macroeconomics. The argument is deceptively simple and devastatingly powerful.

Before Lucas, macroeconomic models were largely estimated from historical data: you observed the relationship between, say, inflation and unemployment over some period, fitted a statistical model, and used it to predict the effects of policy changes. Lucas pointed out that this approach has a fundamental flaw: the parameters of a statistical model are not structural constants. They are themselves the product of the policy regime in which they were estimated. If the policy regime changes, the parameters will change too, and predictions based on the old parameters will be wrong.

Consider the relationship between inflation and unemployment, the famous Phillips curve. In the 1960s, data from many countries showed a stable inverse relationship: lower unemployment was associated with higher inflation, and vice versa. Policymakers interpreted this as a menu of choices—you could “buy” lower unemployment by accepting higher inflation. But when governments actually tried to exploit this tradeoff by running persistently expansionary policy, the relationship broke down. Workers and firms, anticipating higher inflation, adjusted their behavior: they demanded higher wages, raised prices preemptively, and the Phillips curve shifted. The statistical relationship that had looked stable was not a structural feature of the economy but an artifact of a particular policy environment.

Lucas’s solution was to build models based on “deep parameters”—preferences, technology, and the structure of information—that are (in principle) invariant to policy changes. This led to the rational expectations revolution and the DSGE models mentioned above. Whether Lucas’s solution is satisfactory is hotly debated—many economists think that rational expectations is itself an unrealistic assumption that introduces its own distortions—but the critique itself is almost universally accepted. It is a permanent warning: do not assume that past relationships will survive a change in policy.

Friedman’s “As If” Methodology

A different and equally influential methodological argument was made by Milton Friedman in a 1953 essay, “The Methodology of Positive Economics,” one of the most discussed (and most misunderstood) pieces in the philosophy of economics.

Friedman’s central claim was that the realism of a model’s assumptions is irrelevant. What matters is whether the model generates accurate predictions. He illustrated the point with a famous example: an expert billiards player. You could model the player’s shots using Newtonian mechanics—calculating angles of incidence, friction coefficients, spin, and momentum transfer. The model would generate accurate predictions about where the balls end up. But no one believes that the player actually performs these calculations in her head. The model works “as if” the player were a physicist, even though the actual cognitive process is entirely different.

By analogy, Friedman argued, a model that assumes firms maximize profits can be useful even if no actual firm consciously performs a profit-maximization calculation. What matters is whether firms behave as if they maximize profits—that is, whether the model’s predictions are empirically accurate. If they are, the unrealism of the assumption is not a defect; it is a feature, because it simplifies the model and makes it tractable.

Friedman’s argument has been enormously influential—it is, in effect, the methodological license that allows economists to build models with highly stylized assumptions without feeling obligated to defend each assumption on its own terms. But it has also been widely criticized. The philosopher Alex Rosenberg has pointed out that “as if” reasoning can justify almost any set of assumptions, no matter how bizarre, as long as the predictions happen to match the data in the particular sample under examination. The historian and philosopher of economics Mary Morgan has argued that Friedman conflated prediction (getting the right number) with explanation (understanding why), and that a model that predicts well for the wrong reasons is dangerous because it will fail unpredictably when conditions change.

The Lucas critique can be read as a specific instance of this danger. The old Phillips curve models predicted well—until they didn’t, because the assumptions behind them were not structural truths about the economy but contingent features of a particular historical period.

The Map Is Not the Territory

All of these methodological arguments circle around a single insight, expressed with varying degrees of sophistication: a model is not the economy. It is a map, and as the scholar Alfred Korzybski observed, “the map is not the territory.”

This sounds obvious, but its implications are routinely ignored. When a model produces a result—“a 10 percent tariff will reduce GDP by 0.5 percent”—there is a powerful temptation to treat the result as a fact about the world rather than as a conditional statement: if the model’s assumptions are correct, and if the parameters are estimated accurately, and if the policy change does not alter the structural relationships in the model, then the predicted effect is approximately 0.5 percent. Every link in that chain is uncertain, and the uncertainty compounds.

The temptation is especially strong when the model is complex, the math is impressive, and the result comes with decimal points. Decimal points convey a false precision that is one of the great rhetorical weapons in policy debates. A model that says “between 0.2 and 1.5 percent, depending on assumptions” is more honest than one that says “0.47 percent,” but it is less likely to make the news.

Rodrik’s prescription for intellectual hygiene is useful here. He argues that the problem with economics is not that it uses models—it must use models, because the alternative is either silence or hand-waving—but that economists too often mistake a model for the model. The discipline, he notes, contains a vast library of models, many of them contradictory. One model says free trade benefits everyone; another shows how trade can increase inequality; a third demonstrates that trade with imperfect competition leads to different outcomes than trade with perfect competition. All are logically valid. The skill of economics is not in building the model but in choosing the right model for the context—and being honest about the choice and its limitations.

How to Read Economic Claims

For the non-specialist reader, the practical question is: how do you evaluate economic claims when you can’t check the math?

Several heuristics help.

Ask what’s being assumed. Every economic argument rests on assumptions, and the assumptions matter more than the conclusions. When someone says “raising the minimum wage will destroy jobs,” they are invoking a model in which labor markets are competitive and employers can easily substitute capital for labor. When someone says “raising the minimum wage will boost consumer spending,” they are invoking a different model in which workers spend a higher fraction of their income than employers, and aggregate demand matters. Neither model is wrong in the abstract; the question is which set of assumptions better matches the situation at hand.

Look for the boundary conditions. A good economic argument will specify the conditions under which its conclusions hold. “Under perfect competition with no externalities, free trade maximizes welfare” is a very different claim from “free trade is always good.” If someone presents the conclusion without the conditions, they are either simplifying for a general audience (forgivable, if they acknowledge the simplification) or misleading you (less forgivable).

Watch for the move from correlation to causation. Much of empirical economics involves observing that X and Y move together, and then arguing that X causes Y. This move is often contested and sometimes wrong. Countries with more education tend to be richer—but does education cause growth, or does growth cause education, or do both reflect some third factor (institutions, culture, geography)? The best empirical economics uses clever research designs to isolate causal effects, but the reader should always ask: is this a correlation or a demonstrated causal relationship?

Be skeptical of precision. As noted above, decimal points in economic forecasts convey a false sense of exactitude. The honest answer to “what will happen to GDP if we raise interest rates by 0.25 percentage points?” is almost always “it depends, and we’re not sure.” A model that produces a single number is concealing a distribution of possible outcomes, and the tails of that distribution—the low-probability, high-impact scenarios—are often where the action is.
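The gap between a point forecast and the distribution behind it can be made concrete. The sketch below invents a toy forecast (the multiplier, its uncertainty, and the shock term are all illustrative assumptions, not estimates from any real model) and compares the single plugged-in number with a Monte Carlo spread over the uncertain inputs.

```python
import random

random.seed(3)

# Stylized forecast: effect of a rate hike on GDP growth, where the
# "multiplier" is only known up to a distribution. All numbers invented.
HIKE = 0.25  # percentage-point rate increase

def gdp_effect(multiplier, shock):
    return -multiplier * HIKE + shock

# Point forecast: plug in the central estimate, report one number.
point = gdp_effect(multiplier=1.2, shock=0.0)

# Honest forecast: sample the uncertain multiplier and residual shock,
# and report the spread of outcomes, not just the center.
draws = sorted(gdp_effect(random.gauss(1.2, 0.5), random.gauss(0, 0.2))
               for _ in range(10_000))
lo, hi = draws[250], draws[-250]  # empirical 95% interval

print(f"point forecast: {point:.2f} pp")
print(f"95% interval:   [{lo:.2f}, {hi:.2f}] pp")
```

The interval is several times wider than the point estimate suggests—and it is the version of the forecast that is less likely to make the news.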

Consider who built the model and why. This is not a counsel of cynicism—most economists are honest—but of awareness. Models are built by people with training, funding, institutional affiliations, and intellectual commitments. A model built by an industry-funded think tank to evaluate a regulation affecting that industry may be perfectly competent, but its assumptions deserve extra scrutiny. The same is true of models built by advocacy organizations, government agencies, and international institutions. Everyone has a perspective; the question is whether the perspective is disclosed and whether the model is open to scrutiny.

Why Models Are Both Indispensable and Dangerous

The case for economic models is straightforward: the economy is too complex to understand without simplification, and models are the best tools we have for structured simplification. Without models, economic policy would be pure guesswork—or pure ideology, which is often worse. The remarkable success of some economic insights—comparative advantage, the gains from trade, the quantity theory of money, the Keynesian insight that recessions can be caused by insufficient demand—owes everything to the clarity that models provide.

The case against economic models is equally straightforward: they can be wrong, they are often wrong in systematic ways, and their users frequently forget that they are simplifications rather than descriptions of reality. The 2008 financial crisis was, among other things, a crisis of model failure—models that assumed efficient markets, rational expectations, and the absence of systemic risk turned out to be catastrophically inadequate. The cost of that failure was measured in trillions of dollars, millions of jobs, and immeasurable human suffering.

The resolution is not to abandon models but to use them with the humility and skepticism they demand. A model is a lens, not a window. It reveals some features of the landscape and conceals others. The discipline of economics at its best involves looking through many lenses, acknowledging what each one shows and hides, and making judgments that reflect the full range of available evidence—quantitative and qualitative, formal and institutional, historical and theoretical.

The philosopher of science Nancy Cartwright has argued that the laws of physics “lie”—not because they are false, but because they describe idealized situations that never obtain in practice, and their application to real situations always involves judgment, approximation, and local knowledge. The same is true of economic models, only more so. The economy is not a laboratory experiment; it is a human institution, shaped by history, power, culture, and contingency in ways that no model can fully capture.

Understanding this does not make economics useless. It makes it useful in the right way—as a set of tools for thinking, not a set of answers to be memorized. The reader who understands what a model is, what it can do, and what it cannot do is better equipped to navigate the torrent of economic claims that fills the news, the policy debate, and the political arena. That, in the end, is what economic literacy looks like: not knowing the right model, but knowing that there is always a model, and learning to ask the right questions about it.