Founded c. 1870s
The term “neoclassical” was coined by Thorstein Veblen in 1900, and he did not mean it as a compliment. Veblen used it to describe the economics of Alfred Marshall and his contemporaries, who had grafted the new marginal analysis onto the older classical framework while retaining its basic outlook: that markets tend toward equilibrium, that economic agents are rational, and that the price system coordinates production and consumption in a broadly efficient way. The label stuck, even though most economists working within the tradition rarely use it to describe themselves. Today “neoclassical economics” refers loosely to the mainstream of the discipline as it developed from the 1870s through the twentieth century — a body of theory united less by specific conclusions than by a shared set of methods and assumptions.
The foundation of neoclassical economics is the marginal revolution of the early 1870s, when three economists working independently — William Stanley Jevons in England, Carl Menger in Austria, and Léon Walras in Switzerland — developed the theory of marginal utility. The classical economists, from Smith through Ricardo and Mill, had grounded value in the cost of production, ultimately in labor. The marginalists shifted the basis of value to the demand side: the value of a good is determined by the utility of the last (marginal) unit consumed. This resolved long-standing puzzles, most famously the diamond-water paradox (water is more useful but less valuable than diamonds because water is abundant and its marginal utility therefore low).
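The diamond-water paradox can be put in numbers. A minimal sketch, assuming simple logarithmic utility functions and quantities chosen purely for illustration (none of these values come from the source):

```python
import math

def marginal_utility(u, q, dq=1e-6):
    """Numerical marginal utility: extra utility from one more (tiny) unit."""
    return (u(q + dq) - u(q)) / dq

# Assumed utility functions with diminishing marginal utility.
u_water = lambda q: 100 * math.log(1 + q)   # water: very useful in total
u_diamonds = lambda q: 5 * math.log(1 + q)  # diamonds: far less useful in total

# Water is consumed in abundance; diamonds are scarce.
mu_water = marginal_utility(u_water, 1000.0)     # marginal utility at high quantity
mu_diamonds = marginal_utility(u_diamonds, 0.1)  # marginal utility at low quantity

# Total utility from water dwarfs that from diamonds...
assert u_water(1000.0) > u_diamonds(0.1)
# ...but at the margin, one more diamond is worth more than one more unit of water.
assert mu_diamonds > mu_water
```

Because value is set at the margin, abundance drives water's marginal utility (here roughly 100/1001) far below that of a scarce good, even though its total utility is vastly greater.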
Marginalism provided a unified framework for analyzing consumer choice, firm behavior, and factor pricing. The consumer maximizes utility subject to a budget constraint; the firm maximizes profit subject to a production function; the wage equals the marginal product of labor; the interest rate equals the marginal product of capital. The logic of constrained optimization, applied to agents across every market, became the core method of economic analysis.
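The consumer's problem above can be sketched concretely. Assuming a Cobb-Douglas utility function U(x, y) = x^a · y^(1-a), with prices and income chosen purely for illustration, the constrained optimum has a closed form:

```python
# Maximize U(x, y) = x**a * y**(1-a) subject to the budget px*x + py*y = m.
# For Cobb-Douglas utility the demands are x* = a*m/px and y* = (1-a)*m/py.

def cobb_douglas_demands(a, px, py, m):
    return a * m / px, (1 - a) * m / py

a, px, py, m = 0.3, 2.0, 5.0, 100.0   # illustrative parameters
x, y = cobb_douglas_demands(a, px, py, m)

# The optimum exhausts the budget...
assert abs(px * x + py * y - m) < 1e-9

# ...and equalizes marginal utility per dollar across goods:
mu_x = a * x**(a - 1) * y**(1 - a)    # dU/dx
mu_y = (1 - a) * x**a * y**(-a)       # dU/dy
assert abs(mu_x / px - mu_y / py) < 1e-9
```

The final assertion is the marginal condition that runs through all of neoclassical price theory: at an interior optimum, the last dollar spent on each good yields the same utility.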
Alfred Marshall, whose Principles of Economics (1890) dominated economics teaching for half a century, developed the partial equilibrium approach that remains the workhorse of applied microeconomics. Marshall’s method analyzes one market at a time, holding conditions in all other markets constant (the “ceteris paribus” assumption). His supply and demand diagrams — with the now-familiar intersecting curves — offered an intuitive and powerful tool for analyzing the effects of taxes, subsidies, price controls, and shifts in technology or preferences.
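A minimal sketch of Marshall's apparatus, assuming linear demand and supply curves with illustrative parameters, shows the kind of tax analysis the diagrams support:

```python
# Partial-equilibrium sketch with assumed linear curves:
# demand Qd = a - b*p, supply Qs = c + d*(p - t), where t is a per-unit tax
# on sellers and p is the price buyers pay.

def equilibrium(a, b, c, d, t=0.0):
    """Return the buyers' price and quantity where Qd = Qs."""
    p = (a - c + d * t) / (b + d)
    q = a - b * p
    return p, q

a, b, c, d = 100.0, 2.0, 10.0, 3.0   # illustrative parameters

p0, q0 = equilibrium(a, b, c, d)           # no tax: p = 18, q = 64
p1, q1 = equilibrium(a, b, c, d, t=5.0)    # per-unit tax of 5

assert p1 > p0 and q1 < q0   # buyers pay more, less is traded
# The burden is shared: the buyers' price rises by less than the full tax
# (here by 3 of the 5), with incidence governed by the two slopes.
assert p1 - p0 < 5.0
```

Holding everything outside this one market fixed, ceteris paribus, is exactly what lets the comparative-statics exercise stay this simple.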
Marshall was careful to qualify his models with attention to time (distinguishing between market period, short run, and long run), to industrial organization, and to the complexities of real economic life. He famously warned that economic reasoning should use mathematics as a shorthand, not a substitute for economic intuition. Later generations of neoclassical economists did not always heed this warning.
Where Marshall analyzed one market at a time, Léon Walras attempted to analyze all markets simultaneously. His system of general equilibrium, articulated in Elements of Pure Economics (1874), described an economy in which the prices and quantities in every market are determined simultaneously through the interaction of all agents. Walras proved (informally) that if all but one market is in equilibrium, the last must be as well — a result known as Walras's Law.
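Walras's Law can be checked numerically. In a small exchange economy with two assumed Cobb-Douglas agents (endowments and preference weights purely illustrative), the value of aggregate excess demand is zero at any prices, even prices far from equilibrium:

```python
# Two-good exchange economy. Each agent spends a share `a` of the value of her
# endowment on good 1 and the rest on good 2 (Cobb-Douglas demands, assumed).
agents = [
    {"a": 0.4, "endow": (3.0, 1.0)},
    {"a": 0.7, "endow": (1.0, 4.0)},
]

def excess_demands(p1, p2):
    """Aggregate excess demand (demand minus endowment) for each good."""
    z1 = z2 = 0.0
    for ag in agents:
        wealth = p1 * ag["endow"][0] + p2 * ag["endow"][1]
        z1 += ag["a"] * wealth / p1 - ag["endow"][0]
        z2 += (1 - ag["a"]) * wealth / p2 - ag["endow"][1]
    return z1, z2

z1, z2 = excess_demands(2.0, 3.0)   # deliberately non-equilibrium prices
assert abs(2.0 * z1 + 3.0 * z2) < 1e-9   # p . z = 0: Walras's Law
assert abs(z1) > 1e-6                    # yet the markets do NOT individually clear
```

The law holds because every agent spends exactly her wealth, so the values of individual excess demands cancel in the aggregate; it is a budget identity, not an equilibrium condition.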
The Walrasian program reached its most rigorous expression in the work of Kenneth Arrow and Gérard Debreu in the 1950s. The Arrow-Debreu model demonstrated that, under certain conditions (complete markets, no externalities, convex preferences and production sets, perfect competition), a competitive equilibrium exists and is Pareto efficient — no one can be made better off without making someone else worse off. This is the First Fundamental Theorem of Welfare Economics. The Second Welfare Theorem shows that any Pareto efficient allocation can be achieved as a competitive equilibrium given appropriate redistribution of initial endowments.
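A toy version of the first theorem can be computed directly. Assuming a two-agent, two-good exchange economy with Cobb-Douglas preferences (all parameters illustrative), the competitive equilibrium clears both markets and equalizes the agents' marginal rates of substitution, which at an interior allocation is the standard condition for Pareto efficiency:

```python
# Preferences U_i = x**a_i * y**(1-a_i); endowments of (good 1, good 2).
agents = [
    {"a": 0.4, "endow": (3.0, 1.0)},
    {"a": 0.7, "endow": (1.0, 4.0)},
]

# Normalize p2 = 1. Clearing the market for good 1,
#   sum_i a_i*(p1*e_i1 + e_i2)/p1 = sum_i e_i1,
# gives the equilibrium price in closed form:
p1 = sum(ag["a"] * ag["endow"][1] for ag in agents) / \
     sum((1 - ag["a"]) * ag["endow"][0] for ag in agents)

allocs = []
for ag in agents:
    wealth = p1 * ag["endow"][0] + ag["endow"][1]
    allocs.append((ag["a"] * wealth / p1, (1 - ag["a"]) * wealth))

# Markets clear: demands sum to total endowments (4 of good 1, 5 of good 2).
assert abs(sum(x for x, _ in allocs) - 4.0) < 1e-9
assert abs(sum(y for _, y in allocs) - 5.0) < 1e-9

# Efficiency: each agent's marginal rate of substitution equals the price
# ratio, so no remaining trade can help one agent without hurting the other.
for ag, (x, y) in zip(agents, allocs):
    mrs = (ag["a"] / (1 - ag["a"])) * (y / x)
    assert abs(mrs - p1) < 1e-9
```

The same prices that decentralize individual choices also line up everyone's trade-offs, which is the substance of the theorem in this two-agent miniature.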
These results are often misunderstood. They are theorems about an abstract mathematical structure, not descriptions of actual economies. The conditions required for the welfare theorems to hold — perfect competition, complete markets, no externalities, no increasing returns — are never fully satisfied in practice. The theorems are better understood as a benchmark: they tell us what would have to be true for unregulated markets to produce efficient outcomes, and by implication, they identify the sources of market failure (externalities, public goods, imperfect competition, asymmetric information) that justify policy intervention.
Paul Samuelson, more than any other figure, shaped the form of modern neoclassical economics. His Foundations of Economic Analysis (1947) reformulated economics as a unified mathematical discipline grounded in constrained optimization and comparative statics. His textbook, Economics (first edition 1948), trained generations of students and established what he called the “neoclassical synthesis”: the combination of neoclassical microeconomics with Keynesian macroeconomics. At the micro level, markets work roughly as the marginalists described. At the macro level, the government can and should use fiscal and monetary policy to maintain full employment. The two levels, in Samuelson’s vision, were complementary rather than contradictory.
The neoclassical synthesis dominated economics from roughly the 1950s through the 1970s. It was challenged from the right by monetarism and new classical economics (which sought to make macroeconomics fully consistent with rational, optimizing microeconomic foundations) and from the left by post-Keynesian and institutionalist critics (who argued that the synthesis had gutted Keynes’s most important insights). The result, by the late twentieth century, was New Keynesian economics — a framework that retained rational optimization and general equilibrium but added market imperfections (sticky prices, imperfect competition, asymmetric information) to generate Keynesian-style results.
Neoclassical economics has faced sustained criticism from multiple directions. Heterodox economists — Marxians, post-Keynesians, institutionalists — argue that its core assumptions (rational agents, equilibrium, methodological individualism) exclude precisely the phenomena that matter most: power, class, institutions, historical change, and fundamental uncertainty. Complexity economists contend that equilibrium models cannot capture the emergent, nonlinear dynamics of real economies. Behavioral economists have documented systematic deviations from rational choice, though most work within the neoclassical framework rather than against it.
The Cambridge capital controversies of the 1960s demonstrated logical problems with treating aggregate capital as a factor of production with a well-defined marginal product — a challenge that the profession has largely set aside rather than resolved. The 2008 financial crisis renewed criticism of mainstream macroeconomic models that had no role for financial instability, bank runs, or debt deflation.
Despite these critiques, neoclassical economics remains the dominant framework in the discipline. Several factors explain its persistence. Its mathematical formalism provides a common language and a set of tools that are portable across subfields. Its emphasis on optimization and equilibrium yields clear, testable predictions. Its welfare theorems provide a benchmark for policy analysis that, however idealized, has no widely accepted rival. And its capacity to absorb critiques — incorporating behavioral findings, information economics, game theory, and institutional analysis into the optimizing-equilibrium framework — has made it a remarkably flexible and adaptive tradition.
Whether that flexibility represents genuine intellectual progress or a tendency to co-opt challenges without confronting their deepest implications remains one of the central debates in economics.