Historically Speaking: The Bulletin of the Historical Society
September 2003, Volume V, Number 1
Joseph S. Lucas and Donald A. Yerxa, Editors

Joel Mokyr, "The Riddle of 'The Great Divergence': Intellectual and Economic Factors in the Growth of the West"
James Hoopes, "Managerialism: Its History and Dangers"
Alfred J. Andrea, "The Silk Road: Part II"
Bertram Wyatt-Brown, "Honor, Hatred, and America's Middle East"
"Rethinking Local History: An Interview with Joseph A. Amato"
William Palmer, "The World War II Generation of Historians"
James Hitchcock, "Christopher Dawson and the Demise of Christendom"
Dermot Quinn, "Christopher Dawson and the Challenge of Metahistory"
Pattern and Repertoire in History: An Exchange:
  Ronald Fritze, "A New Repertoire for Historians?"
  John Lukacs, "'Scientific' History?"
  Bertrand M. Roehner, "Predictions and Analytical History"
Padraic Kenney, "The Threads of Revolution: Central Europe's Moment"
Jeremy Black, "Europe, 1550–1800: A Revisionist Introduction"
Clark G. Reynolds, "Odysseys to Planetary History"
Barry Strauss, "From Those Wonderful People Who Killed Socrates"
Letters
THE
RIDDLE OF “THE GREAT DIVERGENCE”: INTELLECTUAL AND ECONOMIC FACTORS IN
THE GROWTH OF THE WEST[+]
by
Joel Mokyr
There
is a newly found interest among economists in the issue of what Eric Jones
two decades ago called “The European Miracle” and what Kenneth Pomeranz
more recently termed “The Great Divergence.” Even in an age in which Eurocentrism,
triumphalist or otherwise, has fallen out of favor, the most radical social
scientists have to concede that whatever one might think of its causes,
modern economic growth has transformed the world. This is equally true
for countries that experienced economic growth early and in large doses
and those hopeful or frustrated societies who are now known as “developing
economies.”
How
did it all begin? Economists have increasingly demanded an answer from
historians, threatening that if they do not get one, they will supply one
themselves.[1]
The story they tell is that the Industrial Revolution started in the late
18th century and (with a lag) prompted what they call “modern economic
growth.” Before the Industrial Revolution the world was “Malthusian” and
experienced little, if any, economic growth. In the 19th century the West
started suddenly to grow, making (almost) everyone in it rich. This was
accompanied by growing investment in human capital (mostly education),
inducing people to have fewer (but better educated) children, and sustained
technological progress, reinforcing the process of growth.
Historians
can poke endless holes in these stylized facts. There was more growth in
the premodern economies than economists suppose, and evidence that the
world was in a “Malthusian trap” before 1750 is decidedly mixed. The connection
between education, economic growth, and human fertility is hugely complex
and differs across various Western societies. The details reveal a far
more nuanced and subtle story than the crude stylized facts that inspire
economists to build their models. But the basic story must be right: whether
the world economy before 1750 was indeed in stasis or not, there can
be no doubt that economic growth as we know it is a novel phenomenon. For
instance, income per capita in the UK in 1890 was about $4100 in 1990 international
dollars. It grew in the subsequent years by an average of 1.4% per year.
Had it been growing at that same rate in the previous 300 years, income
per capita in 1590 would have been $61, which clearly is absurdly low.
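A quick check of the compounding arithmetic (a sketch, assuming a constant
1.4% rate; the cited $61 presumably reflects a less rounded growth rate):

\[
y_{1590} \;=\; \frac{y_{1890}}{(1+g)^{300}} \;=\; \frac{4100}{(1.014)^{300}}
\;\approx\; \frac{4100}{65} \;\approx\; 63
\quad \text{(1990 international dollars)}
\]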
Whatever the exact cause, the century of the pax Britannica witnessed the
departure of the Occident Express with the British car in the lead. The
train ride took the Western world to unprecedented prosperity, leaving
much of the world behind on the platform. By 1914 the current division
between rich and poor countries, termed by one wry economist “twin peaks,”
had been created.[2]
A century later it is still there in rough lines, and whereas some nations
have been able to hop across the chasm, it is hard to deny that the chasm
between rich and poor economies is a signal characteristic of our planet.
And there is no sign it is getting any narrower.
How
are we to think of this? Should we think that there was something defective
about societies such as Paraguay, the Philippines, or Cameroon which prevented
them from growing? Or was there something unique about the Western experience
in the 18th century that mutated and created a new kind of society, unlike
anything seen before? This way of phrasing the question may seem to some
similar to asking whether a zebra is white with black stripes or vice versa. Yet
if we do not want to think of much of the world as failures, we are forced
to think about a unique event that occurred in the West before the Industrial
Revolution. One approach is to postulate that all economies have the potential
to grow, but that obstacles prevented them from realizing this potential
until the Industrial Revolution. What we have to search for above all is
something that helped dissolve these “blocking factors” which before 1750
imposed upper bounds on the economic growth of preindustrial society.[3]
In
my view, the Enlightenment is one factor that could have made the difference.
The Enlightenment combined a belief in the possibility and desirability
of social progress with a faith in rationality and the ability of knowledge
to improve the human condition. It resided firmly in the realm of discourse,
rhetoric, persuasion, and beliefs. Oddly enough, few economists have made
explicit attempts to connect ideological and intellectual changes with
economic development. Most economists tend to believe that changes in power,
technology, income, and prices determine economic outcomes, and in the
end they, not ideology or beliefs, are “exogenous variables.” Ideological
change does not affect economic history, some economists believe, because
it cannot be verified; it is “a grin without a cat.”[4]
A growing number of economists, led by Nobel laureate Douglass North, have
begun to demur with regard to this view. What people believe about one
another, or about their physical environment and its creator, has an impact
on how they go about their lives, what they aspire to, and on the institutions
they create. Moreover, because technology and institutions are changed
by a fairly small number of people, the beliefs of the elite and the leadership
matter more than public opinion or the ideology of the common man. The
Enlightenment was a movement of elites: the literate, the educated, and
the powerful. Yet these were the people responsible for the crucial changes
that made the “take-off” into sustained growth possible.
Let
me address two points. The first is the myth that the Industrial Revolution
was a purely British affair, and that without Britain Europe would still
be largely a subsistence economy. The historical reality was that many
if not most of the technological elements of the Industrial Revolution
were the result of a joint international effort in which French, German,
Scandinavian, Italian, American, and other Western innovators collaborated,
swapped knowledge, corresponded, met one another, and read each other’s
work. The technological basis of the Industrial Revolution was no more
confined to Britain than the Enlightenment was confined to France, although
it was in Britain that its first successful products emerged. The second
point to note is that the real difference between the Industrial Revolution
and other episodes characterized by a clustering of macroinventions was
not just in the harnessing of steam or the sudden rise to prominence of
cotton in the 1780s. While the impact of the great inventions of the years
of Sturm und Drang on a number of critical industries stands undiminished,
the most important difference between the Industrial Revolution and previous
clusters of macroinventions is not that technological breakthroughs occurred
at all, but that their momentum did not level off and peter out after 1800
or so. In other words, what made the Industrial Revolution into the “Great
Divergence” was the persistence of technological change after the first
wave. The first wave was followed after 1820 by a secondary ripple of inventions
that may have been less spectacular, but these were the microinventions
(extensions and improvements to earlier breakthroughs) that provided the
muscle behind the downward trend in production costs. The second stage of the
Industrial Revolution adapted novel ideas to more and more industries and
sectors, and eventually these are the ones that show up in the productivity
statistics: the perfection of mechanical weaving after 1820; the invention
of Roberts’s self-acting mule in spinning (1825); the extension and adaptation
of the techniques first used in cotton to carded wool and linen; the continuing
improvement in the iron industry through Neilson’s hot blast (1829) and
other inventions; the continuing improvement in steam power, raising the
efficiency and capabilities of the low pressure stationary engines, while
perfecting the high pressure engines of Trevithick, Woolf, and Stephenson
and adapting them to transportation; the advances in chemicals before the
advent of organic chemistry (such as the breakthroughs in candle-making
and soap manufacturing thanks to the work of Michel Eugène Chevreul on
fatty acids) and the introduction and perfection of gas-lighting; and the
breakthroughs in engineering and high-precision tools by Maudslay, Whitworth,
Nasmyth, Rennie, the Brunels, the Stephensons, and the other great engineers
of the second generation.
How,
then, did the Enlightenment bring all this about? I propose two complementary
mechanisms: the first focuses on what I have called the “Industrial Enlightenment”;
the second on what may be called the “Institutional Enlightenment.”
There
was an important part of the Enlightenment that was obsessed by “the useful
arts,” regarding them as indispensable in bringing about the progress that
Enlightenment philosophes believed in and hoped for. The power of knowledge,
enunciated by Francis Bacon (deeply admired by Enlightenment thinkers),
was at the center of attention. Progressive manufacturers, landowners,
and administrators embraced Kant’s proposed slogan for the Enlightenment,
sapere aude (dare to know). Specifically, the Industrial Enlightenment
contained three elements. One was the insistence on understanding the natural
regularities that lay beneath operating techniques and made them work.
If such principles could be understood and generalized, it was reasoned,
techniques could be improved, extended, and applied to new areas. In his
famous essay on “Arts” in his Encyclopédie, Diderot noted that “it is clear
from the preceding that every ‘art’ [technique] has its speculative and
its practical side. Its speculation is the theoretical knowledge of the
principles of the technique; its practice is but the habitual and instinctive
application of these principles. It is difficult if not impossible to make
much progress in the application without theory.”
A second
element of the Industrial Enlightenment concerned the diffusion of and
the access to existing knowledge. The philosophes fully realized that knowledge
should not be confined to a select few but should be disseminated as widely
as possible. Some Enlightenment thinkers believed this was already happening:
the English philosopher and psychologist David Hartley (1705–1757) believed
that “the diffusion of knowledge to all ranks and orders of men, to all
nations, kindred and tongues and peoples . . . cannot be stopped but proceeds
with an ever accelerating velocity.” Again, Diderot perhaps said it best:
“We need a man to rise in the academies and go down to the workshops and
gather material about the [mechanical] arts to be set out in a book that
will persuade the artisans to read, philosophers to think along useful
lines, and the great to make at least some worthwhile use of their authority
and wealth.” The idea was that in addition to improving existing techniques,
workers be exposed to what economists call “best-practice” techniques. The
somewhat naive view was that if only artisans and farmers could see how the
best of them carried out their practice, they would adopt these and progress
would occur.
Third,
the Industrial Enlightenment sought to facilitate interaction and communication
between those who controlled propositional knowledge and those who carried
out the techniques contained in prescriptive knowledge. The philosophes
of the Enlightenment echoed Bacon’s call for cooperation and the sharing
of knowledge between those who knew things and those who made them. This
type of interaction is exemplified by such organizations as the Lunar Society,
in which manufacturers and engineers rubbed shoulders with scientists and
medical doctors. Jean Antoine Chaptal, the French chemist and politician
(1756–1832), made a distinction between savants and fabricants and stressed
the importance of their communication. Not only would this place the knowledge
of scientists at the disposal of artisans, it would also bring to the attention
of scientists the problems faced by farmers, sailors, and artisans, and
help set the agenda of natural philosophy. Needless to say, some of the
most prominent members of the Industrial Enlightenment— such as Benjamin
Franklin, Count Rumford, Claude Berthollet, Humphry Davy—combined the features
of the inventor and the scientist. But other inventors had recourse to
reading technical works and manuals that became increasingly available
after 1815, or talking to scientists in the so-called philosophical societies.
As
a result, the Industrial Revolution of the 1760s and 1770s did not fizzle
out the way earlier waves of technological progress had. By drawing upon
an ever-growing basis of knowledge and understanding of natural regularities
and phenomena, engineers, mechanics, chemists, pharmacists, and others
concerned with production efficiency could keep up the rate of invention,
which led to the advances in steel, chemicals, and electricity that mark
the “second” Industrial Revolution. Not all this knowledge was “correct”
(by our standards), and none of it was complete. Much of it was based on
a very partial understanding of the physics and chemistry in question.
But it was much more than had been there in 1700. Moreover, the growing
“flow” of inventions itself kept enriching the knowledge base on which
it drew, thus creating a “positive feedback effect.”
The
second mechanism I am suggesting concerns political and social ideology
and might be called the “Institutional Enlightenment.” Mercantilist thought
had been based on the notion that economic activity was a zero-sum game,
among nations as much as among individuals. The notion that mutual exchange
at market prices, the allocation of resources to wherever they yielded
the highest return, and free entry and exit into trades and mobility between
them actually enriched the entire economy—whereas monopolies, exclusions,
arbitrary taxes, and tariffs imposed a cost on society—had a lengthy gestation.
Adam Smith’s The Wealth of Nations, one of the most persuasive documents
ever published, summarized and popularized many decades of thought. Equally
important, Enlightenment thinkers began slowly to realize that “freedom”
included the freedom to
innovate and not to have one’s machines smashed by Luddites or prohibited
by authorities. Enlightenment thinkers and the politicians inspired and
informed by them slowly started to fight the uphill battle against privileges
and exclusionary rules. In some absolutist countries their efforts depended
on the goodwill of the ruler: the work of Joseph von Sonnenfels in Austria
and Pedro Rodriguez de Campomanes in Spain to weaken guilds and enhance
occupational freedom and unregulated markets depended on the goodwill of
their monarchs and was more often than not cast aside. The fate of the
reforms (the famous “six edicts”) proposed by Turgot upon his appointment
in 1774 was sealed soon enough when Louis XVI fired him two years later;
but the suppression of the guilds and the abolition of labor services were
passed without a second thought—together with a plethora of other Enlightenment
inspired reforms—after 1789.
The
French Revolution and the ensuing wars created an institutional response
in other European nations. Many of them realized that restrictions on labor
and occupational mobility, unstandardized weights and measures, internal
tolls and tariffs, unequal taxes, badly defined property rights, and other
medieval debris encumbered economic performance. The best example is Prussia,
where many of the post-1806 reformers explicitly acknowledged their debt
to Adam Smith and the Enlightenment. In areas where the French influence
was too little or too late, or where internal resistance to Enlightenment
ideas was strong, the Institutional Enlightenment did not have the desired
impact, and economic “rationalization” did not get far. In those regions—Iberia,
Southern Italy, Eastern Europe—economic development was slow.
In
Britain things were different. The Institutional Enlightenment did not
need the shock therapy administered by Napoleon’s armies. In the 17th century
Britain had attained the institutional nimbleness that allowed it to change
the rules of the game when these were realized to be harmful, without the
dependence on the goodwill of an absolute monarch. Thus when the road network
was deemed inadequate, turnpikes emerged; when the Calico Acts or the Statute
of Apprentices and Artificers were deemed to be obstacles to economic performance,
Parliament abolished them (1774 and 1814, respectively). Luddism was mercilessly
suppressed and innovation, if not actively encouraged, surely was looked
on favorably. After 1820, with the ascent of the “Liberal Tories” under
William Huskisson, Britain began slowly to move toward freer trade, though
it took three more decades to get there. On the whole, the process was
smoother and easier in Britain than on the Continent, but the difference
was one of degree. By contrast, nothing of the sort occurred in China,
the Ottoman Empire, or Africa.
It
should be emphasized that the various institutions that needed to be reformed
were not just tossed out as anachronistic relics of past ages; somebody
usually stood to benefit from them and fought tooth and nail to retain
them. This was even true for the bewilderingly complex system of weights
and measures that the rationalizers replaced with the metric system.[5]
It was true a fortiori for the guilds, the tax exemptions, the price controls
on bread, and the rights of common fields and free pasture. What economists
call “rent-seeking” (economic activity meant to redistribute rather than
to create wealth) had been the hallmark of the ancien régime. While
no society has ever been without it, the attitude toward it can be of prime
importance. In order to overcome the resistance of vested interests, often
concentrated and well-organized, policy makers had to be persuaded that
it was in the national interest to carry out the reforms. By 1800 or so,
mercantilist thought was being replaced in the minds of Europe’s leaders
by Enlightenment ideas, whether they got them first hand from The Wealth
of Nations or from some other source.
Economic
historians ignore ideology and cultural beliefs at their peril. It seems
ironic that economic science today should settle for a historical materialism
in which beliefs and knowledge have no autonomous existence, in which rhetoric,
argument, persuasion, and the intellectual life of nations have no impact
on institutional change. Whether or not the Enlightenment “caused” the
American and French Revolutions, it is hard to think of their courses without
the power of enlightened ideology. To quote one rather influential 20th-century
economist: “I am sure that the power of vested interests is vastly exaggerated
compared with the gradual encroachment of ideas . . . . soon or late, it
is ideas, not vested interests, which are dangerous for good or evil.”[6]
Joel
Mokyr is Robert H. Strotz Professor of Arts and Sciences and professor
of economics and history at Northwestern University. His The Enlightened
Economy: An Economic History of Britain, 1700–1850 will be published
by Penguin Press in 2004.
[+]
Some of the material in this paper is adapted from my books The Gifts
of Athena: Historical Origins of the Knowledge Economy (Princeton University
Press, 2002) and The Enlightened Economy: An Economic History of Britain,
1700–1850 (Penguin Press, 2004), and my paper “Thinking about Technology
and Institutions,” presented at the Macalester International College Roundtable,
“Prometheus’s Bequest: Technology and Change,” October 10–12, 2002.
[1]
For instance, Oded Galor and David Weil, “Population, Technology, and Growth,”
American Economic Review 90 (2000): 806–828; Robert E. Lucas, Jr.,
Lectures on Economic Growth (Harvard University Press, 2002); Charles
I. Jones, “On the Evolution of the World Income Distribution,” Journal
of Economic Perspectives 11 (1997): 19–36; and William Easterly, The
Elusive Quest for Growth (MIT Press, 2001).
[2]
Danny T. Quah, “Twin Peaks: Growth and Convergence in Models of Distribution
Dynamics,” The Economic Journal 106 (1996): 1045–1055.
[3]
On the matter of “growth as a normal condition” and the blockages and obstacles
to it, see especially Eric L. Jones, Growth Recurring (Oxford University
Press, 1988) and Stephen L. Parente and Edward C. Prescott, Barriers
to Riches (MIT Press, 2000).
[4]
Robert B. Ekelund, Jr. and Robert D. Tollison, Politicized Economies:
Monarchy, Monopoly, and Mercantilism (Texas A&M University Press,
1997), 14.
[5]
Kenneth Alder, “A Revolution to Measure: The Political Economy of the Metric
System in France,” in M. Norton Wise, ed., The Values of Precision
(Princeton University Press, 1995).
[6]
John Maynard Keynes, The General Theory of Employment, Interest, and
Money (Harcourt, Brace, 1936), 383–84.
MANAGERIALISM:
ITS HISTORY AND DANGERS
by
James Hoopes
About
twenty-five years ago the United States began to be a business civilization
in a new, largely unnoticed, and subtly undemocratic way. The rise in corporate
power over the past quarter century has been obvious. It has been less
clear, however, how radically our time differs from previous periods of
business preeminence in American history. In the past the nation’s democratic
political tradition had intellectual resources—Puritanism, republicanism,
individualism, pragmatism—with which to challenge business power. But now
corporations themselves possess a sophisticated social philosophy that
speaks the language of democracy.
This
new corporate social philosophy— managerialism, I will call it in this
essay—had its origins in the 1920s and 1930s at the Harvard Business School.
The school’s dean, Wallace Donham, aimed to create a managerial worldview
that would be central not only to business but to society at large. His
vision now seems to be coming to realization. Witness George Bush, our
first president with a master’s degree in business administration, which
he earned at Harvard. New York Mayor Michael Bloomberg is another businessman
become politician with a Harvard MBA. Ditto for Massachusetts Governor
Mitt Romney.
Those
old enough to remember Mitt Romney’s father, George, understand at least
the surface difference between the previous and present generations of
corporate politicians. George Romney, governor of Michigan in the 1960s
and president of the American Motors Corporation in the 1950s, was a liberal
Republican who wrecked his chance of winning the presidency in 1968 with
his famous gaffe that military brass had “brainwashed” him into his initial
support of the Vietnam War. He and other CEOs who went into politics in
that era suffered terribly from foot-in-mouth disease—one thinks of “Engine”
Charlie Wilson, Eisenhower’s defense secretary, and his much lambasted
remark: “What’s good for General Motors is good for America.” Corporate
politicians of that era often came across—somewhat unfairly in the cases
of George Romney and Charlie Wilson—as less prepared to answer to a sovereign
people than to give it its marching orders.
Our
latter-day corporate politicians are far more sensitive to the nuances
of democratic politics than their predecessors, thanks in part to their
MBA educations. In their graduate business studies Bush, Bloomberg, and
Mitt Romney learned the mélange of democratic and elitist ideas
that is today’s managerial ideology. That ideology enables many thousands
of powerful CEOs and managers to conceive of themselves as democrats, both
in corporate life and in the political arena.
In
my recent book, False Prophets: The Gurus Who Created Modern Management
and Why Their Ideas Are Bad for Business Today, I attempted to write
the history of managerialism and the mostly forgotten men and women who
created it. These pioneers devoted enormous amounts of intellectual energy
to glossing over the conflict between America’s official values of freedom
and democracy on the one hand and, on the other, the reality that most
citizens earn their living under corporate bosses with a great deal of
arbitrary power over their work lives. This managerial tendency to hide
rather than face the contradiction between our political values and our
workaday lives began almost as soon as big business first emerged in America.
Even the notoriously dictatorial Frederick W. Taylor (1856–1915) tried
to present himself as a progressive social reformer, a friend and helper
of working people.
But
the main genesis of managerialism lay in the human relations movement that
took root at the Harvard Business School in the 1920s and 1930s under the
guiding hand of Professor Elton Mayo. Mayo, an immigrant from Australia,
saw democracy as divisive and lacking in community spirit. He looked to
corporate managers to restore the social harmony that he believed the uprooting
experiences of immigration and industrialization had destroyed and that
democracy was incapable of repairing. A man of enormous personal charm,
Mayo was a gifted psychotherapist but also an intellectual charlatan who
molded his data to fit his preconceived ideas. He was one of the most important
social scientists in American history and still exerts enormous influence
over the way we live and work. Mayo and the researchers he gathered around
him—the “Harvard human relations group”—created the mix of democratic appearance
and elitist substance that still dominates the management movement in America
and, increasingly, the world.
Mayo
was the main interpreter of the famous Hawthorne experiment at a telephone
assembly plant of that name outside Chicago. The experiment is remembered
mainly for the “Hawthorne effect,” the idea that employees raised their
output for no other reason than that they were working under the observation
of the experimenters. Much of modern management’s emphasis on sympathetic
understanding and generous recognition of employee effort is an attempt
to generalize the Hawthorne effect into a people-handling technique.
But
the Hawthorne experiment, as Mayo interpreted it, had other, more important
effects on management ideology that have not received as much attention.
Mayo argued that the relaxed supervisory regimen of the experiment allowed
the workers to form themselves into an organic community. The emotional
satisfaction of belonging to that community supposedly accounted for their
increased productivity. But as scholars have shown (notably, Richard Gillespie
in his excellent 1991 book, Manufacturing Knowledge: A History of the
Hawthorne Experiments), a significant number of the employees in the
experiment were anything but happy. They disliked the intense observation
to which the experiment subjected them and saw it as an invasion of their
privacy. They raised their output not because of the satisfactions of organic
community but because the experimenters’ desire to control variables gave
the workers unusual assurance that the piece rate that determined their
pay would not be cut, no matter how much they earned. Mayo’s bogus interpretation
of the Hawthorne experiment started the idea— true, of course, to a certain
extent—that cost-free emotional and spiritual rewards can substitute for
money as a motivator of employees.
In
addition to the supposed pleasures of organic community, the Harvard human
relations group offered workers the satisfaction of bottom-up power. As
political scientists have long observed, even the most oppressed people
have some reciprocal power. The Harvard group, pointing to workers’ power
to resist organizational goals, pushed managers toward gentle human relations
techniques such as understanding, empathy, and recognition. The more extreme
members of the Harvard group propounded the idea—still accepted by a fair
number of management theorists—that managers have little real power or
even none at all.
If
managers have no power, why do employees obey their orders? The human relations
group answered that workers’ feeling of powerlessness in the face of a
manager’s order is a self-delusion, a cover-up of their moral weakness.
Workers want to feel powerless in order not to face their fear of accepting
responsibility for the organization’s success or failure. A manager’s order
only appears to be a downward delegation of authority. In reality an order
is an occult ritual by which powerful workers fearfully delegate responsibility
upward to the manager!
Of
course, there is a great deal of truth to the idea of bottom-up power.
Employee commitment and participation spell the difference between profit
and loss in many enterprises. Eliciting employee involvement is an important
part, maybe the most important part, of a manager’s job. That is why the
human relations techniques taught in courses such as Organizational Behavior
(OB) are an integral part of management education. The first course in
OB—created at the Harvard Business School in the 1950s—was the brainchild
of Fritz Roethlisberger, Mayo’s foremost disciple within the human relations
group and co-author of a massive tome on the Hawthorne Experiment, Management
and the Worker (1939). OB is now a required course in business schools
across the United States and, increasingly, around the world. The OB idea
of bottom-up power seems ever more relevant in an increasingly competitive
global economy where companies with fast, flexible teamwork win the race
to bring new products to market.
But
the idea of bottom-up employee power also has a cultural function in democratic
society at large, which explains why management ideologues greatly overstate
it. On the face of it, bottom-up power seems democratic and consistent
with America’s historic political culture. When employees bring their democratic
values to work, managers assuage their feelings of lost freedom and dignity
with the idea that the corporation is a bottom-up organization, not the
top-down hierarchy that in reality it is. A certain linguistic confusion
in the management ideology helps accomplish this cultural function. Managerialism
confuses “power” in the physical sense of power to do the job with “power”
in the political sense of power over oneself and others. By emphasizing
the first sense of power, management gurus imply that employees also enjoy
power in the latter sense.
By
waxing eloquent about the importance of employee empowerment, gurus enable
the managers who pay their fees to feel like humanist liberators or at
least like good American democrats. In prosperous companies able to afford
high wages, generous benefits, and good working conditions, employees may
indeed conceive of themselves as not just crucial participants in doing
the company’s work but as more or less autonomous members of a workplace
community. One can scarcely doubt that such good feelings add to the health
of the enterprise. An economic downturn and layoffs, however, may disabuse
employees of such illusions. But managers, who share the normal human inclination
to think well of themselves, may cling all the more strongly to the idea
that they are not top-down bosses but democratic leaders of employees’
bottom-up effort. If managers have no top-down power, how do they run the
company? The human relations group answered that although power is bottom-up,
morality is top-down. The manager is not a boss but a leader who inspires
employees’ cooperative effort through exemplary moral courage in accepting
responsibility for results which not he but the workers have the power
to control. Management theorists generally overlook or ignore the potential
of this idea for fatuous flattery of managers and, worse, its potential
to produce conceit and moral arrogance in “leaders.”
Corporate
America pays enormous sums to leadership gurus to run training seminars
for managers. As with the idea of bottom-up power, there is of course much
to the idea of the importance of leadership in business organizations.
But a good deal of leadership training is counterproductive. It overemphasizes
moral influence and underemphasizes top-down power as management tools.
Managerialism
deceives managers into thinking they are leaders more than bosses. The
consequences for managers when they try to put such fables into action
can be terrible and, for employees, still worse. When employees don’t leap
to follow the “leader,” the fortunate managers are those who do not lash
out in anger and destroy what moral influence they really do have. Managerialism
can also promote corrupt management. Overstating the importance of moral
leadership vis-à-vis top-down power is, paradoxically, a recipe
for immorality. A fair part of the immense amount of CEO arrogance, conceit,
and thievery we have witnessed in recent years has been perversely rationalized
and supported by managerialism and its unethical idea that the manager
is a moral leader. It is easy enough to understand managerialism’s insidiously
corrupting logic in the abstract. Why shouldn’t leaders, who exist on a
different moral plane from their followers, trust themselves to skirt rules
they expect their followers to obey?
In
short, management ideologues, by proclaiming the idea of the leader as
a moral exemplar, have made many corporate executives moral egotists. What
managers really need to learn is moral caution in the use of their power.
But how can managers learn that their undemocratic power endangers their
souls when their ideology denies that they even have power?
Managerialism
aims unrealistically to make corporations democratic. It is more likely
to make democracy corporate. Messrs. Bush, Bloomberg, and Romney are scarcely
the only Americans with an MBA degree. Our universities now churn out 100,000
MBAs a year or a million a decade. Many more undergraduates major in business
disciplines than in American history. Other citizens pick up management
ideas from social contacts at work, from company training seminars, and
from the how-to-manage pulp sold in airport bookstalls. Our political leaders
at all levels, from the White House to local government, will increasingly
have been schooled in managerialism.
This
change in the education of many of our prominent citizens raises the disturbing
possibility that the principles of constitutional law and liberal democracy
will be more and more challenged by the principles of organizational behavior
as guidelines for governing America. John Adams’s prescient warning that
“power believes it has a great soul” and is therefore always suspect is
more and more likely to be forgotten (or rather proven) in favor of the
unwittingly undemocratic claim that our officials are moral leaders. The
danger, of course, is that our leaders, and we too, may question our democratic
right to doubt such paragons.
Managerialism
is scarcely the only antidemocratic force loose in our culture. But many
of the others—religious certitude, xenophobia, intolerance, ignorance,
irrationality, and fear—are old and well known. Managerialism is newer
and easier to miss, partly because it speaks the bottom-up language of
democracy with the cool civility it has learned in corporate life. But
its threat to democracy is real.
All
the more ironic, then, that managerial theorists created their false ideology
in an often sincere but always mistaken attempt to make corporate life
consistent with democracy. They would have done better both for business
and democracy to admit the truth that Americans live two lives. At work
we create wealth under top-down power that contradicts the freedoms and
rights we cherish in the rest of our lives. The best possible act of corporate
“social responsibility” would be to acknowledge the conflict and therefore
help maintain the balance between the top-down management power that has
made us rich and the bottom-up political values that keep us free.
Managerial
ideologues could well argue that managerial corporations have shown themselves
the most effective institutions for improving the material conditions of
human life. Yet even if that is true, the moral dangers of management make
the corporation at best a necessary evil in an imperfect world. Abandoning
the self-righteous, moralistic pretense that has become fashionable for
managers in recent decades may be hard on some managers’ egos. But it would
be the surest, though hardly perfect, safeguard against the undemocratic
moral arrogance their power promotes.
One
suspects that the inducements of democratic virtue and the chance to undo
managerialism’s subtle undermining of democratic culture will not alone
change the corporate mindset. The job will require the efforts of citizens
outside the corporation as well as in it. We all need to better understand
the risks and rewards of the new managerial ideology. That is a good reason
for cultural historians to become business historians. To paraphrase Clemenceau
on war and generals, management is too important to leave to CEOs.
James
Hoopes is Distinguished Professor of History at Babson College. His most
recent book is False Prophets: The Gurus Who Created Modern Management
and Why Their Ideas Are Bad for Business Today (Perseus, 2003).
THE
SILK ROAD: PART II
by
Alfred J. Andrea
IN
THIS ESSAY’S first installment Alfred J. Andrea considered the first golden
age of the Silk Road—the era of the Han, Kushan, Parthian, and Roman Empires—and
looked briefly at the early influx of Buddhism into China following the
collapse of the Han dynasty in 220 C.E.
This
second and last installment picks up the thread of Buddhist penetration
into China and surveys the succeeding 1300 years of the classical Silk
Road. The period from roughly 200 C.E. to about 600 C.E. witnessed the
domination of the trade routes of Inner Asia by the Sogdians, an eastern
Iranian people. Sogdiana was the Greek name for the steppe region just
east of the Amu Darya (Oxus) River, roughly modern Uzbekistan, Tajikistan,
and southern Kazakhstan and Kyrgyzstan. The Sogdians were politically divided
into small city-states, such as Bukhara and Samarkand, but were culturally
united. More than any other people, they served as the overland merchants
who connected the lands of western Eurasia with those of the East and were
major vectors of goods and ideas. Indeed, “Sogdian” was a synonym for “merchant,”
and the Iranian language of Sogdiana was the lingua franca of Central Asia’s
trade routes. Sculptures of Sogdian merchants, with their conical hats
(Phrygian caps), full beards (often colored red), and large noses, were
a standard item for Chinese ceramic sculptors for centuries.
The
Sogdians dealt with strong, fairly stable states in Persia, the Eastern
Mediterranean, and India, but China was a political mess. Still, Sogdian
merchants carried on their work in Inner Asia, often protected by nomadic
Turkic tribes who benefited from the Sogdians’ activities. And that work
was more than just buying and selling goods....
HONOR,
HATRED, AND AMERICA’S MIDDLE EAST
by
Bertram Wyatt-Brown
In
light of the perilous situation we face in dealing with foes bent on liquidating
American power, national authorities and the public at large must acquire
a greater comprehension of how those enemies think and why their hatreds
consume them. In fact, some leaders have begun to recognize the hazards
of our cultural illiteracy. Senator Chuck Hagel of Nebraska, for instance,
observes: “America will require a wider-lens view of how the world sees
us, so that we can better understand the world, and our role in it.”[1]
This is especially true for U.S. relations with the populations and governments
of the Middle East. Sadly, our apprehension of these countries and their
cultures, religious divisions, and ethical conventions is woefully inadequate.
For years the U.S. intelligence services misunderstood the perils of religious
extremism in the Arab world. One former officer recalled that throughout
the 1990s his fellow members of the security agencies were told that religious
factors had to be considered “soft and nebulous—as well as potentially
embarrassing in those years of epidemic political correctness.” Forced
by the recent tragic events, the American military, secret services, and
State Department authorities at last have become all too aware of Wahhabism,
the extremist Arab fundamentalism, along with the particularities of the
Shiite and Sunni branches of the Muslim faith. But the crucial role of honor
in Middle Eastern societies has been largely unacknowledged despite its
pervasiveness and motivating potency.
The
ethic of honor serves as the fundamental system of justice in communities
and nations where civil society has few institutions and where the rule
of jurisprudence of a more elaborate and objective character exists only
weakly or not at all. Under these circumstances, honor is not perceived
as an ideal of upright individualism. Rather, the possessor of honor has
maintained or achieved a high reputation in the public arena for his martial
valor, familial loyalty, and male protectiveness and authority over possessions
both material and human, that is, his female dependents and offspring.
Upon his arms and those of his tribe or clan rests the security of his household.
. . .
[1]
Chuck Hagel, “Defining America’s Role on the Global Stage,” USA Today
131 (May 2003).
RETHINKING
LOCAL HISTORY: AN INTERVIEW WITH JOSEPH A. AMATO
IN
AN IMPORTANT NEW BOOK, Rethinking Home: A Case for Writing Local History,
Joseph A. Amato makes the case for an innovative approach to writing
local history. Drawing on his background in European cultural history and
a wealth of experience writing about the local and regional history of
southwestern Minnesota, Amato argues convincingly that local history should
be vastly more captivating and important than it currently is. Local history,
as conceptualized in Rethinking Home, is a long way from the antiquarian
stereotype so often disdained by professional historians. But neither should
it serve as the mere microcosmic reflection of academic historians’ macrocosmic
constructions. In the process of advancing a new blueprint for local historians,
Amato raises a number of important questions about the relationship of
the various scales of historical inquiry, indeed about the very nature
and purpose of historical inquiry itself....
THE
WORLD WAR II GENERATION OF HISTORIANS
by
William Palmer
In
recent years much has been written about an extraordinary generation of
Americans, those born between about 1910 and 1922, usually known as the
World War II generation. This generation has become synonymous with hardship
and sacrifice. In the 1930s their generational identity was formed in the
crucible of the Great Depression, the rise of fascism, and World War II.
Their triumph as a generational force was announced in 1961 by John Kennedy,
himself a member of the generation, when he remarked in his inaugural address
that his election signified the “passing of the torch to a new generation
of Americans.”
There
is also a World War II generation of historians, a group that exerted comparable
influence over the historical profession. Those entering graduate school
between the mid-1950s and the early 1980s were likely to find their reading
lists dominated by works written by members of this group. In American
history it is scarcely conceivable that anyone could have emerged from
graduate school without a substantial study of the work of Richard Hofstadter,
Edmund Morgan, C. Vann Woodward, or Kenneth Stampp. In English history,
the works of Christopher Hill, Lawrence Stone, Hugh Trevor-Roper, and Geoffrey
Elton were equally sacred texts. . . .
CHRISTOPHER
DAWSON AND THE DEMISE OF CHRISTENDOM
by
James Hitchcock
The
English historian Christopher Dawson (1889–1971) is perhaps thought of
primarily as a medievalist. However, by far the largest body of his work
dealt with the 19th and 20th centuries. As much as he was a historian,
he was even more a cultural critic searching for historical answers to
the crises of modern times. He remained a relentless critic of industrialism,
urbanism, and acquisitive capitalism, all the forms of materialism which
he believed were at the root of modern disorders. To these he opposed the
Catholic idea of a universal spiritual society and, without idealizing
the Middle Ages, believed that this universal society had come closest
to realization during the 13th century. He identified religion as the heart
of every culture and considered religion and art more important than economics.
Few historians have ranged over the entire panorama of history as boldly
as he.
His
best-known and most widely read book, The Making of Europe (1932),
dealt with what modern people call the Dark Ages. But, while acknowledging
the material decline which followed the fall of Rome, Dawson insisted that
the centuries after 400 were among the most spiritually rich in Western
history. . . .
CHRISTOPHER
DAWSON AND THE CHALLENGE OF METAHISTORY
By
Dermot Quinn
Christopher
Dawson, the great Catholic historian of the last century, has fallen out
of favor. It was not always this way. When Dawson’s Gifford Lectures of
1948–49 were published as Religion and the Rise of Western Culture
he was acclaimed as “the most exciting writer of our day” by the Saturday
Review of Literature. He was, that paper thought, “unequalled as a
historian of culture.” The New Zealand Tablet saw the book as “a
primer for all thinking men of the 20th century.” The Spectator
praised it as “one of the most noteworthy books produced in this generation
about the medieval world.” Even the New York Times exclaimed that
“it would be difficult to find a volume of comparable scope in which the
institutions of medieval life are so brilliantly characterized.” Similar
enthusiasm greeted Dawson’s Dynamics of World History. Commonweal
hailed Dawson as “certainly the first scholar in the English-speaking Catholic
world, perhaps in the whole world.” The New York Times extolled
a writer who “for breadth of knowledge and lucidity of style” had “few
rivals.” A pattern was thus established. Dawson’s work consistently garnered
the highest praise, often from publications of radically different perspectives.
This should not surprise us. From his earliest days his reputation was
exceptional. Ernest Barker, who tutored him at Oxford, thought Dawson “a
man and a scholar of the same sort of quality as Acton and von Hügel.”
David Knowles considered him “in his field the most distinguished Catholic
thinker of this century.” As impressive as these testimonials are, they
may also affect us in a more melancholy way. Generous, even extravagant,
they seem to mark the distance between the generation that admired Dawson
and the generation that has left him behind. A profound change has occurred,
and not for the better. We should weep not for Dawson but ourselves. .
. .
PATTERN
AND REPERTOIRE IN HISTORY: AN EXCHANGE
CONTINUING
A THEME begun in the February 2003 issue of Historically Speaking,
this exchange once again addresses the relationship between historical
scholarship and scientific methodology. In Pattern & Repertoire
in History (Harvard University Press, 2002) Bertrand Roehner and Tony
Syme attempt to bridge the methodological gap between the social and natural
sciences by advancing a case for “analytical history.” Here two historians,
Ronald Fritze and John Lukacs, offer their reactions to Pattern &
Repertoire in History. Their essays are followed by Bertrand Roehner’s
reply.
A
NEW REPERTOIRE FOR HISTORIANS?
by
Ronald Fritze
Roehner
and Syme are two scholars seeking to formulate principles for the practice
of historical sociology or sociohistory by which they hope to create a
discipline that combines the best of history and sociology. History will
provide the detailed information about the past while sociology will provide
the theoretical structure and empirical organization. Many scholars reject
the possibility of such a truly scientific approach to the study of history.
They argue rightly that major historical events like the French Revolution
or World War II are unique phenomena. Roehner and Syme agree that a large
event like the French Revolution is unique in its entirety. But they argue
that such big and complex events can be broken down into simpler, component
parts. These simpler and smaller scale events are not unique and so are
susceptible to an empirical and scientific approach. In the 17th century
René Descartes advocated such a process of simplifying complex phenomena
into smaller units with the goal of revealing underlying organization or
patterns. This method is called the “modular approach,” but Roehner and
Syme have also labeled it “analytical history” . . . .
“SCIENTIFIC”
HISTORY?
by
John Lukacs
This
ambitious book is yet another breathless attempt to assert that the study
of history can (and ought to) be made “scientific,” since history itself
demonstrates recurrent elements that follow the “laws” of natural science.
From
beginning to end its assertions are unconvincing.
PREDICTIONS
AND ANALYTICAL HISTORY
by
Bertrand M. Roehner
Certainly
we would not suggest that the modular approach can be useful to all historians.
Although it may provide a more unified perspective in many fields, only
in a few can it lead to testable predictions. For the time being it would
be great if a few historians could keep these ideas in a corner of their
minds in order to put them to use whenever they come across a case to which
they can be applied with profit. Although Pattern & Repertoire
tries to offer a more “scientific” approach, the essays by Ronald Fritze
and John Lukacs made us realize that we did not go far enough in this direction.
This in itself is a crucial step forward and an important achievement of
this forum. . . .
THE
THREADS OF REVOLUTION: CENTRAL EUROPE’S MOMENT
by
Padraic Kenney
“In
Poland it took ten years, in Hungary ten months, in East Germany ten weeks:
perhaps in Czechoslovakia it will take ten days!”[1] These words, spoken
by Timothy Garton Ash to Vaclav Havel in late November 1989 during a planning
session in Prague’s Magic Lantern Theater, have become one of the memorable
phrases of that revolutionary time. There are a few others (every revolution
has them, of course): Gennadi Gerasimov’s quip that the Brezhnev doctrine
had been replaced by the “Sinatra Doctrine” (itself a misquote, but never
mind); Lech Walesa’s devastating retort, in a television debate, that the
Polish communists might be taking the country to the future, but on foot,
while everyone else was traveling by car; and Ronald Reagan’s Berlin exhortation:
“Mr. Gorbachev, tear down this wall!”
Garton
Ash’s bon mot, though, seems to encompass the entire sequence of events,
in a way dear to the hearts of students everywhere. In fact, I offer it
to my students every semester, as a handy mnemonic device. It was also
instantly popular in Czechoslovakia itself. Before Garton Ash could relate
his story in the New York Review of Books, the phrase had been immortalized
on hand-painted banners in Prague; in Poland commentators dryly added the
ominous prediction that in Romania the revolution would require only ten
minutes.
Yet
when it comes to making sense of the revolutions of 1989, Garton Ash’s
quip is not a very good guide. It was not, of course, intended to be so;
it would be a dreary world were people held accountable for the accuracy
of their jokes! Nevertheless, Garton Ash’s line is incorrect in three fundamental
ways: it underestimates the past; elides the present; and allows for an
overly benign view of the future. . . .
EUROPE,
1550–1800: A REVISIONIST INTRODUCTION
by
Jeremy Black
At
the beginning of the 1990s criticism of conventional views of absolutism
led to the publication of a number of self-consciously revisionist works
including Nicholas Henshall’s The Myth of Absolutism: Change and Continuity
in Early Modern European Monarchy (1992) and my own A Military Revolution?
Military Change and European Society, 1550–1800 (1991). Henshall was
particularly vigorous, dismissing absolutism as an anachronistic historical
construct, imposed on the period by later generations.
An
awareness of the pendulum of historical research suggested that this approach
would, in time, be countered, and indeed in 2002 John Hurt published Louis
XIV and the Parlements: The Assertion of Royal Authority, an impressive
work that explicitly contests the revisionist interpretation and, instead,
seeks to demonstrate a harsh reality of expropriation and lack of consultation
at the core of absolutist government.
This
riposte encourages another look at the whole question. The conceptual points
that nurtured early revisionism can now be clarified by a mass of important
new research. In particular, studies of court culture have mushroomed,
while there has been a welcome “thickening” of the geographical matrix
of our detailed knowledge of the period. It is also easier now to include
the overseas activities of the European states in the analysis. This is
far less true, however, of the necessary comparative dimension with non-European
powers, much of which still requires probing. . . .
ODYSSEYS
TO PLANETARY HISTORY
by
Clark G. Reynolds
IN
“CROSSING BORDERS” (Historically Speaking, November 2002)
Peter Paret reflected on his distinguished career as a military and art
historian. Paret’s essay was the first in a series of autobiographical
sketches by senior historians that will appear in these pages. The series
provides prominent and accomplished historians with the opportunity to
comment on their careers and to offer reflections on professional lives
dedicated to historical inquiry.
Our
second installment is from the recently “retired” (from teaching only)
Clark Reynolds. One of the nation’s leading naval historians, Reynolds
is known for his work in the history of naval aviation and for his brilliant
conceptual analyses of history and the sea. The Naval Institute Proceedings
selected his The Fast Carriers: The Forging of an Air Navy (McGraw-Hill,
1968) as one of the ten best English-language naval books published in
the Institute’s first one hundred years, and his 1975 essay “American Strategic
History and Doctrines: A Reconsideration” (reprinted in History and
the Sea) received the American Military Institute’s Moncado Prize.
For
better or worse, no history graduate student to my knowledge has ever been
encouraged, much less subsequently hired, to focus on global history per
se. Our pedagogical canon demands specialized preparation in European,
U.S., Asian, or any national history except as linked to some broader sub-discipline
like the history of science, technology, religion, gender, or the military.
Anything bigger—namely, the universal history which went out of style between
the world wars—has been eschewed by the profession until the very recent
(and still problematic) vogue for world history.
Yet
I was not only encouraged by my graduate teachers to think beyond the specialty
but was expected to do so. As a result, I moved toward world history, though
by no means consciously or even deliberately. I have continued to harbor
deep suspicions of most of those historians who claim distinction as world
historians, especially if their frame of reference is strongly biased (anti-
or pro- this or that for moral, political, religious, racial, gender, or
other reasons). My quest to understand the bigger picture developed as
a series of ultimately converging odysseys shaped by inner fires of curiosity
and propitious intellectual conditions. . . .
FROM
THOSE WONDERFUL PEOPLE WHO KILLED SOCRATES
by
Barry Strauss
Those
of us lucky enough to teach in a university today know that holding a professorship
is not merely a privilege but a public trust. Virtually every university
in the United States, including private universities, depends upon some
degree of public funding. Without taxes paid by people who do not enjoy
the good life of the university, few of us could be professors. Without
the tax-free status of all universities, few professorial jobs would exist.
The public has given us a great gift; the minimum that we owe the public
in return is to debate the great issues in our disciplines.
For
historians, that means three things. First, it means discussing everything
from politics, economics, and diplomacy, to war, revolution, and class
struggles, and from ideas, work, and sex, to the family, slavery, and religion:
in short, the full spectrum of subjects that make up the historical experience,
of which these are just a sample. Second, it means speaking and writing
in a language that educated laypeople might understand. Third, it means
presenting and discussing every political opinion. In short, historians
must be universalists. . . .
LETTERS
To
the Editors:
Niall
Ferguson’s essay (April 2003 Historically Speaking) grafts the globalization
of current times onto that of the British Empire and finds a non-contextual
resemblance between the International Monetary Fund’s prescriptions and
late 19th- and early 20th-century British economic policy. Yet Ferguson
skillfully avoids the fact that too often Britain acted as a protectionist
nation vis-à-vis its colonies. John A. Hobson wrote in Imperialism:
A Study (1902): “Imperialism repudiates Free Trade, and rests upon
an economic basis of Protection.” Moreover, Britain’s trade with its colonies
as a proportion of its trade with other countries in the latter half of
the 19th century either remained stagnant or declined, which clearly contradicts
Ferguson’s “unequivocal” claim that the “policy of free trade was beneficial
. . . to her colonies.”
With
regard to the issue of “unprecedented” overseas investment, Ferguson neglects
to mention that shares in the “immense” Indian railways came with a 5%
sovereign-guaranteed return. British capital investment in India was small
and, excluding the guaranteed investments in railways and public debt, negligible.
Ferguson
cannot explain why imperial Britain’s IMF-style policies did not lead to
economic growth, while similar policies have doubled India’s growth rate
in the past decade. Perhaps the “autarky” after independence should be
reevaluated, and not just dismissed as “wrong conclusions” arrived at by
the nationalists.
Left
to their own devices, countries like India could have gone the Japanese
route. Despite its handicaps—many of them direct consequences of imperialism—India
has made great strides in areas like nuclear and space technology,
and this after only fifty years of independence.
Anup
Mukherjee
Jabalpur,
India
To
the Editors:
In
the June 2003 issue of Historically Speaking Stephen G. Brush (“Why
Did [or Didn’t] It Happen?”) asks why the Scientific Revolution happened
in Europe in the 17th century but does not provide an answer. An explanation,
if not a simple determinant, recalls factors and conditions already well
known to historians, and my argument here is that their coincidence was
essential for the birth and maturation of the Scientific Revolution.
The
emergence of humanism in the 16th century, deriving mainly from the recovery,
or new translations, of the ancient classics, revived an awareness of the
Greek inquiry into nature and, especially, the Greek search for natural
laws. In the same century, the long tension between spiritual and temporal
power was finally tipped in favor of the latter. Examples: the self-promotion
of Henry VIII to be supreme head of the Church of England; the Concordat
of Bologna (1516) that gave Francis I the authority to nominate French
bishops; and the principle of cujus regio ejus religio as proclaimed in
the Religious Peace of Augsburg, neither a religious nor a theological
principle, but a political one.
Secular
power was achieving control only of religious authority, not of religion
itself. Yet those actions provoked fears that the precedent might eventually
encourage secular authority to tinker with the articles of faith. And the
eventual logical response was to advocate the separation of church and
state, at the very least to insist on the toleration of dissent. The movement
to deny religious authority the enforcement of belief pertained equally
to a refusal to allow religious authority to censor new scientific ideas.
In
his essay Brush refers in particular to Copernicus, Kepler, and Galileo,
all remembered as astronomers. Copernicus, as well, had studied canon law
and earned his living as a cathedral canon. Yet he never took orders. During
his study for a medical degree in Italy, he heard discussions of Pythagorean
astronomical ideas, convincing him of a heliocentrism that remained to
be demonstrated. Kepler trained for the Lutheran ministry, in the course
of which he received a good classical education. Later employed as a mathematician-astrologer,
he ended up not in the ministry but in astronomy. Galileo was meant to
be a physician but failed to complete his medical degree. He did, however, attend
the lectures of the physician-botanist Andrea Cesalpino, who sought to
demonstrate the natural order in the plant world. Whatever their differences,
these scholars represented the new emphasis on the study of nature rather
than the supernatural, and they were undermining the authority of medieval
Aristotelianism.
Such
individuals were generally laymen, another manifestation of the secularization
evident in the 16th century. Members of the clergy, to be sure, were not
instantly overshadowed; but the gradual emergence of laymen as founders
and leaders of intellectual and civic organizations, virtually completed
in the 18th century, has been well documented and would be considered one
measure of the advancement of the Enlightenment. The new intellectual climate,
gaining command in secular society, gave people the freedom to inquire
into the properties of nature and permitted the hope that new knowledge
could contribute to the improvement of this life, not the life beyond the
grave. Described as “useful knowledge,” such learning was meant to liberate
us from ignorance and superstition.
Here
is one illustration of these trends: it would have been a rare monastery
or convent that did not have an herbal garden, maintained by the religious
for medicinal benefits. But in the 16th century, coinciding with the burst
of exploratory expeditions overseas, botanical gardens made their first
appearance in Western Europe. Without exception, they were founded by secular
authority and administered exclusively by laymen. Although the initial
impulse was religious—to create new Gardens of Eden—they evolved quickly
under lay direction to become gardens meant to display the natural order
in the plant world—from a classification based upon medicinal affinities
to a classification based upon natural plant families.
My
argument, in short, explains the Scientific Revolution as the beneficiary
of the recovery of Greek natural science, the ascendancy of secular over
religious authority, the dominance of laymen in civic and intellectual
organization, the right to dissent in a liberal climate, and the emphasis
given to the natural order over the supernatural. Nowhere else but in Western
Europe did these essential ingredients ever coincide.
It
is understandable that scholars in the 19th century, who saw in the 18th-century
Enlightenment the ascendancy of secularism in both government and ideas,
should have defined modernity as the inevitable triumph of the secular
over the religious. As Thomas Albert Howard (“A ‘Religious Turn’ in Modern
European Historiography?” Historically Speaking [June 2003]) recognizes,
the upswing of conservative Islam, popular Catholicism, and especially
Protestant Pentecostalism, an upswing that is beyond dispute, testifies to a widespread
decline in secularization. Its further message is the erosion of modernity,
replaced by postmodernism, not only in popular religion and popular culture,
but on campuses as well. Historians in the future may well call this reaction
the Anti-Scientific Revolution.
Finally,
it should be noted that a conventional interpretation of the French Revolution,
which asserts that the revolutionaries had set out to dechristianize the
nation, is no longer tenable. Only the lunatic fringe, les enragés,
conspired to extirpate religion. A substantial majority of the revolutionaries,
no doubt, as children of the Enlightenment, were nonbelievers. But they
also knew that a great portion of the population, whether they were true
believers or not, had not shared in the Enlightenment. Beyond the cruelty
inherent in denying faith to those whose lives depended upon religion lay
the problem of maintaining social order, which was profoundly Christian in
formation. Their perception of the necessary coexistence of two cultures,
one incomprehensible to the other, considerably antedated the dichotomy
described by C.P. Snow in 1959. While Sir Charles clearly predicted danger
ahead if the breach between the literary and scientific cultures was not
bridged, in the ensuing debate about the validity of his argument the persistence
of the earlier and more profound dichotomy faded from view. In the aftermath
of the Cold War, its vitality has again taken center stage, and the triumph
of secularism and the other goals of the Enlightenment, liberty and civility,
seem more distant than ever.
Roger
L. Williams
University
of Wyoming
Stephen
G. Brush replies:
I
thank Roger Williams for his letter summarizing a plausible reason why
the Scientific Revolution happened in Europe in the 17th century. As he
states, his explanation is based on “factors and conditions already well
known to historians,” and I am sure he knows that historians of science
have also proposed other plausible explanations, some of them directly
opposed to his. The problem with this approach is that if you have only
one event of this kind to be explained, there is no convincing way to show
which of the several plausible but competing explanations is the correct
or at least the most satisfactory one, or to prove that, as Williams asserts,
the “coincidence” of a specific set of factors and conditions “was essential”
to cause that event to occur.
One
way around that difficulty is to study counterexamples: other historical
situations in which some but not all of those factors and conditions were
present, and in which a major scientific revolution did not occur. Historians
of the English Revolution/Civil War such as Conrad Russell consider this
method to be a legitimate historiographic approach, yet most historians
of science avoid applying it to the Scientific Revolution. In particular,
it is generally agreed that both China and Islam advanced a significant
distance toward achieving such a breakthrough at a time when science was
at a much more primitive stage in Europe. Therefore it seems to me that
we should look more closely at China and Islam, perhaps with Williams’s
factors in mind, to see why the Scientific Revolution did not happen there,
and that might provide a way to test proposed theories for why it did happen
in 17th-century Europe. Unfortunately, I am too old to spend several years
learning the languages and cultural histories of China and Islam well enough
to do this myself, so I can only try to encourage younger historians of
science to do it.
Stephen
G. Brush
University
of Maryland