Joseph S. Lucas and Donald A. Yerxa, Editors
President's Corner | Agriculture as History | The Study of the Nobilities of Latin Europe | Crossing Borders | Interview with J.R. McNeill and William H. McNeill | Big History | Dispatch from Germany | The Past Perfect | The Third American Republic | The Battle Over Constitutional Interpretation | The Prisoners | Letters
Volume IV, Number 2
WEBS OF INTERACTION IN HUMAN HISTORY
J. R. McNeill and William H. McNeill
One
of the most pressing tasks world historians have had in recent years is
to develop a more adequate conceptualization of human history as a whole,
one that combines the insights of the comparative history of separate civilizations
with world systems analysis. Any schema, moreover, that fails to take into
account the importance of humanity’s encounters and collisions with the
organisms of the earth’s ecosystem and the reciprocal impact of climate
and environment on human history is clearly inadequate. Admittedly, the
attempt to understand human history as a whole is a daunting task, but
one that historians cannot avoid simply because of its magnitude and complexity.
With that in mind, we advance the notion of the centrality of webs of interaction
in human history as the basis of a more satisfactory account of the past.
The career of webs of communication and interaction, we submit, provides
the overarching structure of human history.
A web,
as we see it, is a set of connections that link people to one another.
These connections may take many forms: chance encounters, kinship, friendship,
common worship, rivalry, enmity, economic exchange, ecological exchange,
political cooperation, even military competition. In all such relationships,
people communicate information and use that information to shape their
future behavior. They also communicate, or transfer, useful technologies,
goods, crops, ideas, and much else. Furthermore, they inadvertently exchange
diseases and weeds, items they cannot use but which affect their lives
(and deaths) nonetheless. The exchange and spread of such information,
items, and inconveniences, and human responses to these, shape history.
What
drives history is the human ambition to alter one’s condition to match
one’s hopes. But just what people hoped for, both in the material and spiritual
realms, and how they pursued their hopes, depended on the information,
ideas, and examples available to them. Thus webs channeled and coordinated
everyday human ambition.
Although
always present, over time the human web changed its nature and meaning
so much that we will speak of webs in the plural. At its most basic level,
the human web dates at least to the development of human speech. Our distant
ancestors created social solidarity within their small bands through exchanges
of information and goods. Beyond this, even tens of thousands of years
ago, bands interacted and communicated with one another, if only sporadically.
Despite migrations that took our forebears to every continent except Antarctica,
we remain a single species today, testament to the exchange of genes and
mates among bands through the ages. Moreover, the spread of bows and arrows
throughout most of the world (not Australia) in remote times shows how
a useful technology could pass from group to group. These exchanges are
evidence of a very loose, very far-flung, very old web of communication
and interaction: the first worldwide web. But people were few and the earth
was large, so the web remained loose until about 12,000 years ago.
With
the denser populations that came with agriculture, new and tighter webs
arose, superimposed on the loose, original web. The first worldwide web
never disappeared, but sections within it grew so much more interactive
that they formed smaller webs of their own. These arose in select environments
where agriculture or an unusual abundance of fish made a more settled life
feasible, allowing regular, sustained interactions among larger numbers
of people. These webs were local or regional in scope.
Eventually,
after about 5,000 years, some of these local and regional webs grew tighter
still, thanks to the development of cities that served as crossroads and
storehouses for information and goods (and infections). They became metropolitan
webs, based on interactions connecting cities to agricultural and pastoral
hinterlands, and on other interactions connecting cities to one another.
Metropolitan webs did not link everyone. Some people (until recent times)
remained outside, economically self-sufficient, culturally distinct, politically
independent. The first metropolitan web formed 5,000 years ago around the
cities of ancient Sumer. The largest, formed about 2,000 years ago by a
gradual amalgamation of many smaller ones, spanned most of Eurasia and
North Africa.
Some
of these metropolitan webs survived, spread, and absorbed or merged with
others. Other webs prospered for a time but eventually fell apart. In the
last 500 years, oceanic navigation united the world’s metropolitan webs
(and its few remaining local webs) into a single, modern worldwide web.
And in the last 160 years, beginning with the telegraph, this modern worldwide
web became increasingly electrified, allowing more and faster exchanges.
Today, although people experience it in vastly different ways, everyone
lives inside a single global web, a unitary maelstrom of cooperation and
competition.
• • •
All
webs combine cooperation and competition. The ultimate basis of social
power is communication that encourages cooperation among people. This allows
many people to focus on the same goals, and it allows people to specialize
at what they do best. Within a cooperative framework, specialization and
division of labor can make a society far richer and more powerful than
it might otherwise be. It also makes that society more stratified, more
unequal. If the cooperative framework can be maintained, the larger the
web gets, the more wealth, power, and inequality its participating populations
exhibit.
But,
paradoxically, hostile competition can also foster the same process. Rivals
share information too, if only in the shape of threats. Threats, when believed,
provoke responses. Usually responses involve some closer form of cooperation.
If, for example, one kingdom threatens another, the threatened king will
seek ways to organize his subjects more effectively in defense of the realm.
He may also seek closer alliance with other kingdoms. Competition at one
level, then, promotes cooperation at other levels.
Over
time, those groups (families, clans, tribes, chiefdoms, states, armies,
monasteries, banking houses, multinational corporations) that achieved
more efficient communication and cooperation within their own ranks improved
their survival chances and competitive position. They acquired resources, property, and followers at the expense of other groups with less effective
internal communication and cooperation. So, over time, the general direction
of history has been toward greater and greater social cooperation—both
voluntary and compelled—driven by the realities of social competition.
Over time, groups tended to grow in size to the point where their internal
cohesion, their ability to communicate and cooperate, broke down.
The
tight webs of interaction, linking groups of all sorts, tended to grow
for several reasons. They conferred advantages on their participants. Through
their communication and cooperation, societies inside metropolitan webs
became far more formidable than societies outside. Participation in a web
brought economic advantages via specialization of labor and exchange. Military
advantages came in the form of larger numbers of warriors, often full-time
specialists in the arts of violence, aware of (and usually choosing to
use) the cutting edge of military technology. Epidemiological advantages
accrued to people living inside metropolitan webs, because they were more
likely to acquire immunities to a wider array of diseases than could other
people.
All
these advantages to life inside a web came at a cost. Economic specialization
and exchange created poverty as well as wealth. Skilled warriors sometimes
turned their weapons against people who looked to them for protection.
And people acquired disease immunities only by repeated exposure to lethal
epidemics. Nonetheless, the survivors of these risks enjoyed a marked formidability
in relation to peoples outside of webs.
But
there was more to the expansion of webs than this. Webs were unconscious
and unrecognized social units. But nonetheless they contained many organizations—lineages,
tribes, castes, churches, companies, armies, empires—all of which had leaders
who exercised unusual power. These leaders caused webs to expand by pursuing
their own interests. Leaders of any hierarchy enjoy an upward flow of goods,
services, deference and prestige. They normally struggle to expand the
scope of their operations so as to increase this upward flow. Their followers
help, to avoid punishment and to earn a share (however paltry in comparison
to the leaders) of the anticipated rewards. In the past, this urge to expand
often came at the expense of people outside of existing webs, who were
poorly organized to defend their own persons, property, territory, or religion.
Survivors found themselves enmeshed in new economic, political, and cultural
linkages, in a word, in a web. Thus leaders of organizations within a web,
in seeking to enhance their own power and status, persistently (if unconsciously)
expanded the webs in which they operated.
Webs
also tended to expand because communications and transport technology improved.
Writing, printing, and the Internet, for example, were major advances in
the transmission of information. Each reduced information costs, and made
it easier to build and sustain larger networks of cooperation. Sailing
vessels, wheels, and railroads, similarly, cut transport costs and promoted
cooperation and exchange over broader spaces and among larger populations.
So webs involved both cooperation and competition. As their scale tended to grow, so too did their influence upon history. The original worldwide web
lacked writing, wheels, and pack animals. The volume and velocity of messages
and things that circulated within it were always small and slow by later
standards. Its power to shape daily life was weak, although it could occasionally
transmit major changes. But the more tightly woven metropolitan webs that
evolved in the past 5,000 years transmitted more things more quickly, and
thus played a larger role in history. As webs grew and fused, fewer and
fewer societies existed in isolation, evolving in parallel with others,
and more and more existed and evolved in communication with others. Between
12,000 and 5,000 years ago at least seven societies around the world invented
agriculture, in most cases quite independently: parallel pressures led
to parallel solutions. The steam engine did not have to be invented seven
times to spread around the world: by the 18th century, once was enough.
The
power of human communication, cooperation, and competition shaped the earth’s
history as well as human history. Concerted human action upset prevailing
ecological relationships, first through the deliberate use of fire, coordinated
hunting of big game, and the domestication of animals and plants. Eventually,
humankind learned to divert ever-larger shares of the earth’s energy and
material flows for our own purposes, vastly expanding our niche and our
numbers. This, in turn, made the infrastructure of webs—the ships, roads,
rails, and Internet—easier to build and sustain. The process of web-building
and the process of enlarging the human niche supported one another. We
would not be six billion strong without the myriad of interconnections,
the flows and exchanges of food, energy, technology, money, that comprise
the modern web.
How
people created these webs of interaction, how those webs grew, what shapes
they took in different parts of the world, how they combined in recent
times into a single worldwide web, and how this altered the human role
on earth is the subject of our short book.
William
H. McNeill, the Robert A. Millikan Distinguished Service Professor Emeritus
at the University of Chicago, is widely considered to be the dean of world
historians. His most recent book is Keeping Together in Time: Dance and
Drill in Human History (Harvard University Press, 1995). J.R. McNeill is
professor of history at Georgetown University. His Something New Under
the Sun: An Environmental History of the Twentieth-Century World (Norton,
2000) was co-winner of the World History Association Book Prize.
An Interview with J.R. McNeill and William H. McNeill
Conducted by Donald A. Yerxa in anticipation of J.R. McNeill and William H. McNeill, The Human Web: A Bird’s-Eye View of World History (Norton, forthcoming in 2003).
Yerxa:
What are you trying to accomplish with The Human Web?
JR
McNeill: A number of things at the same time, I suppose. On the
intellectual level, my dad and I are trying to get across a vision of world
history—one that is, we hope, coherent, accessible, and compelling. And
our vision, simply stated, is that the means of interaction among communities
throughout history have served to provoke the changes that are the main
currents of history, and this is quite consistent from the earliest human
times to the present. The emphasis is on communications, networks of communications,
technologies of communications, and of transport as well.
Yerxa:
How do your webs of communication and interaction improve upon existing
conceptual schemes in world history?
JR
McNeill: There isn’t a wide variety of existing conceptual schemes
within world history, if by world history we mean attempts to tell
the whole story of the human experience—or perhaps I should put it better—attempts
to give structure, pattern, and meaning to the whole history of the human
experience. By far the dominant approach, certainly within the English-language
historical tradition, has been to divide the world up—at least over the
last 5,000 years—among various civilizations. That is, to take elite culture
as the primary unit of analysis, because it is elite culture that defines
a given civilization, whether that is Egyptian, Chinese, or what have you.
And this is the approach that informs most of the textbooks, but it is
not the only one. In the last fifteen to twenty years a rival vision has
popped up—one that my father has done something to advance beginning forty
years ago—and that is to see world history as the story of interaction
among various cultures and to privilege cross-cultural exchanges, influences,
contacts, etc. I would say the primary exponent of this view currently
is Jerry Bentley, editor of the Journal of World History and author
of what I believe is the best textbook on the market. But that’s about
it in terms of coherent visions of world history. So what this web concept
tries to do is take the latter of these two positions a little bit further
and try to give some structure to the concept of, not perhaps cross-cultural
interactions, but cross-community interactions. That is, people need not
be of different cultures when interacting; they can be approximately of
the same culture and yet locked in some competitive struggle or, equally,
locked in some sort of cooperative arrangement. So we try to give a bit
more structure and pattern to the notion of group interaction than does
any other vision of world history that I’m aware of.
William
McNeill: Let me add a bit here. When I was young, there were two visions
of world history that were commonplace. One was based on notions of the
Judeo-Christian revelation, and it understood meaningful history as the
history of God’s relationship to men. This was far from dead; there were
lots of people in the United States, and in other countries as well, for
whom this version of world history was the true one. That is, God’s relationship
to man was what really mattered. At the core of this vision of world history
was the assumption that Christians were the people who had received the
true revelation. Muslims had exactly the same view of their world, but
it was a different revelation from the same God. And then there was the
18th- and 19th-century secularization of the Christian epos—as I
like to call it—that was taught in the universities. This vision of world
history was anchored in the notion of progress, interpreted in largely
material terms: technological improvements, printing, and all the changes
that followed from that, as well as changes of ideas. The notion of European
progress had been ascendant up to the First World War. And when I was a
young man, the First World War presented a tremendous challenge to the vision
of human progress. It contradicted everything in which those who thought
Europe was progressive had believed. This was simply unresolved by historians
who still thought of progress as the old, Enlightenment sort of vision.
For them, history stopped with 1914 and the controversy over war guilt,
and there was no effort to meet this great intellectual challenge to the
picture of progress. Progress was one of the great ideas of the Western
world, and I was brought up with all that. I distinctly remember the week
in which I encountered Toynbee as a second-year graduate student at Cornell.
I suddenly realized that the history I had been taught had been confined
to ancient Greece and Rome and the Western world, and the rest of the world
only joined history when the Europeans conquered it.
Yerxa:
What year was this when you first encountered Toynbee?
William
McNeill: It must have been 1940. I didn’t know Toynbee’s
reputation at all. Cornell's library only had the first three volumes of
A Study of History, and since Cornell did not have any formal graduate
courses then, I had free time—something that I’ve never had since—to explore
Toynbee's thought. I was captivated by his picture of a multiplicity of
civilizations, each—as he said—philosophically equivalent to another, and
this meant that the world was enormously wider than I had previously understood.
When I wrote my Rise of the West [1963], I was still very much influenced
by Toynbee and his vision of the multiplicity of civilizations as the way
to handle world history. And then later I was influenced to some extent
by the world-systems analysis of Immanuel Wallerstein and others, but I felt that the world-systems approach was also not totally satisfactory.
So now here we are advancing an alternative model, the web. The web extends
from every word you say, every time, every message. This is the texture
of human life within families, within primary communities, within cities,
and among all kinds of subgroups with professional linkages. We have, it
seems to me, a conceptual scheme that puts the world together in a far
more effective fashion than had been true before.
Yerxa:
Seen through the lens of webs of interaction, history reveals a trajectory
toward greater size and complexity. Is this necessarily so? To borrow from
the late Stephen Jay Gould, if it were possible to replay the tape of human
history from the beginning, would we be likely or unlikely to see the same
or similar trajectory?
JR
McNeill: Certainly, this is a meta-question. My answer to it, which
I offer without great confidence, is that if we were to replay the tape
of human history from precisely the same initial starting conditions—let’s
say 100,000 years ago or one million years ago—the probability is that we would end up with approximately the same results in very broad
patterns. And I stress approximately. We would likely see an evolution
toward greater complexities of social arrangements and an evolution toward
larger and larger units of society. Now, that is not to say that we would
necessarily arrive at an industrial revolution; we would not necessarily
arrive at a world with lots of democracies in it. Those seem to me on this
scale to be matters of detail that could easily have turned out differently.
But the proposition that the general drift of cultural evolution is toward
more complex and larger-scale social units seems to me highly probable.
Not absolutely ironclad, guaranteed, but highly probable.
Yerxa:
What does all this suggest about human agency? If these webs function as
the driving force of historical change, how should we view human agency?
JR
McNeill: I would say that the webs are the shaping force
of human history, not so much the driving force. The driving force
is the ambitions—individual and collective—of people. There’s plenty of
room for human agency, but as Marx put it, men make their own history,
but not just as they please. I think this is an apt aphorism. The webs
shape what is possible, but the driving forces are the opportunities and
challenges that people see. Now those challenges and opportunities that
they see, what they are aware of—all that is dependent upon the information
that comes to them. Information that comes to them comes via these webs
of interaction. So human agency, within the context of the webs, works
to shape ultimate results. And then on the more detailed level—getting
away from the meta-scale and the grand vision—there is plenty of room for
contingency and human agency on what for most historians are rather large-scale
questions: whether it’s the nature and character of the French Revolution
or the Taiping Rebellion or of Alexander’s Macedonian empire. On these
scales, which are pretty big scales for historians to operate on, there
is still plenty of room for contingency and human agency. Had Alexander
died at age sixteen instead of thirty-three, things would have been quite
different.
Yerxa:
We’ve seen an explosion of books dealing with macro-historical themes.
What do you make of this?
JR
McNeill: I’m delighted to see an explosion of macrohistories. I note
that many of them are not written by historians; they are written by journalists
or, in the case of Jared Diamond, someone who is part ornithologist and
part human physiologist. This seems fine to me; the additional perspectives
of people outside of the cadre of professional historians are very welcome.
But I do wish that historians would also more frequently adopt the macro-perspective
for two reasons. First of all, intellectually, historians, as a group,
need to operate on every scale—not that every individual historian needs
to do so, but as a group. That is, micro-studies are necessary and valuable,
but to make them maximally interesting and useful, they need to be situated
and contextualized in larger-scale macrohistorical patterns. At the same
time, macrohistories are impossible without the large number of microhistories.
So both scales and, by extension, all scales between the smallest and the
largest are helpful and useful. But the professional training of historians
in this and other countries is very slanted toward producing small-scale
studies, and they exist in great profusion. I do not object to that. I
do wish that there were more historians eager to operate on the larger
scale at the other end of that spectrum. Secondly, I think it’s important
because historians, at present, have some purchase on the public imagination.
This is delightful, but it is not a birthright of historians. There are
academic disciplines that exist tenuously because they don’t have anything
directly to say to the general public. I’m thinking, for example, of classics,
which one hundred years ago was a vital discipline in the universities
and is now a marginal one, certainly within American universities. And this
has happened to other disciplines as well. History, happily, has avoided
developing its own impenetrable jargon, although there are historians who
have succeeded magnificently in writing impenetrable jargon. Nonetheless,
on the whole, historians still write accessibly, which rather few academic
disciplines still do. And a number of historians write things that the
general public is happy to read. In order to maintain that situation and,
ideally, in order to improve it and expand the general interest in the
work of historians, I think historians need to write at the big scale.
The general public in most cases will not be interested in microhistories.
There’s always an exception to that; there’s always a market for certain
kinds of military or presidential histories in this country. But in general,
it’s the bigger pictures, the bigger sweeps that are going to be the most
appealing to the general public. So I am eager to see historians do that
and not leave the field of macrohistory to historical sociologists, ornithologists,
and journalists.
Yerxa:
What is your assessment of the present state of world history?
JR
McNeill: I’m pretty cheerful about it, for a couple of reasons.
First of all, I believe in world history; I think it is a feasible project
both as a teaching enterprise and as a writing enterprise. Obviously, I
think the latter; otherwise I wouldn’t have written this book with my dad.
I also believe it is appropriate as an educational mission for young people
in this and in all countries. And I’m happy to say that I think it’s getting
better, in at least two respects: the number of people in the field is
growing and the quality of work in the field is improving. In the English-language
community, I think that’s primarily due to the Journal of World History,
which has been going for about twelve years now and has served as a forum
for ideas, very effectively in my view. And then as a pedagogical matter,
the number of world history courses in this country is growing by leaps
and bounds, and in some other countries that’s true as well. I don’t know
if that’s true generally around the world, though I wouldn’t be surprised
if it were. And this seems to me a positive development. Now more than
ever—although I think it has always been the case—it would be desirable
to educate people in the broadest human context rather than in their own
national context or, for example in this country, in the Western Civ. context.
Those things are actually useful and valuable, but on their own they are
quite incomplete. As a pedagogical matter, the growth of world history
is a very favorable development.
William
McNeill: The field is experiencing very rapid evolution. There
are a lot of people interested in world history all of a sudden, and fortunately
there are some very good minds at work. One of the most impressive, in my opinion,
is David Christian. Of course, many others are doing serious work. Peter
Stearns has joined the chorus, and, if I may say, my son and I are doing
serious work. This maturation of world history is not surprising.
Obviously, the world is a tightly integrated whole today, and anyone who
looks at the world knows that the European past—much less the American
past—is not the whole past. We are immersed in this worldwide web. And
I think it is very important to know how it got that way, which is why
my son and I wrote our book.
Yerxa:
Do you think rank-and-file historians have paid sufficient attention to
world history?
William
McNeill: Of course not.
Yerxa:
Why not?
William
McNeill: In my opinion, one of the problems has been the historical
profession’s resistance to history that is not based in primary texts.
We have an enormous fixation on what seems to me to be the naïve idea that truth resides in what somebody wrote sometime in the past. If
it's not written down, it isn’t true. And that’s absurd. But it’s the way
historians are trained: you have to have a source, and if you don’t have
something you can cite from an original source, in the original language,
then you’re not a really good historian, you’re not scientific, you’re
not true. The idea that truth resides in what was said is highly
problematic. People in the past didn’t always know what was most important
even when it was going on around them. Similarly, we probably don’t know
what’s most important going on around us today. To assume that only our
conscious awareness of what we think we are doing is what should constitute
history is silly. What happens is a process in which hopes and wishes and
consciousness enter, but we don’t get what we want; we get something mixed
with what other people want, with unsuspected and surprising results for
all concerned, over and over again. Now if you take only what has been
written down—that which happens to have been preserved, which is a small
fraction of what was actually written down—as what historians should deal
with, you automatically abbreviate the human career. You leave out pre-history;
you leave out all the non-literate populations; and you concentrate in
effect on a very small number of people, often a very skewed sample of
the upper classes even, the clerics, the literate, which was sometimes
very, very small. So clearly I consider the obsession with written
sources to be an absurdity if you’re trying to understand what happened.
It means that people who write books such as ours—which is full of lots
of hypotheses based upon little or no material evidence and great leaps
of the imagination—may be dismissed as having engaged not in history but
historical speculation. I think this is why many people avoid world history.
They have their own Ph.D. to work on; they have to do a book; having done
that they’re now an expert on whatever it is; they have new problems to
look at and new sources to consult; and they’re too busy to think of the
larger context in which their own particular study takes place. My son’s
remarks about the importance of world history’s context for more specific
history are exactly right. Don't misunderstand me; I don’t wish to overthrow
textual history, history based on sources. Far from it. It’s the interweaving
of that with larger concepts that I support. Ever since 1914 there has
been no received sense of the whole drift of human history. After the notion
of progress was basically discredited, no one dared ask what mattered for
the history of humankind as a whole. I think that if we can begin to do
that, there will be a great healing for history and history will be in
much more fruitful contact with the other social and biological sciences.
President’s Corner
by George Huppert
It
seems fitting, as my term of office has come to an end, that I should say
a few words about how we are doing as an association. Perhaps I should
also provide a few candid observations regarding my experiences while in
office. It was in June of 2000, at our second national conference in Boston,
that I found myself nominated for the office of president of our society.
I accepted, not without some reluctance. Now, two years later, I have to
confess that this has been a pleasurable and instructive experience. Of
course, there is some fairly time-consuming work involved, but the net
benefit, which I had not counted on, has been a remarkable widening of
my horizon. As a historian of Renaissance France, I have maintained relations
over the years almost exclusively with other specialists in my field, both
here and in Europe. I rarely used to read books or articles unrelated to
my research. This is not something I am proud of, but somehow there was
never enough time to stray from my own work. This failure of mine is probably
not unique in our profession.
Over
the course of the past two years, things have been quite different. Inevitably,
mixing with all kinds of historians, I have come into contact with any
number of interesting colleagues whose work, of necessity, I had to sample,
at least in passing. This has turned out to be an exhilarating experience.
I recommend it highly.
During
my tenure, the Historical Society has matured, and in the process, undergone
considerable changes. We started out, four years ago, in reaction to trends
in the profession that concerned us. As we went along, talking to each
other and inviting contributions to our meetings and publications, we gradually
realized that we were creating a new forum and a new community. We may
have started out like angry prophets in the desert, calling the wrath of
heaven down on miscreants who were wrapping trivia in the swaddling cloth
of ideology. But now, four years later, we have created a unique organization,
a smaller, calmer meeting ground, where we can talk to each other “in an
atmosphere of civility, mutual respect, and common courtesy.”
This
we have achieved. The next step is to overcome fragmentation. This is the
daunting task with which I have saddled the program directors for our next
conference.
As
it happens, one of the program directors, Peter Coclanis, is the new president
of the Historical Society. We could not have chosen a better man for the
job. Peter is one of the many colleagues whom I would never have gotten
to know had I not accepted the job of president. In the past two years,
I have not only read his work, which is first rate, but also worked closely
with him and learned to appreciate his affable and supportive presence.
I look forward to further close collaboration with him.
George
Huppert is professor of history at the University of Illinois at Chicago.
His book After the Black Death: A Social History of Early Modern Europe
(Indiana University Press, 1986) is now available in a second, expanded
edition.
Agriculture as History
by Peter A. Coclanis
Peter A. Coclanis is the Historical Society’s new president.
Farming
and farmers don’t get much attention, much less respect, in American academic circles anymore. With farmers constituting such a small proportion of
the U.S. labor force, and with the cultural turn in the humanities two
decades ago, agriculture and agriculturalists now seem so elemental, so
material in the eyes of many as to disqualify these subjects from
the list of those deemed worthy of serious—i.e., fundable and prize-worthy—scholarship.
It’s hard to make hogs or harrows transgressive, I guess.
This
inattention, however depressing, does have its comic aspects, one of which
relates to the growing distance and estrangement of agriculture from the
mainstream cultural milieu. A few years ago, for example, in the Daily
Tar Heel, the University of North Carolina-Chapel Hill’s student newspaper,
a graduate-student columnist (in history, no less) opined that college
was the time for young people to “sew” their wild oats.[1]
(Where is Isaac Merritt Singer when you need him?) Similarly, in the actress
Mary Frann’s 1998 New York Times obituary, a friend was quoted as
saying that in the last year or two before her death, Frann had had “a
hard road to hoe.”[2]
Most roads would be hard to hoe, I suspect. And just last spring,
I was second reader on a senior honors thesis at UNC, the writer of which
at one point included a passage stating that “in a capitalistic society
. . . one man weeps, the other sows . . .”[3]
Fair enough, but if true, it must follow, then, that in socialist societies
people reap what they need, not necessarily what they sew, right?
Other
types of evidence attest to my point as well. In the late 1980s,
for example, that venerable body for rural youth, the Future Farmers of
America, officially changed its name to the National FFA Organization.[4]
Now the change may have come about in part because farming qua vocation
doesn’t have much of a future, but it arguably also came about—like Kentucky
Fried Chicken’s name change to KFC—because farming, like frying, is now
considered déclassé. More tellingly still, an article
entitled “Auburn Seeks to Revamp Aggie Image” appeared in the Wall
Street Journal a few years back. According to the piece, the school
was changing the name of the Department of Agricultural Engineering to
the Department of Biosystems Engineering, because the old name was turning
off the public, which increasingly views agriculture in negative terms
as “an unsophisticated, low-technology field.”[5]
Auburn! When a place like Auburn—a land-grant school since 1872—is embarrassed
to be associated with farming, you better believe that academic research
on the history of agriculture is in trouble.
In
its brief history, the Historical Society has tried to remedy this problem,
publishing impressive pieces in the Journal of the Historical Society
on farmers and farming by Victor Davis Hanson and Louis A. Ferleger, respectively.[6]
Every little bit helps, of course, but I’m still pessimistic. For a lot
of reasons. There is the case involving one of my US history colleagues,
a senior historian, who recently told me that he leaves farmers completely
out of his survey on US history since 1865 because they don’t contribute
to the narrative thrust of the course. In the spirit of Farmers’ Alliance
firebrand Mary Elizabeth Lease, I should have replied with a “Thrust this!”
I’m still kicking myself that I didn’t.
One
can go on and on with this type of anecdotal evidence. The editorship of
the journal Agricultural History is currently open, and I was approached
about the position, an attractive one in many ways, but one that demands
a modest level of institutional support. I approached my home institution
about the possibility of such support, and learned that agriculture is
not where we as an institution want to position ourselves for the future.
Cultural studies, nanoscience, or globalization, anyone? Finally, last
summer I gave a set of lectures in China on agricultural history. In working
on one of the lectures, I had to check on some developments in English
agricultural history, so I walked over to UNC’s Walter R. Davis Library
and went straight to the source: the much-renowned, multi-volume series
The Agrarian History of England and Wales, edited over the years
by luminaries such as H.P.R. Finberg, G.E. Mingay, and Joan Thirsk, among
others.[7]
I needed to look at four volumes, three of which were published in the
1970s and 1980s, and one published in 2000. None of the former had been
checked out since 1991, and the most recent volume had never been checked
out at all. O tempora, o mores!
What
then to do? Hold ’em or fold ’em? A tough call. I’m not much for tilting
at windmills, spitting into the wind, etc., but I really do believe there
are ways to interest a new generation in agriculture. In fact, in one of
my lectures in China and in two papers I’m preparing for publication, I’ve
outlined a way to do just that. It won’t be easy, and it will require agricultural
historians to reinvent themselves, to shift cultivation, as it were. Briefly
put, what I’m pushing is for scholars interested in agriculture to adopt
a broader research purview, and, in so doing, to devote less attention
(in relative terms at least) to isolated farmers in the field and to embed
the study of farming—particularly in the modern period—in an analysis of
what might be called the “food system” as a whole. Such an approach will
at once allow scholars to develop a more sophisticated and holistic approach
to agricultural production itself, and more explicitly to relate the farm
sector to developments in the manufacturing and service sectors of the
economy. If we are lucky, we may as a concomitant attract the interest
of bright, ambitious young scholars—scholars who right now wouldn’t be
caught dead pursuing research in agricultural history as traditionally
conceived. (Not to mention the attention of readers interested in subjects
such as industrialization, trade and marketing, consumer behavior, technology,
and cooking.) Anyone who doubts the appeal of the last of these need only
check out the slick new journal Gastronomica, published by the University
of California Press, or check out the ratings of “The Iron Chef” on the
Food Network!
What,
one might ask, do I mean by the term food system? What activities
are included under this rubric? To answer the latter question first: all
activities involved directly or indirectly in the production, storage,
processing, financing, distribution, and consumption of outputs produced
in the farm sector per se. As such, the “food system” approach in
some ways resembles the so-called commodity-chain approach championed by
an increasing number of social scientists around the world. In this approach,
a scholar interested in the athletic shoe industry, for example, would
begin with rubber tapping in Malaysia or Indonesia and not end until he
or she has adequately explained how and why poor African-American youths
in US ghettoes buy so many pairs of expensive Michael Jordan and Allen
Iverson model basketball shoes.
In
principle, then, adoption of the “systems” approach to agriculture entails
study of: the process by which farm inputs are acquired; cultivation practices
broadly conceived; the manner in which production is financed; how and
where farm output is stored; the processing of such output; transportation
logistics; all levels of marketing and distribution; and, of course, consumption.
Nor do these matters, even taken together, exhaust the list of relevant
areas of inquiry. Clearly, the state’s role, positive or negative, is crucial
to the systems approach, particularly assessing the state’s ability to
provide a stable context for production and exchange to take place, and
its capacity to develop human capital generally and to foster and sustain
the production and dissemination of specialized agricultural knowledge
specifically. In these matters, private and non-statal institutions and
organizations obviously also play roles, whether we’re speaking of market
stabilization (trade associations, co-operatives), risk reduction (futures
markets, formal or informal crop-insurance schemes), or human capital development
(private schools, NGOs, and international donors and foreign aid).
To
pursue the systems approach one need not necessarily study all of these
aspects of agriculture, certainly not in a methodical way. Just as a business
firm need not be completely vertically integrated to achieve some of the
benefits of the strategy, a researcher can achieve positive gains by approaching
and conceptualizing agricultural problems in a relational, systemic, process-oriented
way. In this regard the words of Eugene Genovese are very instructive:
No
subject is too small to treat. But a good historian writes well on a small
subject while taking account (if only implicitly and without a direct reference)
of the whole, whereas an inferior one confuses the need to isolate a small
portion of the whole with the license to assume that that portion thought
and acted in isolation.[8]
To
complicate our task a bit more, let me say that however much I appreciate
the intellectual value added by the “commodity-chain” approach, and by
the “chain” metaphor itself, other metaphors actually come closer to what
I have in mind in making my case. History is not linear and agriculture
is not plane geometry. Human behavior is more complex than that. Rather
than leaving you with an image of a chain linking together the various
components of the food system, I would propose a more intricate and elaborate
metaphor, one that could be rendered graphically via a three-dimensional
image of some type or another perhaps, or verbally by invoking the image
of a web, or—borrowing from physics—that of a field. In each case, the
idea is to move us away from the rigidity and constraints imposed by metaphors
invoking chains.
Once
we begin to think of things in this way, we can begin to connect Farmer
Alpha to Consumer Zed, not to mention History Professor X from Boll Weevil
A & M to History Professor Y from Skyscraper Metropolitan. In other
words, let’s implicate more people and more processes in the farmer’s “plot.”
We can ask no less, for the stakes are too great. As Ferleger points out
in the piece mentioned above, as much as 50% of the world’s labor force
may still be directly involved in agriculture today. And as Hanson suggests
in his essay, knowing something about agricultural history may help us
“moderns” to understand, if not remember, something important about balance
in life, ethical purpose, and moral restraint.
The Historical Society’s current president, Peter A. Coclanis, is Albert R. Newsome Professor and chairman of the history department at the University of North Carolina-Chapel Hill. He is the author, with David L. Carlton,
of The South, the Nation, and the World: Perspectives on Southern Economic
Development (University Press of Virginia, forthcoming in 2003).
[1] Daily Tar Heel, January 26, 1998, 10.
[2] New York Times, September 25, 1998, A22.
[3] Robert Vice, “The Portland Canal,” Senior Honors Thesis, History, University of North Carolina at Chapel Hill, 2002, 54.
[4] Richard P. Horwitz, Hog Ties: Pigs, Manure, and Mortality in American Culture (St. Martin’s Press, 1998), 181.
[5] Wall Street Journal, October 21, 1998, S1, S2. The quote in the text appears on page S1.
[6] Victor Davis Hanson, “Agricultural Equilibrium, Ancient and Modern,” Journal of the Historical Society 1 (Spring 2000): 101–133; Louis A. Ferleger, “A World of Farmers, But Not a Farmer’s World,” Journal of the Historical Society 2 (Winter 2002): 43–53.
[7] The Agrarian History of England and Wales, eds. H.P.R. Finberg, et al., 8 vols. (Cambridge University Press, 1967–2000).
[8] Eugene D. Genovese, “American Slaves and Their History,” in Genovese, In Red and Black: Marxian Explorations in Southern and Afro-American History (Vintage Books, 1972), 103.
A Fair Field Full of Folk (But Only Beyond the Sea): The Study of the Nobilities of Latin Europe
by D'Arcy Jonathan Dacre Boulton
The
field in which I work is one of the oldest in the history of historiography,
and one of the most vibrant in the discipline today, attracting not only
scores of active historians, but some of the best minds in the profession.
Nevertheless, in North America my field has never been popular, and has
become increasingly marginalized since World War II. The field I refer
to is the history of those hereditary societal elites that are most commonly
called “nobilities,” groups distinguished by a claim to descent from ancestors
who held one or more of a certain range of honorable statuses, and by the
transmission of membership to all of their legitimate children at birth.
Nobilities have dominated not only in high barbarian societies like those
of the Celtic and Germanic peoples in the first centuries of the Christian
era, but in most agricultural civilizations and in a number of important
societies (including the British Empire) in the first stages of industrial
civilization. In most of Europe, and in some of the overseas colonies of
France, Spain, and Portugal as well, the nobility remained the dominant
order of society in the economic sphere into the 19th century, and in both
the cultural and the political spheres until the early 20th century.
Unfortunately,
the violent overthrow between 1917 and 1919 of many of the socio-political
regimes in which nobilities had been dominant cast a cloud over many aspects
of nobiliary history. Studies of such themes as heraldry, knighthood, knightly
orders, chivalry, courts, and courtliness fell suddenly out of fashion
among professional historians, and even were treated with scorn in some
countries. Historians did not cease to study nobiliary history, but they
now restricted themselves to themes that seemed politically neutral or
particularly relevant to contemporary questions. Scholarly attention everywhere
focused not on the nobility or its culture as such, but upon the institutions
through which its members had exercised their political and military authority—especially
those that had come to be associated with the artificial construct “feudalism."
Since
the end of World War II, however, under the influence of such distinguished
historians as Georges Duby, Jacques Boussard, Pierre Feuchère, and
Philippe Contamine, in France; Léopold Génicot in Belgium;
Gerd Tellenbach, Karl Bosl, Karl-Ferdinand Werner, and Werner Paravicini
in Germany; Claudio Sánchez-Albornoz and Salvador de Moxó
in Spain; Giovanni Tabacco, Gina Fasoli, and Sylvie Pollastri in Italy;
and K. B. McFarlane, Maurice Keen, Malcolm Vale, Chris Given-Wilson, Peter
Coss, David Crouch, and Nigel Saul in England, interest in the history
of the classic nobility before the Reformation has grown steadily in most
European countries. Scholarly production grew slowly to 1960, but exponentially thereafter, and now exceeds 100 works a year.
The expansion of studies of the later medieval nobilities in the last fifty
years has involved not merely a growth in the number of books and articles
published, but an increase in the number of subfields devoted to different
themes. Some of these were essentially revivals. After half a century of
neglect, studies of knighthood, the knightly social stratum, knightly culture
(especially the ideology and mythology of chivalry), and knightly organizations
began to appear in the early 1970s, and have given rise to major works
of synthesis, and to regular international colloquia in England and Portugal.
Similarly, the history of royal and princely courts, of the great households
that formed the institutional core of such courts, and of the ideology
of courtliness that governed the behavior of their habitués,
has become an important theme of scholarship in the last twenty years. So
has the history of the buildings in which courts and households functioned,
including villas, halls, manor-houses, palaces, and castles. The new science
of castellology is a particularly vital subfield, and inventories of castles
(with plans and maps) have been made for several countries. Finally, the
ancillary fields of heraldry and sigillography (the study of seals) have
been revived in most of the countries of Europe through the founding of
national societies and of an international academy to promote comparative
scholarship.
These
subfields all owe something to the antiquarian tradition, in which they
began in the 16th century, but others have been created in response to
fashionable trends in the discipline as a whole. Although racial distinctions
in the modern sense were not relevant to the history of European nobilities
before the 15th century, the increasingly important notion of the “purity
of blood” characteristic of nobilities from that century onward contributed
to later forms of racism, and has accordingly been examined with some care.
More importantly, the rise of gender studies has inspired work on noblewomen,
the noble family as a functional unit, and the ideals of both womanliness
and manliness in the noble order.
Studies
of nobiliary gender roles (an important element of the ideology of chivalry)
are closely related to other studies by historians and anthropologists
of the growing set of social phenomena associated with the word “honor.”
The importance in most pre-industrial societies of what I call the “honor
nexus”—the set of culturally determined attitudes and behaviors associated
with esteem and its positive and negative bases and manifestations—was
admirably set forth in a recent essay in these pages by Bertram Wyatt-Brown
(Historically Speaking, June 2002). Noble status actually rested upon a
recognized claim to superior honorableness derived both from descent from
honorable ancestors and from adherence to their behavioral code—the core
of which was usually a form of pugnacious hypermasculinity, requiring the
assertion and defense of the honorableness of both individual and kindred.
Historians of nobiliary honor before 1500 have traditionally concentrated
on the nature of the current version of the code (including the codes of
chivalry and courtliness), and the attitudes and behaviors they engendered.
Recently, however, historians have been giving increased attention to the
ways in which inherited honorableness could be augmented through the acquisition
of new domains (the principal source of honorable wealth, power, and rank)
or, alternatively, through the conferral of increased honorableness by
such acts of royal grace as admission to the nobility or the general order
of knighthood, promotion in nobiliary rank, appointment to an honorific
office, or admission to a princely order of knighthood like the Garter
or the Golden Fleece—all practices that grew steadily from about 1280/1325
to about 1660, and persisted thereafter.
While
many long-neglected areas of nobiliary history have thus been investigated
since 1970, the established subfields have themselves been revolutionized
by new techniques and supplemented by new ancillary sciences. Feudo-vassalic
institutions, for example, have been carefully distinguished and examined,
and their changing position in the military systems of different kingdoms
has been set against the development of various alternative systems of
recruitment and organization, especially retaining by contract. The latter
system was also employed after about 1350 to establish political clienteles
or “affinities” comparable to vassalages, and this phenomenon has also
attracted considerable attention from scholars since World War II. Growing
numbers of detailed studies of the formation and growth of particular principalities,
baronies, castellanies, and manors in Latin Europe, or of all of the dominions
of one of these classes of a particular region, have given us far more
precise ideas about the history of these nobiliary jurisdictions. And historical
cartography can now depict very clearly the complex relationships among
jurisdictions on all levels.
Historians
have also given us a much clearer notion both of the range of sources of
nobiliary incomes and of the changing ways in which noble estates were
administered. Biographical, genealogical, and lineal or dynastic studies
(generally possible only for the nobility before about 1500) have continued,
and have been increasingly supplemented by anthroponymic and prosopographical
ones. They have also been supplemented, especially in France and England,
by detailed studies of the nobility of a single district in a period of
from one to three centuries. The study of the nobility’s place as the second
of the functional “estates” of society was also revived in the 1970s, as
was the study of the closely related function of noblemen as members of
one or more of the chambers of the representative assemblies that began
to be created in the decades around 1300: the Parliaments of England, Ireland,
and Scotland, the Estats of France and its regions and provinces,
the Cortes of Castile, and so forth.
In
many European countries, the history of the nobility both before and after
1500 has moved steadily in the direction of becoming a recognized division
of the discipline since its revival after World War II. Ever larger numbers
of historians have devoted ever larger parts of their scholarly activities
to questions in which some part of some nobility, at least, occupies a
central place. By contrast, the place of nobiliary studies in pre-industrial historiography in North America was never very great, and despite the prominence
and productivity of some of its practitioners (including Sidney Painter,
Joseph Strayer, Bryce Lyon, Warren Hollister, Bernard Bachrach, John Freed,
Joel Rosenthal, Theodore Evergates, Patrick Geary, and Constance Bouchard
among the medievalists), its share of the places in the profession has
actually declined in relative terms (along with that of medieval studies)
since 1970.
It
disturbs me that a subfield of historiography that is not only important
because of the centrality of its object in the history of most pre-industrial
civilizations, but is actually flourishing in most of Europe, should be
so small and isolated in North America. Several factors seem to have contributed
to its marginality in the United States in particular. In the first place,
nobiliary studies—like pre-industrial studies more generally—appear to
have been a victim of the parochialism and presentism that underlie most
academic historiography in the United States, and lead a substantial majority
of historians here to concentrate on the history of this country and on
themes that explain some aspect of its current culture. Since there is
nothing resembling a nobility in the United States today, and the upper
class has hidden itself so effectively since 1945 that most citizens believe
it has ceased to exist, it is easy for American historians to believe that
there is no point in studying the history of nobilities.
Another
factor that surely contributed to the decline in interest in the field
of nobiliary history after 1970 is the triumph of the radically populist
and anti-elitist attitudes associated with the New Social History, and
the subsequent rise of its various politically correct offshoots. The more
extreme adherents of these schools have not been content to promote the
study of their particular neglected category (something that in itself
can only be praised for widening the scope of historiography), but have
actively opposed the study of elites of all kinds, whom they typically
portray as hateful oppressors of the poor, weak, and non-white. This attitude—defensible
in political contexts, but not in academic ones—not only discourages graduate
students from undertaking work on nobilities, but also makes the hiring
and tenuring of those who have undertaken such work extremely difficult.
My
own position is that nobilities should be studied regardless of one’s personal
feelings about them for the simple reason that they were the socially,
politically, and culturally dominant element of most agricultural civilizations—and
therefore for all but the last century or two of recorded history. I also
believe that the primary obligation of historians is to understand the
totality of the recorded past as fully as possible, not merely to explain
present conditions through highly selective lines of inquiry that ignore
most of past reality.
Even
from a presentist perspective, however, it can be argued that the history
of the classic nobility of Latin Europe, at least, is worthy of more widespread
attention in the United States. Many elements of American culture—those
associated with constitutional government, political liberty, and property
rights, as well as those associated with comfort, security, courtesy, fashion,
and “high culture”—were created either by or for members of that nobility:
especially, of course, in England. Indeed, in the decades between about
1870 and 1900, the ideals of the English gentry spread throughout the newly
crystallized upper class of the United States generally, and are not wholly
extinguished in that class today. Just as the common culture of the United
States cannot be understood without understanding the culture of England
from which it is still very largely derived, so the culture of its most
powerful class cannot be understood without a good knowledge of that of
its English predecessor. The culture of chivalry has also been maintained
in the officer corps of the armed forces of the United States, whose members
are all officially “gentlemen” and still carry knightly swords on formal
occasions.
Let
me finally propose a very different argument in favor of the study of nobilities
and the societies they dominated. One of the principal reasons for studying
the culture of agricultural civilizations, including that of Latin Europe,
is that they were very different in many fundamental respects from the
industrial civilizations of today, and invariably held very different beliefs
about the just distribution of prestige, civil rights, property, and power
in society. With a tiny handful of partial exceptions, they maintained
steep and rigid social hierarchies composed of juridical and differentially
privileged orders rather than economic classes, and were
organized politically as hybrids of monarchy and various forms of aristocracy.
As recent events suggest, Americans need a much greater awareness of just
how recent and unusual their most cherished social and political beliefs
really are by world-historical standards, how far they must seem from being
“self-evident truths” in the eyes of the members of other civilizations
even today—and how precarious their own egalitarian, libertarian democracy
really is.
In
any case, refusing to study the classic nobilities of Latin Europe because
they are irrelevant to Americans is clearly an indefensible position. Refusing
to study them simply because one does not approve of elites is downright
silly. After all, few today approve of slavery, racism, fascism, or genocide,
but these unpleasant topics have nevertheless attracted a good deal of
attention from historians. Nobilities, for all their faults, made many
contributions to historical cultures that are still generally regarded
as positive, and are at least equally deserving of the attention of scholars.
D’Arcy
Jonathan Dacre Boulton is a fellow of the Medieval Institute and associate
professor of history at the University of Notre Dame. His most recent book
is an expanded second edition of his The Knights of the Crown: The Monarchical
Orders of Knighthood in Later Medieval Europe, 1326–1520 (Boydell Press,
2000).
Crossing
Borders[*]
by
Peter Paret
In
1942, at the age of eighteen, I became a freshman at Berkeley. The basic
history surveys seemed to me—no doubt unfairly—little better than verbal
equivalents of the lists of dates and events familiar from my school years
in Europe. Different altogether were the elegantly clear lectures on British
history by George Guttridge, which he permitted me to attend although the
course was limited to upper-division students. I was drafted the following
year, and did not return until the fall of 1946. From my second period
at Berkeley I recall with special pleasure William Hesseltine’s summer
session course on the American Civil War, his left-wing take on events
only rarely exploding in bursts of sarcasm; and a survey of Sicily in the
Middle Ages, delivered—chanted would be more accurate—by Ernst Kantorowicz.
It was not difficult to see that his search for universals, at its most
impressive when animated by such specifics as the phrasing of a Papal missive
or the architecture of a Norman watchtower, could reflect only a part of
how things had actually been. His respect for evidence was that of the
artist as much as that of the scholar.
I was
twenty-five years old when I graduated. Family obligations took me back
to Europe, and I did not begin graduate study at King’s College, London
until 1956. The choice of the University of London was motivated by my
wish to remain in Europe for the time being, and by the presence at King’s
of Michael Howard, at that time a lecturer in war studies. My intention
to write a dissertation on war seemed to clash with my preoccupation with
literature and the fine arts. But service in an infantry battalion in New
Guinea and the Philippines raised questions that did not disappear when
the fighting stopped. How men faced danger; how the course of fighting
could be described and interpreted with some accuracy; how a country’s
social and political energies were transformed into organized violence—these
were matters to investigate and understand.
Graduate
study at King’s placed few restraints on the student. I audited Howard’s
excellent survey of the history of war, took the necessary qualifying examinations,
and attended two seminars at the Institute of Historical Research: one
conducted by Howard; the other, on European history, by W. Norton Medlicott,
who with great professionalism and kindness oversaw a large, international
group of students trying their hand at a wide range of topics and approaches.
One of his assistants, Kenneth Bourne, later Medlicott’s successor as professor
of international history at the London School of Economics, became a life-long
friend.
My
dissertation addressed the change in Prussian infantry tactics at the end
of the 18th century and in the Napoleonic period. The subject may seem
narrow and technical, but it was closely linked to issues of broad significance,
from an expansion of operational and strategic possibilities to the treatment
of the common soldier and his place in society. The so-called break-up
of the linear system of the ancien régime—more correctly, its modification
by column and open order—had long been associated with the American and
French Revolutions, an interpretation emotionally validated by the powerfully
symbolic contrast between citizen soldiers and mercenary automata. Not
surprisingly, I found that the change was the outcome of developments antedating
1776 and 1789. In two later articles I demonstrated—at least to my satisfaction
—that the War of American Independence had little impact on European military
practice.
While
working on the dissertation, I gained teaching experience as a resident
tutor in the Delegacy of Extra-Mural Studies at Oxford. I began to review
books for several journals, but with a troubled conscience, not yet having
written a book myself. I also published a number of articles, but on subjects
far removed from my dissertation topic: two on current defense issues,
which led to membership in the recently founded Institute for Strategic
Studies, a third in the Bulletin of the Institute of Historical Research
on the history of the Third Reich. The latter resulted from a chance discovery.
While searching for 18th-century military manuals in the library of the
Royal United Service Institution, I noticed a crudely bound folio, used
by library attendants as a tray for their tea mugs, which turned out to
be the register of a Gestapo prison established on July 21, 1944, the day
after the attempt on Hitler’s life. It had come as war booty to the library,
where it remained unnoticed. The volume was of some historical importance
because it dated the admission, release, or execution of several hundred
prisoners. With the help of a former inmate, the Rev. Eberhard Bethge,
a relative and biographer of Dietrich Bonhoeffer, I located and interviewed
survivors, as well as relatives and associates of the victims, and reconstructed
the history of the prison and of the prison community.
Until
I defended my dissertation in the spring of 1960, I gave little thought
to finding an academic position. The expansion of colleges and universities,
especially in the United States, was at high tide, and opportunities seemed
to exist everywhere. The difference between conditions then and now can
hardly be exaggerated. Today the pernicious institutional reliance on temporary
appointments places a burden on a new generation of scholars that forty
years ago we could not have imagined. At the time I never doubted that
something would turn up, even though I labored under two handicaps: at
thirty-six I was older than many of my peers, and I had no academic contacts
in the United States, to which I now wanted to return, my future wife,
a psychologist, having been offered a position in New York. It was lucky
for me that Klaus Knorr, then the associate director of the Center of International
Studies at Princeton, was starting a research project on “internal war.”
At a conference in Oxford, at which I gave a paper on the subject, he offered
me a one-year renewable appointment as a research associate at the Center.
In
Princeton I was one of two historians among a group of highly theoretical
social scientists. My fixation on the tangible and specific prevented me
from profiting as much as I had hoped from the variety of abstract orientations
I now encountered; the efforts of my new colleagues to create hypotheses
and taxonomies by committee baffled me. I made up for my theoretical naïveté
by being among the first to publish work credited to the project—a study
of the constants of internal war and pacification as illustrated by the
uprising of the Vendée during the French Revolution, and with John
W. Shy, at the time an instructor in the history department, an article
on “Guerrilla War and U.S. Policy.” The latter piece was reprinted in various
places, and led us to write a short book, Guerrillas in the 1960s,
which appeared in 1961. A revised edition followed in 1962. The book evidently
filled a need; for some years even the service academies used it, although
our argument that resorting to unconventional war would pose serious difficulties
for this country went against a strong current of official opinion. Two
years later, I wrote a book on a related subject, guerre révolutionnaire,
the doctrine, developed by the French army in Indochina and Algeria, of
opposing insurgents with variants of their own methods. The doctrine was
an impressive intellectual response to a military-political problem, but
deeply compromised by psychological and moral shortcomings.
The
part of the book I most enjoyed writing was a chapter on the doctrine’s
historical antecedents; clearly I was not cast for a career as a defense
analyst. Several historians, among them Felix Gilbert, whom I had met even
before he came to Princeton in 1961, knew my dissertation and some papers
I had written on German history and historiography, and recommended me
to departments that had openings in modern European history. In the fall
of 1962 I joined the history department of the University of California
at Davis.
Davis
had only recently expanded from the agricultural college of the university
to a general campus, and the history department made new appointments every
year. Its rapid growth was guided by several able men: the agricultural
historian James Shideler; Bickford O’Brien, author of two books on Muscovy;
and Walter Woodfill, whose book on musicians and society in England I had
read some years earlier. They made certain that the department became known
for strong teaching and that faculty research was supported. During a period
of constant change, with its inevitable tensions, they maintained a rare
sense of mutual responsibility and collegiality.
In
1966, the year I was promoted to professor, I published my revised dissertation
on the era of Prussian reform. Among the changes was a fuller discussion
of Clausewitz’s activities in the reform movement. In the following years
I wrote several articles on his life and thought, and by 1969, when I joined
the Stanford history department, I had decided to write a book on the development
of Clausewitz’s ideas in conjunction with a study of his life. Above all,
I wanted to analyze the interaction of Clausewitz’s intellectual growth
and creativity with the culture in which he lived, and with the political
and military changes he experienced. I was far less interested in Clausewitz’s
“influence”—especially on modern war. I studied him as I would have studied
Montesquieu for his political theories or Kleist for his drama and prose—
for what they achieved in their time, not for their impact on French political
thought or German literature in the 20th century.
During
the years I worked on the biography, I diverted time to related projects,
among them the translation, with Michael Howard, of Clausewitz’s On War,
and a new edition of Makers of Modern Strategy, a collection of essays
by different authors, first published in 1943, and now reissued with seven
of the original essays and twenty-two new ones. These works confirmed my
identity as a military historian—a specialization of doubtful esteem in
this country, even if some departments offered courses in the field, usually
by professors of American history. The subject suffered from the emotional
and political reverberations of the war in Vietnam, and from the dominance
of social history at the time, although it seems strange that social historians
should not be interested in the impact that arrangements for attack and
defense have always had on society. Nor did it help that much writing on
war consisted of possibly dramatic but hardly analytic campaign narrative.
That more than a few military historians took a broader, integrative approach
did not lessen the sometimes unpleasantly doctrinaire dismissal of a field
of study the significance of which for our understanding of the past is
self-evident.[1]
The
Stanford history department, like my former department at Davis, paid attention
to teaching. Only the chair—George H. Knoles when I arrived—and a few
others with significant administrative responsibilities had reduced teaching
loads. At various times I taught the second and third quarters of the year-long
freshman survey and an undergraduate course on the history of war, but I mainly
taught undergraduate and graduate courses in European history and culture.
A graduate colloquium, in which we defined and analyzed the political and
social themes of Daumier’s lithographs from the 1830s to 1880, has stayed
in my mind as particularly stimulating.
Clausewitz
and the State was published in 1976. A few articles followed that continued
to explore themes of the book from different perspectives or as new material
became known. Dispatches from British diplomats in the British Museum and
the Public Record Office allowed me to reconstruct Clausewitz’s failed
effort, against conservative opposition, to exchange active service for
a diplomatic appointment—an episode indicative of his fascination with
politics as well as of the reactionary pressures to which he was exposed
in Prussia after 1815.
But
the main focus of my research was shifting away from war to the interaction
of art and politics. Once again, a personal element gave direction to my
work. Even as a boy I had been superficially aware of the conflict over
modernism in German art at the end of the 19th century, because my maternal
grandfather, an art dealer and publisher, had been active in support of
the new. Art historians generally approached the subject from an exclusively
modernist perspective. To strip the conflict of its myths and to understand
its historical as well as art historical significance called for more balanced
treatment. In archives in Berlin, Potsdam, and Merseburg, especially in
the files of the imperial Zivilkabinett, I found information on
official policy and on conflicting attitudes toward modern art in the cultural
bureaucracy to complement documentation on the avant garde. The first result
was an article in 1978 (“Art and the National Image: The Conflict over
Germany’s Participation in the St. Louis Exposition,” Central European
History 11 [June 1978]). Two years later I published a book on the
politically most significant of the secessionist movements that in the
1890s sprang up throughout Central Europe: The Berlin Secession: Modernism
and its Enemies in Imperial Germany.
In
a generously positive review, Carl Schorske noted that I had scarcely tried
to analyze the work of the artists whose struggle for acceptance I traced
(American Historical Review 87 [October 1982]). His point was well
taken. Although it can be useful to separate the social and political impact
of a work of art from its aesthetic characteristics, and the degree of
attention given to content and style should fit the historian’s purpose,
my study of the quarrel between Wilhelm II and the Secession should have
said more about the art itself. But at the time, more so than today, I
was troubled by the subjectivity of aesthetic theory, and I was very conscious
of the fact that such phenomena as patronage policies are more susceptible
to firm interpretations than would be a work’s strengths and weaknesses.
The
German translation of the book led to an exhibition of Secessionist artists
in Berlin, which I helped curate. Since then I have worked on a number
of exhibitions, and closer contact with paintings and graphics has given
me some of the art historical freedom I previously lacked. I did not stop
writing on subjects that concern art without being about art: in 1981 “The
Tschudi Affair,” a study of Wilhelm II’s attempt to replace the director
of the Berlin National Gallery with a conservative, which was defeated
largely by senior officials in the cultural bureaucracy;[2]
and the 1984 Leo Baeck Memorial Lecture—The Enemy Within—which
discussed the Jewish painter Max Liebermann as president of the Prussian
Academy of Arts from 1920 to 1932—a tenure the Right denounced as a subversion
of German culture. But increasingly the work of art itself moved toward
the center of my research.
In
1986, I became a professor at the Institute for Advanced Study. Two years
later I published Art as History, which in works of art and literature
traced the rise and decline of German liberalism. The book included a study
of the historical poems of Theodor Fontane, a writer I have admired since
adolescence, when—much too early—I read Effi Briest. It was the beginning
of a sequence of essays and of talks at meetings of the Theodor Fontane
Gesellschaft, which continue to the present.
Two
further works centered on images. I collaborated with Beth Irwin Lewis—her
just published book Art for All? continues her important exploration
of the role of art in German society—and my son Paul on a study of posters
as historical documents, Persuasive Images, which appeared in 1992.
Several preliminary studies—one a seminar paper at the Institute, later
published as a pamphlet, Witnesses to Life: Women and Children in some
Images of War, 1789–1830—led in 1997 to Imagined Battles: Reflections
of War in European Art, which brought together my two principal research
interests.
Earlier
essays became building blocks for a third book. Among the posters we examined
for Persuasive Images were vicious and powerful specimens by the
National Socialist propagandist who worked under the pseudonym “Mjölnir”—hammer
of the Norse god Thor. I combined his work and the record of his trial
after 1945 in a talk at a meeting of the American Philosophical Society.[3]
After thirty-three years I had returned to the history of the Third Reich.
In an expanded version, which noted that Mjölnir hounded the sculptor
Ernst Barlach as a degenerate artist, the piece was included in a collection
of my essays on cultural history, entitled German Encounters with Modernism,
1840–1945. To strengthen the volume’s continuity, I wrote two new
essays, one on Barlach, which made me realize that I still had more to
say about him. The result was An Artist against the Third Reich: Ernst
Barlach, 1933–1938, which is being published this coming spring. Again,
as in the conflict over modernism in Wilhelmine Germany, I have tried to
address the motives and actions of all sides: the reasons for Hitler’s
hatred of modernism in art, which was anything but a whim; the search by
Goebbels and some other National Socialists for an acceptable “Nordic modernism”;
and Barlach’s refusal to be intimidated, expressed above all in the further
radicalization of his art.
Forty-five
years ago, when I first attempted to write about history, my subjects were
social groups—inmates of a Gestapo prison, sons of the poor forced to serve
in the armies of the ancien régime and of the First Republic—or
such impersonal phenomena as military doctrines. Later I turned toward
the creative individual, whether in war, art, or literature. Creativity
and its place in society became primary interests. In writing about social
groups or about war or high culture, I searched for links between different,
even disparate elements. I made comparisons not only to generate questions
and sharpen delineations, but also to explore the relationship among some
of the many components of the networks of ideas and action that together
constitute historical reality. Crossing borders that separate fields of
study has not been invariably welcomed. Dogma, categories, and the pride
of specialization can be remarkably powerful. Still, the edgy freedom of
our academic world has been a forgiving environment in which to catch hold
of bits of the past, perhaps add to their factual substance, and give them
new—temporary—meaning.
Peter
Paret is Mellon Professor in the Humanities Emeritus in the School of Historical
Studies of the Institute for Advanced Study. He is the author of An Artist
against the Third Reich: Ernst Barlach, 1933–1938 (Cambridge University
Press, forthcoming in 2003).
[1]
I have discussed the opposition to the history of war in the 1960s and
1970s in “The History of War and the New Military History,” in my collection
Understanding War: Essays on Clausewitz and the History of Military
Power (Princeton University Press, 1992) and, more recently, in “The
History of Armed Power,” in Lloyd Kramer and Sarah Maza, eds., A Companion
to Western Historical Thought (Blackwell, 2002).
[2]
Journal of Modern History 53 (December 1981).
[3]
“God’s Hammer,” Proceedings of the American Philosophical Society 136
(June 1992).
[*]
With
Professor Paret's essay, we launch a series of occasional essays wherein
senior historians recount and reflect upon their careers. –The Editors
BIG
HISTORY
by
Marnie Hughes-Warrington
It
is a common complaint that world history—as practiced by historians—does
not live up to the scope of its terms. Michael Geyer and Charles Bright,
for instance, have argued that the “central challenge of a renewed world
history at the end of the 20th century” is to tell of the world’s past
in a global age.[1]
Some have interpreted this as a call for the study of human interactions
through frameworks wider than that of the nation-state, while others see
it as an invitation to consider something bigger: the origins and evolution
of the earth and its inhabitants. For a small but growing number of historians,
though, even the shift from world to global history is not enough. What
they seek lies well beyond the commonly perceived boundaries of history, and
thus outside the comfort zone of many historians. For them, history must tell the
biggest story of all, that of the origins and evolution of human beings,
life, the earth, and the universe—hence, “big history.”
Unlike
the Big Bang, big history does not begin with a single point. Probably
the strongest claim we can make about its origins is that it arose in the
context of the enormous growth of historical sciences such as cosmology,
evolutionary biology, evolutionary psychology, and geology in the 1980s.
It reached those trained in areas traditionally considered remote from the
sciences through the vast outpouring of popular science publications
that followed. Of particular relevance are those works in which writers
draw together separate fields, such as the Big Bang (cosmology) and the
origins of life (biology) (e.g. Isaac Asimov, Beginnings [1987];
Preston Cloud, Cosmos, Earth and Man [1978]; Arnaud Delsemme, Our
Cosmic Origins [1998]; Siegfried Kutter, The Universe and Life
[1987]; Harry McSween, Fanfare for Earth [1997]; and Brian Swimme
and Thomas Berry, The Universe Story [1992]). Such works evidently
revealed the possibilities of interdisciplinary studies and suggested a
bigger project uniting the sciences and humanities.
It
is only with the publication of works by David Christian and Fred Spier
in the 1990s that we begin to see big history assume a historiographical
profile. Christian’s interest in big history first emerged, rather pragmatically,
during a lively staff meeting in 1988 at Macquarie University, Sydney,
where he taught until 2001. At the meeting, Christian suggested that first
year classes should “start at the beginning.” Thinking more about that
suggestion, he became intrigued by the questions “What is the whole of
history?” and “Where does human history begin?” and was led back to the
point where there is no evidence or certainty about “before”: the Big Bang,
some 12–15 billion years ago. In 1989, “HIST112: An Introduction to World
History” began and, two years later, his “The Case for ‘Big History’” appeared
in the Journal of World History. That Christian came to big history
via teaching rather than theory shows very clearly in his various writings
on the subject: he readily adopts and adapts ideas from an incredibly varied
range of sources, free of the wariness of someone trained to know his historiographical
boundaries. Even his decision to describe what he was doing as “big history”
suggests a “work in progress”:
When
I first used the label “big history” in the early 1990s, I felt it was
simple and catchy; and it helped me avoid some simple circumlocutions.
In retrospect, I fear the label was also grandiose, portentous, and somewhat
pretentious. So I need to make it clear . . . that I use the phrase with
some hesitation. I continue to use it because it has acquired some currency
in the last ten years, and . . . I can’t think of anything better![2]
Though
Christian’s account of the past, present, and future shifts continually,
his work is at base a “map of reality” or “a single, and remarkably coherent
story, a story whose general shape turns out to be that of a Creation Myth,
even if its contents draw on modern scientific research.”[3]
The modern creation myth begins with the origins of the universe (as suggested
in the Big Bang theory) and goes on to tell about the origins of the stars
and planets, the earth and life, human beings and societies and ends with
speculations about our cosmic future.
Christian’s
historiographical reflections on big history are very much focused on those
who engage with it: it is a project in which past, present, and future
must be drawn together for understanding of self and of others. Spier’s
historiographical writings, on the other hand, concentrate more on patterns
in the subject matter. Spier was introduced to big history by Johan Goudsblom,
who in turn first learned about it from Christian. Spier and Goudsblom
introduced a big history course to the University of Amsterdam in the 1995–96
academic year, and Spier has convened the course since Goudsblom’s retirement
in 1998. Goudsblom has written about the way the course was set up in Stof
Waar Honger uit Ontstond (2001), but their “Big History Project” will
be better known to English readers through Spier’s The Structure of
Big History (1996, revised as Geschiedenis in het Groot: Een alomvattende
visie, 1999). Drawing on his training in historical sociology, Spier
argues that “regimes” are the organizing principle of big history. The
term “regime” enjoys such a wide usage in sociology that it is difficult
to attribute any technical meaning to it at all. Spier suggests that all
uses of the term refer to “more or less commonly shared behavioral standards,”
“patterns of constraint and self-restraint,” and “an interdependency constellation
of all people who conform more or less to a certain social order.” Such
definitions clearly refer to human behavior, so in order for the term to
be useful in big history, he extends its meaning to “a more or less regular
but ultimately unstable pattern that has a certain temporal permanence.”[4]
Spier detects such patterns at all levels of complexity and in a wide range
of places and times, from the fundamental atomic forces, through hunter-gatherer
societies, to the orbits of the planets.
As
Spier notes, there is clearly an alignment between regimes and what Christian—drawing
on Stephen Jay Gould—calls “equilibrium systems.” These are systems, as
Christian writes, “that achieve a temporary but always precarious balance,
undergo periodic crises, re-establish new equilibria, but eventually succumb
to the larger forces of imbalance represented by the principle of ‘entropy.’”[5]
In Christian’s more recent writings, though, the concept of “equilibrium
systems” has disappeared. He is still interested in patterns, and the “constantly
shifting waltz of chaos and complexity,” as This Fleeting World
attests, but he does not want to stake out a technical term or system of
structures as Spier does.
Christian
and Spier, like all “big historians,” are interested in patterns of balance
and imbalance between order and disorder, but they express their interest
through different conceptual frames. Drawing on Marshall Hodgson’s concept
of “transmutations,” for instance, John Mears suggests that in human history
we see three radical periods of imbalance that led to the
introduction of deep changes in the organization of societies: the revolution
of the Upper Paleolithic, the advent of complex societies, and the global
integration of human societies. Akop Nazaretyan, in Intelligence in
the Universe: Origin, Formation, Prospects (1991) and Civilization
Crises within the Context of Universal History (2001), argues that
in all contexts of historical change—the universe, earth, biota—we see
punctuated equilibria, but that, overall, history shows a directional tendency
toward negentropy (loosely, the increase of order and information
in a system). Eric Chaisson also looks to entropy, but realizes its implications
more fully through physics. In Cosmic Evolution (2000), Chaisson
notes that the universe appears to be getting more complex: after the Big
Bang, elementary particles came together to form simple atoms; gravitational
attraction among atoms laid the foundations for galaxies; within galaxies,
stars and planetary systems differentiated; and in these, with the emergence
of the heavier elements, complex chemical, biological, and ultimately cultural
entities arose. Chaisson argues that this increase in complexity is
consistent with the second law of thermodynamics. The second law, in
its statistical-mechanical interpretation, suggests that disorder (the
opposite of complexity) increases in closed systems, but as structures
like galaxies, stars and organisms are in an open system, they are
able to generate and sustain complexity by exporting enough disorder to
the surrounding environment to more than offset their internal gains in order. For
Chaisson, complexity is to be found as energy density. He analyzes the
flows of energy through various objects and shows how these flows seem
to be related to the complexity of the objects. The greater the energy
flow, the greater the complexity. And through a table and a series of graphs,
he shows that complexity increases from atoms to galaxies to societies
and therefore also increases over time. This is what is meant by “cosmic
evolution.”
As
the case of entropy shows, conceptual frames and emphases vary among big
history practitioners. Indeed, apart from a common interest in the large-scale
patterns in the history of the universe, earth, and life, it might be difficult
to identify them as “big historians” at all. Of help here is Wittgenstein’s
family resemblances view of concepts. On this view, concepts like “big
history” are not characterized by a list of criteria that all works and
practitioners must satisfy, but rather by a network of overlapping similarities
or “family resemblances.”
Conceptual
matters aside, where are we to locate big history in “the house of history”?
Recently, Christian has spoken of big history as macrohistory: might this
offer us a clue? Historiographically, macrohistory refers to the study
of large-scale social systems or social patterns. On the face of it, this
definition would certainly fit well with the sociological orientation of
both Spier’s and Johan Goudsblom’s works. In stretching “regimes” beyond
the social, though, they step well beyond the territory of sociology. Consequently,
macrohistory is not big enough to encompass big history.
Big
history can be more fruitfully located in the tradition of universal history
that began with the new internationalism fostered by Alexander the Great.
Writers like Diodorus of Sicily (ca. 90–21 BC) claimed that peoples of
different times and places could be connected by universal history into
one body through the efforts of historians, “ministers of Divine Providence”:
For
just as Providence, having brought the orderly arrangement of the visible
stars and the nature of men together into one common relationship . . .
so likewise the historians, in recording the common affairs of the inhabited
world as though they were those of a single state, have made of their treatises
a single reckoning of past events . . . . (The Library of History,
§1.1.3–4)
This
idea of a “single reckoning of past events” was readily adapted in an eschatological
fashion by Christian and Islamic writers such as St. Augustine of Hippo
(City of God [413–26]), Paulus Orosius (Seven Books of History
Against the Pagans [ca. 417]), Bishop Otto of Freising (The Two Cities
[1146]), Ibn Khaldun (Muqaddimah [1357–58]), and Jacques Bénigne
Bossuet (Discourse on Universal History [1681]). Growing information
about the non-European world from the 16th century onward revealed the
limitations of monotheistic narratives, but universal history continued
to thrive. In the hands of Giambattista Vico (The New Science [1744])
and later Johann Gottfried Herder (Reflections on the Philosophy of
the History of Mankind [1784–91]), Immanuel Kant (“Idea of a Universal
History from a Cosmopolitan Point of View” [1784]), G. W. F. Hegel (Philosophy
of History [1822–31]), and Leopold von Ranke (Universal History
[1884]), universal history was transformed into a “new science” with a
philosophical foundation. These writers searched for the presuppositions
that shaped human actions and concluded either—as in the case of Vico—that
history revealed a circular or spiral pattern of birth, life, decline,
and regrowth or—as in the case of Hegel—the progressive realization of
freedom. Later in the 19th century, Marx inverted Hegel’s philosophical
program, suggesting that the material conditions of life shape human consciousness
and society, not the other way around, and Oswald Spengler tracked the
birth, growth, and decline of eight cultures, Western Europe included.
Spengler’s
work enjoyed enormous popular success, but increasingly, universal history
was marginalized in the discipline. The epic works of H. G. Wells, Arnold
Toynbee, and Pitirim Sorokin were judged to be overly speculative and insufficiently
attentive to detail. Some felt as Pieter Geyl did about Toynbee: “One follows
[Toynbee] with the excitement with which one follows an incredibly supple
and audacious tight-rope walker. One feels inclined to exclaim: ‘C’est
magnifique, mais ce n’est pas l’histoire.’”[6]
Many dismissed these writers as an embarrassment to a discipline trying
both to put itself on a scientific footing and to recover the experiences
of ordinary people, people traditionally passed over in silence in surveys
of “civilization.” To most historians, universal history was like a rogue
relative that no one wants to talk about.
Histories
on a larger scale did not of course disappear in the latter half of the
20th century, as the world- and macro-historical works of William McNeill,
Marshall Hodgson, Andre Gunder Frank, Immanuel Wallerstein, Fernand Braudel,
Philip Curtin, Peter Stearns, Alfred Crosby, Eric Wolf, and Clive Ponting
testify. But the perception was that single, all-encompassing, unified
histories—which seemed to fit Lyotard’s description of “grand narratives”—could
not withstand methodological or ethical scrutiny. Allan Megill, for instance,
concludes his entry on universal history in the Dictionary of Historians
and Historical Writing in the following fashion:
One
historiographical strategy in what is now called “world history” is the
making of limited comparisons between different parts of the world that
the historian selects for comparison in the hope of generating insight.
Such work, however, is clearly not universal history as it was known in
the past, but a mark of its absence.
Except
for big history, that is. The interest that big historians have in a “single
. . . coherent story” marks them out as universal historians. But they
have also adapted universal history in at least two important ways. First,
big history stretches much further backwards in time than any earlier universal
history. This is clear from the introduction to even the most recent universal
histories such as H. G. Wells’s The Outline of History. Wells’s
history begins with the geology of the earth: prior to that there is no
history, for space is “cold, lifeless, and void.” Wells is not unusual
in this conclusion, for up until about forty years ago, the majority of
people—including scientists—believed that the universe was static, unchanging,
steady. Now, all but a tiny number of scientists believe that the universe
has an origin—an explosive event dubbed the Big Bang some 12–15 billion
years ago; that it is expanding and cooling and thus changing; and that
it will continue to do so long into the future. The Big Bang is the agreed
starting point for big historians as it is for physicists.
Second,
big history veers away from the anthropocentrism of earlier universal histories.
Traditionally, universal historians—if they consider the sciences at all—have
presented the origins and evolution of the earth and life as a prologue
to human history. Big history relocates humans in the biota, on the earth,
in the universe. In doing so, it reveals how small, destructive, and recent
a phenomenon we are. Such a view of humanity appears to clash with the
conventional historiographical desire to seek out the individual, to seek
out agency. It is as if the lens through which we view the past has got
stuck at a certain magnification—the “viewing individual actions” lens—and
that over time we have forgotten that other lenses are available. Big history
invites us to consider the past over different scales and helps us to see
new patterns, like those of regimes, punctuated equilibrium, negentropy,
or cosmic evolution. Thus it is with big history, I believe, that we see
the realization of William H. McNeill’s claim that historians can contribute
to what Edward O. Wilson calls “consilience,” the unity of knowledge.[7]
[1]
Michael Geyer and Charles Bright, “World History in a Global Age,” in Ross
Dunn, ed., The New World History (Bedford, 2000), 566.
[2]
David Christian, “The Play of Scales: Macrohistory,” unpublished ms. presented
at the annual conference of the American Historical Association, January
2002, n. 5.
[3]
David Christian, This Fleeting World: An Introduction to “Big History”
(University of California Press, forthcoming), 100, 103.
[4]
Fred Spier, The Structure of Big History: From the Big Bang until Today
(Amsterdam University Press, 1996), 5, 14.
[5]
Quoted in Spier, Structure of Big History, 3.
[6]
Pieter Geyl, The Pattern of the Past: Can We Determine It? (with A. Toynbee
and P. Sorokin) (Beacon, 1949), 43.
[7]
William H. McNeill, “History and the Scientific Worldview,” History and
Theory 37 (1998): 1–15; Edward O. Wilson, Consilience: The Unity of Knowledge
(Abacus, 1998).
A
Dispatch from Germany
by
Michael Hochgeschwender
Recent
discussions among both German historians and the German public about German
history have been shaped by two apparently contradictory schemes. On the
one hand, the events of 1989-1991—the fall of the Berlin Wall, the sudden
collapse of the Soviet empire, and German reunification—led to an intensified
and renewed interest in the history of the German nation-state. On the
other hand, in the light of European unification, globalization, and the
events of September 11, 2001, Germans also feel the need to see their history
in an international context.
Since
1989, German historians have tended to focus on national history, especially
the Nazi past and the murder of European Jewry. Mass media, especially
television, fed this trend. For example, Guido Knopp, Germany’s leading
TV historian, concentrated his attention on the history of the Third Reich
and the Second World War. Further, he and the influential weekly Die
Zeit—together with other media—covered all the major historiographical
debates relating to 1933-1945, such as the heated controversies about the
theses of Daniel Goldhagen, Norman Finkelstein, and Peter Novick. This
media coverage, however, was only a small part of a larger effort to come
to terms with the German past.
Professional
historians tried to cope with the problems of the pre-Nazi era. This research
produced detailed and subtle treatments of German history. Hans-Ulrich
Wehler and Heinrich-August Winkler, for example, published multi-volume
master narratives of 19th- and 20th-century German history. Yet the focus
of these and other works produced some negative side effects. The obsession
with the nation-state provincialized German historiography. Non-German
themes were neglected. The histories of Eastern Europe, the U.S., Asia,
and Africa became marginalized—what value did these have for interpreting
German national identity?
At
the Historikertage (meetings of the German Historical Association)
of 1998 and 2000 this insularity reached its peak. Non-German history—even
European history—was barely mentioned. This phenomenon did not escape the
critical notice of the public and the media. And today, the one-sidedness
of German historiography in the 1990s seems especially inappropriate.
Germans
(and other Europeans) have to learn to cope intellectually with their common
future and to think in terms of a common past. At present, there are two
popular book series here focused on European history (Europäische
Geschichte, edited by Wolfgang Benz, and Europa bauen, edited by the French
historian Jacques Le Goff). If only there were more. Viewing German history
from the perspective of European or world history need not be an attempt
to get rid of the negative aspects of a specifically German past. Just the
opposite is true. In an international context, the particularities of German
history will become even clearer.
In
a recent article in Geschichte und Gesellschaft, Sebastian Conrad argues
for a transnational approach to German history. Conrad believes that the
histories of Africa, America, Asia, and Europe form a common human
past. Travel, commerce, slavery, even disease contributed to this shared
history. In Germany, however, those historians who, in recent years, have
looked beyond the nation’s borders have tended to comb European and North
American archives for the secrets to the mystery of modernization.
According
to Conrad, modernization is indeed an important subject, and he thinks
it can be understood better if scholars widen their range of vision. Europe’s
colonies, he argues, were laboratories for the modernization process in
Europe and the United States. Therefore, it should be possible to form
a research design that interprets, for instance, urbanization in different
nations and cultures, including non-Western ones. Conrad was not the first
to argue for a more transnational approach to German history. Since the
1980s a minority of German historians have been trying to escape the narrowness
of the nation-state as the dominant point of reference for their work. German-American
relations has proved to be an especially valuable field of experimentation.
The notions of Americanization (Volker R. Berghahn) and Westernization
(Anselm Doering-Manteuffel, Axel Schildt) arose from the analysis of German-
American relations. Students of Americanization have examined the material
and social impact of American popular culture on Western Europe, especially
West Germany, with a special focus on the post-World War II era. Its research
program thus became transnational very early, treating business and
entrepreneurial strategies, working-class culture, and the icons of pop art
not primarily in a comparative way, but as a history of
interchanges.
Scholars
who endorse the notion of Westernization, on the other hand, believe that
political culture in the U.S. and Western Europe has a transnational social
basis. They have argued, for instance, for the long-term development of
common concepts among American liberals and European socialists based on
mutual experiences during the 1930s. While Americanization stresses the
importance of American influence in these processes, Westernization assumes
a certain parity between Americans and Europeans in the cultural sphere.
Other
historiographical approaches, such as the study of intercultural transfer
(Johannes Paulmann) or the new approaches to the history of international
relations developed by Jürgen Osterhammel and Eckart Conze, may better
be able to satisfy the growing desire for the globalization of German historiography.
Paulmann broadens the scope of the Westernization approach by going deeper
into the past. He extends the analysis of early modern political thought—initiated
in the 1960s by, among others, Quentin Skinner, J.G.A. Pocock, Bernard
Bailyn, and Gordon S. Wood—into the 20th century. In much of his work,
transnationally acting intellectuals and academics occupy center stage.
Osterhammel, on the other hand, writes transnational social history, with
a particular focus on industrialization.
A younger
generation of German historians is developing its own way of dealing with
German history based on a broader focus. The 2002 German Historikertag,
which took place in Halle, may serve as evidence. While the traditional
focus on the German nation-state was still influential, more and more panels
started to ask different questions. Even the German Bundespräsident,
Johannes Rau, invited German historians to write a new German history with
a more multicultural, multiethnic, and, perhaps, transnational focus. It
is still not clear whether or not Halle was a breakthrough, yet some new
departures in German historiography are clearly in the works.
Dr.
Michael Hochgeschwender teaches history at Tübingen University and
is the author of Freiheit in der Offensive? Der Kongress für kulturelle
Freiheit und die Deutschen (Oldenbourg, 1998).
The
Past Perfect, or Understanding History
by
Stanley Sandler
In
the midst of the Civil War, a little volume entitled The Last Men of
the Revolution diverted Americans by chronicling the lives of the last
living veterans of the War for Independence.[1]
The book treated these old soldiers as antediluvians who had survived to
witness the modern age of railroads, ironclads, the telegraph, and the
unprecedented social, political, and economic challenges of the 1860s.
But The Last Men of the Revolution can tell us more about the American
Civil War than about the American Revolution.
When
we write history, we often compare the past with our own era. And if we
resist the temptation, our readers are certain to draw the comparisons
themselves. Like the authors of The Last Men of the Revolution,
we often define as relevant historical scholarship that which uses the
past to speak to the present, even when we know the people who lived the
history we write obviously did not see themselves simply as precursors
to our time.
This
hardly counts as a novel observation. Perceptive historians know, for example,
that such recreated slices of the past as Colonial Williamsburg are viewed
through the distorting prism of the present. Visitors to today’s pristine
historical reconstructions need not fear vermin, filth, slavery, or the
hazard of being roasted over a slow fire by marauding Indians. Historians
might complicate this observation by pointing out that the 17th and 18th
centuries were basically unconcerned with flies and slavery.
When
we consider meticulously recreated historic communities, military reenactments,
or carefully restored technologies such as antique aircraft or automobiles,
we might conclude that historical restorations, like historical scholarship,
fascinate us because they provide context and points of origin for the
present in which we live. We fix upon the giant radial piston engines,
leaking oil by the quart; or the non-countersunk rivets of a World War
II B-17; or the gold-thread upholstery and soaring tail fins of a 1958
DeSoto. By recognizing and recreating these details, we feel that these
artifacts can give us some insight into our past.
We
know that these restorations represent a history that serves the needs
and desires of the present. But we must recognize that, in their
times, they were anything but antique—again, a fairly obvious observation.
Yet this insight is hard to sustain. Rather, we persist in our inability
to view, for example, a Model A Ford with the eyes of 1927, which would
have seen it as something wonderfully advanced and modern, a symbol of
“The New Age.” For us, the Model A Ford and the 1958 DeSoto are embedded
almost inextricably in their times. Historians can well sympathize with
the frustration of one of Randall Jarrell’s characters as he viewed a daguerreotype
of a 19th-century battlefield:
[W]e
look at an old photograph and feel that the people in it must surely have
some intimation of how old-fashioned they were. We feel this even when
the photograph is a photograph of corpses strewn in their old-fashioned
uniforms along an old-fashioned trench.[2]
Is
it possible, then, to interpret the past without the expectations and preconceptions
of the present? The late Barbara Tuchman thought that she might have come
close when she claimed that she wrote “as of the time without using the
benefit of hindsight, resisting always the temptation to refer to events
still ahead.”[3] The effort is admirable and no doubt necessary, but even
the most determined can never achieve a completely blank slate.
A far
more attainable solution to the problem of historical interpretation lies
in the past’s past, what I would term the “past perfect.” Every point in
history, obviously, had its own history to shape it. Egypt’s pyramids once
glistened new and white in the sun, and there was a time when Stonehenge
must have been considered the work of the “younger generation.” Beowulf,
the earliest surviving intact literary work from the Anglo-Saxon period,
was once fresh, and more than one stanza harks back to earlier times:
Then was song and revel
The aged Scylding [Dane]
From well-stored mind
Spoke much of the past.
Yet
what in the 21st century is more “of the past” than Beowulf?
Looking
at the past has been compared to viewing a distant scene through a powerful
telescope, in which the foreground appears in its essentials, but the background
seems foreshortened or blurred. We tend to focus on the foreground and
treat its background as mere prologue. Yet it is that background, the past
perfect, that shaped the foreground we study, and we come closer to an
understanding of any period of the past by comparing it to its precedents
rather than to the times in which we live. If we can understand the hold
that its own past exercised upon any era, we should be well on our way
to understanding that era. For example, something new and unprecedented
captured the image of those long-dead soldiers who so frustrate Mr. Jarrell’s
character. Perhaps if we remember that the earliest photographs were hailed
as “Sun pictures,” wondrous inventions of the age of unprecedented progress,
we can better understand the mid-19th century. The soldiers’ “antique”
uniforms came from the military reforms of Louis XIV, who had replaced
the earlier hodge-podge of martial accoutrements with uniform dress—and
if uniforms had to be gaudy to distinguish friend from foe on a smoky battlefield,
so much the better. Furthermore, those trenches were far from old fashioned;
rather, they represented an innovative response to the new killing power
of rifled small arms. Without this historical understanding, we could only
agree with Malcolm Muggeridge that “[t]he camera always lies.” Or take again
the example of the “old-fashioned” Model A Ford. The 1929 model may appear
awkwardly angular to our modern eyes, but it represented the epitome of
advanced design and technological efficiency to its owners, who compared
the 1929 Model A to the spindly vehicles of 1919 or 1909, particularly
the Model T Ford. (Henry Ford himself, alarmed by the advances incorporated
in his Model A, exclaimed, “We can’t put that car on the road! We’ll kill
them all!”)
Soldiers
in 1864 and motorists in 1927 could look back only to their own pasts,
the past perfect, to define their present experiences. Understandably,
they imagined themselves to be members of the most advanced and technologically
sophisticated age the world had ever known. Although they lived in what
some might term “the good old days,” when, we believe in our ignorance,
that the pace of life was slower and technology had not overtaken daily
life, they experienced their age as fast-paced, rapidly changing, and technologically
advanced. It is a sentiment shared in every “modern age.”
Historical
interpretations that rely on comparisons to later periods often result
in history that passes judgment on the past based on criteria that postdate
it. Those who lived in the 1920s did not experience the decade as a precursor
to the Great Depression. We must compare the 1920s with its past years—say,
1919, a year of massive strikes, inflation, race riots, and the Big Red
Scare. With that comparison in mind, a vote for the GOP ticket seems hardly
ignoble, whatever one now thinks of Harding or Coolidge. For those who
lived through the previous decade, the 1920s seemed to inaugurate something
close to “normalcy,” with a lowered cost of living and a dollar that, once
again, was worth, by 1926, exactly 100 cents. Given the past perfect, is
it fair to characterize the 1920s as trivial? Or would it be more accurate
to characterize the decade as a general sigh of relief after the upheaval
of the preceding decade?
Another
example of the power of the past’s past lies in histories of World War
II that deal with the Allies’ refusal to take seriously reports of the
Nazis’ near-extermination of the Jews of Occupied Europe, damning the Allies
as “blind” and “insensitive,” or that implicate the Allies in the Holocaust
because they ignored reports of Hitler’s Final Solution. The “past perfect”
behind the Allies’ inaction to prevent or end the Holocaust offers at least
a partial explanation. During World War I, the Allies spread horrifying
stories of German atrocities throughout the world. After World War I, these
stories, which had been widely believed, were proven in the main to be
false. Thus what appears to be a callous disregard on the part of the Allies
in World War II to the fate of millions of people may have resulted, in
part, from a reaction to the inflated and groundless stories of German
atrocities in World War I.
Each
generation, at least in the West, looking back, seems to have believed
that it was in something like the final stages of an historical process.
Medieval Christians, for example, generally believed that the world had
passed through six stages of life and was now in a seventh and, of course,
final “end by putrefaction.” Later, Edmund Spenser, overwhelmed by the
speed of life in 16th-century England, moaned that “the world's runne quite
out of square.” Ralph Waldo Emerson, referring to the technological innovations
of his era, exclaimed, “How is the face of the world changed from the era
of Noah to that of Napoleon!” And Thomas Carlyle roared about “velocity
increasing . . . as the square of time . . . the world has changed less
since Jesus Christ than it has done in the last thirty years.” President
Lincoln, in an oft-quoted passage, informed Congress that “[t]he dogmas
of the quiet past” were “not adequate to the stormy present,” and Woodrow
Wilson opined that “[h]istory grows infinitely complex about us; the tasks
of this country are no longer simple.”
People
have often believed that they were living on the hinge of history, swamped
by the forces of accelerating change and decay. The early 21st century
is no different. Military historian Sir Michael Howard insists that “history
has become the record of ever more rapid and bewildering change throughout
the world.”4 But as another late 20th-century historian noted perceptively:
The
belief that the world is changing rapidly and in fundamental ways may simply
be one of those ideas that survive as prejudices and formulas after they
have ceased to be true. It is flattering to ourselves to think that wherever
society is going, it is going there fast . . . . Whether the direction
of the change is toward ruin or toward accomplishment, it is intolerable
to think that the process is mediocre.5
Indeed,
viewing past times as “slow-moving,” their artifacts as “simple” or “primitive,”
and eras rich in complexity as “old” or “pre-” or “early,” can cause us
to misunderstand and even to denigrate the past. If past times were less
complex, then they probably were less challenging as well—and, ultimately,
less important than our present day. In this view, two world wars and a
global depression can diminish into mere preludes to the cataclysmic “postmodern
era.” Not too long ago, “sophisticated” commentators declared World War
II more straightforward and simple than the Vietnam conflict because, in
the earlier war, our enemies were out in the open and in uniform; their
leaders were obviously evil; and the conflict was supported on the home
front. But the argument weakens considerably when one interviews survivors
of the Bataan Death March or Omaha Beach.
To
approximate the past perfect more closely, historians might consider dropping
such labels as “Early American,” “Late Gothic,” the “Old Kingdom,” “Mid-Victorian,”
and so on, which, again, define the past as it relates to later periods.
Such terms invite imprecision and inaccuracy. They also can be historically
confusing. The “Old [U.S.] Navy,” for example, usually referred to the
pre-1890s service of wooden hulls, muzzleloaders, and spit-and-polish.
But the term could refer also to the pre-Pearl Harbor U.S. Navy, its backbone
the battleship, and its carriers termed, somewhat condescendingly, “the
eyes of the fleet.” Yet that Navy had been built at enormous cost in time,
money, and controversy to project U.S. power around the globe and to shoot
straight at long range. And it, too, often was called the “New Navy.”
We
stand on somewhat firmer historical ground with the “new,” in that most
ages and eras have tended to regard themselves as being “new,” or “modern.”
But for historians to use the term to refer to their own times leaves them
open to the charge of present-mindedness—and certainly to a lack of originality.
The “New History” now seems dated, a fate that will undoubtedly befall
“La Nouvelle Histoire,” the “New Military History,” the “New Social History,”
or the “New Political History,” not to mention the New Left and the New
Right. “New” rarely wears well; consider, for example, New College, Oxford,
founded in 1379; or New York’s New School for Social Research, now well
over sixty years old. One would think that historians would be the first
to point out that Young Turks soon enough become Old Bolsheviks. In most
cases, straightforward chronological descriptions, such as “early 20th
century,” “postwar,” etc., would provide more accurate historical categorization.
Accuracy
is not the only casualty when historians forget the past perfect. By labeling and
categorizing the past through direct or indirect comparison to the present,
we often obliterate the sense of wonder felt by the witnesses of history.
Some listeners apparently fainted when they first heard polyphony, and
the use of perspective in paintings must have shocked the first viewers
who were confronted with it. What do these experiences tell us about the
Renaissance? No one challenged Thomas Edison’s phonograph patent application
because no one could imagine putting sound in a box and taking it out again.
Can we dismiss this prodigious leap of Edison’s imagination by speaking
of his phonograph as “primitive”? Early motion pictures strike us as, for
the most part, technically clumsy, poorly staged, and grossly over-acted.
And yet, to the audiences who viewed the first films, the images seemed
nothing short of miraculous. The first moving pictures prompted
some audience members to jump from their seats when a locomotive seemed
to come hurtling out of the screen towards them.
Of
course, historians know how it will all end, or at least what comes next.
This knowledge lends a sense of completion, perhaps even a patness, to
historical inquiry that is difficult to overcome. Events seem to move logically
through inception and development toward resolution. If nothing else, such
progression certainly makes for boring history for our students. For example,
few historians who write about the Great Depression of the 1930s take up
one obvious reason why the Depression was so depressing: those living through
it could not foresee its end. No one could predict the long-term boost
that rearmament and wartime production would give to the American economy.
In fact, given the intensely pacifistic sentiments of the 1930s in the
democracies, such knowledge would have seemed repulsive.
Our
foreknowledge of the story’s resolution numbs the tension, fear, and hope
felt by history’s participants. Perhaps this is inevitable. But historians
should make the attempt to recapture the immediacy, the frightening uncertainty,
of the time and events they describe. Studying and interpreting history
should be a humbling exercise. The achievements of other eras offer far
more than simply the building blocks or preludes to our own times—that
way lies the discredited “Whig” interpretation of history. “Our times”
will soon enough cease to be ours as they are daily transformed into the
history of the future.
Historical
interest looms large in contemporary America and much of the rest of the
world. Professional historians may groan when they see a group of British
World War II re-enactors in authentic U.S. Army uniforms cruising England’s
coastal roads in carefully restored jeeps, army trucks, and half-tracks,
their pockets stuffed with Wrigley's gum and Life Savers, having the time
of their lives as they pretend to be “Yanks” in training for D-Day. But
these historical re-enactors’ fanatical attention to details of the past
speaks to our collective hunger for the immediate experience of historical
events, evidenced in the History Channel, a phenomenon that was unimaginable
even ten years ago. History matters to the 21st century, and not only as
a prelude to the present. A decent respect for the past, and a realization
that then, as now, an age could only measure itself against its own past—the
past perfect—should not be too much to ask of historians if we are to provide
a better understanding of history.
Stanley
Sandler retired in 1999 as a command historian with the U.S. Army Special
Operations Command, Fort Bragg, North Carolina. Among his more recent books
is The Korean War: No Victors, No Vanquished (University of Kentucky Press,
2001).
1 N. A. and R. A. Moore, The Last Men of the Revolution: A Photograph of Each From Life . . . Accompanied by Brief Biographical Sketches (Hartford, CT, 1864).
2 Randall Jarrell, Pictures From an Institution (Farrar, Straus & Giroux, 1954), 207.
3 Barbara Tuchman, Practicing History: Selected Essays (Knopf, 1981), 71.
4 Sir Michael Howard, Times Literary Supplement, June 23, 1989.
5 R. MacGillivray, The Slopes of the Andes: Four Essays on the Rural Myth in Ontario (Mika Publishing Company, 1990), 179.
Coming
to Terms with the Third American Republic
by James Livingston
Only in
the United States do the losers, deviants, miscreants, and malcontents
get to narrate the national experience. In historiographical time, for
example, coming to terms with the second American republic codified in
the Fourteenth Amendment took almost a century. Why? Because the formative
moment known as Reconstruction was originally defined by historians who
identified with the good old causes of Southern honor and white supremacy.
But an even better example is the historiography of the Progressive Era,
which qualifies, by all accounts, as an equally formative moment in the
making of the nation. Here, too, professional historians who have proudly
identified with the good old lost causes (especially, but not only Populism)
have been able to define the moment in question, and to shape research
agendas accordingly. In this sense, coming to terms with the third American
republic—the one that resides in the emergence of corporate capitalism,
ca. 1890-1930—has been no less difficult than coming to terms with the
second.
Several
years ago, in a graduate reading seminar that covered the period 1880 to
1930, I was reminded of just how difficult it has been for professional
historians to acknowledge the legitimacy of this third republic. I assigned
Richard Hofstadter's Age of Reform (1955), a key text in the making
of the discipline as well as an important interpretation of the Progressive
era from which we can still learn a great deal. I knew that the graduate
students would be suspicious of a so-called consensus historian who was
agnostic on the “democratic promise” of Populism. But I was not prepared
for their refusal to consider the possibility that Hofstadter wrote from
the Left—the possibility that we can be agnostic on the anti-monopoly tradition
and yet keep the democratic faith, or, what is the same thing, that we
can treat the relation between corporate capitalism and social democracy
as reciprocal rather than antithetical. Their refusal taught me that these
possibilities cannot be contemplated, let alone realized, unless we learn
to treat the historical fact of corporate capitalism as the source rather
than the solvent of a social-democratic promise, that is, until we acknowledge
the legitimacy of the third American republic.
We—historians
and their constituents—need a new way of thinking about the 20th century.
We need to be able to think about it as something other or more than the
non-heroic residue of the tragedy staged in the 1890s, when the Populists
were defeated, or in the 1940s, when the anti-corporate animus of the New
Deal expired, the Congress of Industrial Organizations settled for collective
bargaining rather than workers’ control of the assembly line, and intellectuals
turned their backs on the unions if not the masses. We need a new way of
thinking about progress in the Progressive era and after, particularly
the “after” that is our own time.
We
also need a way of thinking that doesn’t require that we worship at the
shrine of Theodore Roosevelt’s “New Nationalism” in measuring progress
in the Progressive era. We need, that is, to get beyond a narrowly political,
policy-relevant approach to the accomplishments of this era; in this sense,
we need to accredit recent social, cultural, intellectual, and women’s
history, and thus acknowledge the extraordinary innovations made “out of
doors” in domains that were, and that remain, invisible to professional
politicians. And once we’re out there in civil society, we must get beyond
the reflexive critique of consumer culture, for its ideological function
is to reinstate the model of subjectivity—the “man of reason,” the self-determining
producer of himself—invented in the early modern period. In today’s world,
this sort of independence is no longer possible. Progressive era intellectuals
understood that the days of the self-sufficient, independent man were long
gone. In the early 20th century, pragmatists and feminists converged on
a “critique of the subject,” demonstrating that the transcendental (Kantian)
subject was a metaphysical conceit that had lost its explanatory adequacy
because it excluded any historical periodization of selfhood and remained
resolutely male in both its theoretical expressions and practical applications.
Pragmatists and feminists insisted that there was “no ego outside and behind
the scene of action,” as John Dewey put it: the autonomous self and the
moral personality were not the prior conditions of authentic subjectivity
or moral decisions; they were the results. These same pragmatists and feminists,
from William James, Dewey, and Jane Addams to Jane Flax and Judith Butler,
also sponsored the notion of a “social self.” Further, they located the
historical origins of this notion in the socialization of property and
markets—“the regress of self sufficiency and the progress of association,”
as Henry Carter Adams put it—which was permitted and enforced by corporate
enterprise.
• • •
Conservatives
and radicals agree that the United States is exceptional because revolution
—like socialism—is a foreign import without historical roots in the American
experience. Conservatives revere the American past because they believe
it was never disfigured by revolution, while radicals renounce this past
because they believe it was never redeemed by revolution. They agree, then,
that revolution is by definition a complete break from the past, a unique
moment when the conflict between “ought” and “is”—between ethical principles
and historical circumstances—is violently resolved in favor of the former,
and anything becomes possible. If we refuse the either/or choice between
conservatism and radicalism by mediating between them as pragmatism and
feminism teach us to, we might posit a different definition of revolution
and a different attitude toward our history. We might define revolution
as the suture rather than the rupture of past, present, and future, or
as the politically synthetic effort to make the relations between these
temporal moments continuous rather than incommensurable. To do so would,
however, require that we follow the lead of Hannah Arendt and learn to
treat the French Revolution (and its Bolshevik reprise) as just one species
in a profuse and increasingly plural genus. We would also have to learn
that American revolutions remain unfinished because the nation itself has
been, and will always be, a work in progress.
James
Livingston is a professor of history at Rutgers University. His most recent
book is Pragmatism, Feminism, and Democracy: Rethinking the Politics of
American History (Routledge, 2001).
The
Battle Over “Turning Back the Clock” in Constitutional Interpretation
by
Stephen B. Presser
Do
we still have a Constitution? There is no doubt that the “original” understanding
of Constitutional exegesis was that the document ought to be interpreted
in an objective manner, according to the “original” understanding of Constitutional
terms.[1]
As everyone who has gone to law school since 1954 knows, however, the United
States Supreme Court, in the middle of the 20th century, departed from
that original philosophy of Constitutional interpretation, in order to
bring the Constitution more in line with what the Justices thought to be
convenient for a modern democracy. The theory, consistent with the still-prevailing
jurisprudence of “legal realism,” was that the Constitution ought to be
viewed as a “living document,” which should change with the times. As Earl
Warren remarked, in deciding the landmark school desegregation case of
Brown v. Board of Education of Topeka (1954), “we cannot turn the
clock back” to the time when the Fourteenth Amendment was passed to understand
what it ought to mean today. Warren’s Court (and, to a great extent, the
Burger and Rehnquist Courts which followed his) altered the meaning of
many provisions of the Constitution, in order substantially to limit the
scope of what the states could do in terms of policy-making, and in order
to promote a secularized and individualized philosophy of government.
Woe
be to those who challenge the Warren (and Burger and Rehnquist) Court’s
decisions in the areas of race, religion, and, in particular, abortion.
Consider, for example, Robert Bork, the victim of a grossly unfair attack
by so-called “public interest groups” and their sympathizers in the Senate
which kept him off the Supreme Court. More recently, Bork argued in the
New Criterion that the Supreme Court’s decisions in the second half
of the 20th century regarding “speech, religion, abortion, secularity,
welfare, public education and much else” made the Court, in effect “the
enemy of traditional culture.”[2]
With this essay, Bork sought to rally support for judicial nominees who
espouse original intent. In response, Jeffrey Rosen, a law professor and
the legal affairs editor of the New Republic (a reliable indicator
of what passes for moderation among media views), blasted Bork in the New
York Times Magazine as “living in a dystopian time warp.”[3]
Bork
blamed the Court for the current “suffocating vulgarity of [our] popular
culture,” but Rosen argued that “MTV, the Internet, the expansion of sexual
equality and other democratizing forces of popular culture” have led to
the decline of traditional values. Indeed, Rosen praised the “Supreme Court’s
relatively moderate compromises on abortion and religion.” For Rosen, and
for most of the legal academy for the last forty years, the role of the
Court is to reflect the decisions that have been made by the popular culture.
In the words of Oliver Wendell Holmes, Jr., the patron saint of the currently
dominant jurisprudential school of legal realism: “The substance of the
law, at any given time, pretty nearly corresponds, so far as it goes, with
what is then understood to be convenient.”
Holmes got it wrong. So does Rosen. Bork is right.
A Constitution that is a “living document” is a rope of sand, and offers
little hope of doing what the Constitution is supposed to do. The Constitution
was intended to rein in the excesses of the state legislatures, which
were seeking to appease popular opinion by issuing increasingly worthless
paper currency and by suspending debts. The Constitution put in place a
federal government that was to make commercial markets more secure by centralizing
the creation of currency, and by forbidding interference with private contracts
on the part of the state governments. As high school students used to learn,
the federal Constitution also attempted to put in place structural mechanisms
which would prevent even the federal government from becoming too powerful.
These were basically two-fold. One was the principle that we now know as
“federalism” or “dual sovereignty,” the notion that the federal government
was to be one of limited and enumerated powers, with the state governments
as the primary legislative and policy-making bodies for the country. The
Constitution was supposed to deal with common concerns of all the states,
such as interstate commerce, navigation, a circulating currency, and national
defense. Other matters, including the basic doctrines of contracts, torts,
property, criminal law, family law, and religion were to be the province
of state and local governments. The second structural safeguard was borrowed
from Montesquieu, and that was the idea that there ought to be separation
of powers, so that the legislature would be the maker of law, the executive
would carry out the law (pursuant to the directives of the Constitution
and Congress), and the courts would police the Constitutional scheme, in
particular making sure that Congress (and the states) did not go beyond
the bounds of Constitutional directives.
Until the New Deal, by and large, the original
Constitutional scheme was kept intact. After the Civil War, with the passage
of the Reconstruction Amendments (the Thirteenth, which abolished slavery;
the Fourteenth, which forbade states from depriving any person of “due
process,” the “equal protection of the laws,” or the “privileges and immunities”
of United States citizenship; and the Fifteenth, which forbade denial of
the right to vote because of “race, color, or previous condition of servitude”),
Congress was given the power to enforce the Amendments by appropriate legislation,
which did involve the federal government in new areas of concern. Yet Congressional
enforcement efforts with regard to these three Amendments were modest.
Some further economic regulation was accomplished through the establishment
of the Interstate Commerce Commission in 1887 (which was given the power
to regularize railroad rates), and the Sherman Antitrust Act of 1890 (which
gave the federal government the power to act against restraints of trade
in interstate commerce). Still, most of the national economy, and most
political and judicial activity, remained regulated at the state and local
levels. This changed dramatically under Franklin D. Roosevelt’s New Deal,
which sought comprehensively to regulate both interstate and intrastate
trade in a manner ostensibly calculated to get the nation out of the most
horrific depression it had yet experienced. The Supreme Court at first
balked at the scope of this new federal legislation, and tossed out some
early New Deal measures. But from 1937 until the middle of the 1990s, no
federal law was thrown out on the grounds that it exceeded Congress’s authority
to regulate interstate commerce. Congress was permitted to set up comprehensive
schemes permitting collective bargaining, the regulation of drugs, of aviation,
of trading in national securities, and of a whole host of other matters.
The range of legislative action which the courts now permitted to Congress
was matched in scope by the range of matters the federal courts now moved
to supervise. For the first time, the Court undertook a wholesale examination
of what state and local law enforcement could and could not do because
of alleged strictures of the Fourth and Fifth Amendments. This was the
genesis of the famed Miranda warnings and the exclusionary rule, which forbids
the use of evidence obtained by practices suddenly declared to violate the
Fourth Amendment, even if that evidence clearly establishes the guilt of
the defendant. The U.S. remains the only Western nation to employ such
a rule. Employing a simplistic conception of “one person, one vote” nowhere
to be found in the Constitution, but presumably anchored in the Fourteenth
Amendment’s guarantee of “equal protection,” the Warren Court proceeded
to declare unconstitutional any state bicameral legislative scheme that
seated a branch of government in any manner other than election based on
population. Thus, the bicameral model of the state legislatures, which,
like the federal government, had featured one legislative body selected
by historical political subdivisions (the states for the federal government,
and, usually, counties for the states), and the other apportioned on the
basis of population—a model which had been in use in some states for nearly
two centuries—was suddenly declared to violate the Constitution. In the
early 1960s, a series of decisions which continued into the late 20th century
ruled that states could not require bible reading or prayers in the public
schools, and, eventually, that they could not mandate prayers at high school
graduations or permit them at football games, or even permit the display
of the Ten Commandments on public property. Until quite late in the 20th
century the Supreme Court permitted state and local governments to apportion
resources, or to allocate opportunities, on the basis of the race of individuals
(affirmative action), even though the historical understanding of the Fourteenth
Amendment’s equal protection clause (and of the Fifteenth Amendment) would
seem to forbid anything but equal treatment for all. Finally, even though
the original federalist scheme would clearly have left such a domestic matter
to the discretion of the state and local governments, in the last four
decades of the 20th century the Court discovered an implicit “right to
privacy” inherent somewhere in the Bill of Rights which forbids the states
from prohibiting married adults (and later unmarried ones) access to contraception
and then, eventually, from prohibiting most restrictions on abortion. Constitutional
historians scrambled to find some justification for what the Warren and
Burger Courts had done in these areas. They claimed that the Justices were
engaging in “representation reinforcing,” wisely applying insights from
moral philosophy, or functioning in a “religious” manner similar to that
of the Old Testament prophets.4 The problem, of course, was that the historic
role of the courts was not, in a forward-looking manner, to promote social
change according to some conception of what was best for the polity, but
rather, in a conservative manner, to apply the pre-existing rules of law,
leaving law-making and policy formulation to the legislative or executive
branches, or to the state and local governments.
Much of this explosion of judicial law and policy-making
was anchored in what has come to be called the “incorporation” doctrine.
This interpretive tool began to appear in the middle of the 20th century,
when a few Justices argued that the Bill of Rights, in the light of the
Fourteenth Amendment, ought to protect individuals against not only the
federal but also the state and local governments. Thus, the criminal procedure
decisions of the Warren Court were defended on the grounds that the Fourth
and Fifth Amendment strictures ought to be applied, through the Fourteenth
Amendment, against the states. And the school prayer decisions were now
justified because the First Amendment’s prohibitions (e.g., “Congress shall
make no law respecting an establishment of religion”) should be read as
if they applied to state and local legislatures as well as Congress.
This was an absolutely unparalleled act of judicial
legerdemain, because the original purpose of those who drafted the Bill
of Rights was to restrain the federal government and, in particular, any
adventurous federal judges. But the Warren, Burger, and Rehnquist Courts,
through their application of the incorporation doctrine, engaged in the
very conduct the Bill of Rights’ framers had feared. To be fair to the
historians who defended the incorporation doctrine (and that has become
one of the great cottage industries among Constitutional historians),5
there were at least some members of Congress whose words at the time of
the passage of the Fourteenth Amendment could be broadly construed to allow
such an interpretation. Still, given the extraordinary reach of such a
doctrine and how it, in effect, would have had to have been understood
as radically restructuring the prerogatives of state and local governments,
one would have expected much clearer articulation of such an “incorporation
purpose” and much more resistance to the idea in the debates over the Amendments.6
One can argue passionately (if not cogently) for the correctness of what
the Warren, Burger, and Rehnquist Courts did with the incorporation doctrine,
but one cannot base that argument easily on the historical understanding
of the Fourteenth Amendment. Even so, those who dare to question the incorporation
doctrine these days are excoriated, as, for example, was Reagan’s attorney
general Edwin Meese (a former law professor) when he expressed some skepticism.
Though it is almost never acknowledged, what seems
to be driving the current controversy over “judicial ideology” and President
Bush’s nominees to the federal bench are the beginnings of a return to
“original understanding” on the part of the Supreme Court. When he was
campaigning for the White House, Bush took the very unusual step of indicating
that he believed that the models for his appointments to the judiciary
should be Associate Justices Antonin Scalia and Clarence Thomas. They are
the two Justices who have most frequently indicated their belief in interpreting
the Constitution according to its original understanding; they are the
two who have most clearly rejected the “living constitution” theory; and
they are among the four Justices who have indicated a desire to overrule
Roe v. Wade. Scalia and Thomas, along with Chief Justice William Rehnquist
and Justices Anthony Kennedy and Sandra Day O’Connor, have also been involved
in the spate of decisions which, for the first time since the New Deal,
rejected federal legislation as unauthorized by the commerce clause and
reveal a new willingness to understand the state governments as the nation’s
primary law-making bodies. Finally, these five Justices have begun more
narrowly to construe the Warren and Burger Courts’ interpretations of the
First Amendment’s purported separation of church and state, so that, for
example, they permitted religious extra-curricular organizations to use
public school property on the same basis as non-religious organizations,
and they permitted states and localities to operate “voucher programs”
which benefited religious as well as secular schools. None of the Justices
has yet come out against the incorporation theory itself, but this is not
inconceivable, and it must be part of what worries the Court’s critics.
These critics, principally the Senate Democrats,
but also the liberal public interest groups such as People for the American
Way, the American Civil Liberties Union, Planned Parenthood, and the National
Organization for Women, believe that if Bush is to have his way, the entire
edifice of individually oriented “living constitution” jurisprudence might
well be threatened. Thus, for the first time in American history, a political
party has taken the position that a president (from a different party)
should not be able to have his nominations to the bench confirmed if their
“judicial ideology” would result in a majority of judges or Justices who
believe in the jurisprudence of “original understanding.” The defenders
of what the Senate Democrats and their “public-interest” allies are up
to claim that “judicial ideology” is not a new means of criticizing nominees,
and they point to such things as the rejection of George Washington’s nomination
of Associate Justice John Rutledge as Chief Justice and Herbert Hoover’s
nomination (in 1930) of John Parker as early examples. But in these cases
and others like them, it was the particular views of the individual (in
Rutledge’s case his approval of the Jay Treaty and in Parker’s case his
“perceived aversion to the labor movement and to civil rights for blacks”)
and not their common adherence to a jurisprudential perspective (that was
presumed to be the only legitimate one at our founding) that subjected
them to criticism.7 Strangely, members of the history discipline have been
virtually silent about this fact, and even Republican politicians have
not made an appeal to history to defend the judicial philosophy of their
nominees.
Perhaps a reason for this hesitation is a deep-seated
ambivalence over the framers’ project, and a belief that their world was
so different from ours that there really is something illegitimate about
implementing the “original understanding” of the Constitution. After all,
many of the framers were slaveholders and virtually all of them believed
in limiting the franchise to white male owners of a substantial interest
in landed property. Most, if not all of the framers, believed in the law
of nature and nature’s God, and nearly all of them believed that the United
States was a “Christian Country.” Moreover, it was a generally understood
maxim of interpretation in the late 18th century that one could have no
order without law, no law without morality, and no morality without religion
(and a religion probably of some Protestant variety). How could anything
worth saving come from people whose views were so obviously different from
the views of those who now embrace democracy, legal realism, racial equality,
feminism, secularism, and individualism? The simple answer is that even
though the framers may have gotten some things wrong, they did understand
some things regarding what they called “the science of politics” that we
have all but forgotten. They managed to state some timeless truths which
are dangerous for us to ignore.
This isn’t the place to go into details,8 but it
is enough to remark here that the framers understood that our government
had to be a republic, not a democracy, because one can’t govern a polity
of hundreds of millions of people (or millions in the late 18th century)
by thoroughly democratic means. Some hierarchy in society is inevitable,
they knew, and American government could only protect “ordered liberty”
(a phrase generally attributed to the framers, though probably coined in
the later 19th century) if a structure were erected which protected the
Constitution from the people, the people from their republican rulers,
and the rulers from themselves.
Demagoguery in our politics, a federal leviathan,
a set of strange notions about human nature, and judges who make law have
taken us far away from the original understanding on which the Constitution
was based. These days, what passes for an understanding of human nature
among our judges is too often the sentiment expressed in the famous “mystery
passage” from the plurality opinion in Planned Parenthood v. Casey, a 1992
decision which upheld Roe v. Wade’s interpretation of the Constitution.
Said three of the Justices: “At the heart of liberty is the right to define
one’s own concept of existence, of meaning, of the universe, and of the
mystery of human life.” But that is hardly what liberty is really about,
as our Constitution’s framers understood the term. Meaning in life comes
from associations with others, and the radical individualism of the mystery
passage and of modern American culture tends more toward despair and self-indulgence
than to the pursuit of happiness Jefferson had in mind. Indeed, the framers
probably also understood better than many in government and culture today
that the good life requires adherence to traditions, religious faith, and
altruism rather than secularism and self-actualization. It is true that
a Constitution forged in the late 18th century, without changes, is not
fit for governance in the 20th. But the Constitution does contain an amendment
process which was put in place to meet that need.
Not judges, but the American people, through their
Congressional representatives and state legislatures, following the Constitution’s
own procedures, should be in the business of altering the Constitution.
None of this is anything new, of course, but all of it is now regarded
with skepticism, if not downright hostility in most of the American academy.
There are signs, though, that this is changing. The current battle over
“judicial ideology” has at least the benefit of forcing us to articulate
the proper conception of the judicial role, and, indeed, of our regime
of separation of powers, and, again, as the framers understood, it is always
useful to restate the first principles of our government. Even more encouraging
is a spate of new works by academics comfortably situated in the highest
ivory towers who have been able to rearticulate with admiration, appreciation,
and endorsement several principles of the framers’ Constitution. For example,
Princeton professor Keith Whittington has recently published a brilliant
defense of the “original understanding” method of interpretation, Philip
Hamburger of the University of Chicago has written a thorough monograph
debunking the notion that “separation of church and state” was a constant
in American government, and Albert Alschuler, also of the University of
Chicago, has given us a devastating critique of the jurisprudence of legal
realism and its spiritual father, Oliver Wendell Holmes, Jr.
Those of us who share Benjamin Franklin’s belief
that the hand of “Providence” was evident in the framing of the Constitution
cannot help but be encouraged by the appearance of this recent turn in
political science, Constitutional history, and judicial biography. Right
now, we don’t really have our Constitution, but there are laudable efforts
underway to recapture it. Some in the Senate and in the media would like
to pretend otherwise, but if we lose the benefits of the wisdom of the
framers, as Judge Bork was trying to suggest, we’ve failed as historians.
Stephen B. Presser is the Raoul Berger Professor
of Legal History at Northwestern University School of Law, a professor
of business law in Northwestern’s Kellogg School of Management, and an
associate research fellow at the Institute of United States Studies of
the University of London. He is the author, with Jamil S. Zainaldin, of
Law and Jurisprudence in American History, 4th ed. (West Group, 2000).
[1]
The now seminal piece on “original understanding,” as the originally intended
means of Constitutional interpretation, is H. Jefferson Powell, “The Original
Understanding of Original Intent,” Harvard Law Review 98
(1985): 885.
[2]
Robert Bork, “Adversary Jurisprudence,” New Criterion 20 (2002).
[3]
Jeffrey Rosen, “Obstruction of Judges,” New York Times Magazine, August 11,
2002.
4 Michael J. Perry, The Constitution, the Courts,
and Human Rights: An Inquiry into the Legitimacy of Constitutional Policymaking
by the Judiciary (Yale University Press, 1982) presents a good summary
of these arguments. Perry advances the theory that the Court should function
in what he calls a “religious” manner.
5 Among the best such defenses are Akhil R. Amar,
The Bill of Rights: Creation and Reconstruction (Yale University Press,
1998); Michael Kent Curtis, No State Shall Abridge: The Fourteenth Amendment
and the Bill of Rights (Duke University Press, 1986); and William E. Nelson,
The Fourteenth Amendment: From Political Principle to Judicial Doctrine
(Harvard University Press, 1988).
6 This was the most impressive argument made in
the most widely-read recent critique of the “incorporation doctrine,” Raoul
Berger, Government by Judiciary: The Transformation of the Fourteenth Amendment,
2nd ed. (Liberty Fund, 1997).
7 For the attempt to use the Rutledge and Parker
imbroglios as support for the current conduct of the Senate Democrats,
see, e.g., Randall Kennedy, “The Law: Rejection Sustained: Republicans
are Suddenly Steamed that ‘Politics’ is Holding Up Judicial Appointments.
But Politics is the Point,” Atlantic Monthly 290 (2002).
8 For the details see, e.g., Stephen B. Presser,
Recapturing the Constitution: Race, Religion, and Abortion Reconsidered
(Regnery Publishing, 1994).
The
Prisoners
by
Armstrong Starkey
The
detention of hundreds of suspected terrorists at Guantanamo has left President
Bush’s administration groping for definitions of their legal status. Furthermore,
the debate over the possible prosecution of prisoners through military
or civil courts raises questions about the definition of war. Since the
18th century, war has been defined in international law as conflict between
established nation-states. In the 18th century, soldiers came to be recognized
as the servants of increasingly centralized and impersonal states, and
uniforms clearly marked their nationality: British red, Prussian blue,
and the white uniforms of Catholic countries. Soldiers were perceived as
the lawful agents of those states and, as such, were entitled to protection
when wounded or captured by the enemy. In 1758, the Swiss writer Emmerich
de Vattel published The Law of Nations, which applied traditional
just war theory to modern conditions. Even though he published his great
work during the Seven Years’ War, one of the most destructive conflicts
of the era, Vattel praised the advances in the humane conduct of war, particularly
with respect to the good treatment of prisoners. In doing so, he established
a norm that influenced 19th-century codes such as the Lieber Rules, established
to guide the conduct of the Union army during the American Civil War.
Although
Vattel extended his standards to cover militias and participants in civil
wars, The Law of Nations was intended only to provide direction
for wars between established states. Stateless peoples such as Native Americans
or Pugachev’s Cossack rebels could not expect the protection of the law.
Vattel was explicit in stating that a nation attacked by irregular means
in an informal, illegitimate war was not obliged to observe the formal
rules of war, and could treat the enemies as robbers. Savage nations, which
observed no rules, could be dealt with severely.
International
conventions of the 19th century confirmed the direction established by
Vattel. These conventions were binding upon signatory states. It was understood
that war was between states and that the purpose of the conventions was
to ameliorate conflict for the benefit both of the warring parties and
the international community of nation-states. On occasion, the conventions
strove to distinguish civilians from soldiers and to restrict warfare
to the latter. They also tried to place limits upon the destructiveness
of weapons. And the rights of prisoners remained a primary focus of these
agreements which, for example, recognized the International Red Cross as
an agency of prisoner assistance.
Today,
prisoners are defined by the 1949 Geneva Convention Relative to the Treatment
of Prisoners of War and the two 1977 Geneva Protocols Additional. The Protocols
reflected the evolving nature of war; Protocol I extended the application
of the Geneva Convention to include “armed conflicts in which peoples are
fighting against colonial domination and alien occupation and against racist
regimes in exercise of their right of self-determination.” The Protocols
addressed what critics saw as a pro-European and imperialist bias in international
law, which could be traced to the guidelines established by Vattel. They
recognized the rights of guerrillas who fought against colonial or other
occupying regimes. War was no longer limited to conflict between established
states. Vattel’s view that international law did not apply to conflicts
between savage and civilized peoples—so often used to justify atrocities
against non-European peoples—seemed to have lost meaning . . . .
LETTERS
To
the Editors,
John
L. Harper’s “America and Europe”
in the June 2002 Historically Speaking is mistitled. It should have
been “What Anti-American Europeans Think about America.” The author, who
never speaks for himself but attributes everything to anonymous “Europeans,”
pretends to be offering insights into the “gulf in perceptions” between
Europe and America, but everything is seen from one side of the gulf only.
It’s actually an accurate summary of what you will read in the left-wing
and center European press, particularly from its most America-hating and
Jew-baiting sectors. But the author never utters a syllable of criticism
of anything “Europeans think,” no matter how idiotic, and never allows
the American side to be heard at all. One wonders what the purpose of this
article could be. Does Professor Harper really think we have not heard
this litany before? Does he think anyone will be persuaded by this one-sided
rehearsal of European Americanophobia at its most reflexive and mindless?
I seriously wondered if the article could be meant as a parody. Unfortunately
I guess it isn’t.
Here
are a few examples of what “Europeans think.” “Europeans also tend to believe
that most countries stocking missiles and/or weapons of mass destruction
do not do so because they contemplate a suicidal first strike on the West
but because they do not want to be bullied by local rivals or the United
States.” That’s what Americans believe, too. Nobody objected to the stocking
of weapons of mass destruction by “most countries,” such as Britain, France,
Israel, and India. It becomes a problem when dangerous regimes like Iraq
do this. As for Iraq: “Where is the evidence, Europeans ask . . . that
Iraq (assuming it were able to) would be mad enough to attack Western targets?”
Well, there is the little matter that within three years Saddam Hussein
will have nuclear weapons and missiles to deliver them, and then he will
think, probably correctly, that he can do whatever he wants. A short list
of what he is known to want: reconquer Kuwait; master the world’s oil supply
and blackmail the West; start a nuclear confrontation with Israel; and
offer an impregnable base for the many terrorist groups he subsidizes so
they can blow up places like Bologna with impunity, etc.
But
here is the lowest point. “Europeans are appalled by Palestinian suicide
bombings, but even more so by what many see as Ariel Sharon’s brutal,
strategically blind attempt to crush the Palestinian Authority (a policy
in place before the current wave of suicide bombings) with tacit U.S. support”
(my emphases). One never ceases to be amazed that “Europeans think” they
can rewrite history like this. Less than two years ago Israel offered the
Palestinians a Palestinian state with its capital in Jerusalem; the Palestinians
rejected the offer and launched a new jihad with the avowed aim of destroying
the Jewish state and people. But in the version of events constantly reiterated
nowadays by European leftists, it was exactly the other way around: Israel,
not the Palestinians, destroyed the peace process; the Israeli peace offer
never existed; there was a secret Jewish plot (already “in place,” Professor
Harper assures us) to crush the Palestinian Authority, which merely acted
in self-defense. It’s unfortunate that all those Jewish children have to
be killed by nail bombs dipped in rat poison, but after all there is nothing
“brutal” about that; what would be “brutal” would be removing the murderers
from office. That is the part that made me wonder if I could be reading
a parody.
Finally,
Professor Harper solemnly warns that if America does not attend to what
“Europeans think” it may have to go it alone in the campaign against terrorism.
Professor Harper must have been out of the country a long time. The great
majority of Americans have long realized that the invasion of Iraq is necessary
and that we will have to go it alone, except perhaps for the other English-speaking
countries. They know that the Europeans, except for the British, are useless
as military allies; that their uselessness will not prevent them from acting
as if they have a right to direct American foreign policy; and that in
any case most of them can be expected to jump on board as soon as they
see the war is won.
For
a clear-headed and temperate statement of what Americans think about Europe,
I should recommend Andrew Sullivan’s splendid essay “Memo to Europe: Grow
Up On Iraq” in the Sunday Times of London (August 11, 2002, and
also available on his Website, www.andrewsullivan.com). His conclusion:
“The question for European leaders is therefore not whether they want to
back America or not. The question is whether they want to be adult players
in a new and dangerous world. Grow up and join in—or pipe down and let
us do it. That’s the message America is now sending to Europe.
And it’s a message long, long overdue.”
Doyne
Dawson
Sejong
University
John
Harper replies:
In
reply to a similar letter from Professor Dawson, Professor Tony Judt of
New York University dealt effectively with the implication that anyone
with the temerity to criticize Arik Sharon must be anti-Israeli and/or
anti-Semitic. (See the New York Review of Books, May 9, 2002.)
Anyone following the European press knows that the positions I presented
are in the mainstream, and not a parody of left-wing opinion. As to the
merits of those positions, readers will judge for themselves. I don’t see
how they can dispute the point that well before September 11, 2001, and
before the wave of Palestinian suicide bombings, Sharon was not plotting
but quite openly trying to dismantle the Palestinian Authority and to extend
Israeli settlements in the occupied territories (for Professor Dawson that
might be the “so-called occupied territories”). As to the unofficial offer
made to Arafat at Taba, my personal opinion is that he was very foolish
not to accept it. But it is not often pointed out that the whole Camp David-Taba
process was pushed to suit Barak’s and Clinton’s electoral timetables,
that pro-Western Arab governments pressured Arafat not to accept the deal,
and that if he had accepted it there is no guarantee that a majority of
Israelis would have. Likud and its allies would have fought it tooth and
nail.
Since
the heart of the matter is now Iraq, here are the questions which skeptics
in Europe and elsewhere are asking. Professor Dawson might reflect on them,
too. His observations are easy to mistake for a parody of the overwrought
neoconservative view of the world.
1)
Is there hard evidence linking Iraq to al Qaeda and/or September 11, 2001?
Not so far, and not for lack of trying to find it. Why then did Washington
decide to conflate the war on al Qaeda with regime change in Iraq?
2)
Does the fact that Iraq used chemical weapons against its own people mean
(as the White House mantra goes) that it would use them, unprovoked, against
Israel or the United States? Does the bully who beats up smaller boys in
his neighborhood attack a heavily-armed squad of police?
3)
Having suffered a crushing defeat in 1991, with the U.S. in control of
two-thirds of Iraqi airspace, his army half the size it was in 1991, and
facing a formidable U.S. military presence in the area, is Saddam Hussein,
as some claim to believe, a serious candidate to become the Hitler of the
Middle East? Why did the chief of staff of the Israeli armed forces and
Israel’s chief of military intelligence recently say (see the New York
Times, Oct. 7, 2002) that they are not losing any sleep over “the Iraq
threat”?
4)
Does Iraq have nuclear weapons? No informed person claims that it does.
The question is how long it would take Iraq to make them if it could obtain
sufficient fissile material. The experts disagree.
5)
If Iraq did have nuclear weapons, and the missiles to deliver them, would
it launch them in an unprovoked first strike against Israel or the United
States? Where is the evidence that in addition to his other pathologies
Saddam wishes to commit suicide?
6)
If Saddam Hussein had a handful of nuclear weapons could he successfully
use them to blackmail the U.S. into surrendering Kuwait or some other piece
of territory? Does anybody believe that in such a case the U.S. would cave
in, any more than it would have if the USSR had threatened to seize West
Berlin? When in history has anyone used a small—or a large—nuclear arsenal
successfully to advance a policy of territorial aggrandizement?
7)
If Iraq managed to obtain a few nuclear weapons might it give them (or
other types of weapons of mass destruction) to terrorists? This cannot
be ruled out, but the last terrorist act that the White House attributes
to Saddam Hussein was the attempt to kill George H.W. Bush in 1993. If
al Qaeda is looking for nuclear materials wouldn’t they be more likely
to do so in a country that has them and where al Qaeda is well connected?
Is anybody talking about regime change in that country? (I note that Pakistan
is not on Dawson’s list of safe possessors of weapons of mass destruction.)
As the CIA has suggested to the Senate Intelligence Committee, the one
circumstance in which Iraq would surely be tempted to give weapons of mass
destruction to terrorists would be to organize a revenge attack on a U.S.
or Israeli target to be carried out during or after a U.S. invasion of
Iraq.
8)
Can one legitimately make war on the basis of conjecture? If the U.S. makes
that claim, should it be surprised if Russia, India, China, and other countries
do likewise? What kind of world are we going to have if preventative military
action becomes the rule?
9)
Since a U.S. war (with token support from a few countries) looks like a
foregone conclusion, is it going to be a “cakewalk”? Will the population
of Baghdad deck U.S. forces with garlands of flowers, or will the U.S.
have to take serious casualties?
10)
After getting rid of Saddam Hussein will we have an Iraqi Karzai and a
wave of liberalization across the Middle East, helping to solve the Israel-Palestine
problem in the process, as Dennis Ross and administration officials like
Paul Wolfowitz seem to believe? Or will we have an Iraqi Musharraf presiding
over a highly unstable situation, with the United States turning its attention
elsewhere and the UN and the EU expected to clean up the mess on the ground?
11)
Or is Washington’s real design to create a full-fledged client state in
Iraq, with U.S. bases and privileged access to Iraqi oil, allowing the
U.S. to dispense with Saudi Arabia and point a gun at the backs of Syria
and Iran? Once we “own” Iraq, does Iran abandon or accelerate its nuclear
weapons program? Assuming it’s the latter, is it going to be “next stop,
Teheran”?
12)
Will military action against Iraq lead to less or to more terrorism against
Western targets? Does al Qaeda gain or lose? As elements of al Qaeda quietly
move from Pakistan back into Afghanistan, the U.S. “crusaders” shift resources
and attention away from that country, antagonize Muslim opinion worldwide,
possibly involve Israel in a military confrontation with an Arab state,
and topple a regime which al Qaeda has always despised. Doesn’t it start
to look like a script written by bin Laden himself?
13)
Was it necessary to threaten invasion in order to get Iraq finally to readmit
UN inspectors? Or is that an academic question because U.S. insistence
on regime change will eliminate whatever possibility there might have been
that Iraq will permit their return?
14)
Might George W. Bush have ulterior motives in pressing for action against
Iraq by early 2003? His personal psychodrama? The state of the U.S. economy?
The 2004 elections?
In
lambasting Judt, Professor Dawson wrote, “I think all reasonable observers
have come to the conclusion that there is no hope for peace until the Palestinians
have been dealt a crushing military defeat. Sharon appears to be doing
that quite well so far.” People were saying exactly the same thing as Sharon’s
army besieged Beirut twenty years ago. Like the Bush administration today,
Sharon, too, thought he had a plan to cut the Gordian knot and recast the
Middle East by starting a war. The idea was to crush the PLO and consolidate
a pro-Western state in Lebanon. The Begin-Sharon government had not only
overwhelming military superiority, but (in the beginning at least) broad
public support at home. The result, however, was international embarrassment,
isolation, and more terrorism. As Thomas Friedman shows in his brilliant
reportage, From Beirut to Jerusalem, Hezbollah was born as the Israel
Defense Forces rolled over Shia villages. The Syrians promptly assassinated
the younger Gemayel, Sharon’s man in Beirut. The Likud government would
have done well to heed Machiavelli’s words, "Because anybody can start
a war when he wants, but not finish it, before taking on such an enterprise,
a prince must measure his strengths, and govern his conduct on that basis."
Let
us hope the Bush administration heeds them as well. At the moment, not
only is it giving Sharon a free hand in the territories, it shows signs
of adopting his model of foreign policy on a grander scale, namely, the
iron fist and (aside from lip service) to hell with what the world thinks.
As an expert on warfare over the centuries, Professor Dawson must know
that this has not proven to be a winning formula.
John
L. Harper
Bologna
Center of the Johns Hopkins University School of Advanced International
Studies