Joseph S. Lucas and Donald A. Yerxa, Editors
Origins of Globalization | How Not to Deal with Enlightenments | In the Archives | European Integration | The Making of the Modern American Home | The Present Emergency | Whiteness and the Historians’ Imagination
February 2002
Volume III, Number 3
The Medieval Church and the Origins of Globalization
by James Muldoon
For good or ill, globalization is upon us. At the mundane level, this means
that an enormous range of products from the most ordinary to the most sophisticated
can be found everywhere. An American fuels his Japanese-made car with gasoline
brought from Saudi Arabia and drives it while drinking coffee from the
Caribbean and waiting for a call on his cell phone made by a company in
Finland. On the other hand, it is also possible to travel throughout much
of the world without leaving American culture. One can glide over the surface
of the earth, renting from Hertz, staying in Holiday Inns, and eating in
McDonald’s, remaining in a hermetically sealed American bubble.
Even those who criticize globalization because they see it as destroying their
traditional way of life often dress rather like American sports fans or
college students, so that someone who has no idea of the intricacies of
the squeeze play wears a New York Yankees cap. Furthermore, because of
the ubiquity of CNN and other news services, foreign demonstrators thoughtfully
wave signs written in English, although, like American student papers,
the demonstrators’ protest slogans occasionally contain misspellings and
grammatical errors. There is a sense that English, specifically American
English, is the language of the global village, a village that grows ever
larger. With the development of the hand-cranked radio, it is no longer
even necessary to live near electric power lines or to possess an unlimited
supply of batteries to hear the news and to learn about the world. No matter
how hard one tries to avoid the consequences of globalization, like the
“Hound of Heaven” it follows us “down the arches of the years.”
To the extent that we conceive of globalization in such material terms, however,
our understanding of what is happening remains superficial. Globalization
is much more than the creation of a universal consumer culture. After all,
it is quite possible to wear western clothes, use western technology, and
watch CNN while rejecting the culture that underlies them. Globalization
is above all a conception or a vision of the right order of the world.
It is also a vision that is inextricably linked to Western Europe’s Christian
past. Globalization exists not only because Europeans created worldwide
trade networks, but also because Europeans conceived of all humankind as
ultimately forming a universal community.
The term globalization and related terms such as developed and
underdeveloped nations mask what is really taking place throughout
the world. These bland and neutral terms suggest that there is a universal,
natural, and ongoing process of human development that operates in every
society. Belief in the existence of this process is based upon the premise
that human beings form a single species and therefore every human society
should eventually develop along the same general lines. To say that there
are underdeveloped societies is to suggest that outside forces have intervened
to block or at least to slow down the natural course of development that
should be occurring. Furthermore, because humankind forms a single species,
globalization also means that there are universal or international standards
of humanitarian behavior that are applicable to all societies. What is
really happening throughout the world is not, however, some inevitable
natural process leading to the formation of similar societies everywhere.
What is really happening is the westernization or the Europeanization—not simply
the Americanization—of the world, a process that has been going on not
from 1492 but from the 11th century. This does not mean the European domination
of the world in military terms or even in economic terms. It does mean
the spread of European cultural values and their imposition on, or their
adoption by, non-European peoples. To think globally, that is to see humankind
as forming some kind of coherent whole, is in effect to see the world as
Europeans have come to see it. This is a vision rooted in the medieval
Christian tradition; modern conceptions of globalization and of a global
society echo the medieval ecclesiastical vision of all humanity as a single
spiritual community.
The process of Europeanization began with the formation of medieval Europe
out of the ruins not only of the Roman Empire but, even more important,
of the Carolingian Empire in the 10th century. Europe was
formed by combining the Germanic invaders and the classical tradition within
a matrix created by the Christian Church. The elements of this distinctive
culture emerged from the Carolingian heartland and spread gradually in
all directions as a result of conquest and missionary efforts. Eventually
the peoples along the frontiers of this Christian culture, the Celts, the
Scandinavians, and the Slavs, became Christian. The fundamental assumption
underlying the medieval missionary effort was that all rational creatures
are descended from Adam and Eve, all bear the marks of Original Sin, and
all are redeemed by the death of Christ on the cross. It was therefore
the obligation of Christians to preach Christ’s Gospel to all humankind,
a task that became increasingly important as apocalyptic visions of the
imminent coming of the end of the world appeared in medieval Christianity
at the end of the 12th century.
The medieval Christian vision of the world was not, however, restricted to
the spiritual realm. By the 13th century, European society had also assimilated
the thought of Aristotle and from that obtained a conception of what it
meant to be civilized. In his Politics, Aristotle argued that only
in a settled agricultural society could people find all the elements necessary
for the fullest development of human existence. Having first emerged within
the city-state world of the Mediterranean, Aristotle’s conception of civility
was completely acceptable to Christian thinkers. The administrative structure
of the Church was territorially grounded in the parish and the diocese,
units that replicated the province structure of the Roman world, which
in turn derived from the Mediterranean city-state. Therefore, missionaries
who encountered uncivilized people had not only to introduce them to Christianity
but to the civilized way of life as well.
But how did a people become civilized? Aristotle did not deal with this issue,
apparently believing that the development of the city-state was a natural
process that required no explanation. Subsequently, however, the Roman
lawyer and politician Cicero provided a sketch of humankind’s rise to civility.
He described society’s original state as a primitive life in the fields
and forests. Then a man arose to communicate to his fellows a vision of
a better way of life, the city-state. Christian missionaries could identify
with this Promethean orator who first communicated the vision of a better
way of life. They themselves would inform primitive people not only about
a better way of life in this world, but also about salvation in the next.
There was another aspect of this Christian conception of the world, however,
one that is often overlooked. The medieval Church not only preached a message
of salvation in the next world and held a broad concept of universal human
development in this world; it also developed the outlines of a universal
legal system. Church law, the canon law, can legitimately be seen as the
first truly international law. Fundamentally, the canon law was the law
of the Church for its own members, but inasmuch as the medieval Church
saw itself as universal in scope and membership, its law could embrace
all humankind. Unlike modern international law, however, the canon law
dealt with individuals and societies but not with states. The canon law
also dealt with relations that could exist between Christian and non-Christian
societies everywhere.
In the canon lawyers’ conception of the world, there was a tripartite legal
order that encompassed all humankind within a universal moral order. In
the first place, there was the canon law that governed Christian life.
There was also the Mosaic Law that governed Jewish society. Everyone else
was subject to the natural law, a law accessible to all rational creatures.
Seen in this light, the medieval canonists were probably the first to conceive
of humankind in truly global terms.
The expansion of Europeans overseas beginning in the early 15th century was
directly linked to this global vision. Obviously, trade, a desire to outflank
the Muslim world, curiosity about the world beyond Europe, and other factors
contributed to the move overseas. However, the framework within which this
expansion took place was ecclesiastical and legal, a fact often overlooked
by historians. The formal justifications for occupying newly discovered
lands from the Atlantic Islands off the coast of Africa in the early 15th
century to Alexander VI’s Inter caetera in 1493 always explained
contact and conquest in terms of the Church’s universal mission. Various
popes granted to the Portuguese and the Castilians a monopoly of European
contact with specific regions in order to rationalize European entry into
these new worlds. One goal was to prevent conflict between the emerging
overseas empires by recognizing specific spheres of interest in the newly
encountered regions. A second goal was to insure that these Christian monarchs
would use some of the wealth gained by conquest for the purpose of supporting
missionary efforts aimed at converting native inhabitants to Christianity.
Force could be employed if the inhabitants violated the natural law by
engaging in wicked practices such as cannibalism or prevented the entry
of peaceful missionaries and merchants. Force could also be employed to
bring these people to the civilized level of existence. The pope acted
as a kind of universal judge. He determined if a particular people were
violating the natural law and, if so, asked a Christian ruler to punish
them.
One of the consequences of the Protestant Reformation was the rejection of
this medieval view of humankind. Hugo Grotius, the founder of modern international
law, explicitly rejected the papal claim to universal jurisdiction. He
also reduced the scope of international law to relations among the nations
of Europe. Most important, he began the process by means of which law was
distinguished from moral theology. In the wake of Grotius’s work, international
law and relations grew increasingly secular.
In fact, as European states came to control most of the earth’s surface by
the 18th century, the vision of a global society actually declined. Instead,
Europeans increasingly saw the world as a series of competing empires without
any overarching supervision, a kind of Hobbesian war of all against all.
By the end of the 19th century, however, there emerged in some quarters
an interest in creating institutions for the peaceful resolution of international
conflicts. This led to the creation of the World Court at The Hague and,
later, the League of Nations and the United Nations.
Underlying these recent efforts at world order are some of the same elements that
underlay medieval notions of world order. In the first place, contrary
to 18th- and 19th-century racial theories about humanity, current theories
of world order assume that humankind is a single species, not several species
with different intellectual and moral capacities. In the second place,
it is widely believed that all societies possess the capacity to rise to
the European and American level of development. At the same time, many
Europeans and Americans think that international humanitarian standards
of behavior should be imposed on societies that do not adhere to them.
This, of course, leads to the problem of who is to determine what these
standards are and when and where force should be used to uphold them. The
standards being imposed are said to be universal, but in reality they reflect
the values of Europe and America. Where the medieval Church asserted that
there was a natural law that all rational creatures should obey, modern
thinkers contend that there are internationally agreed upon standards to
which all people should adhere. Finally, where the pope could authorize
the occupation of a country whose ruler engaged in violations of the natural
law, in the modern world the president of the United States, perhaps with
the approval of the secretary general of the United Nations, can act in
a similar capacity.
What we now call modernization and globalization is really a continuation of
a process that has been proceeding for a millennium as European Christian
society has expanded far beyond its original home. To a medievalist at
least, the vision of world order that has been coming into play since World
War II—a vision of a universal human community with universal standards
of behavior and an agency authorized to determine whether or not a society
is adhering to those standards—is reminiscent of the medieval papacy’s
conception of the world. Many years ago, Carl Becker argued that the philosophes’
ideas about the political order were a secularized version of medieval
Christian thought. The same can be said of postmodern conceptions of world
order, which revive some very pre-modern ideas in secular guise.
James Muldoon, emeritus professor of history at Rutgers University, is presently
an invited research scholar at the John Carter Brown Library. He is the
author of Empire and Order: The Concept of Empire, 800–1800 (St. Martin’s
Press, 1999).
How Not to Deal with Enlightenments
by Roger L. Emerson
I work on the Enlightenment, particularly on the Scottish Enlightenment.
I have written on clubs and societies, universities, patronage, religion,
and science. Throughout my career, I have tried to see Scottish history
in relation to things going on elsewhere, not just in England but also
on the continent and sometimes in America. The world to which the Scots
belonged was not a particularly British world but a much wider one. I have
tried to see the period in the round and not become too wedded to one or
another set of problems or themes.
I care about getting the parameters of the Scottish Enlightenment set correctly.
This, it seems to me, involves, first of all, the realization that, as
with every period, one is dealing with a term that is imposed upon a range
of years because these years reveal persisting, if changing, traits which
are convenient to label since they co-exist and then cease at some later
point to cohere. The labels are ours and will change as the period recedes
into the past. All labels are temporary. Still, in the case of the European
and American Enlightenments, the traits are generally those that many people
in the period ca. 1660-1830 found useful to notice and discuss. Like the
advanced thinkers of any age, those of that period thought, although usually
at differing times in various parts of Europe, that they were doing things
both novel and important, and related to similar things being done by like-minded
men and women elsewhere. They were not shy about pointing that out. Further,
most of them had an interest in tracing the origins and forebears of the
beliefs and institutions that they sought to defend, change, or eradicate.
They did not see themselves, and we should not see them, as having no antecedents
and no sense of the way things were related in their intellectual worlds.
We should, I believe, look for the continuities with the past that they
found it interesting to assert as well as for the changes that they sought
to make. In doing so, we need to consider a wide range of problems and
ideas which preoccupied the thinkers of what we call the Enlightenment.
Contemporary intellectual historians tend, however, to focus on only a segment of Enlightenment
thought and, consequently, miss much of what was important about that era
both in general and in the particular contexts which interest them.
John Pocock’s recent volumes on Edward Gibbon (Barbarism and Religion:
The Enlightenments of Edward Gibbon, 1737-1764, vol. I, and Narratives
of Civil Government, vol. II [Cambridge University Press, 1999]) offer
what will be a widely read example of this unfortunate trend. Pocock
is interested in the traditions and men from whom Gibbon drew, but his
work often reads as if he were giving a character to the Enlightenment
or Enlightenments in which he situates Gibbon. His exciting, valuable,
and erudite volumes will be taken by many to be authoritative statements
about the Enlightenments with which he deals as well as about the Enlightenment
of Edward Gibbon. As a guide to Gibbon, we may perhaps trust him; I think
we should not when we think about the Enlightenment and particularly about
the Scottish one.
Professor Pocock views the European Enlightenment as “a process at work in European
culture,” defined primarily by secularization and a system of balanced
powers or states, and supported by commerce, as well as by an evolution
of manners which commerce and the new political balance required (I:4f).
The period’s exciting historiography, like its thought in general, was,
according to Pocock, almost exclusively taken up with debates about civility
and morals, politics, and “the forces making for modernity about 1500—navigation,
printing, gunpowder, and the revival of letters—and those operating about
1700: standing army and public credit, commerce, and the new philosophy”
(II:370). In Pocock’s view, it is not principally in the realm of ideas
but in that of power and the contests for it that one comes to understand
the Enlightenment or Enlightenments.
What
makes Pocock’s account even more curious is its location of a specific
time and place at which the Enlightenment may be said to begin in Europe—at
Utrecht in 1713. At the same time, he tells us that Enlightenment “occurred
in too many forms to be comprised within a single definition and history,
and that we do better to think of a family of Enlightenments, displaying
both family resemblances and family quarrels (some of them bitter and even
bloody)” (I:9). One still must ask what the members of this family had
in common; did they have a common birth date in 1713? If nothing can be
specified as common, there is no sense in talking about variations, including
the ones he has stipulated and which he repeatedly treats as the only ones
that matter because they contributed to Gibbon’s enlightened narrative.
Even Peter Gay’s old conception of a family of Enlightenments (which in
turn popularized a notion of Ludwig Wittgenstein’s) was more adequate, since
it offered Voltaire and Hume as two exemplars of Enlightenment.
We read a good deal in Pocock’s volumes of Voltaire as an historian who “both
defines ‘Enlightenment’ and presents ‘the Enlightenment narrative’ in terms
of a history of manners, attempted on a scale and with a panache not found
before him” (II:73) and of his hatred of religion, barbarism, and a certain
kind of European chauvinism. We hear little of the man of the English
Letters, in which Bacon and Locke are applauded for reasons having
to do with knowledge and how it may be advanced and used, or of the man
who made Newton’s accomplishments available to the French reading public,
or of the man who experimented to find the “pabulum of fire” and wrote
plays meant to be modern in form as well as content. In short, we hear
little of the philosophe whose agenda was so much wider than that
of the Enlightenments Pocock offers us. Any definition of Enlightenment
as circumscribed as Pocock’s misses too much of what was happening in the
period and what brought it about. What may suit Gibbon should not be extended
to all.
Pocock does list “philosophy” as an ingredient of the process he sees as crucial
to Enlightenment. But the great achievement of the philosophers of the
17th century (one which extended into the 18th century) was the invention
of new theories of knowledge and new methods by which things could be known
and, once known, improved or changed. It was this epistemological revolution
that inspired the efforts of so many to discover, generate, and act upon
the new knowledge that they found within their grasp using the methods
of Boyle, Sydenham, Huygens, and Newton, to cite four of Locke’s heroes.
They, like Locke, were given to changing the world for the better, as well
as to understanding it by a new set of empirical procedures. In accounting
for the origins of the “process” of Enlightenment, it seems absurd to attend
so little to the new philosophy and the sciences it engendered—mathematics,
natural philosophy, and natural history. Yet this is precisely what Pocock
does. His definition relegates a lot of thinkers and doers whom we might
want to include as major participants in Enlightenment to the status of
forerunners to its preliminaries or hangers on to its achievements because
they were not central to the social and political changes that he seeks
to describe and which he believes fascinated Gibbon. He has more or less
done what he has accused others of doing: “bringing [the varied Enlightenments]
within a single formula—which exclude[s] those it cannot be made to fit”
(I:9).
Professor Pocock also fails to notice in any serious way that when the Enlightened
thought about what they were doing and how they should proceed, they did
not limit themselves to the sorts of things in which he is interested and
understands. In an age in which schooling provided a sort of synopsis of
the intelligible world, to ignore the unity of that intellectual world
when considering the Enlightenment is to impose on it a character which
is his and not that of the time. Bacon’s well known scheme of the arts
and sciences, organized around the faculties of the mind—memory, reason,
and imagination—was one form of asserting this unity; a form echoed in
Hume’s Treatise of Human Nature where everything is brought back
to sense, reason, and imagination. Another instance is provided by university
curricula; a third by the encyclopedic trees of various reference works
including those of Woolf, Chambers, Diderot, and d’Alembert. These, like
classical educations and Christian instruction, persisted through the Enlightenment
and helped to insure that its intellectuals were broad in outlook and not
constricted to the parameters set out by Pocock.
But, if one believes that Enlightenment thought was systematic, then it is imperative
to relate it to natural philosophy, which was as much a part of the enlightened
world as religion and the realm of grace. Similarly, many in the late 17th
and 18th centuries shared the belief that improvements of every sort were
possible and within their means had they but the will to make changes.
The devotion to improvement was European in scope and not the possession
of particular peoples, although there were certainly differences in the
times at which such views were expressed and in the number and kinds of
people who held them. Still, this was a cosmopolitan, complex world in
which there is not a clear line to be drawn between many of the virtuosi
(those earlier pansophists who sought to know everything and to apply their
knowledge for human betterment) and the philosophes. Gibbon’s outlook
was not altogether typical of his time because he, like Pocock in these
books, lacked an interest in, if not some knowledge of, the sciences.
To apply this broad perspective to the Scottish Enlightenment is to ask and
answer a number of questions that Pocock and those on whose scholarship
he relies (principally Nicholas Phillipson, John Robertson, and Richard
Sher) tend to ignore. These relate to science and improvement, to religion,
to the place of Edinburgh in the Scottish Enlightenment, to patronage,
and to the end of the period and the causes of that end.
When did Scots first begin to find a place for the improvers and scientists?
The improvers, who as a group looked back to Bacon and other virtuosi,
became visible in the 1680s although it was not until 1723 that a national
improvement society appeared. When it did, it embodied the outlook of men
from the 1680s, men like Sir Robert Sibbald, who never lost hope that a
survey of the country would provide information for improving purposes.
The survey was finally realized in the Statistical Account of Scotland,
edited by Sir John Sinclair at the end of the 18th century. The Honourable
Improvers in the Knowledge of Agriculture of 1723 was but the first of
many organizations that pursued the aim of making Scotland better through
the application of new knowledge in as many ways as possible. It was followed
in the 1750s by the Edinburgh Society which, for a few years, enrolled
all the members of the Select Society. The latter is always cited by those
who think along Pocock’s lines, but its sister society, dedicated to improving
activities, enlisted more members and cost them a larger membership fee.
It is usually ignored in discussions of the Scottish Enlightenment.
The scientists were visible perhaps a bit earlier but lived mostly abroad until
ca. 1670. They were more common in Scotland by 1700 and, if the more than
5,000 medical men trained in Scotland during the 18th century are included,
constituted a body roughly equal in size to the 1,000 or so clerics that
the country possessed at any given time.
By 1800, the universities were as much dedicated to the pursuit of science
and medicine as to anything else. A boom in medical and scientific education
flourished from ca. 1714, after the founding of the first medical chairs.
One would not learn that from an account of the Scottish Enlightenment
that sees it as very much a post-1745 product of moderate Presbyterians
reacting against Calvinism (II:312f). This, one hardly needs to note, excludes
most of the Select Society membership as well as most of those one would
like to call enlightened in Glasgow and Aberdeen. Pocock has not really
described a Scottish Enlightenment, but merely the views of an Edinburgh
coterie and a few of their Glaswegian friends. If one wishes to understand
the Scottish Enlightenment, one must look at those whom he has excluded—doctors,
gentlemen, lawyers—and not just the clerics and university professors who
wrote works related in some fashion to his and Gibbon’s interests in history.
Perhaps the most emblematic figure of the period was Lord Kames, who wrote
learnedly on the law and its improvement, on aesthetics and morals, on
agriculture and flax husbandry, on education and physics—as well as on
philosophy and history. And, if we want to consider enlightened Scottish
clerics, why not start with John Simson, the professor of divinity at Glasgow
whose career has recently been so well studied by Anne Skoczylas [Mr.
Simson’s Knotty Case: Divinity, Politics, and Due Process in Early Eighteenth-Century
Scotland (McGill-Queen’s University Press, 2001)]? If Pocock is interested
in the Protestant Enlightenments of the Dutch and Swiss, he should pay
some regard to their Scottish followers. Simson was emulating the Swiss
and Dutch when, in 1714, he got into trouble for doing so; that was long
before Francis Hutcheson came to Glasgow as a professor in 1729. Indeed,
most of the moderates whom Pocock considers were born after 1714.
Every Enlightenment depended upon the patronage of the great both to secure places
for its members and to insure a hearing for their views. The Enlightenments
that form Pocock’s family took their shape and orientation from what patrons
were willing to countenance. Too little attention is paid to this, not
only in Scotland but also elsewhere. Monographs are written about the rather
marginal women patronesses of the salons, but we get few studies of the
truly successful and influential patrons. No one has written on the patronage
of the great Scottish political figures other than the 3rd Earl of Bute,
whose patronage was arguably less culturally important than that bestowed
by the Squadrone lords in the period ca. 1710–1724, or by the 3rd Duke
of Argyll ca. 1723–1761, or by Henry Dundas ca. 1780–1806. Argyll’s patronage
did much to shape Scotland in his own image. He was a virtuoso interested
in medicine, botany, law, and improvements of all sorts; his wide range
of secular interests is mirrored by his library holdings. He, as much as
any of the men whose careers he furthered, shaped the Scottish Enlightenment.
It came to reflect his secularism, his tolerance, and his practical and
academic interests. He and his political faction also gave jobs to most
of the Scots who interest Pocock: Lord Kames, William Robertson, John Millar,
Adam Ferguson, Adam Smith, but not to Hume who resented Argyll’s lack of
support. Surely, such patrons and their interests deserve more attention
from historians.
Finally, one must consider what brought the Enlightenment to an end. In the case
of Scotland, the period seems to have ended for a number of reasons. The
acceptance of utilitarian ideas and more vocational educations tended no
longer to privilege the systematic presentation of knowledge in college
curricula or the hierarchy of sciences which these once embodied. Empirical
methods had made the unity of knowledge a methodological one. At the same
time, the growth of industry and a transportation revolution linked Scotland
economically to the rest of Britain and eroded the bases of a separate
national identity. England and the Empire became more accessible and necessary
to Scots who found less to criticize in their world. Their old ties with
the continent were disrupted. Scots turned into the North Britons that
many had for a long time thought they should become. The period ended with
an increasing and profound emphasis on feelings, which devalued “reason.”
With the revulsion against the French Revolution came a return of intolerance
but also of religion. None of this has much to do with the end of a system
established by the Peace of Utrecht or with the Arminianism that Pocock
supposes was so important to some of his Enlightenments.
Now it may be said that John Pocock did not set out to do a history of the
Scottish Enlightenment but to relate Gibbon’s outlook to the work of several
of his predecessors working in various Enlightenments from which he drew
inspiration, examples, and methods. I am quite willing to concede this
and to applaud Pocock’s real achievements; nevertheless, I expect a fairer
account of the Enlightenments upon which Pocock draws. In showing what
that might be in one case, I have, I think, set out some of the reasons
for my dissatisfaction with Enlightenment studies more generally.
Roger L. Emerson is emeritus professor of history at the University of Western
Ontario and author, with Paul Wood, of a forthcoming study of the scientific
community in Glasgow, 1690–1800, to be published in Science and Medicine
in the Scottish Enlightenment by Tuckwell Press.
In the Archives: A Visit to Arkhangel’sk in 1999
by Lynne Viola
In late May of 1999, I flew from Moscow to the old port city of Arkhangel’sk
(or Archangel) in Russia’s far North. Fearing the co-ed arrangements of
the Russian overnight train compartments, even the “lux” variety, I booked
an evening flight on Arkhangel’sk Airlines. To my untrained eye, the plane
looked alarmingly old. It was a Tupolev from the Aeroflot fleet, decentralized
and denationalized upon the fall of the Soviet Union. Fortuitously (because
I am a nervous flyer), I sat in the row behind the emergency exits. Above
the window of the emergency exit, I read a small sign alerting passengers
to the “emergency rope.” The sign was attached to a latched compartment
door upon which the Russian woman sitting in the window seat had immediately
hooked her carry-on bag (in Russia, a hook demands a bag). Dutifully, I
took out the emergency instructions from the rear pocket of the seat ahead
of me. With great interest, I read that the emergency rope had notches
on it to facilitate the climb down. And there was more. At the front of
the plane, there were also emergency exits. Fortunately, these did not
have ropes, but rather emergency carpets that had to be held taut at the
bottom by the first two people (men in my picture) out. Passengers were
instructed to empty their pockets of wallets, pens, combs, bottles, and
other potentially dangerous objects, as well as to remove high-heeled shoes,
eyeglasses, and ties before deplaning. In minutes, the female flight attendant
came down the aisles handing out paper bags for flight sickness, or as
one of my neighbors explained to an elderly woman, bags for “unpleasantness.”
I was traveling to Arkhangel’sk to work in the archive of the Northern Regional
Committee of the Communist Party. As a member of the University of Toronto’s
Stalin-Era Research and Archive Project, I had decided to focus my attentions
on the far North in my continuing studies of the repression of the Russian
peasantry under Joseph Stalin. In the early 1930s, millions of peasants
were forcibly deported, under horrendous conditions, to the most desolate
regions of the Soviet Union, mostly in the dead of winter. In 1930, close
to 75,000 peasant families traveled in unheated boxcars to the far North.
They were then transported to remote forest and marsh lands and told to
build homes and villages. The able-bodied worked in the timber industry
under slave-like conditions, while the non-able-bodied attempted to clear
the lands for farming and eke out a miserable existence from the soil.
I had nearly exhausted the central archives in Moscow that were open to me
and decided to follow my peasant deportees, or “special migrants” as they
were euphemistically called, to one of their places of exile. I decided
to concentrate on Russia’s far North because it appeared to be the least
studied region of deportation. The previous year I had worked in the sleepy
town of Vologda, located halfway between Moscow and Arkhangel’sk. There
I was able to observe the exiles’ lives on a local level. Now I was determined
to examine the papers of the regional-level bureaucrats who largely determined
their fate.
I decided to combine my time in Arkhangel’sk with a side trip to the notorious Solovetskii
Islands, the site of a 15th-century monastery turned into a concentration
camp in the 1920s and 1930s. Some of the peasants repressed under Stalin
had ended up there. The Islands were both beautiful and terrible. Pristine,
ornamented with a dilapidated, but magnificent monastery, and situated
in the middle of an archipelago, the Islands are known for their natural
splendor, especially in summer and autumn. A small population of about
1,000 people live there, mostly engaged in one way or another in the tourist
business, such as it is (I didn’t see any other tourists that weekend).
But such desolation I had never seen before in my twenty odd (and often
very odd) years of travel in the former Soviet Union and Russia.
Farm animals seemed to have taken over the center of the island: goats
in little packs scurrying away from people, cows wearing bells and eyeing
me, I thought, aggressively, and dogs wildly chasing motorcycles, the main
means of transportation in the village. Everything was gray, damp, muddy,
dirty, and in a state of disrepair.
The “hotel” was a small, two-story wooden structure, largely unfinished on
the outside, but homey and comfortable (though bloody cold) on the inside.
To convey the “exoticism” of the Solovetskii Islands, they had planted
in the hotel courtyard a 1937 vintage “Black Maria,” the car that the secret
police used in the dead of night to take away newly arrested people in
the towns. My hosts, like all Russian hosts, were warm, generous, and welcoming.
I toured the monastery, visited a museum exhibition on the early Soviet
concentration camp, and later traveled to a more distant concentration
camp, notorious for the grand, outdoor staircase, down which prisoners
with manacled wrists would be thrown by debauched guards. The visit to
Solovki, as it is known familiarly, was sobering and the perfect preface
to my archival work.
Back in Arkhangel’sk, on the morning of my first work day, I called the director
of the archives of the former Northern Regional Committee of the Communist
Party. The director was on otpusk, that is, on vacation. The deputy
director agreed to see me. She kindly explained how to reach the archive
from my hotel, mischievously noting that they “worked behind Lenin’s back.”
Not quite understanding that last comment, I walked from my hotel, crossed
the square where indeed Lenin still stood with his back to the town hall,
and walked up the steps to the archives. The deputy director couldn’t have
been more gracious. As luck had it, a lovely, intelligent woman in charge
of the publication office had been working on my topic, with a focus on
the tens of thousands of Ukrainian peasants who were deported to the North.
She shared with me her findings, relevant archival inventories, and document
folders in a most collegial way. Later, we had several opportunities to
chat over tea, and she told me about her own Ukrainian roots and the reasons
for her interest in the subject.
From the first hours of research, I was transported into a world of almost indescribable
horror. I read of the Regional Party Committee’s plans to feed the exiles’
families (including some 88,000 children) according to “starvation norms.”
The majority of locations designated for settlement were completely uncleared
marsh or forest lands, accessible only by river or by foot and then only
for a part of the year. While awaiting transfer into the interior, people
were crowded into barracks or nationalized churches, with space per individual
calculated as “smaller than a grave” and the temperature not above four
degrees. I read about the “colossal death rate” of the children and the
epidemics of typhus that raged through the exile population. I read the
desperate letters home written by young male exiles sent ahead into the
interior to build the “special villages.” One young man wrote, “It is impossible
to describe life here, they treat us worse than cattle . . . . They don’t
even give us water, not to mention food . . . .” Another wrote, “We live
very poorly, there is nothing, each day we expect death. Daily 20 to 30
to 40 people die.” During the 1932–33 famine there were cases of cannibalism.
Drunken village commandants regularly beat and tortured exiles, confining them
in cold cellars for days at a time. They also embezzled state funds intended
for the exiles and raped exile women. I read about the homes constructed
for exile children whose parents had died of disease and exhaustion, about
the high death rates in those homes, about state inspectors finding children’s
corpses hidden in barns. There were secret deals with cemetery caretakers
to bury the children unofficially so the home directors could continue
to receive funding for these little “dead souls.”
At the end of my time in Arkhangel’sk—and with plans for a return visit in
the year 2000—I had a tour of the city. My driver, a new English-speaking
guide (with whom I insisted on speaking Russian), and a young man anxious
to discuss history and try out his English took me on the worst roads I
have seen in any Russian city. I clandestinely swallowed Gravol pills,
hoping to avoid “unpleasantness.” The car swerved from one side of the
road to the other to avoid potholes (fortunately there was not much traffic
in Arkhangel’sk). While I listened to the English-speaking guide minutely
and painfully describe Arkhangel’sk’s history, economy, and especially
factories (a holdover from the days of the workers’ paradise), I surveyed
the ruins of what had been a thoroughly sovietized city. Gone were the
quaint wooden structures that so gracefully lined the streets of Vologda.
In their stead were erected the ugly, poorly constructed high-rise apartment
buildings of the Brezhnev era, now woefully unfit for habitation. But therein
live the people of Arkhangel’sk, no doubt many of them descendants of my Ukrainian
and Great Russian “special settlers,” who continue to make the best of
conditions that they know are deplorable, and most often with a sense of
humor, a level of civility, and a graciousness extraordinary for their
surroundings.
Lynne Viola is professor of history at the University of Toronto. Her most recent
book is Peasant Rebels Under Stalin: Collectivization and the Culture of
Peasant Resistance (Oxford University Press, 1996).
European Integration: An Event for Reflection
by George Ross
On January 1, 2002, citizens in twelve European Union societies cashed in
their drachmas, deutschemarks, francs, and florins for new Euro notes and
coins. This was the final step in the most significant creation of a new
currency zone since the coming of the dollar in the U.S. The event reminds
us that European Integration deserves its place among the major historical
processes of the 20th century. Today the European Union (EU) is a unified
economic space that rivals the U.S. in GDP, productivity and innovative
capacity.[1]
The Europe that invented the Westphalian state system and then explored
its most perverse consequences is now peaceful and the EU is even moving
tentatively toward a common foreign and defense policy. The EU remains
an “unidentified flying political object,” as Jacques Delors has often
called it, a unique political system which, based on nation states, includes
dense confederal arrangements, several genuinely federalized policy areas,
transnational institutions that can decide important issues, and a juridical
system whose rulings bind member states. How can we characterize this great
process of integration? And, since it is far from finished, what can we
expect next?
ORIGINS
Europe was clearly in need of a new beginning after World War II. The predominant
power of the United States over Western Europe and American redefinitions
of national interests for the Cold War were clearly central. The defeat
of the Axis powers had left Western Europe broke, destroyed, vulnerable,
divided, and partly occupied. Indeed, American military power in Western
Europe after 1945 temporarily removed military options from the hands of
Western European leaders. The power of the United States in the Cold War
and Europe’s need to reconstruct were what established new incentives for
Western European nations both to commit to democracy and to reconsider
their relationships with one another.
The first step came in 1950, when Jean Monnet, the post-war French economic
planner and consummate international “fixer,” persuaded French foreign
minister Robert Schuman to propose a “common market” in coal and steel
in response to American insistence on rehabilitating Germany. The pressing
choice for France and Germany, the central players, was between resolving
post-war differences on their own terms or having the Americans do it for
them. The result was the European Coal and Steel Community (ECSC) established
in 1951, composed of the six members (France, Germany, Italy, Belgium,
the Netherlands, and Luxembourg) who would later form the European Economic
Community (EEC). The British, determined to sustain the Commonwealth, stayed
out. The ECSC was an institutional pioneer for what followed, with a “High
Authority” of appointed officials wielding considerable supranational power,
checked mainly by a “Council of Ministers” from the member states. After
the 1954 defeat of the European Defense Community, another Monnet project,
new talks, pushed forward by Benelux initiatives, eventuated in the 1957
Treaty of Rome, which founded the European Economic Community (EEC) and
Euratom (the European Atomic Energy Community, Monnet’s contribution to this
round).[2]
The Rome EEC treaty, signed by the ECSC six, absent the British again, outlined
a twelve-year period to establish a customs-free internal market surrounded
by a single external tariff, accompanied by a Common Agricultural Policy
(CAP) to promote agricultural modernization (insisted upon by the French)
and several other common policies. Implementation would start in Brussels,
designated as the EEC’s administrative center, by an appointed supranational
commission with the job of proposing and implementing legislation and enforcing
the treaties. The new EEC’s institutional structure, which resembled that
of the ECSC, had a legislative arm in its Council of Ministers. There was
also a European Court of Justice (ECJ) in Luxembourg to build up a body
of European law binding on member states. Finally, there was a weak parliamentary
assembly whose members were appointed by member state governments—today’s
more serious European Parliament is a much more recent creation.
The goals for the first period were reached earlier than the treaty had envisaged,
but not without major problems. The British, who founded the European Free
Trade Area (EFTA) in 1960 as an explicitly non-supranational competitor
to the EEC, very quickly realized that the EEC was going to work and that
they needed to be inside it. French president de Gaulle then vetoed British
applications first in 1963 and again in 1967, denouncing the British as
insufficiently “European” and too closely tied to the United States. In
geopolitical terms, de Gaulle wanted the EEC to become a “third force”
in international affairs, a pole between the two Cold War superpowers.
Then in 1965, de Gaulle faced down the fledgling European Commission and
other member states in the “empty chair” crisis, bringing EEC business
to a halt for six months. Here the issues were institutional. The Rome
Treaty had projected eventual Council of Ministers’ decision-making by
a “qualified majority” of votes (with votes weighted to reflect the size
of member states). French obduracy instead led to the 1966 “Luxembourg
compromise” in which any member state could veto a proposal that it judged
contrary to its basic interests.
This new intergovernmentalist equilibrium reflected deeper realities. The EEC
was primarily a “Common Market” between quite separate national economies.
EEC member states, for the most part, were pursuing particular national
economic strategies. The EEC provided them new outlets for trade, some
protection from the outside world, in particular from a U.S. pressing constantly
for trade liberalization, and a painless way of modernizing agriculture
through the CAP. Further supranationalization might have deprived EEC member
states of the economic tools to steer their national economies.
CRISIS AND RENEWAL
At first, the 1970s brought new energy. The European Parliament gained budgetary
powers and plans were drawn up for “economic and monetary union” (the Werner
Report of 1970), an EEC-wide social policy, and a more ambitious regional
development policy. Moreover, the EEC enlarged in 1973 to include Great
Britain, Ireland, and Denmark (there would have been four had
the Norwegians not voted against it in a referendum). The new energy did
not produce much, however, because the EEC’s economic situation changed
abruptly in the mid-1970s. Chronic inflation, compounded by the oil shocks
of 1973 and 1979, was accompanied by rising unemployment. Quite as important,
the American abandonment of the Bretton Woods international monetary system
fed international currency instability, which threatened the Common Market.
Significant problems then flowed from the divergent economic choices of
EEC members in response to these new economic challenges. One symptom was
a sauve qui peut rise in non-tariff barriers in intra-EEC trade.
Finally, integrating the new members proved difficult, particularly because
the British, who had struck a bad bargain to get in, obstinately demanded
renegotiation to the point of blocking the Brussels machinery cold.
The great experiment could have ended at this point. Monetary issues, more
than anything else, probably prevented this from happening. Monetary instability
threatened the entire Common Market, but it took collaboration between
German chancellor Helmut Schmidt and French president Giscard d’Estaing
to found a new European Monetary System (EMS). The currencies of countries
choosing to belong to the “exchange rate mechanism” (ERM) of EMS were kept
within “bands” around specific valuation formulae. EMS
was initially constructed around equivocation between France and Germany
about ultimate goals, however. The Germans, with a tough, price-stability-oriented
independent Bundesbank, wanted the ERM and EMS to follow Bundesbank inclinations.
The French, inflation-prone and wont to use devaluation rather than deflation
as a tool, hoped to use EMS to soften the Germans.
The early 1980s were a low point. Member states could not agree on much of
anything, and as “Europessimism” spread, unresolved problems piled up.
A major recession began in 1979, stretching well into the 1980s. After
elections brought the Left to power in 1981, the French tried to respond
to this with a new program of national economic voluntarism and redistributive
reform that was at odds with the policies of other EEC members. The monetary
effects of this ultimately tested the equivocation behind EMS and created
the conditions for reenergizing integration.
Difficulties in implementing the new Left program meant that the French needed to devalue
(they did so three times in two years, in fact). Each time they asked to
do so within the ERM, however, the Germans stipulated strong new conditions
for French economic policy to the point where finally in 1983 the French
faced a choice between continuing their nationalist reformism or staying
in EMS. The first choice might have ended European integration, or at least
blocked it for an indefinite future. Rather than deal a potentially fatal
blow to European integration, French president Mitterrand reversed French
policies. EMS survived while the French joined others in tough policies
to combat inflation. The French choice in the mid-1980s turned out to be
as much for Europe as for a changed economic policy, as Mitterrand set out
to settle many of the big issues on the EU’s table. A solution was quickly
brokered for the “British check” problem, stalled negotiations about Spanish
and Portuguese membership were renewed, leading to their accession in 1986,
and a new Commission president, Jacques Delors, was appointed.
The Delors Commission set immediately to work with a White Paper to “complete
the Single Market” by the end of 1992, setting out several hundred measures
to make a single integrated European economy out of the interconnected
national economies of the Common Market. Agreement, facilitated by desires
for market liberalization among major EU members, led quickly to a multilateral
“Intergovernmental Conference” (IGC) to change the existing treaties to
facilitate implementation of the new program. The Single European Act (SEA,
ratified in 1987) was to be the first in a series of revisions to the European
“constitution” (i.e., its treaties). It brought “qualified majority” voting
on Single Market programs, amending powers to the European Parliament (which
had been elected by universal suffrage since 1979), new environmental policies,
a more coordinated European research and development program, regional
redistribution, and more social policy.
The next large step was Economic and Monetary Union (EMU), again connected
to the workings of EMS. The French felt victimized again by German hard-nosed
dealings in new currency instability in the mid-1980s and, in response,
proposed the establishment of EMU to dilute German monetary power in transnational
arrangements. The “Delors Report” on EMU was approved in 1989, and two
new IGCs were scheduled to begin in late 1990. The first was to adapt
existing treaties for EMU and the second, a German suggestion, was to discuss
“political union” focusing on democratic accountability and responsibility,
the powers of the European Parliament in particular, and foreign policy
cooperation. The Maastricht Treaty on European Union, ratified in 1993,
was the result, setting out daring new priorities. EU member states agreed
to pool sovereignty over monetary policy in an independent European Central
Bank (ECB) committed to price stability and a single European currency,
later named the Euro, by 1999. Maastricht also set the EU toward establishing
a Common Foreign and Security Policy (CFSP), common policies in matters
of Justice and Home Affairs (largely police matters), and greatly increased
the powers of the European Parliament by allowing “codecision” with the
Council of Ministers.
Yet another IGC, to review the workings of Maastricht, produced the Amsterdam
Treaty. Signed in 1997, it further extended and simplified “codecision,”
added new clauses on social policy, modified procedures for the CFSP, moved
parts of Maastricht’s intergovernmental “third pillar” on Justice and Home
Affairs into the Community’s “first pillar,” and inserted new provisions
on flexible participation by member states in EU matters. A fourth IGC
in 2000, focused on adapting EU institutions in the light of pending enlargement
to the east, created the minimalist Nice Treaty. Such an accumulation of
multilateral conferences to rewrite the EU treaty base was no accident.
Europe has clearly engaged in a major constitution-writing exercise that
will stretch well into the new millennium.
THE NEW MILLENNIUM
European integration is one of the great political success stories of the second
half of the 20th century. While European integration may have succeeded,
it is far from concluded. Indeed, the EU’s recent mutations have created
new situations in which the EU must decide anew whether to continue to
grow and change. In the new century Europeans cannot avoid addressing the
really hard questions about what they have done. “Where is the EU going?”
“What is it for?” “Who is it for?”
The introduction of Euro notes and coins in January 2002 is an appropriate
memorial to 50 years of work, but it also points to perplexing questions
that will need future resolution. EMU has “Europeanized” monetary policy
in a European Central Bank while leaving EU member states in charge of
their own macroeconomic and fiscal policies, creating serious coordination
issues that could lead to sub-optimal outcomes and unpleasant policy competition,
in particular “tax dumping.” In time, will EMU’s “one-size-fits-all” monetary
policy enhance broader welfare or privilege certain regions at the expense
of others? Next, will the ECB’s pursuit of price stability strangle European
growth and push up already high unemployment? Does the weakness of the
Euro vs. the dollar presage international differences between the EU and
U.S.? Why are key EU member states so reluctant to create an open European
capital market to complement the single currency? Finally, where will the
Central and Eastern European new members of the EU fit in EMU?
One neglected dimension of EU history is its role as a magnet for aspiring
new democracies in Europe. In the 1980s, three former authoritarian countries
(Spain, Portugal, and Greece) consolidated democratic polities and modernized
economically with EU help and inspiration. Ireland, another recent member,
has been changed economically and socially beyond recognition. Now EU Europe
is about to undertake the biggest experiment in fostering democratization
it has ever faced. It is not surprising that such an experiment also raises
many new questions.
Ten Central and Eastern European countries (plus Cyprus) are poised to join
the EU, probably in 2004–2005, and several more are in the waiting room.
It is not easy to join the EU. Applicants are only considered if they have
functioning market economies with democratic political systems following
a rule of law; they are admitted only when they have fully conformed to
what is called the acquis communautaire, an 80,000-page compendium
of all of the EU’s existing laws, rules, regulations, and procedures. EU
membership could thereby ensure and consolidate democracy in many countries
where it historically has never existed. Things could go wrong, however.
The EU’s method of inducting new members could come to be seen as quasi-colonization,
for example. A new enlargement to countries much poorer than those in the
EU core could establish an enduring area of relative underdevelopment next
to the rich West unless existing EU members find new ways to redistribute
resources. Unstable politics and low social policy standards could feed
back to the West. Problems with ethnic minorities in the new member states
could explode.
The
EU, which is rapidly becoming a political union, is also an actor of
growing importance in international affairs, almost despite itself. The
EU is already a co-equal player with the U.S. in the politics of international
trade, particularly in the World Trade Organization. Moreover, as the EU
grows, its international interests are bound to cease being strictly regional.
Enlargement will create a very long and much more vulnerable eastern border
to manage. The entire Mediterranean region, including the Middle East,
is central for the EU’s future. In the medium term, all of this implies
a changing balance between the EU and U.S. But does the EU really want
to have a focused foreign and defense policy? How strong is Europe’s political
will in this area? Can Europe become a more serious partner of the United
States through NATO, and what will this do to transatlantic relations? As
the Euro makes the EU a monetary and financial rival to the U.S., will
this create more challenges for transatlantic relations?
Finally,
and perhaps most important, the EU faces significant institutional questions.
First off, its existing institutional makeup may prove inadequate, even
paralyzing, when membership rises from the current 15 to 27 over the next
few years. This is a serious matter, since the future efficacy of European
institutions will determine what an enlarged EU can actually do. It is
hard to be optimistic here, since in the repeated multilateral conferences
of recent years member states have proven remarkably reluctant to consent
to the kinds of institutional changes that might make the future work better.
A lot is at stake, and very little has been done.
The
institutional issue goes much deeper, however. Over the years member states
have agreed to pool significant dimensions of sovereignty to enhance economic
success and the general welfare. In doing so, however, they have also created
a Euro-level political system that poses problems for citizen scrutiny,
control, and exercise of preferences over decisions. The Rome Treaty established
a system with a distinct lack of transparency and direct political responsibility.
The European Commission, with its formal monopoly on legislative proposals, is appointed rather than elected.
The Council of Ministers, the EU’s co-legislature, and the European Council,
the EU’s strategic guide, are intergovernmental and have always functioned
behind screens of diplomatic secrecy. The European Parliament, despite
its growing power, remains an odd body to which no government is responsible
and which has no majority or opposition, making for very foggy debates
and communications with European peoples. European law is juridically superior
to national law, but the origin and nature of European law is badly understood
by citizens, and the workings of the ECJ are difficult to understand. Finally,
all of these institutions work with a baffling multiplicity of decisionmaking
processes which only specialists can really follow.
Real
politics in Europe remains national politics, and there exists as yet little
European political culture. Indeed, national parliamentary discussions
rarely feature European issues, and elections to the European Parliament
remain tightly linked to national political debates. With notable exceptions
(like Denmark) most national parties and interest groups have only begun
to embrace European matters. One could fault member state leaders, analyze
the reasons for their behaviors, and investigate the institutional and
other incentives at the national level that encourage such practices. But
whatever this might turn up, the gap between the thickness of national
democratic deliberative practices and the thinness of these practices at
the European level is clear.
Since
1985, substantial national state capacities and control have been transferred
away from EU member states. In some areas capacities have been relocated
from familiar national democratic places to less familiar and less democratic
transnational ones, shining new spotlights on the insufficiencies of Brussels
institutions. In addition, important matters that had been debated
publicly and decided democratically at the national level have been shifted
to the market. The story gets even more complicated. Decisions producing
this re-localization of state capacities have often been made by European-level
methods whose relationships to democracy are murky. The European arena
presents European political elites with a place where politically risky
medium-term policies can be set out far more easily than at national levels.
The consequences of such Euro-level decisions targeted on large problems
and proposing medium-range remedies come to constrain national democratic
choices at a later point in time. The effect of this is that the European
political arena, with its perceived democratic deficiencies, effectively
structures many options in national polities before these polities have
had a chance to deliberate and decide.
The
ways in which democratic representation and accountability work in EU member
states vary, but are sufficiently well understood by citizens to allow
the legitimation of national authority. The same cannot be said for the
EU, which is a promiscuous mixture of different types of representation. National
political elites are represented in the workings of the Council of Ministers
and the European Council, but given the intergovernmental scope of these
institutions, they can skirt accountability in many ways, diluting the
importance of their provenance in national elections. The Commission is
not supposed to represent any particular Europeans, but to serve “Europe.”
The Parliament does not yet engage the attention of those who elect it.
MEPs are elected, for the most part, because of the national positions
of their parties, not because of voters’ acquaintance with European matters.
The European Parliament may do good work analyzing and scrutinizing proposals
sent to it, but very few people know about this work and even fewer understand
where it fits politically.
A ramshackle
EU institutional complex makes democratic responsibility and accountability
more difficult than it needs to be. But it is not at all obvious what to
change and how to change it. Moreover, building real transnational
democratic politics is a dramatically new problem that must take place
against the background of changing national democratic politics. Politicians
and leaders are faced with a daunting choice between confronting the EU’s
democratic dilemma at the potential cost of disrupting integration or moving
forward with integration and hoping for the best.
These
are big issues. Those who have built Europe are democrats and people of
the law, but existing European institutions do not facilitate the clarification
of issues of democratic responsibility and accountability. The EU is aware
of these problems. Indeed, it is just beginning a major transnational debate
about what to do about them which will culminate in 2004 in new intergovernmental
negotiations to reshape European institutions. Major European leaders have
already staked out very different positions. German foreign minister Joschka
Fischer wants the EU to become more clearly federal, along German lines.
French president Jacques Chirac wants the EU to become more clearly confederal,
with future leadership to come from a vanguard of large Western European
EU members. British prime minister Tony Blair wants the EU to become much
less supranational and more effective at solving concrete economic problems.
EU-watching over the next period should be fascinating. Historians are
not used to deconstructing historical turning points as they actually occur,
but there is no reason why they cannot start doing so now.
George
Ross is Morris Hillquit Professor of Labor and Social Thought and director
of the Center for German and European Studies, Brandeis University. With
Andrew Martin, he is editor of The Brave New World of European Labor: European
Trade Unions at the Millennium (Berghahn Books, 1999).
[1]
The EU has been the name of “Europe” only since 1993. Before that it was
called the EEC and then the EC.
[2] Monnet
thought that atomic power would be the key energy source for future European
growth, hence Euratom, modeled on ECSC. He seems to have misunderstood
that cheap oil had already begun to serve this function.
Monticello,
the Usonian House, and Levittown: The Making of the Modern American Home
by
Alexander O. Boulton
The
search for the origins of the modern American suburban home often starts
in Levittown, the planned community on Long Island built by William and
Alfred Levitt after World War II. Both as a work of architecture and as
a reflection of who we are as Americans, however, the modern American suburban
home’s real origins can be traced back to the nation’s founding and the
architecture of Thomas Jefferson’s Monticello. William and Alfred
Levitt are not generally known as great architects. The tract housing
suburban developments that they built and named after themselves in Long
Island, New York, and in Pennsylvania, have long been scorned by architects,
urban planners, and social critics. The name Levittown has become virtually
synonymous with all the worst features of Cold War America. The lack of
aesthetic appeal is only one of a number of criticisms that includes conformity,
racism, and gender inequality.
Nevertheless,
the Levitt brothers helped launch a revolution in American life. The development
of the modern American suburban home was largely the result of a sudden
release of pent-up consumer demand following the Great Depression and World War II.
During the 1930s and early 1940s, the housing industry, like other industries,
had stalled. When GIs returned from the war and sought new homes for their
baby boomer families, they hoped to escape from both the poverty and the
uncertainty that had marked the preceding years. They led one of the greatest
migrations in American history, fleeing from the cities in search of their
own small plot of land and, with it, their place in the American Dream.
The
Levitt brothers were there to give them exactly what they wanted.
Using assembly line techniques, which they had learned from building housing
for the military during the war, they were able to manufacture dozens of
houses a day, transforming farmland and forests into suburban tracts. The
boom in housing stimulated all sectors of the economy. In the years after
World War II America led the world in the construction of automobiles,
highways, shopping malls, and all the consumer goods—vacuum cleaners, washing
machines, lawn mowers, barbecue grills, and televisions—that came to symbolize
the good life.
The
new housing developments built by the Levitts and their many imitators
helped to shape new social patterns that came to define the American way
of life. For many, the American family reached its apex of perfection in
the 1950s. It was the heyday of the single wage-earner, nuclear family,
immortalized in a new American art form, the television situation comedy.
The new American suburban middle class emerged as a potent political force
as well; by the end of the century the suburban “soccer mom” became the
constituent most courted by both political parties.
However,
some social critics were not so sanguine. As Americans fled the cities,
they became, the critics said, not only physically, but also emotionally
distant from their fellow citizens. As federal funds shifted from the cities
to the suburbs, many urban areas went into decline. Crime and drugs proliferated
in the cities and a new form of intolerance, hiding behind the face of
apathy, emerged in the suburbs; black Americans were excluded by racial
covenants and by lending institutions from purchasing houses in Levittown
and other suburbs. In addition, feminists accused the new housing patterns
of creating virtual prison cells for suburban housewives who were physically
removed from each other and the life of the nation. Environmentalists,
at the same time, complained that the new developments wasted natural resources
and that an increasing dependence on the automobile polluted the environment.
The
Levitts, of course, were not responsible for creating all of the promises
and problems of modern America. They only occupy a prominent place in a
long line of historical developments that reach back to the Founding Fathers.
The critical link between Levittown and the era of the nation’s founding
is perhaps Frank Lloyd Wright. Sometimes identified as America’s greatest
architect, Wright is probably most famous for building the great show homes
of wealthy businessmen such as Edgar Kaufmann’s Fallingwater in Pennsylvania,
and the house for Frederick Robie in Chicago, or for public buildings such
as the Guggenheim Museum in New York City. Yet Wright’s most influential
buildings were the small, relatively inexpensive houses that he began to
design as a response to the Great Depression. He called these “Usonian
houses,” a play on the words “us” and “U.S.” These houses incorporated
many of the features of his more ambitious and expensive homes, but brought
them into the financial reach of the average American.
All
of Wright’s buildings reflected his philosophy of “organic architecture.”
They “grew,” he argued, like living plants, in conformity to natural laws,
local conditions, and practical needs. His organic architecture was a total
rejection of the architectural styles of his day. He was determined to
“break the box,” to escape from the confinement of formal rules and structure.
The major elements of his buildings were not the walls that superficially
supported the structures, but the spaces that gave them meaning. His houses
reached out from, and were largely supported by, a central fireplace and
chimneystack. Wright saw the hearth as the emotional as well as the physical
center of his houses. It symbolized for him the emotional core of the family.
Despite his own tumultuous family life, Wright’s architecture celebrated
a romantic image of the family as a nurturing haven for the individual
in his struggle against a hostile world.
Alfred
Levitt, the architect in the family (his brother Bill was the salesman),
was strongly influenced by Frank Lloyd Wright. In 1937, Alfred Levitt left
his job and spent every day of the next six months observing the construction
of one of Wright’s “Usonian” houses in Great Neck, New York. In designing
his homes for Levittown, Alfred Levitt copied many design features that
Wright had pioneered, and he was also influenced by the larger context
of ideas out of which they had developed. Despite the fact that most early
Levittown houses were designed to look like 17th-century Cape Cod cottages
with steep roofs covering the one-and-a-half-story structures, they were
thoroughly modern. Levittown houses, like Wright’s Usonian houses, had
no basements, and were heated by copper coils embedded in the concrete
slab floors. They had large picture windows and carports. Built-in cabinets
and bookcases and swinging shelves marked the flexible boundaries between
rooms. Levitt, like Wright, did away with the dining room, merging the
functions of the kitchen and the living room. Like Wright’s Usonian homes,
Levitt’s houses had a practical traffic flow that centered on the hearth.
More
importantly, perhaps, Levitt’s customers, like Wright’s, were a new incarnation
of the American middle class. Newly wealthy, impatient with old forms of
authority, they were being pulled into an increasingly complex and uncertain
world, and they found the meaning in their lives increasingly in the affections
of their families.
Both
Wright’s and the Levitts’ architecture reflected a transformation of the
traditional divisions of labor both within the home and in the outside
world. Wright’s architecture to a large extent was made possible by, and
reflected, the decline of live-in servants, who now generally lived in
tenements reached by streetcars. There was no longer a division between
the main family, who inhabited the middle floors, and the upstairs and downstairs
worlds of servants. In Levittown, household work was done entirely by the
woman of the house, aided by vacuum cleaners, washing machines, and mass-produced
“TV dinners.” In both cases, technology helped, and gave a reason for,
the family to close in on itself as a safe refuge from a hostile world.
Yet
America’s unique marriage of architecture and an emotionally charged vision
of the nuclear family predates Frank Lloyd Wright. In any list of America’s
most influential builders we should include the name of Thomas Jefferson
with those of Alfred Levitt and Frank Lloyd Wright. Not everyone would
agree with this evaluation of Jefferson. To most architectural historians,
the architecture of Jefferson’s home at Monticello, of the Virginia State
Capitol, and of the buildings he designed at the University of Virginia,
is highly derivative. Jefferson’s version of Roman neo-classicism, copied
from 18th-century architecture books, and inspired by the buildings he
saw in France, many would say, only represents a highly idiosyncratic dead
end in American architectural history.
Yet
Thomas Jefferson’s Monticello looks not only backward toward the past but
forward into America’s future. Like Levittown’s Cape Cods, its historical
external form houses a new system of internal functions. Monticello represents
a new balance between architectural form, technology, and social/labor
patterns. The name itself, Monticello, or “little mountain,” suggests its
unique status and Jefferson’s radical intentions. Building on a mountaintop
in the 18th century was completely impractical. Jefferson’s house was far
from the fields and docks upon which the fortunes of colonial Virginians
depended. Monticello foreshadowed the 19th-century romantics’ idealization
of nature. Jefferson exulted in Monticello’s location: “How sublime to
look down into the workhouse of nature, to see her clouds, hail, snow,
rain, thunder, all fabricated at our feet” (Jefferson to Maria Cosway,
October 12, 1786).
Jefferson’s
famous labor-saving devices at Monticello were, more importantly, labor-hiding
devices. Every aspect of his house revolved around this impetus. Throughout
Jefferson’s life, housing for all except the domestic workers and craftsmen
was moved down the hill from the main house or hidden by fences. The house
itself is situated atop an underground passage that connects it to work
and storage areas, the kitchen, the stables, and slave quarters. The traffic
patterns in the house restrict the flow of visitors from the more private
sections of the house. Unlike the great mansions of Virginia’s colonial
aristocracy in which there were few entirely private areas, and where family,
guests, and servants intermingled, at Monticello public, private, and work
spaces are subtly but clearly delineated. Jefferson’s library and bedchambers
were known as his sanctum sanctorum, where only special visitors
were invited. Lilliputian stairs allowed family members to retreat to private
rooms away from downstairs guests. Dumbwaiters at one end of the house
transferred chamber pots from Jefferson’s bedchamber to underground tunnels,
while at the other end they sent bottles of wine to guests in the dining
room. Rotating shelves on doors allowed guests at dinner to be served without
slaves constantly entering and leaving the room.
The
complex traffic patterns of Monticello went hand in hand with a complex
geometry, uncommon to houses of the period. The octagonal bays displayed
Jefferson’s love of obtuse angles, his own way of “breaking the box” of
18th-century formalism. Each of the rooms varied in size and height, juxtaposing
open and closed, high and low, and light and dark spaces.
Monticello
was physically removed from society and visually removed from labor, but
the impression that Jefferson hoped to convey was that of a house in harmony
with nature. It celebrated its natural surroundings, seeming to work not
by human labor, but by natural laws. It also served as Jefferson’s emotional
escape from the political world that he hated. In a typical letter, Jefferson
wrote to his daughter Martha while he was serving as vice president in
Philadelphia: “When I look to the ineffable pleasures of my family society,
I become more and more disgusted with the jealousies, the hatred, and the
rancorous and malignant passions of this scene, and lament my having ever
again been drawn into public view” (Jefferson to Martha Jefferson Randolph,
June 8, 1797). In another he wrote: “Worn down here [in Philadelphia] with
pursuits in which I take no delight, surrounded by enemies and spies, catching
and perverting every word which falls from my lips or flows from my pen,
and inventing where facts fail them, I pant for that society where all
is peace and harmony, where we love and are loved by every object we see”
(Jefferson to Martha Jefferson Randolph, February 5, 1801).
Monticello
was Thomas Jefferson’s other “declaration of independence,” celebrating
the individual in harmony with the laws of nature, as well as a new emphasis
on the family. As such it was less a display of 18th-century rationalism,
and more suggestive of the emotionalism of 19th-century romanticism.
Jefferson’s
home should be compared to Wright’s Fallingwater and Usonian houses, and
to Levittown. Unlike the architectural monuments of the past, these are
not towering palaces. They are essentially horizontal and internally complex.
They seek a harmony with natural laws, and at the same time they reflect
an American search for self-identity, not in the larger world, but in the
embrace of the family. They are the homes of individualists who are strangely
separate from the chain of cause and effect that lies between labor and
wealth. Monticello is not only America’s first modern, middle-class, suburban
home, it is also one of the earliest answers to Hector St. John de Crèvecoeur’s
question “Who is this new man? This American?”
Alexander
O. Boulton is assistant professor of history at Villa Julie College and
the author of “The American Paradox: Jeffersonian Equality and Racial Science,”
American Quarterly 47 (September 1995): 467–492.
DISPATCH
FROM THE UNITED KINGDOM
The
Present Emergency
by
Jeremy Black
The
nature of the present crisis is such that the perspective I am offering
may swiftly seem redundant. At present, there have been no bin Laden attacks
in Britain, but a “Real IRA” car bomb last night in Birmingham is a pointed
reminder of our vulnerability. All historians, however, write against the
background of change, and it is therefore worth taking up the invitation
to do so. It is clear to me from ten days spent in the U.S. in late October
and early November that the complexity of the “British” response (if anyone
can thus simplify the attitudes of a large population) is not appreciated.
This complexity has played out in the remarks of commentators, including
historians, as well as in polls. To characterize it all too crudely, there
has been a vast amount of affectionate sympathy for Americans along with
a certain degree of criticism of America. The horror of the acts of September
11 was imprinted firmly through television, radio, and the press, and there
has been a widespread understanding of the urgency of a response.
British
commentators have also offered a wealth of information on a range of related
topics, from the history of Afghanistan to the problems of defining terrorism.
Prominent historians who have found themselves called on include Michael
Howard, pressing for a judicious response and underlining the difficulty
of the task; Felipe Fernandez-Armesto, writing on the nature of Islam;
and John Keegan, using his position as defense correspondent for the most
conservative of the newspapers, the Daily Telegraph, to urge a vigorous
military response including the dispatch of cruise missiles against those
who send encrypted messages through the Internet.
Compared
with American newspapers, the British press has devoted more
space to the political background and dimension of the struggle. The British
experience also provides a different context for understanding terrorism.
Although there has been no single act of horror to compare with September
11 (the IRA fortunately failed when they tried to destroy the British version
of the World Trade Center—the far uglier Canary Wharf), the population
of Northern Ireland proportionately has taken far heavier casualties from
terrorism than that of the U.S. so far; and, of course, much of the IRA
terrorism was funded by American sympathizers. Simon Jenkins pointedly
asked in the Times where the Americans were when the IRA nearly
destroyed the government in the 1984 Brighton bombing. The sense that Americans
are being introduced to the real world is present in some of the commentary.
There are, of course, differences, not least the greater global range and
ambition of the bin Laden organization.
Much
of the British commentary has been about the need for a political as well
as a military strategy to confront the threat. Destroying bin Laden will
only profit us so much if another radical Islamic organization arises determined
to repeat what has hitherto been an act that has reaped many of the consequences
its progenitors presumably sought. Furthermore, as is very clear, it could
get much worse if the full repertoire of scientific weaponry is brought
into play, and also if states such as Egypt fall into hostile hands. A
political strategy, however, requires a reexamination of American policies
in the Middle East that may well be impossible for American politicians
and policymakers. If so, this violence is likely to recur. There is no
inherent reason why Islamic society should be anti-American, and there
is more in common between Islam and the U.S., with its powerful affirmation
of religious values, than between Islam and China, Russia, or Europe. However, both because
of pronounced support for Israel since the 1960s and due to a failure to
engage with developing trends in Islamic politics from that period (both,
in part, unfortunate consequences of the focus on Vietnam and the habitual
confusion of means and ends), the situation is now very dangerous. Societies
with rapidly growing, youthful populations, centered on volatile urban
communities, are defining themselves in a way that can only partly be compensated
for by advanced weaponry. Indeed, a very senior British historian pointed
out to me recently that Britain itself has not faced a comparable challenge
from a large section of the population not sharing common values since
the Catholics under Elizabeth I; such a comparison reveals the advantages
of politicians with Oxbridge history degrees. The troubling persistence
of anti-Americanism in parts of the Islamic world will require a thoughtful
and long-term political response that will contribute to the defense of
America and, with it, the free world.
Historians
will engage with the issues of the moment as citizens and commentators.
More than most they need to consider their comments carefully, as their
accounts of the past will be seen as carrying particular weight. Historians
need to be careful when drawing conclusions about the effectiveness of
individual strategies. For example, Victor Hanson in the last issue of
Historically Speaking argues that it will be possible to “dismantle
the very foundations of Islamic fundamentalism,” a bold claim that is somewhat
compromised by his use of evidence. Pace Hanson, “civic militarism”
is not “a trademark of Western militaries,” and the move away from conscription
has made this clear. Furthermore, he neglects the failures of European
imperial powers in the 20th century, which would have thrown into doubt
his claim that “the foundations of Western culture . . . when applied to
the battlefield have always resulted in absolute carnage for their adversaries.”
Those who suffered from Western imperialism may also be surprised to
know that constitutional government is a foundation of Western culture.
There are also worrying lapses of detail. Lepanto is presented as an unproblematic
victory; Hanson ignores the swift rebuilding of the Turkish navy, Venice’s
rapid move to abandon her allies, the Turkish reconquest of Tunis, and
the Turkish retention of Cyprus, the cause of the conflict, for over three
centuries. He also neglects the role of local allies in enabling Cortés
to overthrow the Aztecs.
If
the use of historical evidence to provide rapid support for policy advice
is all too easy in a crisis, there is also the danger of a failure of contextualization.
Terrorism is employed as both a descriptive and an evaluative term, but
in the latter case there is a powerful subjective element. Were the Free
French who attacked Vichy with British support terrorists or freedom fighters?
What about the Contras, or those who currently oppose Russia or China?
Such complexity may have no role in public politics and may seem inappropriate
for a society recoiling from an evil and vile attack, but unless some effort
is made to engage with the issue, it will be impossible to appreciate the
degree to which President Bush’s proposal “Either you are with us, or you
are with the terrorists” makes no sense in much of the world.
Choices
are often far more complex, and much of the world will not march to this
step. It is difficult for many Americans to appreciate the fundamental
differences between American views on, for instance, the Middle East and
those of their allies. Those differences impose one of the most important
limits on American power and underline the major role foreign policy expertise
will play in preserving America’s superpower status. Understanding the
parameters within which allies can be expected to operate demands knowledge,
deftness, and expertise that have not always been the strong suit of American
government. Even without allies, the U.S. will remain the world’s leading
power and continue to achieve most of its own goals so long as it keeps
those goals limited. Again, it remains unclear whether public opinion would
accept the concept of limits in defense and foreign policy, since the American
people, and their politicians, have a low tolerance for vulnerability and
fear. This leads to demands for an invulnerable and comprehensive defense
system, but in practice no military establishment is likely to provide
both. This is one of the many issues that historians can fruitfully discuss.
Jeremy
Black is professor of history at the University of Exeter and author of
Western Warfare, 1775–1882 (Indiana University Press, 2001).
Whiteness
and the Historians’ Imagination[1]
by
Eric Arnesen
The
rise of a genre of scholarship centering on white racial identity—on whiteness—is
one of the most dramatic developments in the humanities and social sciences
in recent years. The new scholars of whiteness insist that race is not
something that only non-whites possess but is a characteristic of whites
as well, necessitating close scrutiny of whites’ race and racial identity
and the very construction of race itself.
It
would seem that the “blizzard of ‘whiteness’ studies,” as cultural theorist
Homi Bhabha puts it, ought to elicit critical reflection. But with few
exceptions, the assessments in print today have been authored by those
writing within the whiteness framework and tend to be largely descriptive
or supportive. This is unfortunate. In my view, the whiteness project has
yet to deliver on its promises. The most influential historical studies
of whiteness—notably by David Roediger, Noel Ignatiev, Matthew Frye Jacobson,
Neil Foley, and Karen Brodkin—rely on arbitrary and inconsistent definitions
of their core concepts while they emphasize select, elite constructions
of race to the virtual exclusion of all other racial discourses. Offering
little concrete evidence to support many of their arguments, these works
often take creative liberties with the evidence they do have. Too much
of the historical scholarship on whiteness has disregarded scholarly standards,
employed sloppy methodology, generated new buzzwords and jargon, and at
times, produced an erroneous history.
The
weaknesses of whiteness scholarship are particularly evident in its treatment
of a subject that historians of the United States have chronicled for decades—the
hostile encounter between Irish immigrants and African-Americans in the
antebellum North. Long before whiteness came on the scene, historians described
in copious detail Irish immigrants’ political allegiance to the pro-slavery
Democratic party, their workplace clashes with blacks, and their participation
in anti-abolitionist and anti-black mobs. Whiteness scholars have revisited
these issues, asking once again: “How and why did the Irish in America
adopt their anti-black stance?” Attempting nothing short of a paradigmatic
revolution, whiteness scholars suggest that the necessarily prior question
is “how did the Irish become white?” To pose this question is to assert
that 19th-century Irish immigrants to the United States were
not white upon their arrival—that is, they were not seen as white by the
larger American society and did not see themselves as white. Over time,
whiteness scholars argue, they became white. Yet early and mid-19th-century
commentators on the Irish did not speak of whiteness per se but
invoked a more diverse discursive apparatus, weaving considerations of
religion (which virtually vanish in the considerations of the whiteness
scholars), notions of innate and observed character and behavior, and yes,
race too into their anti-Irish commentaries. Therefore whiteness historians
must assume the role of interpreter, translating the 19th-century
vernacular of race and group inferiority into the late 20th-century
idiom of whiteness.
Upon
the Irish immigrants’ arrival in the United States, David Roediger declares, “it was by no means
clear that the Irish were white.” This claim rests not on an examination
of early and mid-19th-century scientific thought, nor upon the actual observations
of contemporary native-born white opponents of Irish immigration, much less
on any assessment of what the Irish newcomers themselves happened to think.
Rather, it is rooted largely in the negative views, held by some, of the
Catholic Irish “race” in the antebellum era. The Irish were mocked by political
cartoonists who “played on the racial ambiguity of the Irish” through simian
imagery and by ethnologists who “derided the ‘Celtic race;’” they were
the butt of nativist folk wisdom which “held that an Irishman was a ‘nigger,’
inside out.” From these claims of Irish racial distinctiveness and inferiority—which
historians have long recognized and explored—Roediger decisively, if arbitrarily,
places whiteness at the center of the equation. Noel Ignatiev, author of
How the Irish Became White, concurs: it was “not so obvious in the
United States” when the Irish began “coming over here in large numbers
in the 1830s and ‘40s, that they would in fact be admitted to all the rights
of whites and granted all the privileges of citizenship.” That they were,
in fact, granted all those rights and privileges upon naturalization doesn’t
give Ignatiev pause; neither does the history of pre-famine migration,
in which Irish immigrants, many if not all of them Protestants, often blended
unproblematically into American society. As for the question posed in his
book’s often-cited title, Ignatiev barely attempts to answer it, drawing
instead from Roediger for theoretical justification.
Upon
close inspection, whiteness scholars’ assertions of Irish non-whiteness
rest largely on their conflation of racialization and the category of whiteness.
For Ignatiev and Roediger, the increased popularity of the “racialization
of the Irish”—the tendency to see the Irish as a distinct and inferior
race—is equated with their exclusion from whiteness itself. The two, however,
are by no means equivalent. Matthew Jacobson’s Whiteness of a Different
Color becomes relevant here. One need hardly accept Jacobson’s assertion
that the famine migration “announced a new era in the meaning of whiteness
in the United States” to appreciate the grounding of his arguments in the
contours of mid-19th-century scientific racism. Jacobson insists that racial
science produced, and American culture popularized, the notion of an “increasing
fragmentation and hierarchical ordering of distinct white races.”
The Irish were consigned to the Celtic race—a white, if inferior, race.
Although Jacobson undercuts his own contribution by repeatedly translating
a rich and complex language of race into the narrow idiom of whiteness,
his formulation, if taken at face value, can effectively dispatch the
“how the Irish became white” question, replacing it with “how immigrants
became racialized.”
More
complex than the Irish case is the question of race, whiteness, and the
new immigrants from Eastern and Southern Europe who arrived in the United
States in the last decades of the 19th and the opening decades of the 20th
centuries. A vast literature on the immigrants’ experience in the United
States is available to historians of whiteness. John Higham’s classic 1955
study, Strangers in the Land, remains unsurpassed as the most
valuable exploration of American nativism. Higham explored in depth the
increasingly racial nativism of the late 19th century which warned of the
dangers posed by the inferior “races” of Southern and Eastern Europe; subsequent
studies of immigration restriction, eugenics, and labor have documented
the varieties of racial classifications that consigned Eastern and Southern
Europeans to inferior slots. Aside from a new vocabulary, historians of
whiteness add little, if anything, to the extraordinarily rich history
of American immigration. To a certain extent, whiteness scholarship has
appropriated older historical narratives of immigration only to translate
them into the lexicon of contemporary theory.
Almost
half a century ago Higham’s study showed in great detail that the cultural
and biological inferiority of Italians, Jews, Slavs, and other new European
immigrants were widely advertised by scholarly experts and racist popularizers
alike. Indeed, staunch advocates of immigration restriction, eugenicists,
skilled trade unionists, the Dillingham Commission, university-based anthropologists,
ethnologists, biologists, and popularizers of scientific nativism and racism
served up the belief that Europeans were composed of a range of distinct
and unequal races. New immigrants arrived in a country that readily slotted
them into pre-existing or evolving categories of racial difference. But,
as in the case of the Irish, whiteness scholars often conflate the ubiquity
of racial thought—scientific and popular racisms which hierarchically ranked
a variety of European “races”—and whiteness.
To
a large degree, scientists and the anti-immigrant popularizers of the belief
in racial hierarchies did not employ whiteness as a category of racial
difference. Instead, they talked of multiple races, which, depending on
the particular classification, could number in the dozens for Europeans
alone. When elaborating on his racial classifications of Europeans in The
Passing of the Great Race, for instance, Madison Grant divided European
populations into “three distinct subspecies of mankind”—the Nordic/Baltic,
Mediterranean/Iberian, and the Alpine subspecies, on the basis of what
he discerned as profound physical differences. As Matthew Jacobson observes,
Grant’s emphasis on those distinctions led him to conclude that the “term
‘Caucasian race’ has ceased to have any meaning except where it is used,
in the United States, to contrast white populations with Negroes or Indians
or in the Old World with Mongols.” “Caucasian” might have been a “cumbersome
and archaic designation,” Grant conceded, but it was still a “convenient
term.” Grant did not speak of whiteness, either literally or metaphorically.
What is gained by portraying Grant’s arguments as “views on the hierarchy
of whiteness,” as Jacobson does?
It
is evident, though, that in some circles, new immigrants were spoken of
as though they were not white. What to make of those claims is not always
evident. Jacobson cites an Ohio Know-Nothing newspaper charging that “Germans
were driving ‘white people’ out of the labor market” as evidence of “ascriptions
of Germanic racial identity.” Roediger quotes Higham’s observation that
“[i]n all sections native-born and northern European laborers called themselves
‘white men’ to distinguish themselves from the southern Europeans whom
they worked beside.” “‘You don’t call . . . an Italian a white man?’ a
West Coast construction boss was asked. ‘No, sir,’ he answered, ‘an Italian
is a Dago.’” It is conceivable, perhaps likely, that some of the makers
of these remarks did not view new immigrants as white. But such anecdotes,
taken out of their contexts, don’t get us very far in convincingly reconstructing
the outlooks of the speakers. Whatever these references mean—and I think
the jury is still out on how to read this suggestive terminology—they
raise the following questions: To what extent are these anecdotes representative
of broader opinion? Can public opinion be reduced to a single discursive
construction—the non-white status of new immigrants? If so, who makes up
that public opinion, and who gets left out?
If
new immigrants were not white in the above examples, they were in others,
or at least they were often not constructed as non-, almost-, or
quasi-white. Investigators into the causes of the 1919 steel strike, for
instance, found some 54 races employed in the steel industry. The overwhelming
distinction among workers was not between native-born American “whites”
and immigrant “nonwhites” but between the “Americans” and the “foreigners.”
Immigrant laborers complained repeatedly that they were given the “hardest
and most unpleasant jobs” and were “lorded over by the skilled American
workers.” Margaret F. Byington’s 1910 survey of Homestead, Pennsylvania,
drawing on the Twelfth U.S. Census, broke down the population of this mill
community into “native white of native parents,” “native white of foreign
parents,” “foreign born white,” and “colored.” The distinctions between
whites deal with nativity, not hierarchies of whiteness. And yet, “Hunkies”
were by no means the equals of native-born whites. “The break between the
Slavs and the rest of the community is on the whole more absolute than
that between the whites and the Negroes,” Byington discovered. Many “an
American workman . . . looks upon them with an utter absence of kinship.”
Differences of culture, language, and religion, and perceptions
of race—but not necessarily whiteness—operated to keep them apart.
The
assignment of new immigrants to a wide array of hierarchically ranked races
came under growing attack by the third decade of the 20th century. Whatever
the status of whiteness, the interwar years indisputably witnessed what
Elazar Barkan calls the “scientific repudiation of racism,” a decline “in
the scientific respectability of applying racial-biological perspectives
to cultural questions” that began in the 1920s. Anthropologists and biologists
challenged prevailing definitions of race; culture and environment came
to occupy significant places in new scholarly definitions of race, although
older notions of distinct European “races” died more slowly at the level
of the grass roots. Historians of whiteness go further, concluding that
the new immigrants from Southern and Eastern Europe, like the Irish before
them, eventually became white by the 1930s and 1940s.
Yet
here, as in their treatment of the 19th century, whiteness historians do
not specify the criteria they use to characterize the new immigrants and
their children as “not-quite-white.” Passive-voice construction allows them
to evade the necessary task of identifying the active agents denying or
qualifying these groups’ whiteness in the 1930s and 1940s, lessening the
need to square the assertions of not-quite-whiteness with the countless
examples to the contrary. Italian or Polish immigrants and their children
may not have been the social or economic equals of the old Protestant Anglo-Saxon
elite, but who, precisely, portrayed or “constructed” them as not-quite-white?
Not politicians courting their votes, government and military officials
attempting to mobilize them, academic anthropologists and social scientists
studying them, journalists writing about them, or industrial unionists
seeking to organize them. Only if whiteness is merely a metaphor for class
and social power are these men and women not white. But if it is merely
a metaphor, then its descriptive and explanatory power is nil and its repetition
in so many different contexts contributes only to confusion. Even if whiteness
scholars managed to produce some convincing evidence that some Americans—manufacturers,
professionals, or other elites—somehow doubted the full whiteness of new
immigrant groups in the 1930s and 1940s, on what grounds do these historians
single out those views, declare them hegemonic, and ignore all countervailing
opinion, no matter how great?
With
regard to the mid-19th century, similar questions can be posed regarding
the people who ostensibly saw the Irish as not white. Which Americans?
When? For how long? Jacobson’s answer is remarkable for its passive projection
of a monolithic stance toward the Irish onto all of American society. By
the mid-19th century, he argues, “racial conceptions” of the Irish “would
lead to a broad popular consensus that the Irish were ‘constitutionally
incapable of intelligent participation in the governance of the nation.’”
Without diminishing the significance of political nativism at particular
moments, the notion of a “broad popular consensus” would have been news
to many of the local and national leaders of the Democratic party, who
courted and relied upon Irish political support; it would have been news
not only to the Irish but also to many non-Irish workers in the nation’s
urban centers. For if some Americans denied whiteness to the Irish, other
Americans did not. Roediger acknowledges in passing that there were two
institutions that did not question the whiteness of the Irish—the Democratic
party and the Catholic church; neither can be described as insignificant
in size or influence. But it matters little to historians of whiteness
that one of the two major political parties in the United States embraced,
defended, and even championed the Irish, including them without hesitation
in the category of “white” or “caucasian.” Instead, historians of whiteness
ignore the significance of this counter-discourse and focus almost exclusively
on the more explicitly racialist discourse of the American elite—the Anglo-Saxons,
the nativists, and their ilk. How and why whiteness historians present
the views of only one portion of the American public—one that did not exercise
unquestioned and continuous power, despite the elite status of many in
its ranks—as the truly significant discourse on the racial construction
of the Irish is never addressed. Indeed, the answer to the initial query
of “how the Irish became white” is a short one: the Irish became white
because historians, not their contemporaries, first made them “non-white”
before making them “white.”
Consider
Jacobson’s invocation of whiteness to interpret racial conflict in New
York during the Civil War. In the bloody New York City Draft Riots of 1863,
he contends, the Irish rioters who embraced white supremacy and resorted
to racial violence demonstrated their “insistence upon whiteness.” With
these and other words, Jacobson treats whiteness and racialist beliefs
and actions as virtual synonyms, substituting the former for the latter
and presenting the maneuver as a novel interpretation. Equally problematic
are his efforts to force contemporaries’ discourse about rioters’ behavior
into the mold of whiteness. To elite onlookers, the Irish rioters were
little more than “savages,” “wild Indians let loose” in the city, a “howling,
demonic” mob. Rather than take contemporaries at their word, Jacobson perceives
code: these words reveal that elite critics were “questioning the rioters’
full status as ‘white persons’” and the riots became the “occasion for
a contest over the racial meaning of Irishness itself.” But they do nothing
of the sort. To the extent that anti-Irish sentiment involved casting the
Irish as a separate, albeit white, race, Jacobson suggests nothing that
immigration scholars didn’t demonstrate years ago. Jacobson’s interpretive
maneuver rests upon a definition of whiteness that is simultaneously
cosmically expansive and narrowly circumscribed: whiteness is so interpretively
open as to subsume any related discourses into its fold. At the same time,
the only whiteness that counts is that of the elite, defined as proper
decorum, a refraining from street violence, deference to law and order,
and the like. Absent from consideration are other constructions that suggest
the precise opposite of the questioning of whiteness. What is one to make
of the remarks of Horace Greeley’s Tribune when it admonished striking
Irish-American dockworkers—just three months before the Draft Riots of
July 1863—that while no law compelled them to work with blacks, both the
black and “the white man” have “a right to work for whoever will employ
and pay him,” and that the “negro is a citizen, with rights which white
men are bound to respect”? What of the coverage of the New York Herald,
which characterized striking longshoremen engaged in violent attacks on
blacks simply as “white men”? There is little ambiguity here: these papers
were confirming, not challenging, the status of the Irish as whites. But
historians of whiteness appreciate neither ambiguity nor counter-discourses
of race, the recognition of which would cast doubt on their bold claims.
Race,
racial identity in general, and white racial identity in particular are
tremendously important subjects, fully deserving of the attention they
have received and ought to continue to receive in the future. Yet how
one studies race and racial identity matters considerably, and many of
the assumptions, interpretive techniques, and methodologies pursued by
cultural historians of whiteness are highly problematic and, ultimately,
generative of superficial insights. The arbitrary definitions, tautological
reasoning, and conceptual substitution that function as the tools of the
whiteness historians’ trade serve the exploration of racial identity poorly.
Its
current popularity suggests that whiteness will retain its academic lease
on life in a variety of disciplines. But historians would do well to interrogate
the concept, and the methodologies employed by those who invoke it, far
more closely than they have. Racial identity is too important a subject
to receive anything less than the most rigorous treatment at historians’
hands. If whiteness is to endure as a critical concept, its scholars need
to demonstrate that more than the historian’s imagination or aspirations
are involved. If they cannot, then it is time to retire whiteness for more
precise historical categories and analytical tools.
Eric
Arnesen is professor of history and African-American studies, and chair
of the department of history at the University of Illinois at Chicago.
He is the author, most recently, of Brotherhoods of Color: Black Railroad
Workers and the Struggle for Equality (Harvard University Press, 2001).