One Nation: Divided or United?

“We live in an era of democratic contradiction. As the Cold War recedes into history and the apparent triumph of liberal democracy spreads around the globe, the domestic state of democracy within the United States remains in jeopardy,” writes David Trend in A Culture Divided: America’s Struggle for Unity. Echoing sentiments expressed in last night’s acceptance speech by Barack Obama, an excerpt from A Culture Divided follows below:

Rather than a nation where citizens feel empowered in their common governance, the U.S. has become a land where growing numbers of citizens feel alienated from the democratic process. Voter turnout for the 2012 U.S. presidential election was nearly 20 percent less than in 2008. Massive anti-incumbency sentiments and resentment toward representative government parallel the rise of grassroots fringe movements and media demagogues. Clearly something has gone wrong with democracy in the U.S.––or more precisely with the way democracy is understood and exercised. Why else would so many people respond so strongly to promises to “change” the way Washington works?

Not surprisingly, these shifts have produced considerable public tensions, along with a disturbing tendency to reach for quick and easy solutions to problems. Witness rising political extremism and the remarkable popularity of such fringe personas as Ann Coulter, Sean Hannity, Rush Limbaugh, and Bill O’Reilly. Claiming to appeal to populist sentiments, this cadre of would-be demagogues has emerged to push for tough laws, closed borders, and an ever more puritanical set of cultural standards.

Naturally, liberals and conservatives have always fought over political and economic issues. But something has changed recently. Unlike ideological conflicts of the past, which focused on tangible issues, the new political warfare is conducted over the more subjective terrain of identity and representation. Battles once restricted to laws and money are being waged over ideas and symbols. More than ever these struggles entail the discourses through which subjectivity is formed—as evidenced in debates over patriotism, political correctness, and the “defense of marriage.”

In the broadest sense these contests can be construed as issues of culture. They signal efforts to control the ways people learn about who they are and what they can become. This cultural notion was an important element of the political philosophy of Antonio Gramsci, who wrote, “One must speak for a struggle for a new culture, that is, for a new moral life that cannot but be intimately connected to a new intuition of life, until it becomes a new way of feeling and seeing reality.”[i] In this context Gramsci was not simply referring to the forms of culture that one commonly associates with art, literature, and even mass media, but the profoundly political process through which citizens are socialized to recognize and validate state power. This process infuses all components of the social apparatus: the office, the home, the school, and the church. If these institutions are recognized as sites of potential ideological persuasion, then all of life becomes a potential political battleground.

This expanded view of culture entails more than simply acknowledging the political implications of everyday actions. It means admitting that many areas that claim neutrality in our lives are in fact sites of profound ideological struggle. Television newscasts, educational syllabi, scientific breakthroughs, and “great” pieces of music—these are not “objective” phenomena that somehow exist outside the realm of politics. They are forms of representation invested with specific interests in every manifestation. Through these cultural objects dominance strives to replicate itself, often disguising its actions in the process. This invisibility of the center is often accompanied by a quiet exclusion of otherness. People may be concerned about the violent suppression of certain dissenting voices, yet at the same time they may be unaware of those consigned to the “structured absences” of discourse. In this sense every act of education, legislation, management, and creative expression is an act of inclusion and exclusion.

Few could have predicted the speed with which the world would be reconfigured by the events of September 11, 2001. Yet rather than bringing nations together in a sense of common purpose, 9/11 triggered new forms of national chauvinism and regional antagonism. Pick up any newspaper in the U.S. and it appears that the nation is facing a democratic crisis. In a post-9/11 era lacking in superpower conflicts, old fears of foreign insurgency have been supplanted by anxieties about fiscal threats and terrorists lurking among us. As social inequities continue to increase, many citizens question government and the master narratives supporting it. Complicating matters further has been the restructuring of global capitalism. As the world has evolved into a transnational marketplace and the production of goods and services has become more fluid and decentralized, the distance between rich and poor nations has continued to widen. Meanwhile, within the U.S. communities of color are quickly eroding a once-dominant white majority. Factor in the growing influence of feminism, challenges to the traditional nuclear family, and activism supporting the rights of lesbians and gay men, and it becomes clear that a massive movement––indeed, a majority movement––is rising to demand a new kind of politics, and perhaps a new society.

As the U.S. military reigns supreme among world powers, democracy has been subverted in U.S. foreign policy, becoming an excuse to force the nation’s will on other countries. America’s recent adventures in Afghanistan and Iraq have demonstrated the way the U.S. has used terms like “freedom” and “liberty” as tools to rationalize its advancement of military and business interests throughout the world. There is nothing inherently wrong with a foreign policy meant to strengthen American security and economic well-being on the global stage. But the relentless exploitation of the nation’s most cherished values for these purposes diminishes their meaning and contributes to a distrust of the U.S. wherever it goes. On the domestic front, ideological debate has become internalized as it did in the 1950s. Once again battles that were once waged with guns and bullets are now fought with ideas and symbols. And once again access to the debate is a crucial issue, as attempts are made to exclude voices that would contest the status quo. Although identity politics did much to expand the national conversation about pluralism and values, other issues have induced heightened levels of divisiveness and antagonism. Strife over border security and gay marriage kept reappearing across the political landscape of the 2000s. As the internet has made more information available to people than ever before, the electorate finds itself increasingly uninformed and confused. And while democracy is a word that politicians and media personalities bandy about with great alacrity, its usefulness has become all but exhausted by the divergent interests it has come to serve.

Rigid divisions between left- and right-wing positions leave little room for the understandings that can grow from genuine dialogue. In part this results from a philosophical legacy that splits every issue into a binary opposition. It is compounded by patterns in public communication that reduce discussion to superficial sound bites and overheated rhetoric. A genuine democracy requires more than this. People stick with old-style party politics in the U.S. because other models don’t seem viable. This is largely due to the self-marginalizing character of most alternatives. Yet as recent events have demonstrated, public dissatisfaction with mainstream institutions stands at an all-time high.

Diverse communities seem willing as never before to reach for new answers to old problems. The roots of these solutions lie in the very democratic principles upon which the U.S. was founded, although many such principles need to be brought up to date and radicalized. Clearly the time has arrived to move beyond traditional divisions of left and right. This was the call for unification articulated by Barack Obama in his presidential campaign. While such a vision has proven more difficult to implement in the post-election years, the goal should not be abandoned. This new call for unity should not be confused with demands for a centrist compromise or with a romantic appeal to pre-industrial communitarianism. Rather than asking people to surrender their identities in the interest of a national consensus, our purpose should be recognized as an expanded approach to democracy, one that stresses the primacy of cultural difference. This means remembering that people are not simple creatures of Republican or Democratic ideology, but composites of complex histories, needs, cultures, and values. To these ends, this newly invigorated democracy––what some would term a “radical democracy”––would reconcile current tensions between national and local governance by reorganizing political constituencies in ways typically considered off-limits to politics. By necessity this will entail the creation of “new political spaces” that fall outside traditional definitions of government, civil society, and the family. It will take a good deal of work to put these ideas into practice. Yet the time seems right to spell out some of the ways a radicalized democracy might be applied.

At the height of the conservative ascendancy of the 1980s, controversies flourished over affirmative action, abortion, and failing schools, among other issues. Those cultural conflicts were waged into the next decade, providing fodder for religious ideologues and headline-hungry politicians. Embedded in the early culture wars was a remarkable realization––that in the aftermath of economic class conflicts and the civil rights movements, a great change had taken place. The white male power structure––largely supported by Christian conservatives––had been outflanked by feminists, multiculturalists, and liberals of all stripes. Protests, court cases, Vietnam, and Watergate had turned the nation upside down. Waged during the presidencies of Ronald Reagan and George H.W. Bush, the early culture wars were less “wars” between opposing camps than they were assaults by conservative alarmists against perceived patterns of difference and change.

In 1992, conservative commentator Patrick Buchanan warned that the nation was embroiled in “a culture war as critical to the kind of nation we shall be as the Cold War itself, for this war is for the soul of America.”[ii] In this frequently quoted statement, Buchanan claimed that while conservatives had been busy defending democracy around the globe, leftists had been infiltrating schools, the media, and the art world at home. It’s no secret what happened next, as many legislators, religious figures, and journalists cashed in on the publicity surrounding newly visible cultural conflicts.[iii] It also should go without saying that progressives never held “all the commanding heights of art and culture,” as Buchanan asserted.[iv]

Nevertheless the myth of a radical juggernaut took hold, manifest in countless outcries over multiculturalism, political correctness, and school curricula. Before long, rhetoric like Buchanan’s had inflamed the country sufficiently to win Republicans the Congress in 1994––the “year of the angry white male.” But the conservative momentum didn’t last long. Years later the expression “culture wars” had all but disappeared from the public lexicon, with many conservative leaders lamenting the slide of American society into self-indulgent secularism. This perception of moral decline reached its breaking point with the 1998 Monica Lewinsky scandal. In a widely quoted “Letter to Conservatives” the following year, Paul Weyrich wrote, “I believe that we probably have lost the culture war. That doesn’t mean the war is not going to continue, and that it isn’t going to be fought on other fronts. But in terms of society in general, we have lost.”[v] Implicit in Weyrich’s lament was the belief that a new kind of politics was needed. In the wake of bipartisan dismay over the Clinton debacle came a conservative call to arms––specifically a call for Republicans to launch a massive populist movement through local churches and community organizations to build a “Moral Majority.”

Determined conservatives swept a most unlikely candidate into the White House: George W. Bush. Early in the 2000 primaries, many liberals quietly had hoped the awkward Texas governor would gain the Republican nomination and handily lose to Democrat Al Gore. As election results were tallied Bush did indeed lose the popular vote nationwide, but by so small a margin that the controversial winner-take-all award of Florida’s 25 electoral college ballots gave Bush the presidency. Following the most evenly divided presidential contest in American political history, Bush surprised observers on both sides of the political aisle by proclaiming a “mandate” for conservative reform. With a Republican-controlled Congress behind Bush through 2006, the culture wars began anew with unrelenting fervor. Conflicts multiplied and intensified through the final years of the Bush presidency, with commentators of all ideological stripes expressing alarm over the vast division of the American populace. The 2008 election campaign began with calls for “unity” and common purpose––only to yield back-biting and rumor mongering when the going got tough.

Do recent controversies offer new signs of a fracturing America? Or do they emerge from the very core of American culture? Certainly political disagreement and partisan antagonism are nothing new in U.S. history. Social unrest, violent protest, and electoral enmity have been with us since the earliest days of the republic. Is there anything distinctive about the more recent episodes of disagreement and unrest? One notable difference has been a decline in visible class conflict. Globalization, corporate growth, and a weakened labor movement have yielded fewer strikes and even fewer management concessions to labor––as minimum-wage and non-union employment have grown. Civil rights laws and equal pay regulations have helped reduce discrimination based on religion, race, gender, and sexual orientation––although the playing field is still far from level. Yet political campaigns still assert dramatic differences within the American public, and the perception of a divided America persists.

In recent years Republicans and Democrats have fought over which party is better suited to heal the divides that separate Americans. In 2004 candidate John Kerry claimed the mantle of a “uniter,” proclaiming that “When you’re president you need to talk to all of the people and that’s exactly what I intend to do.”[vi] George W. Bush had earlier declared himself “a uniter, not a divider. . . I refuse to play the politics of putting people into groups and pitting one group against another.”[vii] Four years later Barack Obama called himself a “uniter” who could “bring the country back together.”[viii] Then Republicans claimed that McCain was a uniter in the mold of the “great uniter Ronald Reagan.” Along the way Hillary Clinton, Mike Huckabee, and Mitt Romney became uniters. In the end, Obama won the election, declaring in his acceptance speech that the U.S. “has never been just a collection of individuals or a collection of red states and blue states. We are, and always will be, the United States of America.”[ix] All of these claims to uniter status underscore the broad-based fear that America is in fact a very divided nation. Implicit in such claims lies the recognition of diverse and often problematically oppositional thinking in contemporary politics.

Divisions in American society have received considerable academic attention. A sampling of recent titles on the topic includes: America Divided by Maurice Isserman and Michael Kazin (New York and Oxford: Oxford University Press, 2008), Divided America by Earl Black and Merle Black (New York: Simon and Schuster, 2007), A Republic Divided by the Annenberg Foundation (Radnor, PA: Annenberg Foundation, 2007), Divided We Stand by John Harmon McElroy (Lanham, MD: Rowman and Littlefield, 2006), Divided We Fall by Bryce Christensen (New Brunswick: Transaction, 2006), Silent Majority by Matthew Lassiter (Princeton: Princeton University Press, 2006), Divided States of America: The Slash and Burn Politics of the 2004 Presidential Election by Larry J. Sabato (New York: Pearson/Longman, 2006), Michael Moore’s Fahrenheit 9/11: How One Film Divided a Nation by Robert Brent Toplin (Lawrence: University Press of Kansas, 2006), and Divided by God: America’s Church-State Problem by Noah Feldman (New York: Farrar, Straus and Giroux, 2005).

All of these works argue that a loss of common purpose has overcome the United States. The presumed causes of America’s cultural divide vary book-to-book––heightened partisanship, anti-war dissent, religious separatism, regional differences––but all agree that America is coming apart. As Divided America puts it, “America’s unstable power politics generates relentlessly bitter conflicts over a huge range of domestic and foreign policies and motivates activists in both parties to compete fiercely all of the time.”[x] On the other hand, a character from the 2008 feature film Split: A Divided America simply exclaims, “If you’re savvy and you get it, you’re a republican. If you’re an idiot, you’re a democrat.”[xi] Some analysts have sought more nuanced distinctions to describe the situation. James Davison Hunter is somewhat sanguine in describing a divide between groups whose views he describes as “orthodox” (universal, transhistorical, authoritarian) and “progressive” (individual, contemporary, relativistic).[xii]

Moviegoing audiences have shown a fascination with divisive subject matter from directors such as Michael Moore (Sicko, Fahrenheit 9/11, Bowling for Columbine, Roger and Me), Spike Lee (Do the Right Thing, Malcolm X, She’s Gotta Have It, When the Levees Broke), Morgan Spurlock (Super Size Me, 30 Days, What Would Jesus Buy?), and Oliver Stone (Born on the Fourth of July, Platoon, Salvador, W). But most of these films are too controversial for TV, where advertisers shy away from potentially alienating content. This doesn’t mean that television is the “vast wasteland” of monotonous conformity that it was once thought to be. The last decade has seen an explosion in reality-TV and outlandish talk show formats. The new shows feed public fascination with people in extraordinary circumstances, along with the desire to glimpse humanity at its weirdest. Precursors to current reality-TV focused more on the banalities of everyday life in programs like Candid Camera (showing “people caught in the act of being themselves”), An American Family (documenting a nuclear family going through a divorce), or The Real World (depicting the daily lives of typical young adults). But things changed in the late 1990s and 2000s with a shift to desolate locales (Big Brother, Survivor), celebrities (The Anna Nicole Show, The Osbournes), talent competitions (American Idol, Project Runway), and action (American Gladiators, COPS). The once-tame talk show format popularized by Dick Cavett, Johnny Carson, and David Frost has morphed into the full-blown trash-TV spectacle of the Ricki Lake and Jerry Springer shows. What is one to make of this new torrent of hyperbolic media products? What is behind the growing sensationalism, titillation, and generalized trend toward the unreal in the current media environment?

 

Media Matters

It’s no great insight that mass media have taken over public life. Television, movies, and the internet have assumed a dominant role in the contemporary world. But could television, movies, and the internet be pulling America apart? According to the latest Time Use Survey from the U.S. Department of Labor, Americans over 15 years old spend 60 percent of their leisure time with TVs and computers––leaving less than two hours per day for such things as socializing, reading, exercising, and thinking.[xiii] About 90 percent of the time spent with media goes to watching television. Several theories explain television’s ability to attract and hold people’s attention. Generally speaking, people find TV interesting––more so than other things they might be doing. It creates a world of engaging stories and stimulating imagery.

Perhaps more importantly, TV watching requires little effort and is easily accessible. In the 2000s the number of television sets in typical American homes (including the poorest ones) rose to 2.1, with more families owning color TVs than washing machines.[xiv] The average TV in the U.S. is turned on nearly seven hours per day.[xv] In this context, one of the reasons people watch so much television has to do with its simple ubiquity. In relative terms, the price of television sets has been declining steadily since their wide-scale introduction in the 1960s. Television literally has become a part of the home environment. Its presence and use have become naturalized and destigmatized. Rarely these days does one hear any mention of television as “mind candy” or the “vast wasteland” that one FCC chairman famously declared it to be.[xvi] Rather, television now is considered a cultural necessity and a much-needed source of news and information.

Like it or not, for many people television is their main source of knowledge about the world. Most people will never travel to another country or even to parts of their own communities. Television news offers reports on what other people are doing, stories about those they rarely encounter, and fantasies about those they will never meet. The problem is that television is anything but a fair and unbiased conveyor of these stories and fantasies. TV and related media do not present a “real” image of the world––certainly not an image of the world that resembles the lives of viewers. Instead television offers an artificially stylized and narrowly drawn image of life, designed specifically to capture and hold audience attention.

Commerce drives television’s hyperbolic artificiality. Programming is paid for by people with things to sell. The shows people have become accustomed to seeing are tremendously expensive to produce, and they can only be financed by the advertisers who want access to the huge audience TV reaches. Although television seems like such a commonplace and ordinary part of American culture, the reality is that TV production is a highly selective and competitive affair. Broadcasting executives continually hunt for programs and formats that engage viewers quickly, lining audiences up to receive commercial pitches. The intense competition for viewers creates a tendency to favor production formulas that guarantee results. This is why so much television looks alike. Ironically, in the interest of reaching ever larger audiences, television tells an ever diminishing number of stories.

What are the narratives that television and other commercial media convey? Mostly they are stories about human desire––for romance, adventure, or success. Or stories of human difference––about circumstances, actions, and thoughts that locate characters outside the banality of “normal” everyday life. Both strands of storytelling––desire and difference––play powerful roles in making America a divided society. Together the forces of desire and difference, while experienced in a similar way by millions of people, create a spectacle that separates viewers from those viewed. Most commonly, the media spectacle generates a “cult of celebrity”––a level of idolatry toward famous personalities that assumes a near-religious intensity. Often supported by heavy publicity and exposure in magazines, newspapers, and television “entertainment news” programs, the cult of celebrity focuses attention on the famous, as well as otherwise unexceptional individuals who have become famous for being famous. This latter category includes reality show contestants, debutantes, and wealthy characters like Paris Hilton and Donald Trump.

Media celebrity creates a difference between viewers and those viewed in several important ways: by rendering viewers invisible, powerless, and silent in the media landscape. Considering that it is viewers––as consumers of programs and commercials––who “pay” for television programs with their time, it is indeed remarkable that audiences are so absent from media representation. In most TV comedies and dramas, viewers may be heard in laugh tracks or in the background of shows “filmed before a live studio audience,” but they are almost never seen. When viewers are seen, as in certain variety programs or game shows, they typically are depicted as wildly enthusiastic crowds or hapless individuals thrust awkwardly before the gaze of the camera––as in the frenzied atmosphere of The Bachelor or Deal or No Deal. This image of the helpless spectator has been modified in the cinema vérité styling of long-form reality shows like Survivor and Big Brother, but such programs really are more akin to scripted dramas. Audiences of Survivor are led to believe that they are observing the wilderness struggles of program participants in a realistic, real-time context. But in actuality, shows like Survivor are carefully constructed narratives assembled in the editing room. In scripted programs like CSI and 24, where all of the action is carefully planned and even rehearsed before filming, between 25 and 100 hours of material are recorded for a typical 42-minute “hour-long” episode. In the less predictable realm of reality TV, hours of raw footage can exceed 200 per episode. Sometimes even 200 hours of footage is insufficient to create a compelling narrative, at which point story-lines are constructed from otherwise unrelated shots in the editing process.

The media spectacle also contributes to a generalized alienation of viewers by creating an idealized image of celebrity life that vastly differs from the daily existence of audiences. The 1990s program Lifestyles of the Rich and Famous most literally engaged this theme in documenting the enormous houses and extravagant leisure pursuits of entertainers, athletes, and business moguls––each episode concluding with host Robin Leach wishing viewers “champagne wishes and caviar dreams.” In the 2000s viewers similarly have been taunted with images of wealth in the long-running program Who Wants to Be a Millionaire? Indeed, Slumdog Millionaire (2008), a movie about an Indian version of the show, swept the 2009 Academy Awards. A recent episode of the program MTV Cribs promoted itself as “the most exciting way to peep into your favorite celebrity homes without getting slapped with a restraining order.” Not forgetting the youth audience, the popular sitcom Hannah Montana portrayed the trials and tribulations of a girl who lives a double life as an average teenage schoolgirl named Miley Stewart (played by Miley Cyrus) by day and a famous pop singer named Hannah Montana at night. As the content of such programs glamorizes the larger-than-life world of celebrities, it suggests that the world inhabited by audiences is just the opposite. If celebrities are wealthy, powerful, and free to enjoy anything their budgets allow, audiences feel impoverished, disempowered, and constrained by limited resources. While this story has a powerful effect on all viewers, the message is especially potent to youthful audiences, who often already feel constrained by parental authority and age-related legal restraints.

As the content of the mediascape disempowers, its formal structure reinforces the message. Television in particular operates in a one-directional form of address. People may receive its messages, but they cannot talk back. A movie operates the same way. Contrary to popular opinion, so do computers to a large extent. Despite the much-promoted potential of the internet as a two-directional communications medium, entertainment and game sites mostly offer only the illusion of true interactivity. Popular sites like Pogo.com and MSNGames.com allow visitors to choose among games and to play them, but these “choices” are limited by what the site offers. The illusion of choice in the online game world is but a smaller version of the pseudo-freedom offered in the overall commercial marketplace. This is perhaps the biggest deception of capitalism.

There is nothing especially new about the narrative appeal of human difference. Evidence dating to the Stone Age tells us of a kind of celebrity conferred on the successful and brave, as depicted in cave paintings of memorable hunts. With the invention of the printing press another kind of celebrity was conferred upon famous criminals whose exploits were documented in the earliest published broadsides, pamphlets, and dime novels. Media controversy and censorship first appeared in Europe and the U.S. over concern about the effects of these crime stories among young men in cities. At the turn of the twentieth century, movies of boxing matches constituted the most popular early films, to be replaced in later decades with equally well-liked gangster and western movies.

Imagery of conflict, crime, and adventure satisfied audience desires to witness sights not normally seen. This phenomenon promoted social division even further through popular depictions of the poor or those deemed primitive by cultivated society. Not so coincidentally, these developments in entertainment occurred as the U.S. was rising to preeminence as an imperial power. In international terms, nations most active in global colonization during the nineteenth century were also the countries producing the most motion pictures. On one level movies provided audiences with an unprecedented sensation of power and mobility (in both spatial and temporal terms), affording viewers unknown abilities to experience the world. In the estimation of theorist Christian Metz, the early cinema offered an enormous psychological attraction by fostering “a narcissism in that the spectator identifies with him/herself as a ‘kind of transcendental subject.’” By prosthetically extending human perception, the apparatus grants the spectator the illusory ubiquity of the “all-perceiving” self enjoying an exhilarating sense of visual power.[xvii]

In this sense filmmakers and viewers acted in concert to consume the world in visual terms. Cinema historians attribute this drive in part to the changing demographics of movie-going audiences. The popularity of film as entertainment paralleled the growth of large and crowded urban areas, where people quickly gravitated to the forms of escape afforded by the newly emerging movie palaces. By 1929 more than 80 million people went to the movies every week, a level of consumption that held constant for two decades. These viewers were looking for comfortable, yet exotic, spaces of amusement apart from the routine of daily life. To answer the demand, movie impresarios like Sid Grauman in Los Angeles and S.L. “Roxy” Rothapfel in New York opened opulent theaters with names like the “Egyptian,” the “Metropolitan,” and the “Chinese.”[xviii]

This new form of public gathering performed an important social function for these newly established urban communities as well. It provided a sense of identity and belonging with very specific nationalistic overtones. To Ella Shohat and Robert Stam, “the cinema’s institutional ritual of gathering a community––spectators who share a region, language, and culture––homologizes, in a sense, the symbolic gathering of the nation.”[xix] Although it is important to resist overgeneralizing the effect of movies on different groups, the sheer number of people seeing films cannot be ignored. To the extent that movies were viewed on a regional and national scale, some writers have suggested that such imperialistic forms of entertainment actually helped divert public attention from domestic problems, serving to “neutralize the class struggle and transform class solidarity into national and racial solidarity.”[xx]

Depictions of western civilization’s proverbial “other” gained special popularity within the didactic travelogue genre that developed prior to 1920. Slide shows, short films, and lectures would appear between reels at commercial screenings to bring an “educational” interlude to afternoons at the movies. Typical of the colonial mood of the era, these films liberally mixed fact and fantasy in the interest of “adventure” entertainment. The titles of these works emphasize the exotic character of the subject matter: Among the Cannibals of the South Pacific (1918), Head Hunters of the South Seas (1922), Hunting Big Game In Africa (1923), Trailing Wild African Animals (1923), and Wild Beauty (1927).[xxi] As Shohat and Stam explain,

 

“Primitive” peoples were turned into objects of quasi-sadistic experimentation. This kind of aggression reached a paroxysm in the 1920s films of Martin and Osa Johnson, where filmmakers gleefully prodded Pygmies, whom they called “monkeys” and “niggers,” to get sick on European cigars. In films such as Trailing African Wild Animals (1922) and Simba (1927), the Johnsons treated African peoples as a form of wildlife. The camera penetrated a foreign and familiar zone like a predator, seizing its “loot” of images as raw material to be reworked in the “mother land” and sold to sensation-hungry spectators and consumers, a process later fictionalized in King Kong (1933).[xxii]

 

Not surprisingly the American frontier provided endless fodder for the emerging travelogue industry in similar films like Camping with the Red Feet (1913), In the Land of the War Canoes (1914), and The Covered Wagon (1923). Actually, the practice of serving up western imagery had a venerable history dating to the mid-nineteenth century. During that earlier period, photographers like Alexander Gardner, William Henry Jackson, Timothy O’Sullivan, and Carleton Watkins found work by accompanying geological map-making expeditions. The resulting “scientific” documents quickly gained currency as exotic curiosities and later as high art. As critics Rosalind Krauss and Jan Zita Grover have pointed out, whether operating within the U.S. or around the globe, the intentions (conscious or unconscious) of image-makers often had little to do with the ultimate use of these films.[xxiii] Certainly this was the case with the theatrical documentaries of producers like Robert Flaherty and Basil Wright. While tacitly anti-colonialist in their intention to present native cultures “untouched” by western influence, Flaherty’s Nanook of the North (1922) and subsequent Moana (1926) now stand as benchmarks in the commodification and sale of otherness to mass audiences.

As the U.S. faced the twentieth century’s most serious economic crisis, Hollywood turned its attention to critiques of capitalism. Influenced by leftist theater movements in New York City, depression-era films began to appear which dramatized the tension between values of individualism and collective action. Led by Warner Brothers, the major studios hired East Coast writers who focused film industry attention on contemporary social and political issues like poverty, crime, and unemployment. Often basing movies on sensational newspaper stories of the day, Warner Brothers established itself as the leading producer of gangster films with works like Little Caesar (1931) and The Public Enemy (1931). Typically these films depicted young men and women who turned to crime after being abandoned by a greedy society.

Complementing this fascination with the poor and primitive was an equally great demand for movies about the rich and powerful. The growth of cinema enabled the creation of national celebrities on a level that traveling vaudeville acts had never been able to accomplish. Fed by cinematic publicity in the new medium of the newsreel, early movie audiences for the first time could vicariously “follow” celebrities as the screen idols appeared in public, got married, and worked on new ventures. The first commercial radio station went on the air in the U.S. in 1920. Within a decade the public was obsessed with radio personalities, many of whom adapted their personas to television following World War II.

Each new medium brought with it a fresh array of larger-than-life figures, ever more removed from the daily lives of audiences. In On Television, Clive James argues that true fame was almost unknown before the twentieth century, because of the lack of global mass media.[xxiv] James asserts that the first true media celebrity was Charles Lindbergh, who made history with his 1927 transatlantic flight and later received mass public attention due to the kidnapping of his son. James draws a critical difference between fame (the fact of being famous) and celebrity (what happens to fame when it is fed by media coverage). James has said that “We need to get back to a state where fame, if we must have it, is at least dependent on some kind of achievement.”[xxv] Indeed, the main accomplishment of many of today’s celebrities is that they somehow have achieved celebrity. Most often this comes from being a successful entertainer, journalist, athlete, author, or artist, or otherwise gaining media attention for noteworthy work as an attorney, physician, scientist, and so on. Other people are famous for being born into a famous family like the Barrymores, Hiltons, or Kennedys. But it is now possible for the media itself to create fame. This new strand of celebrity disturbs critic Bob Greene because it “doesn’t require paying any dues.” Greene asserts that before television, people had to accomplish something, possess some type of talent, or create a significant work. With the rise of television––specifically reality television––Greene says that audiences have become the creators, allowing people to “become famous not for doing, but merely for being.”[xxvi]

 

Visible or Invisible

Given the public attention lavished upon celebrities, it would seem that audiences must be fond of them. But it’s not that simple. Preference for a celebrity is like the preference for clothes or other commodities. Taste varies––and people identify with celebrities according to the identities they perceive for themselves as well as those they wish to project to others. Like many stylistic conventions, “fan” culture also is socially constructed by groups and driven to some extent by peer networks. But if not all people “like” a certain celebrity, this doesn’t mean that the celebrity is ignored. Some stars are famous for being hated. Take Larry Flynt and Howard Stern, for example. Many gossip websites list Britney Spears and Lindsay Lohan as hated stars. Though they generate animus, these also are some of the most famous people in the world. For a variety of reasons, these individuals have the capacity to evoke envy or enmity in great numbers of people. Hence, they are photographed, published, discussed on TV shows, paid enormous amounts of money, and anointed with celebrity status. If these celebrities are the most visible people on the planet, who are the most invisible people?

Recently in Los Angeles I was walking along one of the city’s busier thoroughfares, a street so popular that some restaurants seat their patrons at sidewalk tables. After passing a bustling eatery of this kind, and a somewhat expensive one at that, I was struck by a sleeping form under a blanket a few feet from the nearest table of chattering eaters: the body of an anonymous homeless person napping in the bright California sunlight, invisible in plain sight. Maybe it was because I happened to be walking alone that I noticed and reflected on the incongruity of the animated, well-heeled eaters and the motionless, impoverished sleeper. But the incident served as a reminder of the absent poor in the larger cultural arena.

Many people instinctively look away when they come upon a homeless person, having learned that avoiding eye contact is one of the best ways of getting past a solicitation for money. This is part of a more generalized attitude of avoidance that most of us practice to keep unpleasant reminders of poverty from our minds. Aside from an occasional newscast of an urban cleanup of homeless areas, images of the nation’s poor are largely absent from entertainment media, except when the homeless are romanticized as lovable tramps, as in the Nick Nolte film Down and Out in Beverly Hills (1986), or depicted as recuperable, as in Will Smith’s The Pursuit of Happyness (2006). Statistically speaking, the numbers of both poor and rich people have grown in recent decades, with the middle class getting smaller. While total reported income in the United States increased almost nine percent in the most recent year reported, average incomes for those in the bottom 90 percent declined.[xxvii] Approximately 36 million people, or about 12 percent of the U.S. population, live in poverty––defined as an income of less than $13,000 for a family of two. Half of the poor are children. The top 300,000 Americans collectively enjoyed almost as much income as the bottom 150 million. The gains went largely to the top one percent, whose incomes rose to an average of more than $1.1 million each, an increase of more than $139,000, or about 14 percent. Someone in the top group received 440 times as much as the average person in the bottom half earned, nearly doubling the gap from 1980.

Most of us find ourselves somewhere on the continuum between the visible rich and the largely invisible poor. We exist in that unspecified netherland of “ordinary” people. But who is ordinary, exactly? Ordinary people are those neither rich nor poor. Probably not especially old or young, not fat or skinny, short or tall, unattractive or handsome. Not illiterate or a PhD, nor an immigrant, nor someone ill or crazy. If one follows this process of subtraction and deduction far enough, it becomes clear that the term “ordinary person” actually excludes practically everyone. This is how “ordinary” and “average” become social constructions for making almost everyone feel imperfect, inadequate, or just plain odd. Even the stereotype of the “average American” has become a signifier of someone you wouldn’t want to meet. Average Joe, the short-lived television show of the early 2000s, promised to present ordinary men competing for dates with starlets. But more often than not Average Joe succumbed to making cheap jokes about overweight or nervous male contestants.

Whether exceptional or average, people are separated from each other in media representations and language. Jacques Derrida famously observed that every depiction perpetrates a kind of “violence” on the subject represented by abstracting it into something unreal. The act of describing a subject in words or pictures inevitably results in an inadequacy of representation that violates the original by reducing it to a set of symbols. As Derrida put it, “the originary violence of language” emerges from making distinctions resulting in “the violence of difference, of classification, and of the system of appellations.”[xxviii] Two important points emerge from this argument. The first is that difference is far more profound an element in our lives than the obvious distinctions among people. It is present in the most elemental aspects of the languages we speak and write. For this reason, difference isn’t something to be feared or avoided. Instead, difference is a fact of meaning that needs to be recognized and negotiated on many levels. Also, differences are relative and open to interpretation, from their largest manifestations to the smallest fragments of communication. Second, Derrida’s apparent pessimism about language should not leave us without hope. It simply argues against the fixed or final meaning of any label. Fortunately communication rarely operates through single phrases, words, or pictures––but through constellations of many pieces of information. Although no single piece may yield a reliable meaning, a kind of truth––or at least some progress toward veracity––emerges from the aggregation of many pieces.

Excerpted from A Culture Divided: America’s Struggle for Unity by David Trend (Paradigm Publishers)



[i] Antonio Gramsci, Selections from Cultural Notebooks (Cambridge: Harvard University Press, 1981), 85. Particularly in recent years the all-encompassing aspects of Gramscian principles have been overstated in pedagogical theory. Clearly the institutional matrix in which schools reside exerts an influence upon the individual that is partial, at best. However, as one of the last great totalizers, Gramsci provides an important means of bridging gaps among disparate fields.

[ii] Patrick Buchanan, “Keynote Address,” Republican National Convention (1992).

[iii] Ira Shor, Culture Wars: School and Society in the Conservative Restoration, 1969-1985 (London and New York: Routledge, 1986); Geoffrey Hartman, Minor Prophecies: The Literary Essay in the Culture Wars (Cambridge: Harvard University Press, 1991); Hunter, Culture Wars; Henry Louis Gates, Loose Canons: Notes on the Culture Wars (New York: Oxford University Press, 1992); Gerald Graff, Beyond the Culture Wars: How Teaching the Conflicts Can Revitalize American Education (New York: W.W. Norton, 1992); Marjorie Heins, Sex, Sin, and Blasphemy (New York: The New Press, 1993); Fred Whitehead, Culture Wars: Opposing Viewpoints (San Diego: Greenhaven Press, 1994); Russell Jacoby, Dogmatic Wisdom: How the Culture Wars Divert Education and Distract America (New York: Doubleday, 1994); Elaine Rapping, Media-tions: Forays into the Culture and Gender Wars (Boston: South End Press, 1994).

[iv] Patrick Buchanan, cited in Richard Bolton, Culture Wars: Documents from Recent Controversies in the Arts (New York: New Press, 1992), 32.

[v] Paul Weyrich, “Letter to Conservatives” (February 16, 1999), NationalCenter.org (accessed April 9, 2008).

[vi] “Kerry Vows to be a Uniter in NAACP Address,” CNN.com, July 16, 2004 (accessed April 15, 2008).

[vii] David Horowitz, “A Uniter, Not a Divider,” Salon Magazine (May 6, 1999) http://www.salon.com/news/feature/1999/05/06/bush/index.html (accessed April 15, 2008).

[viii] Dan Balz, “Obama Says He Can Unite U.S. More Effectively than Clinton” (Aug. 15, 2007), Washingtonpost.com (accessed April 15, 2008).

[ix] Barack Obama, “Acceptance Speech,” (Nov. 4, 2008), Washingtonpost.com (accessed Nov. 10, 2008).

[x] Earl Black and Merle Black, Divided America (New York: Simon and Schuster, 2007), 2.

[xi] Kelly Nyks, director, Split: A Divided America (2008).

[xii] Hunter, Culture Wars, 43-45.

[xiii] Bureau of Labor Statistics, American Time Use Survey (Washington, DC: U.S. Department of Labor, 2006).

[xiv] “Poverty Now Comes with a Color TV,” Christian Science Monitor (2005) http://articles.moneycentral.msn.com (accessed April 21, 2008).

[xv] Norman Herr, PhD, “Television and Health,” Sourcebook for Teaching Science. Internet reference. http://www.csun.edu/science/health/docs/tv&health.html (accessed April 21, 2008).

[xvi] Newton Minow, “Vast Wasteland Speech,” delivered to the National Association of Broadcasters (May 9, 1961), http://janda.org/b20/news%20articles/vastwastland.htm (accessed April 21, 2008).

[xvii] Christian Metz, The Imaginary Signifier: Psychoanalysis and the Cinema (Bloomington: Indiana University Press, 1982), 51.

[xviii] John Belton, American Cinema/American Culture (New York: McGraw-Hill, 1994), 17.

[xix] Ella Shohat and Robert Stam, Unthinking Eurocentrism: Multiculturalism and the Media (New York: Routledge, 1994), 103.

[xx] Jan Pieterse, White on Black: Images of Africa and Blacks in Western Popular Culture (New Haven, CT: Yale University Press, 1992), 77.

[xxi] Barsam, 42-44.

[xxii] Shohat and Stam, Unthinking Eurocentrism, 107.

[xxiii] Rosalind Krauss, Art Journal; and Jan Zita Grover, “Landscapes Ordinary and Extraordinary,” Afterimage 11, no. 5 (December 1983): 4-5.

[xxiv] Clive James, On Television (New York: Picador, 1991).

[xxv] Clive James, “Save Us from Celebrity,” The Independent (Oct. 28, 2005), http://www.independent.co.uk (accessed April 27, 2008).

[xxvi] Bob Greene, “The New Stardom Doesn’t Require Paying Any Dues,” Jewish World Review (Sept. 14, 2000), http://en.wikipedia.org/wiki/Celebrity (accessed April 27, 2008).

[xxvii] All statistics in this paragraph: David Cay Johnston, “Income Gap Is Widening, Data Shows,” The New York Times, March 29, 2007, http://www.nytimes.com/2007/03/29/business/29tax.html (accessed April 29, 2008).

[xxviii] Jacques Derrida, Of Grammatology, trans. Gayatri Chakravorty Spivak (Baltimore: Johns Hopkins University Press, 1976), 110.
