
Teaching Subversion: A Response to the “Crisis of the Humanities”

Having recently finished a PhD in American cultural and intellectual history, I’ve heard one question often enough to wish I had another dissertation to write: “What next?”

I’ve started to think this is a euphemism for “we’re sorry.” If it does signify an apology, it would be better if it didn’t require me to mutter something about “still figuring it out” while my eyes wander aimlessly. It seems I have no answer to give that can satisfy anyone. Jobs are scarce, and no, I don’t have one lined up yet. My new credential wraps a hollow man in a hollow prestige. Is that what you wanted to hear?

Academics know better than most of my relatives that the writing is on the wall.  An excess of PhDs—particularly in the humanities—has created a buyer’s market in which the vast majority will have trouble rising above the ranks of exploited adjunct instructors, assuming they wish to stay in academia. According to the furthest-seeing analysts, economic and political realities have conspired against us. Our prospects of finding gainful employment as tenure-track professors are considerably dimmer than they might have been for generations past, and this makes graduate school—and the humanities more generally—an increasingly fruitless endeavor.

That’s the story everyone knows, and many of us, grad students and under-employed PhDs alike, are depressed because we really wanted those jobs. Preferably at a good research university, we tell each other over drinks we can’t afford, hoping no one at the table will actually land one. Of course we’ll be happy for them if they do. But the realities of scarcity, bristling competition, and likely failure stiffen the smiles we exchange when we’re not panicking together.

It’s easy to blame vast structures of power and privilege for our current predicament. But when we accept the instrumental logic that equates the value of liberal education only with tangible outcomes, career tracks, and “results,” we effectively further our own demise. You can see this demise etched on the faces of many tenured humanities professors: that vacant, plodding look of resignation; the sense that nothing can be done because this is the only way it’s ever been done before.

It has become fashionable to refer to this general malaise as the crisis of the humanities. What it really signifies is a failure of nerve.

It’s important to remind ourselves of the precise contours of this failure. After all, much of the so-called “crisis” consists of middle-class status anxiety. Within one of America’s formerly prized institutional footholds, people who insist on working with their minds are feeling unusually hard-pressed. Our plight is no doubt connected to broader changes in the global information economy. Yet most contemporary marginalized groups are in considerably worse shape than we are. We alone can greet the prospects of precarity as a theoretical problem.

For this reason, graduate students, adjunct instructors, junior professors and other beleaguered types (tenured faculty, you are invited to apply) might welcome the opportunity to rethink our roles as sources of insight and agents of change. This may well mean reevaluating our “work,” our “interests,” and our motives for doing the kind of work we do. Every kind of reform could be on the table. Out of respect for those less fortunate, we might even stop deferring to the fear of lost prestige. By radically reconsidering our vantage point in a faltering American society, we might rediscover what it means to be pragmatists and idealists, experimenters with conviction.

Yet why do I feel phony as I write these lines? Why am I sure anyone reading this will know I’m a fraud?

We live at a fascinating juncture. In the age of hyper-niches, billowing economic pressures, and Google, neoliberalism reigns supreme. Few question the broad sweep of progress that is embodied in the consumer marketplaces of the Internet—I say this having just bought toiletries on Amazon—yet many worry about what the future holds.

Throughout my time in graduate school, and in college before that, I’ve tried mightily to be the change I wish to see in the world. But I keep falling short when I see a Starbucks and smell their coffee. It is in this light that I want to confess my pretensions to “teaching subversion.” I am a fraud to the extent that I am complicit in much of what I criticize. Yet my response to the crisis of the humanities is intimately tied to this personal-political conflict.

For the past decade, my commitment to subverting the current drift of American culture has existed in tandem with my deep frustrations with academia. I have taken myself far too seriously, but I have also aimed for high ideals. It’s this effort, I think, this self-conscious striving and failing, that constitutes the real heart of what the humanities are about. Instead of worrying that the sky is falling, why not try to convey this idea in as many ways as we can?

I should confess that for the past six years I have enjoyed a comfortable teaching position at the good research university where I recently completed my graduate work. Without anything approaching long-term job security, I’ve been privileged to teach for a writing program that allows me to try out more or less anything I want with my students (mostly 18- and 19-year-olds from upper-middle-class backgrounds). Every undergraduate is required to take an introductory writing course, and because we’re in America they have plenty of options to choose from. Graduate students in English, History, Philosophy, Psychology, Visual and Cultural Studies, Brain and Cognitive Science, and other departments design individual courses according to a loosely standardized format. But beyond that, every writing instructor is basically autonomous, and every course is capped at fifteen students. Not bad, as far as teaching goes. The possibilities are very nearly endless.

For personal as well as political reasons, I’ve framed my classes as laboratories of social, cultural, and self-analysis. The culmination of this approach was a course I taught for two years called “In the Shadow of Facebook: The Internet and American Culture.” In the course description I told prospective students that in 2010 Time magazine named Mark Zuckerberg Person of the Year, calling Facebook “both indispensable and sometimes a little scary.” Most of us use Facebook without asking how and why it came about, or what it tells us about American culture. But in this class “we bring these questions out of the shadows by examining our own ideas, attitudes, and assumptions through writing.” 

Once willing student-participants signed up for my class/experiment, I encouraged them to view both Facebook and the Internet as living cultural artifacts. I said I wanted them to learn to look at these ubiquitous social networks the way an anthropologist might look at a tool from a foreign culture, which meant learning to ask anthropological questions.

I am inept with most digital devices, and fortunately this incapacity fit naturally with my teaching persona and bluster. During each semester I insisted that students print and bring hard copies of all readings, and with few exceptions I prohibited the presence of cell phones or laptops in class. In the syllabus I told them, “this is not your average lecture class in which you can safely surf the web and count on your professor to consider you a passive cog in Mark Zuckerberg’s beautiful machine.” On the contrary, I read out loud, delighting in the sound of my own voice, “I will treat you as a live, interesting human being, and I expect you to respond in kind.”

A central humanistic question is how we know what we think we know, and what consequences follow from our supposed knowledge. By questioning the things about which we feel most confident, we learn that all confidence rests on a thin artifice. In a classroom setting, students often balk at this discovery process. But if you can keep them engaged long enough, they often arrive at the critical turning point.

In the Facebook class I tried to bring them along both overtly and covertly. Through focused readings on topics ranging from the uses of social media, to the neuroscience of distraction, to the political prospects of a more connected world, I pursued my quietly subversive agenda. I asked students to build on their own experiences and begin to recognize their place in a system of cultural practices that has its own history.

By drawing attention to these contingencies, my goal was to make it more difficult for them to simply assent to the pressures of their culture without first reflecting, analyzing, and choosing. In effect, I tried to get them to see what critical thinking has to offer without telling them what it means.

While analyzing the Internet as a historical phenomenon, I looked for ways to get students to question reality—to invite them to consider contemporary technology as a form of hegemony that percolates through and influences nearly every aspect of their daily lives. On the first day of class they received a writing prompt that asked them to reflect on their relationship to “digital culture.” I told them I was interested in how they felt about their uses of technology, whether they wanted to focus on specific examples such as Facebook, Twitter, iPhone apps, or particular TV shows, or on more general media like the computer and the television. The distinction between form and content was new to most students, and it set the stage for the kinds of analysis we would do throughout the semester.

Some students’ first-day essays revealed interesting anxieties. I always asked that they hand-write these in class to encourage honesty and freedom of expression; and by and large this strategy seemed to work. One student a few semesters back noted that “the Internet has engraved itself within our personal identities and the fabric of our society so deeply that if we were to lose the Internet even for several hours, widespread panic would ensue.” Others reflected on the themes of addiction and technological dependence in more personal terms, but the vast majority of students tended to represent their relationships to digital culture in starkly sanguine terms.

The visions of progress I saw in their writing often mirrored the psychological atmosphere of an Apple Store, complete with high-pitched declarations of efficiency and paeans to the gospels of convenience. Yet even in the most optimistic essays there were often shades of perhaps unconscious concern. Notions of necessity and obligation colored otherwise positive depictions of computers and the Internet, and the tension between embrace and unease was writ large in nearly every paragraph. During the first few weeks of class I tried to build on these aspects of their experience to prepare them for where we were going. My ultimate intent was to expose progress as a mirage, efficiency as a sham, and convenience as a choice with a cultural history.

When we got to Wendell Berry’s 1987 essay, “Why I Am Not Going to Buy a Computer,” many students were incensed, which I took to be a sign that we were getting somewhere. Berry despises what he calls “technological fundamentalism,” the tendency to assume by virtue of unconscious indoctrination that everything new is good. We hear the voices of this fundamentalism all around us, Berry argues. It leads to a sickening superciliousness whereby everything old appears outdated and subject to revision. What about sunlight, pen and paper, and the standard model Royal typewriter he bought in 1956! the famed Kentucky agrarian-poet cries out. What about the sanctity of face-to-face relationships with real human beings and the glorious tradition of writing by hand?

At the end of his essay, Berry offers that “when somebody has used a computer to write work that is demonstrably better than Dante’s, and when this better is demonstrably attributable to the use of a computer, then I will speak of computers with a more respectful tone of voice, though I still will not buy one.”

Berry’s polemic is rhetorically charged, and it succeeded at making plenty of enemies. When Harper’s reprinted the essay, it generated several heated responses, which the magazine printed, perhaps to highlight the fury of computer users at the time. Berry is a hypocrite, most argued. He should recognize the wonderful new vistas digital technologies make possible and stop wasting everyone’s time with his crusty, quasi-Luddite critiques.

To the magazine’s delight, Berry responded in kind. He admitted being a hypocrite. The problem of being “a person of this century,” he wrote, is that there is no way not to be a hypocrite. We are all plugged into the energy corporations, and most of us guzzle petroleum products in our homes and on the roads outside like there’s no tomorrow. In the spirit of the social critic Paul Goodman, all we can do is choose where to draw the line and try to stick to it.

This is a sentiment I have long hitched my wagon to—yet to most of my students Berry’s argument still smacks of hypocrisy, and also (more tellingly) of impossibility. When pressed, they refused to accept Berry’s critique of technological fundamentalism because they couldn’t help but see Facebook, Google, and Twitter as necessary innovations. Berry is too closed-minded, they charged. Too old. Too traditional. One student reported that we could safely disregard his essay because there is a website called WendellBerryBooks.com.

On one level, this kind of resistance is understandable. Given the cultural expectations that inform their specific objections, what choice do contemporary students have but to try to wring Berry’s neck? None of them could get their schoolwork done without computers. And social life would be unimaginable without Facebook. To preserve their sense of self and how the world works, then, it seems they have to argue against Berry for existential reasons; they have to squash his old-fashioned moralism because they feel him breathing down their necks.

It’s true that Berry was writing before the rise of the Internet, and that he had no idea how significant computers would soon become. But even a dated social critique like his still has value because it points beyond certain boundaries of thought and behavior: boundaries which perhaps many of us have come to accept as given. In essence, I tried to convey that Berry’s essay helps highlight the role of contingency, and the historicity of all values, including our own.

Whether this message resonated with students is hard to tell. They often grew distant when I questioned their fundamental assumptions about how the world works. But provocation has a great deal of pedagogical utility. If students are defensive, they’re motivated. From motivation comes all else.

The value of provocation was especially evident when we got to the topic of online dating. For this class I assigned a New Yorker article from 2011, Nick Paumgarten’s “Looking for Someone: sex, love, and loneliness on the Internet.” I began by asking students to free-write about the pros and cons of online dating, whether they thought there was a stigma against it, and whether they would ever consider trying it, either now or in the future. When we moved to large-group discussion, students were typically reluctant to share their personal views. A few always chimed in with strong endorsements in keeping with earlier defenses of progress; but the question of individual agency looms large when what is at stake is one’s own romantic future.

Here I felt I had them. What online dating is all about, I found a way to suggest as subtly as possible, is the principle of scientific management. We are all familiar with how this works in practice. When we find ourselves in the toothpaste aisle at the grocery store, we know that the available brands and accompanying brushes have all been vetted by multiple scientific experts. This same knowledge applies to every consumer product: to cars, televisions, and of course our personal computers. To live in the modern world, it seems we have to learn to depend on experts and the principle of scientific management. Otherwise we’ll be left behind in a fog of bad smells and other inefficiencies.

But at what cost? Where do we draw the line? At what point do we stop deferring to scientists, or the Harvard math majors who run OkCupid, and bypass their unimpeachably useful index of algorithms? At what point do we favor serendipity, chance, and the potential to dictate one’s own fate without balking at the risks of failure?

To dramatize the stakes of this cultural and historical dilemma, I liked to pose the following thought experiment. I told my students to imagine that at some time in the not-too-distant future a new online service has been developed. If you choose to use it, the service guarantees you a detailed report of how and when you will meet each of your romantic partners for the rest of your life. Names, dates, descriptions of physical proportions and breakups—everything is there; and upon reading the report your fate is sealed. It is up to you whether or not to use this service. But the technology is available. The algorithm has been perfected. Instead of going through the messy, haphazard process of sorting your way through lived experience, going down this blind path with one person, going down that blind path with another, you can have complete and total certainty. There is no longer any margin of error.

After presenting this scenario in the eeriest tone I could muster, I asked for a show of hands: how many of them would use such a service? Hesitantly, a few arms went up, then down. Guarded debate ensued. We usually had one of our best discussions of the semester because everyone was so engaged—perhaps even terrified. They seemed to feel the full weight of the dilemma in personal terms, not least because they wanted to resolve it for themselves.

Not to get a better grade. Not to achieve better job prospects. But because some dent in their thinking had been made.

This is one example of what can be done, yet the possibilities are endless. Instead of hanging our heads in shame, people who care about ideas ought to get to work. With care, with conviction, and with lofty goals in mind; but also with some faith that the humanities will be worth saving if we have the courage to save them in new ways.

After spending the summer of 2016 wandering through several alternatives to academia, Michael Fisher has joined the Millennium School, a new independent middle school in San Francisco dedicated to broadening traditional definitions of success.
