Anonymous said: Which one would win: a resilient object or a disruptive force?
Anonymous said: Can't find you on the twitterverse
I’m in the twitterverse: @johnpatleary
Dear Sir or Madam, But Most Likely Sir:
I am writing to apply for your advertised position in Social Innovation. As a Comparative Literature Ph.D., I am proficient in the fabrication of closed tautological circles of non-meaning; this makes me the ideal candidate for a job seeking “innovative teachers…for the position of lecturer in innovation.”
On the other hand, as an Assistant Professor of English, I know only too well the dangers of failing to innovate. For example, I am often forced to talk to human students who are sitting in bounded classrooms often wired for multimedia applications I am unable or simply unwilling to use. Paper books are an obsolete technology barely worthy of the word, and poetry, despite its promising shortness, takes far too long to understand. These hardships have granted me an acute understanding of the innovation deficit your department so bravely seeks to overcome.
In spite of English Literature’s disciplinary hostility to “innovation,” change agency, and both entre- and intra-preneurship, my training as a literature scholar would offer immediate benefits to your department’s offerings in Social Innovation. For example, I would be pleased to proofread your job advertisements, in order to innovate their presently sub-optimal levels of intelligibility.
The professorship is open to both distinguished practitioners, especially those with a deep understanding of social entrepreneurship, and to tenure-level scholars in fields related to social innovation, including social entrepreneurs, social intrapreneurs and, more broadly, social change makers.
“Social entrepreneurs” are not a field, as the sentence’s syntax suggests, and that final clause could be made nimbler by using the adjective “social” only once, as here: “social entrepreneurs, intrapreneurs, and change makers.” In addition, it’s not clear that “change makers” constitutes a broader category than “entrepreneurs,” yet neither is it obviously more specific. Given my exposure to creative industries like literature, I would be excited to invent more terminology to make this list of synonyms for “businessman” even longer.
But innovating new ways of saying “entrepreneur” isn’t the only thought-leadership I would exercise within the field of Innovation Studies. As thinkfluencers have argued persuasively, disruption must occur not only within fields and businesses but within institutions and organizations. My first intrapreneurial initiative, therefore, would be to fatally disrupt your (hopefully soon to be our) department. Moving our courses entirely online and replacing all department faculty other than myself with low-wage adjuncts armed with xeroxes of Joseph Schumpeter quotations would improve efficiency, reach even more students, and ultimately make a bigger difference.
To paraphrase a great disruptor: We must destroy the Professorship of Social Innovation in order to save it. I am available for immediate Skype interviews.
John Patrick Leary
Earlier this month, the University of Illinois at Urbana-Champaign took the unprecedented step of rescinding a job offer to the Palestinian-born scholar Steven Salaita, who was set to begin classes there this week. It was a unilateral move by the upper administration, apparently taken in response to a series of tweets in which Salaita condemned the Israeli bombardment of Gaza. Others have already written on the case and its implications for academic freedom—see especially Corey Robin’s blog and this op-ed by many Illinois faculty, for example. (Also check out @FakeCaryNelson on Twitter, for all the latest from a fictional version of the former advocate of academic freedom.)
In the spirit of this blog, I want to focus on the two official statements on the case from Illinois’ Chancellor, Phyllis Wise, and its Board of Trustees. As efforts at damage control, they are on the one hand singular in their ineloquence and ineptitude. Yet on the other hand they are familiar in their abuse of notions like “civility,” “debate,” and “discourse”—especially when the latter are “robust,” a keyword forthcoming on this blog.
As the militarized police occupation of Ferguson, MO, drew comparisons between the midwestern suburb and a "foreign authoritarian country," the town’s police chief affected a different sort of vocabulary in one of his press conferences. [Put aside, for a moment, the deep naivete of a writer, like this one for Vox.com, so stymied by violent repression in the United States, God’s country and freest land on earth, that he must invoke “Middle East dictatorships” as the only available comparison for the images on his TV screen.] The Ferguson PD released the name of the uniformed killer of young Mike Brown, the Boston Globe reported, after consultation with “stakeholders”:
Obviously the decision was taken at the highest levels of the local police brass; likely Missouri’s governor and the Department of Justice had a role in the decision. Nothing this police department has done yet smacks of consultation or transparency, so the likely trained recourse to the discourse of “stakeholders” is laughable here. Stakeholder, as I argued in an earlier post, is an austerity keyword that started in business schools and has migrated into the world of municipal government, non-profits, and organizations of all types. The word has financial origins, but it aims to reassure audiences that what they are witnessing is an egalitarian partnership, not a hierarchical enterprise, at work. As I wrote then:
Like other phrases derived from gambling and finance that have migrated into democratic politics—the appropriately gruesome phrase “skin in the game” comes to mind—stakeholder conflates access with rights, obscuring hierarchies of power under the veneer of cooperation.
A determined group of citizens in Ferguson seem undeceived by the laughably thin veneer of cooperation on display there, however.
“Sustainable” is an old word, which once referred negatively to an emotional burden one could endure; it also enjoyed popularity as a synonym of “provable,” in a legal sense. These now-obsolete usages gave way to the more general modern meaning, as “capable of being maintained or continued at a certain rate or level.”
For this contemporary definition the Oxford English Dictionary gives mostly economic examples, and indeed “sustainable” was until quite recently used to refer to “steady” growth, with none of the ethical or environmental meanings we now associate with the term. “The Big Three’s first-quarter production plans look more sustainable now than they did a month ago,” wrote the Wall Street Journal in 1986, referring only to car sales projections, not gas mileage or carbon footprints.
Since the turn of the last century, the word has been used to mean “capable of being maintained” with the implied adverb “environmentally.” As a marketing term [do not click on this link, I am warning you]—and it is ubiquitous as a marketing term— “sustainable” is roughly synonymous with “smart,” suggestive of technological innovation along with a sense of moral conscientiousness and forward thinking. (Moral improvement is deeply embedded in the ideology of “innovation,” as well, as we saw in that keyword essay). “Sustainable” is the cornerstone of what a wince-inducing urbanist blog calls the “New Artisan Economy”: “By producing small quantities of artisanal products in an environmentally friendly way,” this author writes, “the overall economy becomes more sustainable which is a benefit for everyone” [sic].
The contemporary ethical-conservationist meaning of the word “sustainable” tracks with the rise of the noun form “sustainability,” a word almost unknown before the 1980s. BYU’s Corpus of Historical American English, which tracks word usage in popular written media, shows no uses of the term before that decade. Google’s Ngram Viewer offers just a handful, mostly Defense Department memos and other bureaucratic documents lacking public circulation.
The coinage of “sustainability” correlates with the rise of “sustainable development,” a conservationist critique of development economics that emphasizes the frailty of nature—which the World Bank lovingly calls “natural capital.” Where mid-20th century development theory once advanced economic growth as its ideal, sustainable development offers “sustainability.” Interestingly, this move from “growth” to “sustainability” can be seen in the changes in popular uses of the word “sustainable” itself, from the Wall Street Journal’s 1986 usage to today.
The United Nations has helped define and popularize the concept in various summits and proclamations: the 1987 Brundtland Report defined “sustainable development” as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” The word came into broader circulation in the 1990s, when it was the focus of the 1992 Rio Earth Summit, which made “sustainable” a byword of developmentalist ethics and official environmental policy-making. From the Summit’s report, called Agenda 21:
Principle 1: Human beings are at the centre of concerns for sustainable development. They are entitled to a healthy and productive life in harmony with nature.[…]
Principle 8: To achieve sustainable development and a higher quality of life for all people, States should reduce and eliminate unsustainable patterns of production and consumption and promote appropriate demographic policies.
“Sustainable” has the advantage of being unambiguously good—who wants to be exhaustible?—and invitingly vague. It can accommodate Marxist critics of capitalism and neo-Malthusian doomsday cranks. Mining companies love it. And as you might expect, BP is totally committed to “sustainability,” and has a website to prove it. (In a happy coincidence, sustaining Earth’s ecology and sustaining BP’s shareholder dividends are two sides of the same sustainable coin: “The best way for BP to achieve sustainable success as a company,” their website cheers, “is to act in the long-term interests of our shareholders, our partners and society.”) This combination of ethical straightforwardness in theory—we must be responsible stewards of natural resources for future generations, yes, yes, we all agree—and subjective imprecision in practice is the source of much of its popularity, as scholars have pointed out. And then there is also the temporal lag of counter-evidence: the final proof that our current practices are in fact unsustainable will not come until after we are dead.
So “sustainability,” like “innovation,” combines literal vagueness with moral certainty. As Keith Douglass Warner and David DeCosse point out in a blog post, “sustainability,” much like “efficiency,” “does not have an intrinsic meaning.” The question, as they argue, is sustainable for whom, and for how long? One will not get a clear answer by surveying the uses of the word. Duke Energy loves to tweet about “sustainability,” as does McDonald’s; McMansions can be “stunning and sustainable.” The Sotheby’s primer on “sustainable eco-mansions” reassures buyers that “making a home sustainable is a scalable effort.” What this means is that the imprimatur of “sustainability” can be bought cheaply or dearly, as one wishes: Energy Star appliances and native-plant gardens at the low end, solar panels and reclaimed barn-lumber siding at the higher price point.
The marketing of “sustainability” exemplifies the framing of structural problems as individual ones, and of practices of citizenship as ones of consumption. Thus the inevitable “sustainability apps.” As used by the self-described “urban sustainability consultant” Warren Karlenzig, writing on the website Sustainable Cities Collective, “sustainability” is a libertarian notion of social change, but one in which the anti-social nihilism of “disruption” is softened by a green touch:
Open data will reduce urban traffic congestion: no longer must cars circle downtown blocks as real-time parking rates and open spaces become transparent. Even more sustainable are those who are deciding to telecommute or use public transit on days when they know that parking costs are spiking or when spaces are unavailable.
Built around a labored, confusing metaphor of cities as beehives, and developers and end users as “swarms” of bees, Karlenzig’s thesis is that a “sustainable” city will be spawned by technological expertise and venture capital: “Our pollen dance,” he writes, “will be our testimonials, use patterns, geo-location, and referrals.”
As a lifestyle and marketing term, “sustainable” can paradoxically express the same capitalist triumphalism—of an ever-expanding horizon of goods and services, of “growth” without consequences—that the conservationist concept was once meant to critique. “Sustainable development,” fuzzy as it is, was intended to remind us of the limited supply and unequal exploitation of natural resources. But if “sustainable” most literally means an ability to keep on doing something, its popularity as a consumerist value suggests that there is a fine line between “sustainable” and “complacent.” We can “sustain” grossly unequal cities—that is, they won’t fall apart utterly—with Lyft and Airbnb, rather than mass transit and affordable housing. For a while, anyway. Whether we will sustain our desire to live in them is another question.
In a press release announcing its acquisition of the much-loved TV comedy South Park and the yet-to-be-loved comedy The Hotwives of Orlando, Hulu trumpeted its expanding “library of exclusive, current and library content.” Hulu’s senior Vice President and “Head of Content” Craig Erwich wrote:
I could not be more thrilled to announce that we are continuing the momentum this year by bringing new seasons of our beloved Originals, as well as the premiere of our brand new title ‘The Hotwives of Orlando’ and new library deals that will make Hulu’s content offering more robust and diverse than ever before.
In volume 1 of Capital, Marx famously explained commodity fetishism under capitalism as an alienating social world in which, he wrote,
the relations connecting the labour of one individual with that of the rest appear, not as direct social relations between individuals at work, but as what they really are, material relations between persons and social relations between things.
The reversal contained in Marx’s phrase—manufactured things take on the dynamic richness of people, while people themselves are reduced to mere objects—is part of what bothers Benjamin Hart, in a perceptive article in Salon, about the use of the term “content” in the contemporary culture industry. Hart objects to the degradation of art—what a TV executive might call “quality content”—by its confusion with fluff. But Breaking Bad is a commodity, of course, a product sold to advertisers and viewers in exchange for money, no different in this fundamental respect from Who’s the Boss or The Hotwives of Orlando. The problem with “content,” therefore, runs deeper than the boundary between high art and low culture, to the privatization of the desires, knowledge, and experiences we gain from the stories we read, watch, and remember. “Content” names artistic and narrative creativity, and therefore creators themselves, as things like any other.
"Content" is ubiquitous in entertainment journalism and in industry discourse—television, film, and music industries in particular use the term regularly, while book publishers, perhaps conscious of the antiquity and prestige of their medium, seem to use it less (please correct me in the comments or on Twitter if I’m wrong here).
The rise of “content” in its current form can be traced to the broadband web. In 2000, Time Magazine reported the merger of AOL and Time Warner by explaining that the new technology of “broadband” originates in “the fat, fast pipes of cable television that could carry vast amounts of Internet content.” The anachronistic materiality of this description (the Internet as a series of tubes, or fat pipes) points out how “content” as a term underscores literary and visual media’s dissolution into digital immateriality. This is not to wring hands about the rise of e-books and small screens and the decline of print and cinema but to emphasize, rather, how digitization is an intensification of the commodification of all forms of culture.
As I found in some preliminary research on BYU’s Corpus of Historical American English, pre-2000 uses of the term “content” mostly follow the Oxford English Dictionary’s definition, even with its outdated print bias: “the things contained or treated of in a writing or document; the various subdivisions of its subject matter,” as in the table of contents.
Elaine Green, assistant principal of Detroit’s Mumford High School, told Time Magazine in June 1989 that teachers and students at her school were “pleased with the quality and content” of Channel 1, the old TV news distributed to schools that, as I recall it from my own high school days, was a beachhead in commercial advertising’s invasion of the school day. Green’s use was once typical—“content” was simply the stuff in Channel 1’s programming, not the programming itself.
The term also thrived in the 1990s in calls for government regulation of music, movies, and video games. Tipper Gore’s hilarious anecdote about her encounter with Prince’s “Darling Nikki” in 1985 and Congress’ sanction of the National Endowment for the Arts in the late 1990s focused attention on the “graphic content” of music and art.
Now, an intermission (parental guidance suggested):
Its popularity among executives, politicians, and advertisers gives “content” a drearily bureaucratic ring. In common phrases like “violent content,” “sexual content,” or “inappropriate content,” the word refers to knowledge and information that should be policed. In this context, it is a purposely bloodless euphemism for any controversial narrative, visual, or verbal elements of a work of art (and it refers, in music, only to lyrics, almost never to tone, melody, or rhythm). This usage of “content,” as the raw material by which an artistic work could be judged and condemned, without any attempt at interpretation, presaged the contemporary ubiquity of the term. Today, as Hart observes, “content” is just a “substance” made of digital words, which is how Merriam-Webster’s pleasingly cheeky definition now describes it: “the principal substance (as written matter, illustrations, or music) offered by a World Wide Web site.”
This digital substance is the basis of so-called “content farms,” websites that cheaply and quickly produce articles meant to optimize search results. Outfits like eHow.com and the defunct Associated Content have used low-paid writers (reportedly earning as little as $3.50 per story) to produce articles intended to game Google results and thereby build a stockpile of “content” used to sell targeted ads. See the example, described by Farhad Manjoo, of an Associated Content article that used the phrase “Tiger Woods mistress pictures” eight times.
Content-substance is undistinguished by genre, subject matter, level of specialization, or style. It is a marketer’s term, used to describe anything that generates views, subscriptions, or ticket sales. But its popularity is less a symptom of the fragmentation of the media market—the multiplication of genres and the web-enabled devices where we consume them—than it is of the widespread privatization and privation of the educational, editorial, and journalistic professions, which has been encouraged, but not invented, by the Internet. The stories of journalism school graduates and newsroom veterans, like one laid-off Miami Herald reporter who turned to content farms to make ends meet in something resembling their chosen profession, are distressing cases in point.
"Content" in educational reform discourse refers to everything that is contained in a curriculum; it’s a usage that reflects the uniformity that reform critics like Diane Ravitch have criticized in the push for school “accountability." California’s Common Core standards informational sheet, for example, refers throughout to “content areas”—what I might call a discipline or a subject, like history or math. And “content standards” are “curricular and instructional strategies that best deliver the content to their students.” The implicitly quantitative presumptions that “content” reveals here—curricular knowledge that can be measured, repeated, and reliably delivered—is especially clear in the popularity of the construction “content delivery,” beloved by media managers and tech firms.
As things we are accustomed to thinking of as “culture” that we care about—novels, cinema, “prestige” television shows, investigative journalism, Purple Rain—are understood ever more bluntly by advertisers, producers, and others as mere commodities to be “delivered” to buyers or policed for their most literal meaning, what are more obviously “mere” commodities—brand names, commercials, and other objects sold by commercials—are imbued with the aesthetic character of the work we care about. Thus, advertising site Contently.org (get it?) on “brand storytelling,” and the American Marketing Association’s seminar on “how to map content to personas and stages of the buyers’ journey.” Here, “content” is treated as a direct path to the dreams, aspirations, doubts, and fears of individuals.
Borrowing a New Agey vernacular of life as a “journey,” a consumer’s potential purchase of a brand-new plasma TV or season 6 of Who’s the Boss? on DVD is graced with spiritual consequences. New Age spiritualism was its own kind of commodified spirituality, of course, making the brave new world of content ownership and ownership “journeys” an alienation of the second order.