Pedagogy and the Logic of Platforms

Chris Gilliard

Originally published on July 3, 2017

In 1974, computers were oppressive devices in far-off air-conditioned places. Now you can be oppressed by computers in your own living room.

—Theodor Holm “Ted” Nelson, Computer Lib/Dream Machines (1987)

In his initial New Horizons column in EDUCAUSE Review, Mike Caulfield asked: “Can Higher Education Save the Web?”[1] I was intrigued by this question since I often say to my students that the web is broken and that the ideal thing to do (although quite unrealistic) would be to tear it down and start from scratch.

I call the web “broken” because its primary architecture is based on what Harvard Business School Professor Shoshana Zuboff calls “surveillance capitalism,” a “form of information capitalism [that] aims to predict and modify human behavior as a means to produce revenue and market control.”[2] Web 2.0—the web of platforms, personalization, clickbait, and filter bubbles—is the only web most students know. That web exists by extracting individuals’ data through persistent surveillance, data mining, tracking, and browser fingerprinting[3] and then seeking new and “innovative” ways to monetize that data. As platforms and advertisers seek to perfect these strategies, colleges and universities rush to mimic those strategies in order to improve retention.[4]
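
To make that data extraction concrete for students, it helps to show how little code a crude browser fingerprint requires. The sketch below (in TypeScript, and purely illustrative rather than any particular tracker’s code) hashes a handful of attributes that every browser exposes into a stable identifier, leaving no cookie for a user to find or delete; production scripts fold in dozens more signals, such as canvas rendering, installed fonts, and audio processing.

// Illustrative sketch only: a naive browser fingerprint built from
// attributes any page script can read, with no permission prompt.
async function naiveFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                      // browser and OS
    navigator.language,                                       // preferred language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display shape
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // rough location
    String(navigator.hardwareConcurrency),                    // CPU core count
  ].join("||");

  // Hash the combined attributes into an identifier that stays stable
  // across visits without storing anything on the student's device.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

naiveFingerprint().then((id) => console.log("fingerprint:", id));

Two students visiting the same page from their own laptops will typically hash to different identifiers; that uniqueness, not a login or a cookie, is what gets joined to an advertising profile.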

That said, I admit it might be useful to search for a more suitable term than “broken.” The web is not broken in this regard: a web based on surveillance, personalization, and monetization works perfectly well for particular constituencies, but it doesn’t work quite as well for persons of color, lower-income students, and people who have been walled off from information or opportunities because of the ways they are categorized according to opaque algorithms.

My students and I frame the realities of the current web in the context of digital redlining, which provides the basis for understanding how and why the web works the way it does and for whom. The concept of digital redlining springs from an understanding of the historical policy of redlining: “The practice of denying or limiting financial services to certain neighborhoods based on racial or ethnic composition without regard to the residents’ qualifications or creditworthiness. The term ‘redlining’ refers to the practice of using a red line on a map to delineate the area where financial institutions would not invest.”[5]

In the United States, redlining began informally but was institutionalized in the National Housing Act of 1934. At the behest of the Federal Home Loan Bank Board, the Home Owners’ Loan Corporation (HOLC) created maps of America’s largest cities and color-coded the areas where loans would be differentially available. The difference among these areas was race.

Digital redlining is the modern equivalent of this historical form of societal division; it is the creation and maintenance of technological policies, practices, pedagogy, and investment decisions that enforce class boundaries and discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the “doing” of difference, a “doing” whose consequences reinforce existing class structures. In one era, redlining created differences in physical access to schools, libraries, and home ownership. In my classes, we work to recognize how digital redlining is integrated into technologies, and especially education technologies, and is producing similar kinds of discriminatory results.

We might think about digital redlining as the process by which different schools get differential journal access. If one of the problems of the web as we know it now is access to quality information, digital redlining is the process by which so much of that quality information is locked behind paywalls that prevent students (and learners of all kinds) from reaching it. We might think about digital redlining as the level of surveillance students are subjected to (in the form of analytics that predict grades or programs that suggest majors). We might also think about digital redlining in the way that students who perform Google searches receive different information depending on the machine they are using, or are served ads for high-interest loans based on their digital profiles (a practice Google now bans). It’s essential to note that the personalized nature of the web often dictates what kind of information students get both inside and outside the classroom. A Data & Society Research Institute study makes this clear: “In an age of smartphones and social media, young people don’t follow the news as much as it follows them. News consumption is often a byproduct of spending time on social media platforms. When it comes to getting news content, Facebook, Twitter, Instagram and native apps like the Apple news app are currently the most common places where the teens and young adults in our focus groups encounter news.”[6]

Students are often surprised (and even angered) to learn the degree to which they are digitally redlined, surveilled, and profiled on the web and to find out that educational systems are looking to replicate many of those worst practices in the name of “efficiency,” “engagement,” or “improved outcomes.” Students don’t know any other web—or, for that matter, have any notion of a web that would be different from the one we have now. Many teachers have at least heard about a web that didn’t spy on users, a web that was (theoretically at least) about connecting not through platforms but through interfaces that gave individuals significant choice in how the web looked and what was shared. A big part of the teaching that I do is to tell students: “It’s not supposed to be like this” or “It doesn’t have to be like this.” The web is fraught with recommender engines and analytics. Colleges and universities buy information on prospective students, and institutions profile students through social media accounts.[7] Prospective employers do the same. When students find out about microtargeting, social media “filter bubbles,” surveillance capitalism, facial recognition, and black-box algorithms making decisions about their future—and learn that because so much targeting is based on economics and race, it will disproportionately affect them—their concept of what the web is changes.

Another aspect of my teaching is rethinking the notion of “consent.” It’s important to ask: What would the web look like if surveillance capitalism, information asymmetry, and digital redlining were not at the root of most of what students do online? We don’t know the answer. But if higher education is to “save the web,” we need to let students envision that something else is possible, and we need to enact those practices in classrooms. To do that, we need to understand “consent” to mean more than “click here if you agree to these terms.”

I often wonder if it’s possible to have this discussion without engaging in a deep and ahistorical nostalgia. Telling students about the “good old days” of hand coding and dial-up internet access probably isn’t the best way to spend classroom time. However, when we use the web now, when we use it with students, and when we ask students to engage online, we must always ask: What are we signing them up for? (Ultimately, we must get them to ask that question themselves and take it with them.) Here the term “consent,” often overused and misunderstood, must be grounded in the recognition that students are entering into an asymmetrical relationship with platforms, and that we must do all we can to make that asymmetry plain.

While we can do our best to inform students, the black box nature of the web means that we can never definitively say to them: “This is what you are going to be a part of.” The fact that the web functions the way it does is illustrative of the tremendously powerful economic forces that structure it. Technology platforms (e.g., Facebook and Twitter) and education technologies (e.g., the learning management system) exist to capture and monetize data. Using higher education to “save the web” means leveraging the classroom to make visible the effects of surveillance capitalism. It means more clearly defining and empowering the notion of consent. Most of all, it means envisioning, with students, new ways to exist online.

Notes

  1. Michael Caulfield, “Can Higher Education Save the Web?,” EDUCAUSE Review 52, no. 1 (January/February 2017).
  2. Shoshana Zuboff, “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization,” Journal of Information Technology 30, no. 1 (March 2015): 75.
  3. Dan Goodin, “Now Sites Can Fingerprint You Online Even When You Use Multiple Browsers,” Ars Technica, February 13, 2017.
  4. Sarah Brown, “Where Every Student Is a Potential Data Point,” Chronicle of Higher Education, April 9, 2017.
  5. “1934–1968: FHA Mortgage Insurance Requirements Utilize Redlining,” Fair Housing Center of Greater Boston website, accessed April 21, 2017.
  6. Mary Madden, Amanda Lenhart, and Claire Fontaine, How Youth Navigate the News Landscape, Data & Society Recent Qualitative Research (Miami: John S. and James L. Knight Foundation, 2017), 20.
  7. Natasha Singer, “They Loved Your G.P.A. Then They Saw Your Tweets,” New York Times, November 9, 2013.

About the Author

Dr. Chris Gilliard is a writer, professor, and speaker. His scholarship concentrates on digital privacy and the intersections of race, class, and technology. He is an advocate for critical and equity-focused approaches to technology in education. His work has been featured in The Chronicle of Higher Education, EDUCAUSE Review, Fast Company, Vice, and Real Life Magazine.

Other works:

Friction-Free Racism

Caught in the Spotlight


License


Pedagogy and the Logic of Platforms Copyright © 2020 by Chris Gilliard is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
