A Reflection on the Ways we Train our Brains: An Ode to our Over-fondness for Data
We live in a complex world, made up of complex systems we often don’t understand (and sometimes cannot even fathom). The complex systems that baffle us aren’t just the global ones like space exploration, global economic dynamics, weather science, or our city’s trash and recycling programs. We are equally baffled by more ‘local’ systems like the digital parking meter, the wifi router in our home, the smoke alarm in our house (How can I turn it off?! What is it trying to tell me?! Why won’t it work?! Did it work? Did it store my information for later? Did it put me at risk?). As the line between complex global and local collapses, our bafflement has deep consequences. As Donella Meadows’s “Thinking in Systems” tells us,
“there are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion — the questions we want to ask.”
The complexity is dizzying, and the expectation that we be able to use these digital, well-intentioned, ‘intuitive’ interfaces and affordances without hesitation, fear, or difficulty is only increasing. We are an optimistic bunch though… we believe the next iPhone or Samsung Galaxy (or the not-yet-invented-thing) will make sense of online banking and password storage, FINALLY. The next new thing will help users understand their privacy rights and risks in a way that gives them agency and puts them in charge! The next technological innovation will fundamentally transform education, business, and life! <sigh> It’s exhausting. It’s elusive. It’s always just-out-of-reach (hint: we keep moving the goalpost, thus perpetually making it out of reach). We keep running and we call it innovation and progress. As Shawn Achor says, “we’ve pushed happiness [success, completion] over the cognitive horizon,” so it will always be just out of reach. Why are we running? What are we running from? What are we running toward?
The knowable as Hot Cocoa (mm mm Predictability)
We like certainty, completeness, simplicity; we like knowing. We approach the world with a transactional, checkbook-accounting expectation — if we document our inputs and note our outputs, it should all reconcile neatly in the end. If I eat a healthy diet, if I exercise, if I sleep well, if I do all the things the doctor recommended, then I should know my outcome will be good health. If I memorize all the things the professor said, then I should get a good score on the exam.
As a result, our institutions have adapted to contend with this fundamentally flawed, oversimplified causal thinking. For example, medicine contends with a patient who thinks a visit to the doctor should result in a tangible something, a new prescription, a referral, a plan; and education contends with a student who thinks that if she does everything the syllabus says, then she deserves an A (she’s done her part of the contract, she has learned).
The Causal Mind
Our causal mind kicks in and we become entitled to B simply because we have done A. We struggle with the breakdown in this causal thinking — how can it possibly be the case that what I expected (what I was promised!) didn’t happen?! (When Bad Things Happen to Good People). Clearly, if I do A, it should result in B! How else can we know what to expect in the world? How else can we be sentient beings in an understandable universe? And if B doesn’t result from A, then it must mean we just didn’t measure right — we didn’t collect enough data (when the algorithm fails we say it just needs to learn more — we grant algorithms more fallibility than we do people; for example, self-driving cars). But once you’ve built it, you’re invested in it.
You can’t just set up an elaborate surveillance infrastructure and then decide to ignore it. These data pipelines take on an institutional life of their own, and it doesn’t help that people speak of the “data driven organization” with the same religious fervor as a “Christ-centered life”.
The data mindset is good for some questions, but completely inadequate for others. But try arguing that with someone who insists on seeing the numbers. The promise is that enough data will give you insight. Retain data indefinitely, maybe waterboard it a little, and it will spill all its secrets.
There’s a little bit of a con going on here. On the data side, they tell you to collect all the data you can, because they have magic algorithms to help you make sense of it. On the algorithms side, where I live, they tell us not to worry too much about our models, because they have magical data. We can train on it without caring how the process works. The data collectors put their faith in the algorithms, and the programmers put their faith in the data.
At no point in this process is there any understanding, or wisdom. There’s not even domain knowledge. Data science is the universal answer, no matter the question. Maciej Ceglowski “Haunted by Data” (8:20)
We behave as though, with enough data, all is knowable, measurable, and predictable. And that foundation is beginning to crack. The cracks can be seen in every domain, especially, perhaps, in education…
These are the tools of accountants and have nothing to do with larger visions or questions about what matters as part of a university education. The overreliance on metrics and measurement has become a tool used to remove questions of responsibility, morality, and justice from the language and policies of education. I believe the neoliberal toolkit as you put it is part of the discourse of civic illiteracy that now runs rampant in higher educational research, a kind of mind-numbing investment in a metric-based culture that kills the imagination and wages an assault on what it means to be critical, thoughtful, daring, and willing to take risks. Metrics in the service of an audit culture has become the new face of a culture of positivism, a kind of empirical-based panopticon that turns ideas into numbers and the creative impulse into ashes. Large scale assessments and quantitative data are the driving mechanisms in which everything is absorbed into the culture of business. The distinction between information and knowledge has become irrelevant in this model and anything that cannot be captured by numbers is treated with disdain. In this new audit panopticon, the only knowledge that matters is that which can be measured. Henry Giroux “The Language of Neoliberal Education”
When information is more important than knowledge, and certainty and measurability are more important than thoughtfulness, risk, wonder, exploration and discovery, what do we lose? What are we relinquishing? If to value something we have to be able to measure it and vice versa, what are we overlooking and missing? What is the byproduct, the sawdust or waste that is created by our need to have neat, simple, exact corners (in education, in business, and beyond)?
When measurability is success, it becomes an end in itself. We begin asking questions that lead us to measurable answers. We begin measuring those things that are easily measured. And those are not neutral acts. We act on our measurements — data becomes the tea leaves for decision-making, the map for change, the path toward advancement (data-driven decision-making in Education). And we feel a sense of comfort having followed the directions given to us from the disembodied data.
What you really want is to be reality-driven, where data is often an idealistic proxy for reality… building up highly quantified evaluation methods that are powered almost entirely by subjective, qualitative assessments of data. You’re not eliminating subjective decision making — you’re just obfuscating [sic] it behind a layer of numbers that make everything feel less random. Nate Sullivan “Why OKRs Kind of Suck”
The key here is that it “feels” less random; the reality is that it often isn’t. But this absolves us of the feeling that we are making decisions that aren’t justified, validated, or warranted by some higher power — in this case the power of data collection and its revelations.
Pay Attention to What Is Important, Not Just What Is Quantifiable. Our culture, obsessed with numbers, has given us the idea that what we can measure is more important than what we can’t measure. Think about that for a minute. It means that we make quantity more important than quality. If quantity forms the goals of our feedback loops, if quantity is the center of our attention and language and institutions, if we motivate ourselves, rate ourselves, and reward ourselves on our ability to produce quantity, then quantity will be the result. Donella Meadows, Thinking in Systems: A Primer
And so we reflect this longing to have a sense or feeling of less randomness in our activities. We develop frameworks, guides, rubrics, and how-tos so we can show that the outcomes we achieved are reproducible — a kind of ‘don’t worry, it’s science,’ but it’s not. OKRs (Objectives and Key Results), KPIs (Key Performance Indicators), syllabi, and rubrics are all merely symptoms of this eagerness to standardize through form. They contain just enough of the flavour of a science-like framework. They lay out the path with authority, a sense of certainty, and confidence that we admire (perhaps even as much as we admire someone who says totally untrue statements with confidence). “Even if you have great ideas, nobody will listen to them if you sound like a wimp when you open your mouth. By contrast, even mediocre ideas seem profound when spoken with confidence” (Geoffrey James “How to Sound Confident Even If You’re Not”). Indeed. We grant data megaphone-volume certainty and we defer to it happily.
Nothing is neutral
We may say we aren’t political, that we aren’t activists, and those things can all be true. What is also true is that nothing we do or say is neutral. All ideas have social locations. We are all embedded in our experiences and our context. And we are making choices, informed by some more culturally-acceptable measures like data or experience, but also soaked in bias, whim, gut, impulse, and other culturally-unacceptable un-measurables.
OKRs, KPIs, rubrics, and syllabi are all created by people in a context, a “situated” space (a concept we see in basically all of Feminist Epistemology). And the outcomes, the products are the reflections of how structured or how loose the environment is where they were created. If the culture of the institution, organization, or department does not allow for experimentation or innovation, the KPIs, OKRs, and rubrics will reflect that — they will be risk-averse, tight, unambitious, unimaginative. The ‘what’ will reflect the ‘who’: the product will reflect the group of people (power structure, disparities of experience, pay, investment, biases, backgrounds, etc.). Nothing is neutral and the context always matters. This is form and function. We obsess about their relationship when it comes to interface design, even building design. But do we really grok how tightly form and function influence thought? How inclusive is the space, the culture, the environment where the ideas are created? How empowered are the people to share their ideas? Does critique happen at all and if so how is it done?
Fear of Relativism
Our fondness for a positivist, data-driven life, where we make decisions based on the measures, does violence to the ‘who’ of this form and function relationship. We put a value on what we measure, but this raises questions: do we measure the right things? Are we measuring in a way that makes sense? Is it possible to be unbiased in measuring? Is it possible to get pure, objective data? And do we measure what matters? We measure what we value, but are we valuing the right things (retention versus love of learning, for example)? Who determines the questions to ask to collect the data? How is the question articulated? We use these numbers to make decisions, so we should have really good, clear answers to all of these questions, but we do not.
Instead, all we have achieved here is a troubling loop of self-reassurance: we measure the easily measured because it is measurable, and we need it to be measurable because we are making decisions based on those measures and calling them reason-driven — when we know that we measure what we value (the measurable) and that how we measure it is our bias (our “objective” methods).
And this troubling loop is present in every domain. I believe we do this because things that are neat and knowable make us feel as though we aren’t succumbing to some of the follies of being human: being inconsistent, hypocritical, error-prone, mistaken, wrong, etc. The way we try to protect ourselves from our own follies (or from critique) is to make rules, guides, templates, rubrics, KPIs, OKRs, plans, lists — and we call it logic and agree it makes sense. Hobgoblin!
And I think what drives this is a fundamental fear of relativism. We are terrified that if we abandon the primacy of strict structures (built on wobbly assumptions, human error, and biases) we will fall into a pit of relativism. How silly! This is a slippery-slope argument we need not accept. We can, and should, however, confront with fresh critique the wobbly foundations we rely on. It just requires more thinking, more collaborating with people who hold diverse ideas, and abandoning our very comfy sense of completeness.
Relativism usually stems from the well-meaning principles of tolerance and diversity… The philosophical debate rolls on, but for our practical purposes relativism is a dead end. If people can wriggle out of moral judgment by claiming their actions are culturally acceptable, morality itself becomes a questionable concept. Ad absurdum, if goodness is in the eye of the beholder, slave owners get to decide whether slavery is ethical. To make any kind of moral progress we need to be able to draw a line between acceptable and unacceptable behaviour. Fortunately, most cultures do agree on major rights and wrongs, such as murder and adultery. Forty-eight nations found enough common ground to encode basic moral principles into the Universal Declaration of Human Rights. Cennydd Bowles “Future Ethics”
When did we decide it had to be one or the other? When did we decide that relinquishing decision-making to purely technical mechanisms was preferable? Why does including human judgment seem to invalidate the decision? Why can’t we use both: the human gut and the algorithmic?
A human decision will sometimes be preferable to a skewed algorithm: the more serious the implications of bias, the stronger the case for human involvement. But we shouldn’t assume humans will always be more just. Like algorithms, humans are products of their cultures and environments, and can be alarmingly biased…
To truly address implicit bias we must consider it a human problem as well as a technical one. Cennydd Bowles “Future Ethics”
This is not easy work. To blend algorithm and human we need some guidance, some tolerance for error, transparent action, clear intentions, diverse perspectives and more. Let’s begin by agreeing that our “moral imagination should involve emotion, not just logic” (Cennydd Bowles “Future Ethics”). And that itself will be a shocking statement for many.
The alternative, the way we do things now, is simply not tenable. We cannot unsee the harm our data-driven decisions do.
This is how we do things now:
Start with a brief that explains our vision + purpose.
Define the outcomes we hope to see.
Point to a method to apply to achieve results.
Point to a method to measure success.
— — — — — —
Restate the vision and purpose.
Show the outcomes have occurred.
Show we’ve used the method.
Show measures of success.
And the rest is variance…
How orderly, how exact, how… transactional.
But this is the world we actually live, work, grow, and learn in. It is chaotic, dimensional, complex, intersectional, cyclical, global. It is unpredictable, dynamic, emerging, and adapting.
When we let go of our notion of clarity and completeness, we see (and think) differently.
There’s something within the human mind that is attracted to straight lines and not curves, to whole numbers and not fractions, to uniformity and not diversity, and to certainties and not mystery…We can, and some of us do, celebrate and encourage self-organization, disorder, variety, and diversity. Some of us even make a moral code of doing so, as Aldo Leopold did with his land ethic: “A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise” (Donella Meadows “Thinking in Systems”)
Jess Mitchell is Senior Manager, Research + Design at the Inclusive Design Research Centre at OCAD University. Her work focuses on fostering innovation and inclusion within diverse communities while achieving outcomes that benefit everyone.
She applies this inclusive and broad perspective along with extensive experience managing large-scale international projects, focused organizational initiatives, and everything in between. Her work spans numerous sectors and fields, alongside decades of experience in Education.
With a background in Ethics, Jess delivers a unique perspective on messy and complex contexts that helps organizations and individuals navigate a productive way forward.
The Tyranny of “Clear” Thinking by Jess Mitchell is published under a Creative Commons Attribution 4.0 International License.