When Science Becomes Biased
Bob Zeidman
There’s a disturbing trend in research these days: so-called computer scientists inject their personal beliefs into their work. They do this by “correcting” the math to reach the results they want.
They call it “unbiasing,” but it’s actually just the opposite. This work affects every person in almost every aspect of life. You need to be aware of this problem, and you should be concerned about how your life is being manipulated without your knowledge.
An algorithm is a series of steps taken to solve a problem. It can be as simple as your grandmother’s recipe for brisket—a step-by-step description of basting, cutting, mixing, baking, and serving, written on a gravy-stained index card. These days, an algorithm typically refers to the mathematical steps taken by a computer as instructed by a computer program.
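To make the idea concrete, here is a minimal sketch of an algorithm written in Python: a handful of explicit steps that find the largest number in a list. It is purely illustrative and does not come from any particular company’s code.

```python
# A minimal illustration of an algorithm: a fixed series of steps that turns
# an input (a list of numbers) into an answer (the largest one).
def find_maximum(values):
    """Walk through the list once, remembering the largest value seen so far."""
    largest = values[0]           # step 1: start with the first value
    for value in values[1:]:      # step 2: examine each remaining value
        if value > largest:       # step 3: keep whichever is larger
            largest = value
    return largest                # step 4: report the result

print(find_maximum([325, 350, 275, 400]))  # prints 400
```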
Every day, we come into contact with these computer algorithms that control not only apps on our smartphones, but also traffic lights, elevators, thermostats, and, most importantly, the information we see on search engines and social networks.
These algorithms are used to optimize your life and serve you personally relevant information, but also to learn as much about you as possible: your likes and dislikes are all recorded.
Amazon uses its algorithms to show you products you might actually want to buy instead of products you have no use for. Netflix uses its algorithms to recommend movies that you will probably enjoy. Insurance companies use their algorithms to quantify your risk and determine your premium costs. Health-care companies may soon use their algorithms to determine the correct treatments and prescriptions for you.
And when you search for information on Google, their algorithms produce results that give you the most relevant information to guide you in your decisions. Or they should.
Algorithm Bias
The newest hot research topic in computer science is called “algorithm bias.” Computer scientists are studying algorithm bias and how to combat it. I propose a new term for these researchers: “proclivists,” from the Latin word “proclivitas,” meaning an inclination or bias.
These computer proclivists are studying algorithm bias in school and then going out into industry to head up departments in major companies, directing teams of engineers to discover so-called bias and then “correct” it.
These proclivists include Tulsee Doshi, product lead for the machine learning fairness effort at Google, and Joy Buolamwini, a researcher at MIT who founded the Algorithmic Justice League.
Combating bias sounds like a good thing, right? Except, like the doublethink in George Orwell’s dystopian novel “Nineteen Eighty-Four,” when computer proclivists talk about removing bias from algorithms, what they really mean is inserting bias into algorithms to produce results they believe are “fair” according to their own notions of fairness.
When you search Google for a particular term, you may think you’re getting the most relevant results, but as Google openly admits, you’re actually getting the results that Google wants you to get. Whether the bias Google has inserted is political, religious, cultural, or of some other kind, Google is steering us where it wants us to go while pretending to give us impartial results.
Recently, Doshi gave a talk to the Association for Computing Machinery entitled “Fairness in Machine Learning.” She started out discussing how Google scores various queries to determine whether they are “toxic, neutral, or nontoxic.” Google determines toxicity by examining the context in which a term is found: if the pages containing it include mean, hateful words and statements, the term is rated “toxic.” She then proceeded to give examples of toxic terms and stated, “We don’t want to see this. … We don’t want this to be true.”
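To be clear about what such scoring involves, here is a hypothetical sketch, in Python, of the general idea of rating a term by the company it keeps. This is not Google’s actual system; the word list, window size, and thresholds are all invented for illustration.

```python
# Hypothetical illustration only: rate a term "toxic," "neutral," or "nontoxic"
# by looking at the words that surround it in a collection of pages.
# This is NOT Google's actual method, just the general idea.
HATEFUL_WORDS = {"hate", "vermin", "subhuman"}   # assumed, tiny word list

def score_term(term, pages, window=5):
    """Count how often the term appears near a hateful word and rate it."""
    toxic_hits, total_hits = 0, 0
    for page in pages:
        words = page.lower().split()
        for i, word in enumerate(words):
            if word == term:
                total_hits += 1
                context = words[max(0, i - window): i + window + 1]
                if any(w in HATEFUL_WORDS for w in context):
                    toxic_hits += 1
    if total_hits == 0:
        return "neutral"
    ratio = toxic_hits / total_hits   # fraction of occurrences in hateful contexts
    return "toxic" if ratio > 0.5 else ("nontoxic" if ratio < 0.1 else "neutral")
```

The point of the sketch is that the rating reflects the pages the crawler happens to find, not any judgment about the term itself.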
But science is never about what we want; it’s about what we discover, whether we like it or not. The computer proclivists think that science is about changing results to get them to be “fair” according to what some people arbitrarily decide is fair.
In a concrete example, years ago, when searching for the term “Jew” on Google, the top results were links to anti-Semitic websites. Many Jewish groups, including the Anti-Defamation League, where I was a board member, complained to Google, pressuring them to change their algorithm to eliminate these results.
Were the original results biased? Certainly the scrubbed results were biased since Google changed its algorithm to specifically exclude these anti-Semitic references to avoid offending people. We should be concerned about the anti-Semitic search results but more concerned about the scrubbed results, because if there’s an anti-Semitism problem in the United States or in the world, we need to know about it. Changing the search results only hid the problem. It made it much harder to track anti-Semitic groups. It did nothing to make the world a better place; it just swept the problem under the proverbial carpet.
Gender Bias
In her talk, Doshi referenced a paper by Buolamwini, a female African American researcher, called “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” There’s also a presentation on her Gender Shades website.
The genesis of her paper, according to Buolamwini, was when she tested a face recognition program on a picture of her own face and it couldn’t determine her gender. This seems particularly confusing because, in her paper, Buolamwini declares that there are many genders, not just the binary male and female, so it’s unclear what criteria for determining gender should be used. But whichever categories of gender are “correct,” she claimed that dark-skinned people, particularly women, were more likely to be misclassified due to algorithm bias.
However, a reasonable explanation is that pictures of dark-skinned faces are simply harder for the software to analyze: photographs capture a far narrower range of contrast than the human eye, and light-skinned faces display shadows that reveal contours and details in a way that dark-skinned faces, within that narrow range, do not. This is something that could be tested objectively, but it was not. Perhaps the algorithm’s inability to categorize certain faces is not bias but rather a natural difficulty, or simply a flaw in the algorithm.
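Such a test could be as simple as measuring how much pixel-level detail actually survives in the face images each group contributes to a benchmark. The sketch below, in Python with NumPy, uses synthetic arrays standing in for real face crops; the numbers are assumptions, not results from Buolamwini’s data.

```python
import numpy as np

# Hypothetical check of the contrast explanation: measure how much pixel-value
# spread (usable detail) a face crop contains. Synthetic 8-bit grayscale
# arrays stand in for real face images here.
def contrast_stats(face_crop):
    """Return simple contrast measures for one grayscale face image."""
    pixels = face_crop.astype(float)
    return {
        "std_dev": round(float(pixels.std()), 1),           # spread of pixel values
        "dynamic_range": int(pixels.max() - pixels.min()),  # darkest-to-brightest span
    }

# Stand-ins: a dimly exposed face crop vs. a brightly exposed one.
dim_crop = np.random.randint(20, 70, size=(64, 64), dtype=np.uint8)
bright_crop = np.random.randint(60, 220, size=(64, 64), dtype=np.uint8)
print("dim:", contrast_stats(dim_crop))
print("bright:", contrast_stats(bright_crop))
# If recognition errors track low-contrast crops regardless of the group label,
# that points to an imaging limitation rather than a biased algorithm.
```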
Fairness Metrics
Doshi went on to describe “fairness metrics” that, according to her, determine whether an algorithm is fair. Arvind Narayanan, a computer science professor at Princeton, has identified 21 definitions of fairness with respect to algorithms. Doshi admitted that Google actually has even more than 21 definitions. She also admitted that “correcting” an algorithm for one fairness definition actually makes it worse with respect to other definitions of fairness, but she shrugged this off, saying that Google just has to “be thoughtful about which definitions we’re choosing and what they mean.”
There’s an excellent paper on the innate contradictions of “algorithm bias” entitled “Inherent Trade-Offs in the Fair Determination of Risk Scores” by Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan of Cornell and Harvard. The conclusion of this paper is that no “correction” to any algorithm can possibly satisfy all three of the most common “fairness” conditions at once, much less 21 or more criteria.
In other words, by making the algorithm more “fair” according to one criterion, it will be made more “unfair” according to the other criteria.
This mathematical proof didn’t depend on what kind of algorithm was used, what criteria were used, how the people were divided into groups, or what kind of behavior was being predicted. It was a beautiful proof based on pure mathematics. Unless you believe there was a flaw in the math or that mathematics itself is somehow biased (and some people actually do, including the Seattle school district), this proof is indisputable.
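The tension is easy to see even with made-up numbers. The sketch below, in Python, constructs two groups with different base rates and scores that are perfectly calibrated within each group, then shows that a single decision threshold produces unequal false-positive rates. The groups, counts, and threshold are hypothetical, chosen only to illustrate the kind of trade-off the paper formalizes.

```python
# Two hypothetical groups. Each bucket is (count, score, actual positives), and
# the scores are perfectly calibrated: a score of 0.8 really means an 80%
# chance of the outcome in both groups.
groups = {
    "A": [(100, 0.8, 80), (100, 0.2, 20)],   # base rate 0.50
    "B": [(50,  0.8, 40), (150, 0.2, 30)],   # base rate 0.35
}
THRESHOLD = 0.5  # flag everyone scoring at or above this value

for name, buckets in groups.items():
    false_pos = sum(n - pos for n, s, pos in buckets if s >= THRESHOLD)
    negatives = sum(n - pos for n, s, pos in buckets)
    print(f"Group {name}: false-positive rate = {false_pos / negatives:.2f}")
# Group A: false-positive rate = 0.20
# Group B: false-positive rate = 0.08
# Forcing these rates to match would break calibration -- exactly the tension
# the Kleinberg-Mullainathan-Raghavan proof formalizes.
```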
As I write this, COVID-19 has been shown to affect African Americans at higher rates than people of other races. Is COVID-19 biased? Should we adjust the statistics to “correct” its effect? Of course not. This so-called “unbiasing” actually prevents real scientists from finding underlying relationships that could lead to a better understanding of how the world works.
In the hypothetical case of COVID-19 bias, “correcting” the “bias” would hinder our ability to understand the disease and ultimately find a cure.
With regard to search engines such as Google, making the search results “fair” means that we not only learn the wrong things, but also learn those things that a small group of businesspeople, activists, and computer proclivists want us to learn. This new form of research is wrong, and it’s dangerous.
Bob Zeidman studied physics and electrical engineering at Cornell and Stanford, and filmmaking at De Anza College. He is the inventor of the famous Silicon Valley Napkin and the founder of several successful high-tech Silicon Valley firms including Zeidman Consulting and Software Analysis and Forensic Engineering. He also writes novels; his latest is the political satire “Good Intentions.”