Home

Exploring skepticism around ethics in technology through the lens of Metcalf et al.'s Corporate Logics

Author

Surya

Published

December 5, 2019

Updated

Not revised yet

Ethics is now a major part of the Silicon Valley hype cycle. Companies are literally hiring people to pause and criticize their products. This seems fishy -- why would corporations want to lower their returns? Are they really focused on doing the right thing, or is it just about escaping risk? Corporations have been calling for guidelines to establish ethics in a principled way. Is this just PR-driven signalling? Does it make sense to let corporations whose interests are in direct conflict with ethics shape the discourse of ethics? It is probably easy to give TED-style talks on the importance of ethics, filled with empty words such as transparency and fairness, but doing ethics isn't.

For starters, how do we define a term as broad as ethics? We ourselves don't know what is right or wrong, which raises the question: how can we evaluate ethics? There are no simple metrics such as click-through rate to test whether a product is ethical. We could have fancy ethics checklists or a Hippocratic oath of data science, but ethics is not something we can simply implement. What matters is whether we structure our companies around ethics and uphold those values. And who decides the right ethics in a company - the legal team, the security team, or the design team?

Metcalf et al.'s nuanced paper discusses many other pertinent questions about ethics in technology and points out that enmeshing ethics work in corporate logics doesn't help the cause.

Metcalf et al. conduct interviews with various stakeholders involved in the discourse around ethics in the tech industry. They find structural failures in how ethics is done -- companies are trying to build solutions in the same mold that created the problems. In the following sections, we discuss some of the corporate logics that Metcalf et al. arrive at, and try to make sense of their skepticism about ethics in technology.

Corporate Logics

Meritocracy

The argument goes as follows -- tech has the best people => they can solve any problem => they can solve ethics too; in fact, they are the best suited to solve the ethical challenges of today's world. This argument is wrong.

John Brockman predicted a third culture that would push the world towards intellectual discourse. In this new culture, smart people would be the global leaders. They would discuss smart things and consume intellectual snacks like TED talks. Evgeny Morozov calls out the moral bankruptcy of these smart people in his takedown of the MIT Media Lab (the Epstein scandal). In this scathing criticism, he argues that the discourse around the third culture enabled lousy billionaires to indulge in unethical entrepreneurial activities under the guise of academic intellectualism.

The tech industry rewards leaders who break rules. We hear quotes from CEOs and motivational gurus on why it is important to fail hard and fail often. While such quotes are encouraging for some, they don't fit the bill when it comes to ethics. Doing unethical things and failing is never a valid option.

The entire saga of how Uber ousted its then-CEO Travis Kalanick is worth a read. It's alarming how many leaders full of swagger and extreme confidence later go on to do terrible things. We should reflect upon the type of leaders we encourage in tech, and stop conflating merit with swagger.

Technological Solutionism

Technologists are optimistic -- they believe that approaching a problem technically will fetch the best results. This involves framing ethics as some sort of optimisation problem that can be solved iteratively until a global optimum is reached. Again, this won't work -- they are merely grounding ethics in their own domain, instead of entering the social domain where ethics actually belongs. Tech could be a major part of various solutions, but the process often involves intense discussions and debates over uncomfortable corner cases that can't always be brought under some framework.

Take the example of machines being used to score subjective answers such as essays. These machines suffer from bias -- certain groups of people are consistently given lower scores. At times, even gibberish text is awarded a high score. This stems from the fact that NLP algorithms don't understand text yet. They perform well on test sets by capturing spurious correlations, such as: any essay with the word X should be given a high score.
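
To make this failure mode concrete, here is a toy sketch of how a bag-of-words scorer can latch onto a single word rather than meaning. Nothing here reflects any real grading system -- the corpus, the token "notwithstanding", and the model choice are all invented for illustration:

```python
# Toy sketch of a spurious correlation in automated essay scoring.
# None of this reflects a real grading system; the data and the token
# "notwithstanding" are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic corpus: the high-scoring essays happen to contain the
# word "notwithstanding", so the word itself becomes the learned signal.
essays = [
    "the economy grows notwithstanding several policy setbacks",
    "notwithstanding the evidence the author argues convincingly",
    "i like dogs and dogs like me",
    "my summer holiday was fun we went to the beach",
]
scores = [1, 1, 0, 0]  # 1 = high score, 0 = low score

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(essays)
model = LogisticRegression().fit(X, scores)

# Gibberish that merely contains the lucky token still gets a high score.
gibberish = ["notwithstanding banana banana notwithstanding zebra zebra"]
print(model.predict(vectorizer.transform(gibberish)))  # -> [1]
```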

Despite such concerns, these machines are increasingly being adopted for grading. They also don't provide much-needed feedback on the essays we write. This is problematic, as kids need feedback to improve their writing skills.

It is also not obvious how we can improve the understanding of text without clean data. Our labelled training data (such as [essay, score] pairs) already carries various human biases, and any machine trained on such data is bound to make errors. So, a tech solution is possible only when we discuss, debate, and improve our data collection practices, which requires significant fieldwork not only from technologists, but also from social scientists.

Market Fundamentalism

This refers to how the market rewards every step tech takes, regardless of ethics, leaving no incentive to do ethics. Consumers determine what decisions are taken in the industry. Their indifference to ethics signals to the tech industry that it can pursue products without any repercussions. The belief that solutions rewarded by markets are the best stops corporations from thinking critically about the products they sell.

GOQii, a fitness app in India, carries out unethical clinical trials on users via its nutritionists. For instance, a nutritionist might tell various users to drink juice A as a cure for a sore throat, and later check whether it actually worked (like A/B testing). The nutritionists nudge users into completing certain actions by making them feel guilty about their health (more on nudging later). Despite these exploitative practices, the app is quite popular. In fact, it has attracted investment from Bollywood superstar Akshay Kumar. Recently, Prime Minister Modi applauded GOQii's commitment to the Fit India movement. With both market and government support, no wonder GOQii feels it has complete freedom to exploit its users.

The three corporate logics reinforce each other. Markets reward technological solutions provided by people with merit.

Encouraging Dark Patterns?

With such proxies in place, the tech industry sometimes blinds us into believing that it is doing the right thing. Some apps transfer decision making to consumers, thereby claiming that all actions are taken with consumers' consent. This is a trap -- there is a large power differential between consumers and companies. Apps can use nudging and dark patterns to steer the decisions we supposedly make on our own. Is the push for ethics just encouraging such dark patterns?

Shoshana Zuboff, an American author and expert on surveillance capitalism, talks of how data anonymity and data ownership are just hacky terms used to normalize commercial surveillance. Tech isn't predicting our behaviour anymore; it is manufacturing our behaviour. That is, it's not the machines that are being automated; it's us. For instance, if you are only recommended romantic movies, you might click on at least one of them. This sends a wrong signal to the recommender that you like romantic movies, and hence it recommends you more romantic movies. This is known as a feedback loop. Most recommender systems do not correct for such feedback loops, thereby manufacturing our behaviour. Interestingly, Zuboff calls consumers the abandoned carcass, and the data we provide the product.
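
As a concrete illustration, here is a minimal simulation of that feedback loop. It is a hypothetical toy, not any deployed recommender: the user is equally interested in every genre, yet the logged clicks end up telling a very different story.

```python
# Minimal sketch of a recommender feedback loop (illustration only; not any
# real recommender, and the numbers are made up). The user is equally
# interested in every genre, but the system only learns from clicks on what
# it already chose to show, so an early skew toward "romance" never corrects.
import random

random.seed(0)
genres = ["romance", "thriller", "documentary"]
clicks = {"romance": 1, "thriller": 0, "documentary": 0}  # one early romance click

for _ in range(500):
    # Naive policy: recommend the genre this user has clicked the most.
    shown = max(genres, key=lambda g: clicks[g])
    # True behaviour: the user clicks ~30% of the time, regardless of genre.
    if random.random() < 0.3:
        clicks[shown] += 1

print(clicks)
# -> romance racks up all the clicks; the other genres never get a chance.
# The logged data now "proves" a preference the user never had.
```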

We are often nudged into buying products. The tech industry makes us feel that we are not keeping pace with others if we don't opt for the latest evolved products. Consider the example of childcare tech -- if you don't monitor your child's body temperature with an app, you are not doing parenting right. Another example is that of fear-inducing apps such as Citizen and Ring -- if you are not wary of crimes that happen in your neighbourhood, you are not careful enough. These apps feed on our fear, and in turn make us more scared of our neighbours (feedback!).

Dark patterns such as nudging show that despite nominally having complete control over our choices and data, we are acting on the whims of the tech industry. Rather than having tech adapt to our world, we have stripped away the complexities of our lives to make tech work. Tech is indeed eating the world.

Evaluating Ethics

One of the biggest hurdles that researchers face while analyzing the ethics of socio-technical systems is the lack of clean data. We don't even know what questions to ask, or on what metrics we are supposed to evaluate. Kleinberg et al. discuss this issue in their paper on evaluating biases in algorithmic hiring. Collecting data to train hiring systems is very difficult, as it is not entirely clear which features make an employee good. Moreover, collecting subjective data such as cultural fit to a company faces serious issues of confirmation bias, and our systems may end up with the same problems that are present in ordinary hiring. What if our algorithm finds that the best fit for the company is a male with a shrill voice? Do we dismiss this as a spurious correlation in our data? Or do we accept this link as an important discovery of our algorithm? Thus, it is not clear how we can objectively evaluate such algorithms.

Lydia Denworth's article in Scientific American points out how claims of social media destroying Gen Z are either wrong or massively overstated by researchers. The issue, again, is poorly collected data. Experimental designs for collecting data have often neglected content and context, which distorts measurements of social media's impact on users' mental health. Asking users to record their daily social media activity in a diary again suffers from confirmation bias.

Ethical AI

Many researchers have shifted their focus towards opening up black-box machine learning models and explaining predictions to garner user trust. Recent NLP conferences show a surge in papers on explaining and pruning large neural models. However, the field of explainable AI also suffers from a lack of clear definitions, as pointed out by Zachary Lipton in his review of the interpretability discourse. Explainable AI papers often cherry-pick visualizations, and the explanations are not robust to perturbations in the data.

Ghorbani et al. show that it is easy to construct adversarial examples that change explanations (feature importance, sample importance) while keeping the predictions the same. Lakkaraju et al. show that it is possible to manipulate user trust by generating misleading explanations. For example, a black-box model might rely on a defendant's race to predict whether the defendant is risky. If an explanation of this model instead shows that the prediction depends on prior jail incarcerations, it might mislead lawmakers into trusting the prediction.
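
To see how persuasive such an explanation can be, here is a toy sketch of the general phenomenon. This is not Lakkaraju et al.'s actual method; the synthetic data and the surrogate-model setup are assumptions made purely for illustration:

```python
# Toy illustration of a misleading post-hoc explanation (NOT Lakkaraju et
# al.'s method -- just a sketch of the general phenomenon). "race" and
# "priors" are synthetic, correlated features; the black box secretly keys
# on race, yet a surrogate that only mentions priors mimics it closely and
# makes the decisions look legitimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, n)            # sensitive attribute (0 or 1)
priors = rng.poisson(0.5 + 4.0 * race)  # prior incarcerations, correlated with race
X = np.column_stack([race, priors])

# The "black box": in this toy setup, its target depends only on race.
risky = race.copy()
black_box = LogisticRegression().fit(X, risky)

# A post-hoc surrogate explanation that is only allowed to use priors.
surrogate = LogisticRegression().fit(priors.reshape(-1, 1), black_box.predict(X))

agreement = (surrogate.predict(priors.reshape(-1, 1)) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {agreement:.0%} of cases")
# High agreement makes "it only looks at prior incarcerations" a convincing
# story, even though the model underneath keys on race.
```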

Improving the robustness of explanations is an important research direction. Otherwise, bad decisions could be justified as having been undertaken by ethical AI machines (think of the military consequences).

Final Remarks

Metcalf et al. ask us whether we are climbing the right hill at all. Do we even know what we are looking for? The situation can be vastly improved if companies structure themselves around ethics rather than just creating an ethics team. Releasing anonymized data will also help the research community to ask better questions.

The media tends to take extreme stances to garner clicks. They either overhype tech products or overstate the harms of tech. Being overcritical of an entire field (like AI) because of a few mishaps helps neither researchers nor the public. Both technologists and the media should make conversations about tech accessible to all, so that we can recognize overhype and mishaps better.

India, in particular, should stop expecting solutions to come only from its engineers. We have always seen liberal arts education as secondary to science education. This has resulted in technologists with no knowledge of the humanities. Forget doing ethics; they can't even talk ethics. As explained earlier, solutions to real-world problems require fields such as psychology and sociology to join hands with technology. A first step in this direction would be to cultivate respect for the humanities.

Things will only get better when we make ethics an everyday practice. Michael Schur, the creator of the (philosophical!) sitcom "The Good Place", has some sound advice for all of us.

It feels, all the time in life, like a bad decision is right in front of you. No matter who you are, there's the opportunity to make bad decisions and hurt people. And it takes work just to keep not making those bad decisions. It takes a lot of concentrated effort to do the right thing all the time. Hopefully, you get so used to it, and it becomes such a part of who you are, that it doesn't take work.

Acknowledgements

This article was prepared using the Distill template.

This essay is largely a summary of news articles that I've read over the past few months. I found links to most articles via Twitter. It is crucial to choose the right gatekeepers, as there's a lot of noise on Twitter. Here are some accounts that you must follow: @hardmaru, @zacharylipton, @vboykis, @random_walker, @michael_nielsen

Updates and Corrections

If you see mistakes or want to suggest changes, please contact me.