Exploring skepticism around ethics in technology through the lens of Metcalf et al.'s Corporate Logics
Ethics is now a major part of the Silicon Valley hype cycle. Companies are literally hiring people to pause and criticize their own products. This seems fishy -- why would corporates want to lower their returns? Are they really focussed on doing the right thing, or is it just about avoiding risk? Corporates have been calling for guidelines to establish ethics in a principled way. Is this just PR-driven signalling? Does it make sense to allow corporates, whose interests are in direct conflict with ethics, to shape the discourse of ethics? It is probably easy to give TED-style talks on the importance of ethics and to toss around empty words such as transparency and fairness, but actually doing ethics isn't.
For starters, how do we define a term as broad as ethics? We ourselves don't always know what is right or wrong, which raises the question -- how can we evaluate ethics? There is no simple metric, such as click-through rate, to test whether a product is ethical. We could have fancy ethics checklists or a Hippocratic oath of data science, but ethics is not something we can simply implement. What matters is whether we structure our companies around ethics and uphold those values. And who decides what counts as ethical within a company -- is it the legal team, the security team, or the design team?
Metcalf et al.'s nuanced paper
Metcalf et al. conduct interviews with various stakeholders involved in the discourse around ethics in the tech industry. They find structural failures in how ethics is done -- companies are trying to build solutions in the same mold that produced the problems in the first place. In the following sections, we discuss the corporate logics that Metcalf et al. identify, and try to make sense of their skepticism about ethics in technology.
The first logic, meritocracy, argues as follows -- tech has the best people => they can solve any problem => they can solve ethics too. In fact, the claim goes, they are the best suited to solve the ethical challenges of today's world. This is wrong.
John Brockman predicted a "third culture" that would push the world towards intellectual discourse. In this new culture, smart people would be the global leaders; they would discuss smart things and consume intellectual snacks like TED talks. Evgeny Morozov calls out the moral bankruptcy of these smart people in his takedown of the MIT Media Lab following the Epstein scandal.
The tech industry rewards leaders who break rules. We hear quotes from CEOs and motivational gurus on why it is important to fail hard and fail often. While such quotes are encouraging for some, they don't fit the bill when it comes to ethics. Doing unethical things and failing is never a valid option.
The entire saga
The second logic is technological solutionism. Technologists are optimistic -- they believe that approaching a problem technically will fetch the best results. This involves framing ethics as some sort of optimisation problem that can be solved iteratively until a global optimum is reached. Again, this won't work -- because it grounds ethics in the technical domain instead of the social domain where ethics actually belongs. Tech could be a major part of various solutions, but the process often involves intense discussion and debate over uncomfortable corner cases that can't always be brought under a single framework.
Take the example of machines being used to score subjective answers such as essays
Despite such concerns, these machines are increasingly being adopted for grading. They also don't provide much-needed feedback on the essays we write. This is problematic, as kids need feedback to improve their writing skills.
It is also not obvious how we can improve machine understanding of text without clean data. Our labelled training data (pairs such as [essay, score]) already carries various human biases, and any machine trained on such data is bound to inherit them and make errors. So, a tech solution is possible only when we discuss, debate, and improve our data collection practices, which requires significant fieldwork not only from technologists but also from social scientists. The sketch below illustrates how a scorer trained on shallow proxies goes wrong.
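As a toy illustration, consider this hypothetical sketch (the essays, features, and scores are invented for the example, and real scoring systems are far more sophisticated): a scorer trained on surface features of [essay, score] pairs learns to reward long words rather than meaning.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical toy scorer trained on surface features of (essay, score) pairs.
# Features: [word count, average word length] -- crude proxies for "quality".
def features(essay):
    words = essay.split()
    return [len(words), np.mean([len(w) for w in words])]

train = [
    ("Cats sleep a lot.", 2),
    ("The multifaceted ramifications of industrialization permeated society.", 5),
    ("Dogs run fast and play.", 2),
    ("Notwithstanding considerable epistemological obstacles, progress continued.", 5),
]
X = np.array([features(essay) for essay, _ in train])
y = np.array([score for _, score in train])
model = LinearRegression().fit(X, y)

# Grammatical nonsense stuffed with long words gets a top predicted score
# (roughly 5), because the model never saw meaning -- only the proxies.
gibberish = "Perspicacious hippopotamus obfuscation promulgates quintessential paradigms"
print(model.predict(np.array([features(gibberish)])))
```

The failure mode is the one discussed above: the model optimizes proxies that merely correlate with human judgments, and the biases baked into the [essay, score] pairs pass straight through to its predictions.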
The third logic, market fundamentalism, refers to how the market rewards every step tech takes, regardless of ethics, leaving no incentive to do ethics. Consumers determine which decisions are taken in the industry. Their indifference to ethics signals to the tech industry that it can pursue problems without any repercussions. The belief that solutions rewarded by markets are the best ones stops corporates from thinking critically about the products they sell.
GOQii, a fitness app in India, carries out unethical clinical trials on users via its nutritionists
The three corporate logics reinforce each other. Markets reward technological solutions provided by people with merit.
With such proxies in place, the tech industry sometimes blinds us into believing that it is doing the right thing. Some apps transfer decision making to consumers, thereby claiming that all actions are taken with consumers' consent. This is a trap -- because there is a large power differential between consumers and companies. Apps can use nudging and dark patterns to steer us into decisions that serve their interests. Is the push for ethics just encouraging such dark patterns?
Shoshana Zuboff, an American author and expert on surveillance capitalism, talks of how data anonymity and data ownership are just hacky terms used to normalize commercial surveillance
We are often nudged into buying products. The tech industry makes us feel that we are not keeping pace with others if we don't opt for the latest products.
Dark patterns such as nudging show that even though we nominally have complete control over our choices and data, we act on the whims of the tech industry. Rather than having tech adapt to our world, we have stripped away the complexities of our lives to make tech work. Tech is indeed eating the world.
One of the biggest hurdles researchers face while analyzing the ethics of socio-technical systems is the lack of clean data. We don't even know what questions to ask, or which metrics to evaluate on. Kleinberg et al. discuss this issue in their paper on evaluating biases in algorithmic hiring. The sketch below shows how even the choice among standard fairness metrics is fraught.
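To make the metric problem concrete, here is a hypothetical sketch (the numbers are invented, and this is not the methodology of Kleinberg et al.): two widely used fairness criteria give opposite verdicts on the same hiring decisions.

```python
import numpy as np

# Hypothetical toy hiring data: protected group, true qualification,
# and a model's hire/no-hire decision for 20 candidates.
group     = np.array([0]*10 + [1]*10)
qualified = np.array([1,1,1,1,0,0,0,0,0,0] + [1,1,1,1,1,1,0,0,0,0])
hired     = np.array([1,1,1,0,1,0,0,0,0,0] + [1,1,1,1,0,0,0,0,0,0])

def selection_rate(g):
    """Fraction of group g that gets hired (demographic parity)."""
    return hired[group == g].mean()

def qualified_hire_rate(g):
    """Fraction of qualified members of group g that gets hired (equal opportunity)."""
    mask = (group == g) & (qualified == 1)
    return hired[mask].mean()

# Demographic parity says the model is fair: both groups are hired at 40%.
print(selection_rate(0), selection_rate(1))            # 0.4 0.4
# Equal opportunity says it is unfair: qualified candidates in group 1
# are hired less often than qualified candidates in group 0.
print(qualified_hire_rate(0), qualified_hire_rate(1))  # 0.75 vs ~0.67
```

Which metric is "right" is a normative question, not a technical one -- exactly the kind of question that can't be settled from inside the engineering domain.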
Lydia Denworth's article
Many researchers have shifted their focus towards opening up black-box machine learning models and explaining their predictions to garner user trust. Recent NLP conferences show a surge in papers on explaining and pruning large neural models. However, the field of explainable AI also suffers from a lack of clear definitions, as pointed out by Zachary Lipton in his review of the interpretability discourse.
Ghorbani et al.
Improving the robustness of explanations is an important research direction; otherwise, bad decisions could be justified as having been taken by "ethical" AI machines (think of the military consequences). The sketch below shows how fragile a standard explanation method can be.
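As a toy illustration of this fragility (the weights and inputs are contrived for the example and are not taken from Ghorbani et al.), here is a tiny ReLU network whose output barely changes under a small input perturbation, while its gradient-based saliency ranking flips:

```python
import numpy as np

# Toy 2-layer ReLU network: f(x) = w2 . relu(W1 @ x).
# "Saliency" is the gradient of f with respect to the input x,
# a common (and fragile) way to explain which features mattered.
W1 = np.array([[1.0, -1.0],
               [0.1,  2.0]])
w2 = np.array([1.0, 1.0])

def f(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def saliency(x):
    mask = (W1 @ x > 0).astype(float)  # derivative of ReLU
    return (w2 * mask) @ W1            # df/dx

x  = np.array([1.0, 1.01])  # first hidden unit barely inactive
xp = np.array([1.0, 0.99])  # tiny perturbation switches it on

print(f(x), f(xp))    # 2.12 vs 2.09 -- the output barely moves
print(saliency(x))    # [0.1, 2.0]   -- feature 2 looks dominant
print(saliency(xp))   # [1.1, 1.0]   -- now feature 1 looks dominant
```

Gradient saliency is just one explanation method, but the broader point stands: if an imperceptible change to the input can flip the explanation, the explanation alone cannot certify that a decision was ethical.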
Metcalf et al. ask us whether we are climbing the right hill at all. Do we even know what we are looking for? The situation can be vastly improved if companies structure themselves around ethics rather than just creating an ethics team. Releasing anonymized data will also help the research community to ask better questions.
The media tends to take extreme stances to garner clicks, either overhyping tech products or overstating the harms of tech. Being overcritical of an entire field (like AI) because of a few mishaps helps neither researchers nor the public. Both technologists and the media should make conversations about tech accessible to all, so that we can better recognize both overhype and genuine mishaps.
India, in particular, should stop expecting solutions to come only from its engineers. We have always seen liberal arts education as secondary to science education. This has resulted in technologists with no knowledge of the humanities. Forget doing ethics -- they can't even talk ethics. As explained earlier, solutions to real-world problems require fields such as psychology, sociology, and others to join hands with technology. A first step in this direction would be to cultivate respect for the humanities.
Things will only get better when we make ethics an everyday practice. Michael Schur, the creator of the (philosophical!) sitcom "The Good Place", has some sound advice for all of us:
It feels, all the time in life, like a bad decision is right in front of you. No matter who you are, there's the opportunity to make bad decisions and hurt people. And it takes work just to keep not making those bad decisions. It takes a lot of concentrated effort to do the right thing all the time. Hopefully, you get so used to it, and it becomes such a part of who you are, that it doesn't take work.
This article was prepared using the Distill template
This essay is largely a summary of news articles that I've read over the past few months. I found links to most of them via Twitter. It is crucial to choose the right gatekeepers, as there is a lot of noise on Twitter. Here are some accounts that you must follow: @hardmaru, @zacharylipton, @vboykis, @random_walker, @michael_nielsen
If you see mistakes or want to suggest changes, please contact me.