A human rights approach to tech ethics can provide a framework to prioritize technologies that support social welfare while minimizing risks.
One of the perks of volunteering at the Institute is that you get to meet some of the greatest experts on how technology can be employed for the public good in a way that is both effective and humane. Last week we met S. Matthew Liao, professor of bioethics at NYU, who spoke about his human rights approach to ethical tech.
The premise is simple: more than 80 sets of ethical standards are floating around today to guide the development and use of technology in our everyday lives. Some of these standards are produced by reputable research institutions (such as Johns Hopkins University) or international bodies, and others by private companies (yes, Google has its own ethical principles).
While each set of standards is different, most agree on key points such as Beneficence (technology should do good, not harm) and Autonomy (having detachment issues with your Twitter feed? That is a sign you should be concerned about your individual autonomy). Beyond the obvious challenge of ranking tech ethics standards into a hit parade, the biggest roadblock is actually their interpretation.
Most technologies can be used for both good and evil. Think about facial recognition: it can bring home a kidnapped child, but it can also be used to over-police vulnerable groups. There exist extreme and horrific examples of the latter, such as China’s use of facial recognition to track and control its Uighur Muslim minority. Whether improvements in “law and order” justify a loss of privacy is a question that existing ethical standards for technology, which are usually stated in absolute terms, do not answer.
That is why Dr. Liao proposes taking a step back and thinking about the big picture to create a framework for the real-world implementation of ethical guidelines. He begins with the question: what are such standards trying to protect?
Human beings want to live full lives. To fulfill this fundamental need, people require the ability and self-agency to form meaningful relationships, to know themselves and the world around them, and to engage in activities they find pleasurable. The environment we live in should nurture the conditions necessary for such a life to be possible. And today, technology is an inextricable part of our environment. These basic notions, according to Dr. Liao, should motivate any safeguards for the use of technology in the public sphere.
The human rights-based framework for technology ethics proposed by Dr. Liao intends to preserve those necessary fundamental conditions: it outlines the universal rights of individuals and then translates them into the universal duties of companies and institutions that deploy technological tools. Have you ever blindly accepted the terms and conditions that allow a company to indiscriminately use the data collected by your smartphone app? A human rights approach would protect an individual even beyond and despite their consent, by ensuring that the activities carried out with the collected data ultimately do not hamper the fundamental conditions for living full lives.
The COVID-19 pandemic has revealed the importance of ethical technology, especially in public health, like nothing before it. It has also shown how complex the decision-making process becomes when trade-offs are involved. Is the protection of public health a good enough reason to limit individual autonomy and impose lockdowns that force people to stay home and businesses to accrue losses or shut down? Or to impose vaccination on every child who attends a public school?
Nobody has a clear answer to these questions and a checklist approach to tech ethics is not helpful because it sets standards but not priorities, which are especially crucial when the set standards conflict with one another. According to Dr. Liao, a human rights framework would and should explicitly set these priorities and provide a tool for complex decision-making that can be both thorough and pragmatic.
But as with all good things, there is a catch: such an approach requires an in-depth analysis of both the technology and its possible uses. Is the technology ready for prime time and, if it is, could its use be harmful? Answering these questions before deployment takes time and effort in a business world that moves fast and procrastinates on considering consequences. Yet it is in a company’s best interest to be ethical. Trust is becoming a business ingredient as essential to profitability as innovation. Ignoring ethics will likely come back to bite companies down the line. “Look at Facebook,” Dr. Liao remarked.
Under the guidance of Dr. Liao as an Advisory Board member, the Institute is making this human rights-based approach part of its own recipe for technological implementations that serve public health, and we look forward to learning more about the ethics of health technology and policy. Are you ready to join the conversation?