“Never before in history have such a small number of designers – a handful of young, mostly male engineers, living in the Bay Area of California, working at a handful of tech companies – had such a large influence on two billion people’s thoughts and choices.”
Those are the words of Tristan Harris, former design ethicist at Google and founder of Time Well Spent, a not-for-profit initiative to help educate businesses, users, and designers about morally acceptable technology design choices.
Tristan is spot on. However, he and the entire high-tech industry may not be going far enough, or fast enough.
Take for instance Google, Tristan’s former employer.
Recently, at its annual I/O conference, Google CEO Sundar Pichai demonstrated the company's "Duplex" technology, nested within the Google Assistant project. Google describes it as "a new technology for conducting natural conversations to carry out 'real world' tasks over the phone."
In the demonstration, Pichai had the Assistant book a haircut appointment and, in a second call, a restaurant reservation. In both cases, Google Assistant acted as a human might, dropping "umms" and "ahhs" into its speech while conducting a conversation as natural as one between two people. Neither human on the other end of the line had any idea they were interacting with a machine.
It was both compelling and terrifying.
Compelling in that artificial intelligence has progressed to the point where most of us are utterly astonished such a conversation can now play out between human and machine. Although it was just a demo, Google indicated it plans to begin testing Duplex within the Assistant this summer.
Terrifying because, well, there are plenty of reasons.
There is one issue in particular, however, that organizations need to start doing something about: the absence of a Chief Ethics Officer role and, more broadly, of an in-house ethics office.
When you search (yes, in Google) for "Google Chief Ethics Officer," the first few results highlight Andy Hinton, Google's Vice-President and Chief Compliance Officer. Most companies have such a role. Chief Ethics Officers, however, are rare. Why?
Microsoft is also in on the act. It recently announced that all developers at the company would become "AI developers." At least there is some caution in the wind. Satya Nadella, the Microsoft CEO, said, "These [AI] advancements create incredible developer opportunity and come with a responsibility to ensure the technology we build is trusted and benefits all."
Whether it is Google, Microsoft, or any other high-tech company, in Silicon Valley or elsewhere, it is time to create a separate role and office, outside of compliance, regulatory, and legal, to make ethical recommendations on whether a particular technology ought to come to market.
We need teams of differing minds debating the pros and cons of a technology for society. If Silicon Valley has turned itself into one massive case study in groupthink, swimming in sinkholes of cognitive bias, who is standing up for those in society who may not want such advancements? Who gets to judge society's ethics?
There is an example to look to in these confusing times: the medical community.
Patrick Lin, Associate Philosophy Professor and Director of the Ethics and Emerging Sciences Group at California Polytechnic State University, and Evan Selinger, Associate Professor of Philosophy at Rochester Institute of Technology, argued in Forbes four years ago that the medical community has been at the forefront of ethics for years. They write:
“In-house ethics committees have been a mainstay in medicine for the last 30 years when a US Presidential commission recommended it in 1983. Those committees are composed of lawyers too, but also doctors, nurses, bioethicists, theologians, and philosophers—a much more capable approach than mere risk-avoidance to tackle controversial procedures, such as ending life support and amputating healthy limbs.”
In Canada, the Canadian Medical Association first produced its Code of Ethics in 1868; it is considered the Association's most important document. The Code is updated every five years, with input from a wide range of representatives, and focuses on areas that include "decision-making, consent, privacy, confidentiality, research and physician responsibilities."
It is from the medical community that the high-tech community may learn its greatest lesson.
Create a Chief Ethics Officer role and an in-house ethics team made up not only of lawyers but also of educators, philosophers, doctors, psychologists, sociologists, and artists.
Furthermore, as universities such as Carnegie Mellon begin introducing undergraduate degrees in artificial intelligence, ensure those programs have a strong ethics component throughout the entire curriculum.
Only then, when ethics sits outside the compliance department and is woven into academic pedagogy, will society be in a better place to stem the tide of potentially unwanted technological advances.
I am all for technological advancement. I have even started to use Siri on occasion. But when I visit my doctor, I trust that the ethics of her decision-making and use of technology have already been vetted by a mixed group of professionals weighing the pros and cons.
Now more than ever, our technology companies (and faculties) need to apply the same kind of thinking.