Does Your AI Have Users’ Best Interests at Heart?
Ethical decisions are rarely easy. Now, even less so. Smart machines, cheap computation, and vast amounts of consumer data not only offer incredible opportunities for modern organizations; they also present a moral dilemma for 21st-century leaders: Is it OK, as long as it’s legal?
Certainly, there will be no shortage of regulation in the coming years. For ambitious politicians and regulators, Big Tech is starting to resemble Big Tobacco with the headline-grabbing prospect of record fines, forced break-ups, dawn raids, and populist public outrage. Yet for leaders looking for guidance in the Algorithmic Age, simply following the rules has never looked more perilous, nor more morally insufficient.
Don’t get me wrong. A turbulent world of AI- and data-powered products requires robust rules. Given the spate of data breaches and abuses in recent years, Google’s former unofficial motto, “Don’t be evil,” now seems both prescient and naive. As we create systems that are more capable of understanding and targeting services at individual users, our capacity to do evil by automating bias and weaponizing algorithms will grow exponentially. And yet this raises the question: what exactly is evil? Is it breaking the law, breaking your industry code of conduct, or breaking user trust?
Algorithmic bias can take many forms; it is not always as clear-cut as racism in criminal sentencing or gender discrimination in hiring. Sometimes too much truth is just as dangerous. In 2013, researchers Michal Kosinski, David Stillwell, and Thore Graepel published an academic paper demonstrating that Facebook “likes” (which were publicly visible by default at that time) could be used to predict a range of highly sensitive personal attributes, including sexual orientation, gender, ethnicity, religious and political views, personality traits, use of addictive substances, parental separation status, and age.
Disturbingly, even if you didn’t reveal your sexual orientation or political preferences, this information could still be statistically predicted from what you did reveal. So, while less than 5% of users identified as gay were connected with explicitly gay groups, their orientation could still be deduced. When they published their study, the researchers acknowledged that their findings could be misused by third parties, for example to incite discrimination. However, where others saw danger and risk, Aleksandr Kogan, one of Kosinski’s colleagues at Cambridge University, saw opportunity. In early 2014, Cambridge Analytica, a British political consulting firm, signed a deal with Kogan for a private venture that would capitalize on the work of Kosinski and his team.
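To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python. It is not the pipeline Kosinski and his colleagues used; it simply shows, on synthetic data, how a trait that is never disclosed directly can be inferred from a sparse matrix of the pages a user did choose to like, using a generic dimensionality-reduction-plus-classifier approach. Every name and number in it is an assumption made for illustration.

```python
# Illustrative sketch only -- not the method from the 2013 paper.
# Synthetic data: a binary "likes" matrix (users x pages) and a hidden
# binary attribute that is never disclosed directly, but which shifts
# how often certain pages get liked.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 1000

attribute = rng.integers(0, 2, size=n_users)        # hidden trait (0/1), never "revealed"
base_rate = rng.uniform(0.01, 0.05, size=n_pages)   # how likeable each page is overall
lift = rng.uniform(1.0, 3.0, size=n_pages)          # some pages skew toward the trait
probs = np.where(attribute[:, None] == 1, base_rate * lift, base_rate)
likes = (rng.random((n_users, n_pages)) < probs).astype(float)

# Compress the sparse like-matrix into a few latent dimensions, then classify.
latent = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(
    latent, attribute, test_size=0.25, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy predicting the undisclosed trait: {model.score(X_test, y_test):.2f}")
```

The point is not the particular model but that correlations across many innocuous signals can, in aggregate, reveal something a user never stated outright.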
Thanks to an initiative at Facebook that allowed third parties to access user data, Kogan was able to create a quiz app. Almost 300,000 users were estimated to have taken that quiz. It later emerged that Cambridge Analytica then exploited the data it had harvested via the quiz to build profiles on 87 million Facebook users. Arguably, neither Facebook’s nor Cambridge Analytica’s decisions were strictly illegal, but in hindsight, and in the context of the scandal the program soon unleashed, they could hardly be called good judgment calls.
According to Julian Wheatland, COO of Cambridge Analytica at the time, the company’s biggest mistake was believing that complying with government regulations was enough, and thereby ignoring broader questions of data ethics, bias, and public perception.
How would you have handled a similar situation? Was Facebook’s mistake twofold: failing to set the right policies for handling user data upfront, and sharing that data too openly with partners? Should the company have anticipated the reaction of the U.S. senators who eventually called a Congressional hearing, and spent more resources on lobbying? Would a more comprehensive user agreement have shielded Facebook from liability? Or was this simply a case of bad luck? Was providing research data to Kogan a reasonable action to take at the time?
By contrast, consider Apple. When Tim Cook took the stage to announce Apple’s latest and greatest products for 2019, it was clear that privacy and security, rather than design and speed, were now the real focus. From eliminating human grading of Siri requests to warnings on which apps are tracking your location, Apple was attempting to shift digital ethics out of the legal domain, and into the world of competitive advantage.
Over the last decade, Apple has been criticized for taking the opposite stance from peers like Facebook and Google on many issues. Unlike them, Apple runs a closed ecosystem with tight controls: you can’t load software on an iPhone unless it has been authorized by Apple. The company was also one of the first to fully encrypt its devices, including deploying end-to-end encryption on iMessage and FaceTime for communication between users. When the FBI demanded that Apple help unlock a phone, the company refused and went to court to defend its right to do so. When it launched Apple Pay and, more recently, its credit card, it kept customer transactions private rather than recording all the data for its own analytics.
While Facebook’s actions may have been within the letter of the law, and within the bounds of industry practice at the time, they did not have users’ best interests at heart. There may be a simple reason for this: Apple sells products to consumers. At Facebook, the consumer is the product; Facebook sells consumers to advertisers.
Banning all data collection is futile. There is no going back. We already live in a world built on machine learning and AI, which relies on data as its fuel, and which in the future will support everything from precision agriculture to personalized healthcare. The next generation of platforms will even recognize our emotions and read our thoughts.
Rather than relying on regulation, leaders must instead walk an ethical tightrope. Your customers will expect you to use their data to create personalized and anticipatory services for them, while demanding that you prevent the inappropriate use and manipulation of their information. As you look for your own moral compass, one principle is apparent: You can’t serve two masters. In the end, you either build a culture based on following the law, or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice. Being seen to do good is not the same as actually being good.
That’s at least one silver lining when it comes to the threat of robots taking our jobs. Who better to navigate complex, nuanced, and difficult ethical judgments than humans themselves? Any machine can identify the right action from a set of rules, but actually knowing and understanding what is good — that’s something inherently human.
Mike Walsh is the author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You. Walsh is the CEO of Tomorrow, a global consultancy on designing companies for the 21st century.