
Dr. Cathy O'Neil on Ethics and Artificial Intelligence

Last week, data scientists from across the UK gathered at DataFest 2018 in Edinburgh to hear some of the leading minds in the field of AI. One of them, Dr. Cathy O’Neil, the well-known New York-based author of Weapons of Math Destruction, gave a talk on the need for ethical awareness when developing AI.

She began with the example of a commercial Google aired during the previous year’s Super Bowl, in which a dad asks Google’s voice assistant what sound a blue whale makes, on behalf of his daughter. Dr. O’Neil used this commercial to show just how much trust we as a society put in Google: the father assumed that the information the assistant gave him was correct, and therefore acceptable to pass on to his daughter. She then asked the audience whether they had any way to prove that the information Google provided was actually correct, or whether they believed it automatically simply because it came from Google. Another example she gave, again involving Google, was the autocomplete feature. If someone begins to type a phrase, Google offers to complete the sentence, influencing the person’s thoughts, and possibly even their ideas, before they can finish typing.

As Dr. O’Neil went on to explain, Google would count these instances as successful examples of AI at work, since the code did exactly what it was created to do. She then warned that this definition of success, the idea that an algorithm is good if it “works” for me, can lead to serious problems. If an algorithm works for those who code it, who is to say the person coding it does not harbor some hidden bias, or even ulterior motives? Because of our inherent trust in math and code, we assume that anything produced by data scientists is hard truth, when in reality it can be manipulated.

Because of this, the context in which data is curated and the way it is interpreted are essential to creating AI that benefits the community rather than an individual in power. Currently there is a great deal of secrecy surrounding data, its collection, and its use: people often do not even realize when their data is being collected, and even when they do, they rarely understand how it is used afterward. To build algorithms for the future, we must use data from the past, and if that data is corrupted in any way by historical bias, then the very bias the algorithm is trying to eradicate is simply automated into the AI system itself. This raises the question: how do we hold AI accountable, when in business AI is often supposed to be holding humans accountable?
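This mechanism, training on biased historical decisions and thereby automating the bias, can be illustrated with a toy sketch. The scenario, numbers, and group labels below are invented for illustration and do not come from the talk: two groups of loan applicants have identical creditworthiness, but past human decisions approved one group less often, and a naive “model” that simply learns to reproduce past decisions inherits that disparity.

```python
import random

random.seed(0)

# Hypothetical synthetic history: applicants from two groups with the
# *same* underlying creditworthiness, but group "B" was historically
# approved less often due to biased human decisions.
def make_history(n=10_000):
    history = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        creditworthy = random.random() < 0.7             # identical across groups
        if creditworthy:
            approve_prob = 0.9 if group == "A" else 0.5  # historical bias
        else:
            approve_prob = 0.1
        history.append((group, random.random() < approve_prob))
    return history

# A naive "model" trained to reproduce past decisions: it learns each
# group's historical approval rate and reuses it as its prediction.
def train(history):
    rates = {}
    for g in ("A", "B"):
        decisions = [approved for grp, approved in history if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return rates

model = train(make_history())
print(model)  # group B scores lower despite identical creditworthiness
```

The model is “accurate” in the sense Dr. O’Neil criticizes: it faithfully predicts what past decision-makers did, and in doing so it hard-codes their bias.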

Dr. O’Neil ended her talk by emphasizing the need to redefine what makes a good algorithm. Currently, the definition is simply whether the algorithm is accurate: if it’s accurate, it’s good; if it’s not, it’s bad. But what is the result of that accuracy? If, in being accurate, the algorithm identifies and then denies bank loans to more minority applicants, is it really a good algorithm? The definition of a good algorithm needs to be broadened to include an ethical standard.

What does this mean for Silicon Valley companies?

Data scientists are only translators, not ethicists. This means companies need to start focusing seriously on the ethics they embed in their systems, and paying close attention to what their definition of success really is. Because if they do not, what happens?


