How To Prevent The Algorithms From Taking Over
The fourth Global Conference on Cyberspace is currently taking place in the Netherlands. Hosted by a different government each year, the conference aims to promote international collaboration on all things internet. One of the issues on the agenda, the emergence of algorithms, was extensively explored during the panel discussion The Ethics of Algorithms [working paper PDF].
Awareness of the power of algorithms reached a wider audience last summer when researchers published a study about the effects of manipulating the Facebook news feed. The algorithm that determines which content is served in the news feed was tweaked in such a way that some 700,000 users received relatively more negative messages. The aim of the now infamous experiment was to find out whether this had an effect on the emotional state of the user. The conclusion was that it did.
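To make concrete what such a tweak amounts to, here is a minimal Python sketch, assuming (as the study described) that the change worked by withholding a share of positive posts from affected users. The classifier and every name below are invented for illustration; this is not Facebook's actual code.

```python
import random

def classify_sentiment(post: str) -> str:
    """Toy stand-in for a real sentiment classifier."""
    negative_words = {"sad", "angry", "terrible", "awful"}
    return "negative" if negative_words & set(post.lower().split()) else "positive"

def build_feed(candidate_posts: list[str], in_experiment: bool,
               omit_probability: float = 0.5) -> list[str]:
    """Assemble a feed; for test users, silently drop some positive posts."""
    feed = []
    for post in candidate_posts:
        if (in_experiment
                and classify_sentiment(post) == "positive"
                and random.random() < omit_probability):
            continue  # withheld: the remaining feed skews relatively negative
        feed.append(post)
    return feed

posts = ["Great day at the beach!", "Feeling sad today", "Awful traffic again"]
print(build_feed(posts, in_experiment=True))
```

The unsettling part is how small the change is: the user sees a perfectly normal-looking feed, with no signal that anything was filtered.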
Rigging elections
'This type of algorithmic feed and the manipulation of people's perceptions raises a pressing issue', said Frank Pasquale, professor of law at the University of Maryland and the first speaker of the panel. It places a tremendous power to influence in the hands of companies like Facebook. Pasquale, who addressed the audience in a video message because he had obligations elsewhere, continued: 'Jonathan Zittrain has written up a really interesting case study projecting how Facebook could secretly spike the feed of individuals with prompts to vote. Not transparent efforts to influence the election one way or the other. But simply on the basis of knowing who the Facebook users are, prompting the people who are most likely to vote for individuals that would help Facebook policies, and not prompting the people who are not going to help Facebook.'
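Part of what makes Zittrain's scenario unsettling is how little code it would take. The sketch below is hypothetical: the alignment model, its score, and the threshold are all invented, but they show how an innocuous-looking 'go vote' reminder can be targeted selectively.

```python
def predicted_alignment(user: dict) -> float:
    """Toy model score: how likely the user is to favor platform-friendly candidates."""
    return user.get("alignment_score", 0.0)

def should_show_vote_prompt(user: dict, threshold: float = 0.7) -> bool:
    # The prompt itself is neutral ("Go vote!"); who receives it is not.
    return predicted_alignment(user) >= threshold

users = [{"name": "A", "alignment_score": 0.9},
         {"name": "B", "alignment_score": 0.2}]
for u in users:
    print(u["name"], "sees prompt:", should_show_vote_prompt(u))
```

Nothing in the visible product reveals the targeting; only the selection rule, which no user ever sees, carries the political intent.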
The audience actively participated in the discussion, and an editor of a German newspaper pointed out that traditional media platforms like large TV networks or newspapers also have the power to sway elections, and that they are not necessarily neutral either. He offered Fox News as an example of an obviously biased news outlet.
Transparency
'The question is whether people are aware; the question is transparency', countered Frank LaRue, executive at the Robert F. Kennedy Center for Human Rights. 'Anyone can say whatever they want and I defend that. Fox News is that example: I think it is a dreadful form of news, but they are taking a position. The important thing is that people know what Fox News' position is. The problem with the algorithms and the use of platforms is that they seem very neutral, and everyone is using them with the idea that they are neutral. And they're not. So the question here is not to prevent them or silence them. The question is transparency. If they take a position, fine. But then they should make it known.'
Jillian York, Policy Director at the Electronic Frontier Foundation, added: 'There are legal frameworks for this and there is an understanding that these are content hosts and not editorialized platforms. That may not be true, and that is what this discussion is getting at. It is also about stamping someone with a verified stamp, being the determiner of who is important and who is not. There are all sorts of ways in which these social media companies are no longer strictly platform hosts.'
Breach of privacy
Frank LaRue presented a beautiful argument for why he considered the algorithmic manipulation of information nothing less than a breach of privacy. Freedom of expression is often understood as the right to say what we want to say. But it is much more than that, LaRue said; it is the process of knowledge: the freedom to access knowledge, to form ideas and opinions, and to express them. 'If we complete the process we can complete the exercise of a freedom. But if the access to information is already being tampered with, then we're being limited in the privacy of our communication. It is probably the deepest violation of privacy because it affects our inner thoughts.'
He presented the example of an interactive talking Barbie doll that Mattel intends to bring to market this year. It will be able to hold a dialogue with the child. To make this possible, Mattel will link the toy to a database in the cloud. Based on what the child says, the toy will fetch data from the database in order to respond. This raises many ethical issues. For one, it places a cloud-connected microphone at the center of a family. Even creepier is that one of the most fundamental processes of being a human being – learning language – will be influenced by an algorithm.
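For illustration, here is a minimal sketch of the interaction loop LaRue described, with the cloud service simulated by a local function. Mattel's actual design was not disclosed, so every name here is an assumption; the point is where the boundary sits: in the real product the child's words would leave the home over the network, and an algorithm on the other side would pick the reply.

```python
def cloud_lookup(utterance: str) -> str:
    """Stand-in for the remote database the toy would query."""
    canned = {"hello": "Hi! Want to hear a story?",
              "i am sad": "Cheer up! Let's sing a song."}
    return canned.get(utterance.lower().strip(), "Tell me more!")

def respond_to_child(utterance: str) -> str:
    # In the real product this step would be an HTTPS request: the child's
    # speech is uploaded, and the reply is chosen server-side.
    return cloud_lookup(utterance)

print(respond_to_child("Hello"))
```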
Solutions
Several possible solutions for dealing with the rise of algorithms, and for preventing them from eroding long-established values, were considered during the discussion. Many speakers regarded transparency as the bare minimum. One form of transparency is for companies to make their algorithms' code open source so it can be assessed.
But transparency is not enough, said Kave Salamatian, professor of computer science at the University of Savoie. Algorithms are too complex to be assessed. On top of that, reading and understanding someone else's code is the most dreadful thing; most programmers would prefer crucifixion, he joked. More importantly, it does not help to know whether the algorithm is written in Python or Perl. Facebook's algorithms, for instance, reflect the philosophy and ideology of the people who built the company and the code. And those deep-seated beliefs can't be discerned from the code.
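Salamatian's point can be made with a toy example. Both rankers below are fully open and trivially readable, yet the editorial worldview sits in the numeric weights, which the code alone cannot justify or explain. The features and weights are invented for illustration.

```python
def rank_engagement(post: dict) -> float:
    # Implicit ideology: whatever keeps people clicking is what matters.
    return 3.0 * post["clicks"] + 2.0 * post["comments"] + 0.1 * post["source_reputation"]

def rank_civic(post: dict) -> float:
    # A different ideology, expressed in the same language and structure.
    return 0.5 * post["clicks"] + 1.0 * post["comments"] + 5.0 * post["source_reputation"]

post = {"clicks": 120, "comments": 8, "source_reputation": 2}
print(rank_engagement(post), rank_civic(post))  # same code shape, different values baked in
```

Open-sourcing either function would tell an auditor nothing about why those particular weights were chosen, which is exactly the part that encodes the builders' values.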
Framework
Another proposed solution was a minimal framework of ethical standards that algorithms should comply with. But a member of the audience, Valerie Frissen, professor of ICT and Social Change, said the idea of a framework betrayed a narrow idea of what ethics means. Ethics is not a set of immutable laws written in stone. Rather, it is a continuous process of reflexivity on how we as humans relate to technology. Frissen: 'In that sense transparency is key; if you have no transparency you disallow digital citizens to be aware of what is happening and to frame their own relationship with technology.'
Is there room for transparency?
Facebook's Director of Policy in Europe, Richard Allan, was also one of the panellists. He took it rather well that his company was brought up as a bad example several times during the discussion, and said it was good to exchange ideas. He said Facebook had actually started the news feed experiment with both ethics and transparency in mind: 'There were claims in the media that the Facebook news feed was causing harm to people. Because people were shown a lot of happy content, they were being depressed. The ethical response that the people who look after that part of the service took was to say: we really want to look at this, because if this is true we want to be able to do something about it.
'What happened was that all hell broke loose when the report came out. It was assumed it was done for unethical reasons. My question is: are we going to be allowed to do research? Is there going to be a space where we can have these kinds of conversations, or is it the case that every time we publish something we're going to get such a storm in the media, our brand is going to be so damaged, that our instinct is to close down and shut [this kind of transparency] down?'
You can watch the entire discussion. It starts around 1:13:00.