Opinion: examining the "what if?" potential of new technologies like Facebook at the outset would help us think through their possible implications

It’s 2005 in downtown Palo Alto. Mark Zuckerberg and Eduardo Saverin are sitting in the conference room in the first Facebook office. Mark puts down the pages he has been reading. Eduardo is nearly finished reading the same document.

It’s a science fiction short story written by a young intern with an unlikely triple major in literature, computer science and sociology. They are intrigued by her skillset, but unsure what role she can play, so they ask her to write a story of how Facebook will have changed the world by 2020. They hope she might dream up some useful marketing material.

She comes up with something wholly unexpected. It’s a story of how Facebook becomes global, and part of everyday life for millions of people. It’s a story of how the platform has become a tool for fine-tuned, highly precise political propaganda. A tool for mass manipulation, turning democracy completely on its head.


From RTÉ Prime Time, a report on dirty tricks in politics using social media and personal data 

Back to 2018 and to reality. As far as we know, there wasn’t an intern who came up with a science fiction story in the Facebook offices in 2005. However, Facebook has turned democratic systems upside down. A social media platform has caused us to question the systems that lie at the very heart of our society.

Let’s extend the fantasy. Let's imagine Eduardo puts down the paper on the desk, sighing.

"Well?", asks Mark, "what do you think?"

"A bit far-fetched?"

Mark jumps at this comment:

"What is? Facebook could be that big, right? We could get there!"

"Yeah... but it is not a tool for politics. That's not what we are building..." Eduardo answers tentatively.

"But it could... whether we want it to or not, it could be used like that."


What's the point of this what-if scenario? Imagine what might have happened if Zuckerberg had been given that fictional heads-up, if he had an inkling that Facebook could be hijacked and used to undermine entire political systems. Would he have built measures into Facebook's design to prevent that from happening?

Should the data scientists, artificial intelligence experts, platform builders and designers of future technology be engaging in this kind of "what if" thinking? Should they make science fiction imagining, or even writing, a fixture of their projects?


From RTÉ Radio One's Today With Sean O'Rourke, Luc Delany, former European policy manager with Facebook and Hugh Linehan from The Irish Times discuss Facebook and Fake News

I believe they should and I'm not alone. In his 2009 essay "Design Fiction: A short essay on design, science, fact and fiction", artist and technologist Julian Bleecker argued that science fact and science fiction already have a lot in common, and that speculating about a near future in which the technologies we are inventing are already in use contributes to the design of those technologies themselves. He explains that imagining the potential of technology is critical to understanding not only the technical aspects of what we are inventing, but also its cultural implications. This sort of future-gazing helps researchers reflect on the assumptions and preconceptions they may have about their research.

Data science research (social media, machine learning, statistical analysis, artificial intelligence, etc) moves relatively slowly. It can be conservative, safe, necessary to certain models of economic growth, even boring. And yet, it is a field that will cause extensive cultural and societal disruption – it already has. 

Data scientists don’t write science fiction. Imagining the possibilities of what they do is not part of their methodology: it won't help anybody design faster, more accurate algorithms. But as Jonathan Nolan, co-creator of the TV series Westworld, puts it, what it can do is help us by "inventing cautionary tales for ourselves".


From RTÉ News, Mark Zuckerberg on how his data was improperly shared

In a research world where it seems anything can happen, these cautionary tales should be an integral part of the process. At the recent WWW2018 Web Conference, the Re-coding Black Mirror workshop saw researchers in web technologies use science fiction stories in the style of the Black Mirror TV series to figure out possible negative consequences of their research.

They also looked at possible solutions to the issues that emerged from narrating these imagined technological futures. This led to critical discussions that went far beyond the usual focus of computer scientists and technologists working in this area, and well beyond the boundaries of privacy and data protection that dominate the current discourse on data ethics.

If you are a data scientist, or even if you are just using data science technologies, ask yourself this: what would happen if your vision became reality? What if what you are building or deploying or using became global and part of everybody's daily life? Or what if it didn’t and it was only available to a few?


The views expressed here are those of the author and do not represent or reflect the views of RTÉ