Are we so used to surveillance that we barely notice it anymore?

Often we trade our personal information without a second thought, not realising what we are giving up. Photo: Getty Images

Analysis: We rarely read the fine print and give away our data without necessarily knowing the repercussions

Last year, when US legislation threatened to either ban TikTok or force its sale to a US company, the reaction was surprisingly casual. Some users of the platform approached it with humour, posting videos joking that they would save time by sending their personal data directly to the Chinese government themselves. This was obviously satire, but it highlighted a broader sentiment: people have become so accustomed to surveillance that they often meet it with apathy.

A platform like LinkedIn illustrates this perfectly: it has become the norm to share personal and professional details online, driven by the desire to build a curated, marketable image for social and professional capital, much like the hyper-surveilled world of Black Mirror's "Nosedive" episode.

This casual attitude toward surveillance is not just about desensitisation; it is also driven by the undeniable economic value that social media has created in the digital age. Platforms like TikTok, Instagram, and YouTube have given rise to an entire influencer economy, estimated to be worth approximately $250 billion last year and expected to double by 2027. Social media has evolved from a space for social interaction into a viable career path for many people.


From RTÉ Radio 1's Brendan O'Connor show, Elaine Burke from 'For Tech Sake' on the apps that mine our personal location data, what these companies use it for and how to opt out

The allure of monetising content and gaining influence often overshadows concerns about data privacy. This comes into focus with the popularity of family "vlogs", where vulnerable groups, particularly children, are sometimes exploited for content. So much so that a new California law now mandates earnings protections for minors in family vlogs, just as the law requires for child actors, prompting some vloggers to relocate to avoid compliance. After all, sharing personal data and living under the watchful eye of algorithms can seem a small price to pay for the promise of financial success and visibility. This trade-off of privacy for prosperity further entrenches the idea that surveillance is simply part of the deal in today's digital age.

There is another angle to this story beyond social and economic gain: data for convenience. Often we trade our personal information without a second thought, not realising what we are giving up. For instance, in 2017, the company Purple added a joke clause to its Wi-Fi terms requiring users to perform 1,000 hours of community service, including tasks like cleaning toilets at festivals. Over 22,000 people accepted without reading it. While this was clearly a light-hearted prank, it underscores a more serious reality: most of us rarely read the fine print when it comes to digital agreements. We give away our data without necessarily knowing the repercussions, as in the case of Cambridge Analytica, which ended up influencing the US election.

But how did we get here? Before it was a fundamental right, the right to privacy as we know it can be traced back to the late 19th century, where it was described as "the right to be let alone". This came against the backdrop of the lucrative invasive journalism on the rise between 1850 and 1890. Nearly a century later, a boom in the use of CCTV systems in public spaces opened a new era of mass surveillance. When 9/11 happened, the ubiquity of those CCTV systems and the overwhelming need for security entrenched that era further. What started as a safety precaution turned into an enduring legacy of broader acceptance of state surveillance, with privacy the price paid for a sense of security.


From RTÉ Brainstorm, Can I delete myself from the internet?

Now, with the rising star in tech, Extended Reality (XR), the umbrella term for Virtual Reality, Mixed Reality, and Augmented Reality, there is an even bigger threat of mass surveillance. XR involves devices such as the Meta Quest and Apple Vision Pro headsets, and experts estimate the XR market could hit $5 trillion by 2030. While this AI-driven technology holds great promise to enhance so many aspects of our lives, it collects even more data than our smartphones or laptops. These devices don't just track what you click; they monitor where you look, how long you focus, how you move, and even physical reactions such as your heart rate through smart wearables. With such extensive data collection, XR significantly heightens surveillance risks, increasing the need for thoughtful data governance strategies.

As a society, we need to ask ourselves whether the social acceptance, economic gain, security or convenience is worth the privacy we lose. Mass surveillance can lead to a loss of freedom. Imagine using a pair of VR glasses. As you interact with them, they track your eye movements, potentially detecting subtle but abnormal patterns that could indicate neurological conditions, something that might otherwise go unnoticed. While this could be beneficial if the data went to a doctor, that is not always the case. Data sharing practices to drive innovation have become very common, but data leaks and misuse are a growing concern in these situations, as the data may end up in the wrong hands.

Read more: The biggest lie online: why we ignore legal terms and conditions

The 21st century has seen the legal definition of privacy expand to include individuals' data, and both individuals and governments have a role to play in safeguarding this right. Individuals must make informed choices about their data, while governments must implement robust policies to protect citizens' privacy. GDPR was a step forward, giving individuals more autonomy over their data through consent and transparency. However, it was not designed with complex AI systems in mind, which led to the development of the AI Act, designed to oversee how AI is developed and deployed. Yet these regulations still struggle to keep up with technological advancements. The AI Act, for example, asks companies to categorise their systems by risk, from unacceptable to minimal. The problem is twofold: (1) companies are being asked to mark their own homework, and (2) not all risks can be predicted, which means current legislation still leaves room for many unknowns. As we embrace new technologies, we must do so with our eyes open, ensuring that privacy remains a right, not just a relic of the past.

Follow RTÉ Brainstorm on WhatsApp and Instagram for more stories and updates


The views expressed here are those of the author and do not represent or reflect the views of RTÉ