Analysis: the crucial shift in how security failures happen is that attackers now aggregate, compare and interpret data generated through normal behaviour
A jog is meant to be the most ordinary of routines. You lace up, you run, you log it, you move on. But in March, a routine run on the deck of France's aircraft carrier Charles de Gaulle reportedly did something else: it helped reveal the ship’s position in the eastern Mediterranean when the route was uploaded to Strava.
This wasn’t a "hack" in the way people usually imagine a security breach. Nobody broke into a classified network. Nobody stole a password. A consumer app did exactly what it was designed to do: record a route, timestamp it, and make it shareable. The risk emerged only when that ordinary trace was read in the right context, by someone who knew what to do with it.
This is the crucial shift in how security failures happen. In the older cybersecurity story, the "attacker" breaks something: a password, a firewall, a server. In the newer one, nobody breaks anything, but modern digital life generates constant data exhaust through normal behaviour, and that exhaust becomes intelligence when it is aggregated, compared, and interpreted. One route is a curiosity. Repeated routes become a signature. Enough signatures become a map.
From RTÉ One's Prime Time, security concerns as thousands of phone locations are offered for sale
We have been here before, and the pattern is now well established. In 2018, Strava’s global heatmap turned millions of harmless exercise logs into a glowing map of sensitive military locations and patrol routes, revealing the geometry of bases in places that otherwise looked empty. A few months later, Polar’s "Explore" feature made it possible to browse public workouts around sensitive sites and then click through to individual user profiles.
In some cases, repeated route endpoints made it possible to infer where those users likely lived, while also showing that they regularly exercised at military or intelligence locations. Polar later suspended the Explore feature after the reporting, but the lesson stuck: once platforms connect location traces to profiles, the risk is no longer just "a base appears on a map." It becomes "a person becomes traceable."
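The aggregation step behind a heatmap is conceptually simple, and a minimal sketch makes the danger concrete. The snippet below (with hypothetical coordinates, not real data) bins location points from many public uploads into coarse grid cells and counts activity per cell; a cell that lights up repeatedly in an otherwise empty region is exactly what stood out in 2018.

```python
from collections import Counter

# Hypothetical (lat, lon) points from many public workout uploads.
# Real heatmaps aggregate billions of points; the principle is the same.
points = [
    (33.9051, 67.7112), (33.9052, 67.7109), (33.9049, 67.7114),  # one remote site
    (53.3498, -6.2603), (48.8566, 2.3522),                        # scattered city runs
    (33.9050, 67.7111), (33.9053, 67.7110),
]

def heat_cells(points, cell_deg=0.001):
    """Bin points into roughly 100 m grid cells and count activity per cell."""
    return Counter((round(lat / cell_deg), round(lon / cell_deg))
                   for lat, lon in points)

# Any cell with repeated activity in an otherwise quiet region stands out.
hot = {cell: n for cell, n in heat_cells(points).items() if n >= 3}
```

No single point in the list is sensitive; only the count per cell is, which is why per-upload privacy decisions miss the aggregate risk.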
Then the story evolved from accidental leakage to something closer to deliberate fishing. In 2022, The Guardian reported that people could use Strava’s social features to identify who was active around sensitive sites by creating routes or challenges in and around bases, then watching which users’ profiles surfaced when those routes were matched or compared against real activity.
From RTÉ 2FM's Morning with Laura Fox, Kate McDonald talks about RTÉ Prime Time's undercover investigation into location data from mobile phones
By 2024, Le Monde reported that thousands of Israeli soldiers were still identifiable through Strava, describing how simulated workouts at bases could reveal profiles and movement patterns and prompting an investigation by Israel's Ministry of Defence.
These cases are often framed as privacy scandals. They are that. But they are also security incidents, because what is being exposed is not a single location, but a behavioural signature. A carrier at sea is not just a dot on a map; it is a high-value asset with a mission, a protective envelope, and adversaries who care about its precise position. A base is not just a compound; it is a node in a system of deployments, logistics and operations. A pattern of movement is not just exercise; it is a clue.
This is how a single run becomes intelligence. The process is mundane: collect enough points, look for repetition, infer meaning from context. A route that curves in a very specific oval is not hard to recognise as a ship’s deck. A cluster of runs that appear in an otherwise empty region suggests an installation. Regularity is the giveaway. Humans are creatures of habit, and data systems are built to detect habit.
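Because habit is the giveaway, detecting it requires almost no sophistication. As a minimal sketch under assumed data (the timestamps and coordinates below are invented), the snippet reduces one account's public activities to a tuple of weekday, hour and approximate location; the most frequent tuple is a behavioural signature: same place, same slot, week after week.

```python
from collections import Counter
from datetime import datetime

# Hypothetical public activities from one account: (ISO start time, lat, lon).
runs = [
    ("2025-03-03T06:30:00", 43.1032, 5.8790),  # Monday, early, same spot
    ("2025-03-10T06:28:00", 43.1033, 5.8792),  # Monday again
    ("2025-03-17T06:31:00", 43.1034, 5.8791),  # and again
    ("2025-03-12T06:29:00", 43.1200, 5.9000),  # a one-off elsewhere
    ("2025-03-15T14:00:00", 43.0900, 5.8500),  # another one-off
]

def signature(runs, cell_deg=0.001):
    """Reduce each run to (weekday, hour, ~100 m cell); the mode is the habit.

    weekday follows Python's convention: Monday is 0.
    """
    keys = []
    for ts, lat, lon in runs:
        t = datetime.fromisoformat(ts)
        keys.append((t.weekday(), t.hour,
                     (round(lat / cell_deg), round(lon / cell_deg))))
    return Counter(keys).most_common(1)[0]
```

Five data points are already enough to separate the routine from the noise; real accounts offer hundreds.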
From RTÉ Radio 1's Brendan O'Connor, all you need to know about your location data
Many organisations still frame security as perimeter defence and compliance: firewalls, access controls, training people not to click suspicious links, and audits that assume the main risk is someone breaking in. Those controls matter, but they do not address the more awkward vulnerability: people are not breaking rules when they use popular apps in normal ways.
They are behaving as the environment invites them to behave. The vulnerability emerges because the environment is saturated with sensors and sharing defaults. What is expanding here is a behavioural attack surface. It is the sum of everyday traces produced by routine habits: GPS routes, photo metadata, smartwatch logs, automatic cloud synchronisation, and the quiet hand-offs between services that users rarely notice.
A run can begin on a watch, sync to a phone, upload to a platform, and remain public far longer than the moment it captured. The user may think they shared a workout. In practice, they have published a pattern. Even when platforms introduce privacy features, the leakage can be subtle.
From RTÉ Radio 1's Today With David McCullagh, new phone hack threat DarkSword can steal phone data
Recent research has studied "privacy zones" in fitness apps, features that hide the start and end of routes near sensitive locations such as a home. The researchers found that people could often infer the protected location with surprising accuracy from a small number of routes, raising uncomfortable questions about how much these controls actually protect when habits are repetitive.
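The geometry behind that finding is worth spelling out. A circular privacy zone truncates each public route at the zone's boundary, so the visible truncation points all lie on a circle centred on the protected location. A minimal sketch of the idea, using invented local coordinates in metres (real analyses would first project GPS traces onto a local plane and fit many noisy points rather than exactly three):

```python
# Hypothetical truncation points: where three of a user's public routes are
# cut off at the edge of the same circular privacy zone, in local metres,
# with the hidden location as the unknown centre.
a, b, c = (160.0, 120.0), (-120.0, 160.0), (200.0, 0.0)

def circumcenter(a, b, c):
    """Centre of the circle through three points (perpendicular-bisector solve)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# The recovered centre is the very location the zone was meant to hide.
hidden = circumcenter(a, b, c)
```

The privacy feature works as documented, yet repetition defeats it: each new route donates another point on the boundary, and the circle is overdetermined long before the user notices.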
This should sharpen how we interpret the Charles de Gaulle jog. The leak is not only "someone posted something they shouldn’t." It is that a platform built for social sharing makes it easy for outsiders to observe, aggregate, and interpret what was never intended as intelligence. The vulnerability is not simply a broken rule. It is a design mismatch: consumer visibility tools being used in environments where visibility is a liability.
The deeper takeaway is about how we think about security. Organisations still default to a compliance model: write rules, issue warnings, deliver training. But this kind of risk is not solved by telling people to be more careful. People are not careless when they use popular tools normally.
From RTÉ Radio 1's Today with Sean O'Rourke, smartphone apps and users' personal data
They are simply living inside an ecosystem that constantly turns behaviour into data. Security in 2026 needs a wider lens. Everyday consumer tools now sit inside environments where location, routine and timing can matter. Personal devices in operational spaces are no longer a side issue; they are part of the design challenge. And the most damaging exposures often aren’t a single reckless post, but the slow build-up of small traces that only become sensitive when someone connects them.
The Charles de Gaulle run is compelling because it compresses that reality into a vivid image: a person jogging in circles, unknowingly drawing a map of a strategic asset in public. It’s a reminder that the technologies we use without thinking are creating visibility in places we assume are hidden. And in a world where data is easy to collect and easy to interpret, visibility is the new vulnerability.
Nobody has to hack you to learn from you. They just have to watch what your devices are already saying.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ