LinkedIn's blocking fail: technical glitch or safety incident?

When key safety and governance mechanisms like blocking and banning fail, the burden of protection is unfairly shifted onto the most vulnerable users, leading to serious and tangible real-world harm.

DATA PROTECTION LEADERSHIP · PERSONAL DATA BREACH · GOVERNANCE

Tim Clements

3/16/2026 · 5 min read

Technical glitch or safety incident

LinkedIn's blocking glitch over the weekend brought attention to this piece of platform functionality, with reactions ranging from people unaware it existed to people very worried about their personal safety. Social media platforms like LinkedIn use a number of different kinds of restrictions to balance user autonomy and platform safety. They can be divided into two main categories: user-initiated actions and platform-enforced actions. Note that platforms may offer one or more of these to users, and availability varies by platform.

User-initiated actions
These are tools the companies provide so users can participate in the platform's community, with mechanisms they can use to mitigate harassment:

  • Blocking is the primary tool for personal safety. It removes all direct interaction, preventing the blocked user from viewing profiles, sending messages, or interacting with the blocker's content.

  • Muting is a discreet, one-sided restriction. It removes a user's content from the muter’s feed without notifying the other party. It's used mainly to reduce friction and remove unwanted content but maintains the existing social connection.

  • Restricting (soft blocking) is a tool that's somewhere in-between the first two as it allows the restricted user to believe they are still interacting normally, while their comments are hidden from others (unless approved) and their presence in direct messages is minimised (e.g. hiding read receipts/online status).

  • Unfollowing/unfriending is the removal of a connection, which stops the flow of content from that user into your own feed without initiating the more severe restriction of blocking.
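To see why a glitch in this machinery matters, here is a minimal, hypothetical sketch of how a platform might enforce a block server-side (all names are illustrative, not LinkedIn's actual implementation). The key design point is "failing closed": if the block lookup itself errors, the interaction is denied rather than allowed, so a technical fault does not silently erase a victim's protection.

```python
# Hypothetical sketch of server-side block enforcement.
# All names and structures are illustrative, not any real platform's code.

BLOCKS = {("alice", "mallory")}  # set of (blocker, blocked) pairs


def is_blocked(blocker: str, other: str, store=BLOCKS) -> bool:
    """Return True if `blocker` has blocked `other`."""
    return (blocker, other) in store


def can_message(sender: str, recipient: str, store=BLOCKS) -> bool:
    """Fail closed: if the block lookup errors, deny the interaction
    rather than let a glitch strip away the victim's protection."""
    try:
        blocked = is_blocked(recipient, sender, store)
    except Exception:
        return False  # safety over availability
    return not blocked


print(can_message("mallory", "alice"))  # blocked pair -> False
print(can_message("bob", "alice"))      # no block in place -> True
```

A fail-open design, returning True on lookup errors, is exactly the kind of behaviour that turns a backend fault into a safety incident.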

Platform-enforced actions
These are proactive and reactive measures that the platforms themselves can take to enforce their Community Guidelines, Terms of Service, and legal requirements:

  • Temporary suspension/timeouts is a disciplinary measure that restricts account functionality (e.g. posting, messaging, commenting) for a set amount of time and is intended to correct minor or initial policy violations.

  • Permanent ban/account termination is the ultimate consequence for serious or repeat violations and is an indefinite removal of access to the platform.

  • Shadowbanning/algorithmic de-prioritisation is a mechanism where the visibility of an account is restricted in searches, feeds, and recommendations without the user knowing, which also raises data protection concerns around transparency. Shadowbanning is supposed to curb the spread of misinformation, spam, or borderline-harmful content. Interestingly, many platforms deny this measure exists.

  • Feature-specific bans/action blocks is where specific functionalities (e.g. "cannot comment for 24 hours" or "demonetisation") are restricted rather than a user's entire account, which allows the user to remain on the platform while being penalised for specific behavioural infringements.

  • Content removal/takedowns is where specific pieces of content (e.g. posts, videos, images) that violate guidelines are deleted, often along with a formal warning or strike against the account.
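As an illustration of how these enforcement tiers can fit together, here is a hypothetical strike-escalation sketch. It is purely illustrative: real trust-and-safety systems weigh context, severity, and appeals rather than simply counting strikes.

```python
# Hypothetical strike-based escalation ladder, purely illustrative.

ESCALATION = [
    "content_removal",       # 1st strike: delete the offending content
    "feature_ban_24h",       # 2nd strike: e.g. cannot comment for 24 hours
    "temporary_suspension",  # 3rd strike: account timeout
    "permanent_ban",         # 4th+ strike: account termination
]


def sanction_for(strikes: int) -> str:
    """Map a user's strike count to the next enforcement action."""
    if strikes < 1:
        return "no_action"
    # Cap at the harshest tier once strikes exceed the ladder.
    return ESCALATION[min(strikes, len(ESCALATION)) - 1]


print(sanction_for(1))  # content_removal
print(sanction_for(5))  # permanent_ban
```

The point of a graduated ladder like this is proportionality: users are penalised for specific behaviour before losing access to the platform entirely.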

Technical and jurisdictional enforcement
These are infrastructure-level controls used to maintain the integrity of bans and enforce legal or regional mandates. Although they are mainly there to protect the interests of the platform's communities, be aware that the power to dictate how they are used has become a point of contention between private companies trying to curate safe spaces and governments around the world seeking to exert influence or control.

  • IP and device banning are technical measures that identify a user’s internet connection or physical hardware to prevent banned individuals from circumventing platform rules via secondary accounts.

  • Community/group/server bans is a type of enforcement carried out by moderators or administrators at the community level (e.g. subreddits, Discord servers, Facebook Groups) rather than across the entire platform.

  • Geo-blocking/country-wide bans are high-level restrictions where platforms are made inaccessible within specific countries, normally because of a government mandate, censorship, or regulatory non-compliance.

Why does all this matter?
When the term "content moderation" is mentioned, it may sound like a harsh, technical process, but for many people around the world, tools like block, mute, and ban can mean the difference between participating in online communities while feeling safe and being driven offline entirely. These people are put at risk when these systems fail, as we saw over the weekend on LinkedIn. Here are some of the groups (with example cases) who may have become concerned during the weekend:

  1. Survivors of domestic abuse
    In the US, the National Network to End Domestic Violence documents how abusers use "stalkerware" and social media surveillance to maintain control. When blocking fails, the barrier between victim and abuser is erased.

  2. Targeted journalists
    Journalists are often silenced by the huge volume of threats without the ability to block coordinated "troll armies." Maria Ressa, CEO of Rappler, has faced endless, state-sponsored harassment campaigns.

  3. Victims of doxing
    Users often share personal updates assuming their personal data is shielded by a block. The case of journalist Taylor Lorenz highlights how private information is exposed when digital boundaries are breached resulting in real-world threats.

  4. Marginalised communities
    Women of colour on social media face a disproportionate amount of racist abuse. When blocking tools fail, these platforms become hostile environments that essentially remove these voices, forcing them to quit rather than endure the vitriol. According to Amnesty International's research, an abusive or problematic tweet was sent to a woman on Twitter (now X) every 30 seconds.

  5. People fleeing coercive control
    For people leaving high-control groups or cults, severing their online presence is a survival tactic. A system error that accidentally unblocks former group members can lead to renewed attempts at forced re-engagement and psychological abuse.

  6. Human rights defenders
    Activists operating under restrictive regimes rely on blocking to keep their personal networks hidden from state-aligned surveillance bots. A failure here can lead to the identification and persecution of their associates.

  7. Victims of image-based abuse, e.g. revenge porn
    The rise of "non-consensual intimate imagery" platforms relies on the inability of victims to quickly mass-block or mass-report content before it goes viral.

  8. Public figures facing stalkers
    Beyond the "celebrity" aspect, many people deal with obsessive, parasocial stalkers. Blocking is the final line of defence to stop constant, inescapable DMs that turn personal life into a minefield of obsession.

  9. Mental health support groups
    Moderators of trauma or recovery forums rely on the ability to ban predators who infiltrate these spaces to target the vulnerable. Without effective banning, these support spaces are quickly sabotaged.

  10. Victims of corporate smear campaigns
    When coordinated "astroturfing" networks target individuals with fake accounts and misinformation, blocking these networks is the only way to stop the echo chamber from destroying the victim's professional reputation.

To conclude, blocking and banning are key mechanisms of digital governance for social media platforms, and a system glitch that disables them is not just a technical issue, it's a public safety incident.

Purpose and Means is a niche data protection and GRC consultancy based in Copenhagen and operating globally. We work with companies, providing our services with flexibility and a slightly different approach from that of the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.

We are experienced in working with data protection leaders and their teams in addressing troubled projects, programmes and functions. Feel free to book a call if you wish to hear more about how we can help you improve your work.