Why you need to consider data protection with every software release

Before any software release goes live, companies must consider data protection risks to prevent unintended harm, ensure compliance, and avoid costly failures, because once personal data is exposed or misused, there’s no taking it back.

EDUCATION AND TRAINING · GOVERNANCE · IT · PROGRAMME MATURITY ASSESSMENTS

Tim Clements

2/17/2025 · 4 min read

In 2020, Microsoft launched a feature in its O365 Workplace Analytics tool. It was designed to help managers understand productivity trends across their teams. But it also gave managers the ability to see data about individual employees, such as the number of emails sent, the number of meetings attended, and so on.

Suddenly, what was meant to be a workforce insights tool turned into a workplace surveillance system. Employees had no idea their personal productivity was being tracked at that level. No legal ground and no transparency.

A data protection failure, and also a failure of Privacy by Design principle #1, 'proactive not reactive; preventative not remedial', i.e. fixing the problem before it ever becomes one.

After a public backlash, Microsoft had to roll back the feature and adjust how data was aggregated, but by then the damage was done. In his excellent November 2024 paper, 'Tracking Indoor Location, Movement and Desk Occupancy in the Workplace', Wolfie Christl details this Microsoft case alongside many others that are real eye-openers.

Although fingers pointed at Microsoft, the companies deploying the release were also part of the failure. These days, many software vendors ship security updates, fixes, and new functionality on a relatively short release cycle, often monthly, and their customers then deploy those releases in their own environments.

Microsoft’s tool processed employee data, analysed their behaviour, and exposed it to their managers, all without informing employees or allowing them any control. This isn’t just bad design; it’s a failure to recognise data protection as a fundamental consideration in software development.

Data protection risks in software releases

Many companies treat software releases as purely a technical process:

  1. Code is written in-house, or a release is received from the vendor

  2. Features are tested

  3. Deployment is approved

  4. The release goes live

What’s missing? An assessment from a data protection perspective.

Before every release into the live environment, someone should be asking:

  • Does the release include any functionality involving the processing of personal data?

  • Does it create risks for individuals’ rights and freedoms?

  • Are we ensuring transparency, fairness, and compliance with data protection laws?

Often, by the time these questions are raised - if they are raised at all - the software is already deployed and live in the production environment, and the company may, in the worst case, already be dealing with complaints, regulatory investigations, and reputational damage.
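
One way to make sure these questions are asked every time is to encode them in the release record itself. Here's a minimal sketch in Python; the ReleaseCandidate record and its field names are hypothetical, not from any particular toolchain, so map them onto whatever metadata your change tickets already carry:

```python
# A minimal sketch: the three pre-release questions encoded as a gate.
# ReleaseCandidate and its fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ReleaseCandidate:
    name: str
    processes_personal_data: bool      # Q1: does the release process personal data?
    risks_rights_and_freedoms: bool    # Q2: risks to individuals' rights and freedoms?
    transparency_confirmed: bool       # Q3: transparency, fairness, lawful basis in place?

def data_protection_gate(release: ReleaseCandidate) -> list[str]:
    """Return blocking issues; an empty list means the release may proceed."""
    issues: list[str] = []
    if release.processes_personal_data:
        if release.risks_rights_and_freedoms:
            issues.append("Trigger a risk assessment (e.g. a DPIA) before approval.")
        if not release.transparency_confirmed:
            issues.append("Transparency/lawful basis not confirmed: do not deploy.")
    return issues

# A Workplace-Analytics-style feature would be stopped at this checkpoint,
# not discovered after deployment.
for issue in data_protection_gate(ReleaseCandidate(
    name="manager-productivity-dashboard",
    processes_personal_data=True,
    risks_rights_and_freedoms=True,
    transparency_confirmed=False,
)):
    print(issue)
```

The logic is deliberately trivial; the value is that the questions become a recorded, auditable step rather than something someone remembers to ask.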

Where to embed data protection checks

If your company uses ITIL 4, it'll have processes like 'Change Management' and 'Release Management'. If it follows COBIT 2019, it'll have a 'Promote to Production and Manage Releases' process. Other frameworks have similar control processes, and this is exactly where a data protection risk trigger needs to be embedded. Before a release is approved, there should be a mandatory checkpoint:

'Does this feature or update require a data protection risk assessment?'

If the answer is yes, then the right type of risk assessment should be conducted (see the sketch after this list):

  • Under GDPR, this could be a Data Protection Impact Assessment (DPIA).

  • If AI is involved, newer laws like the EU AI Act may require an additional assessment, such as a fundamental rights impact assessment.
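
To give that checkpoint teeth, it can sit in the deployment pipeline itself, so a release simply cannot be promoted until the screening question has been answered and evidenced. A minimal sketch, assuming a hypothetical release.json manifest recorded with each change ticket; the field names are mine, not ITIL's or COBIT's:

```python
# A sketch of enforcing the checkpoint at the 'promote to production' step.
# The manifest format and field names are illustrative assumptions.

import json
import sys

def check_release(manifest_path: str) -> int:
    with open(manifest_path) as f:
        meta = json.load(f)

    # The mandatory checkpoint: the question must have been answered at all.
    if "requires_dp_risk_assessment" not in meta:
        print("BLOCKED: data protection screening question not answered.")
        return 1

    # If an assessment is required, demand evidence (e.g. a DPIA reference)
    # before the release can be promoted.
    if meta["requires_dp_risk_assessment"] and not meta.get("assessment_reference"):
        print("BLOCKED: assessment required but no reference recorded.")
        return 1

    print("Data protection checkpoint passed.")
    return 0

if __name__ == "__main__":
    sys.exit(check_release(sys.argv[1]))
```

Whether it's a script, a pipeline stage, or a field in the change ticket, the design choice is the same: the promotion step refuses to run until the data protection question has an answer on record.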

But none of this works unless the right people are trained to spot risks.

Why typical data protection training won’t cut it

Many companies assume they’ve covered their bases because they send employees to general data protection training, often purchased from a law firm or one of the big consulting companies. It covers GDPR principles, data minimisation, data subject rights, and so on.

Unfortunately, that kind of training is too broad to help the teams actually involved in software release management. People responsible for deploying and managing software releases need specific, contextual training, not generic legal briefings. They need to understand:

  • How code changes can introduce risks to individuals

  • How AI models can process data in unexpected ways

  • How small design choices can create big compliance failures

This training should be practical and scenario-based. Engineers, product managers, and IT teams should be studying real-world failures like Microsoft’s Workplace Analytics case mentioned above, not just abstract legal principles.

Why deep knowledge is essential

Not all data protection risks are legal risks. And not all legal risks are technical risks. This is why companies need cross-functional knowledge in data protection:

  • Legal teams understand and advise on the laws.

  • IT teams understand the tech.

  • Product teams understand use cases.

None of these teams can work in isolation. If data protection is not embedded into the release management and change processes, things will go badly wrong:

  • Features that track users without any attention to data protection principles get deployed.

  • AI models that amplify bias go live.

  • Data gets processed in ways that violate legal and ethical standards.

And companies don’t find out until it’s too late.

Prevention is easier and cheaper than damage control

A data protection failure in a software release is like a car recall, except much, much worse. If a car has faulty brakes, you can recall and repair it. But once personal data is collected, processed, or exposed improperly, the situation is complicated, expensive, and almost impossible to reverse - you can't recall the data. That's why companies need to build data protection and AI risk checks into operational processes such as release and change management, before harm is done.

Side note: Solove's 'Exclusion' harm - a case study in data protection failure

The Microsoft case is a textbook example of what Daniel Solove calls 'exclusion' in his taxonomy of privacy harms. Exclusion happens when decisions are made about people, using their data, without their knowledge or input.

Solove's taxonomy is also well worth your attention if you've not come across it.

Does this post resonate? If so, get in touch to arrange a no-obligation discussion about how your data protection work can be improved.