[{"content":"","date":null,"permalink":"/blog/","section":"Blog","summary":"","title":"Blog"},{"content":"The recent wave of layoffs across major Danish companies like Nordea and Demant is a loud wake-up call that \u0026lsquo;business as usual\u0026rsquo; is no longer a viable workforce strategy.\nWe are starting to see a huge displacement of skills. McKinsey\u0026rsquo;s State of Organizations 2026, the LinkedIn Workplace Learning Report 2025, the ISC2 Cybersecurity Workforce Study, and the WEF\u0026rsquo;s Future of Jobs Report 2025 — and also their report titled Organizational Transformation in the Age of AI — all confirm we are facing a massive capability gap that traditional hiring and training won\u0026rsquo;t be able to solve.\nMy very high-level top 3 conclusions from the reports are:\nCompanies need professionals who combine deep technical expertise with business acumen — and unfortunately, our education systems aren\u0026rsquo;t building them fast enough (McKinsey, ISC2).\nThe need for \u0026ldquo;AI Fluency\u0026rdquo; — companies that fail to integrate AI into their L\u0026amp;D frameworks are seeing lower productivity and higher attrition among their top performers (LinkedIn).\nThe shortage has multiple causes, not all tech-related — you can\u0026rsquo;t hide the fact that automation is killing entry-level roles, which were once the training ground for our future leaders (WEF).\nThe skills shortage is not just a temporary recruiting problem. It is a fundamental redesign of the modern corporation. Companies that have realised this are moving from \u0026ldquo;hiring for potential\u0026rdquo; to \u0026ldquo;hiring for hybrid utility,\u0026rdquo; and their leaders are already executing their intelligence-based transition roadmaps.\nAnd we\u0026rsquo;re beginning to see the impact reflected in our headlines. 
I\u0026rsquo;m based in Denmark and recently a couple of stories have stood out:\nNordea (Finance): After cutting 271 roles in early 2026, the bank announced a further 1,500-job reduction, driven by an urgent push for AI-powered automation and efficiency.\nDemant (Tech/Healthcare): The company recently reported a 700-person global layoff.\nThere are many others in Denmark, albeit on a smaller scale, and globally there are similar headlines.\nMemories of 2011 # Reflecting on this earlier this week, I began to wonder how these companies had analysed and worked through the numbers — what roles, for example?\nI realised I had been here before.\n15 years ago I designed and facilitated some structured workshops in a global top 3 FMCG corporation to navigate a similar challenge during the IT outsourcing wave. We spent weeks in workshops mapping talent, defining \u0026ldquo;future states,\u0026rdquo; and separating \u0026ldquo;commoditised\u0026rdquo; roles from strategic ones.\nLooking back at my workshop archives, I can see that the technology has changed, but the leadership challenge has not.\nBack then we were navigating critical zones — the ambiguity of which tasks belonged to the business and which belonged to an external provider. Today, we are navigating a new critical zone: the uncertainty of which tasks remain human-centric and which can be \u0026ldquo;offloaded\u0026rdquo; to AI.\nDon\u0026rsquo;t panic #I have the impression that some leadership teams are currently making \u0026ldquo;fire or train\u0026rdquo; decisions based on intuition or fear — which is unfortunately a good way to lose your best people and destroy your culture.\nI use a structured, battle-tested framework to help leadership teams navigate the critical zones so they can plan and take action. 
My approach is centred around these three elements:\nFrom role-based to task-based: Stop looking at titles and start mapping the specific tasks that create value versus those ready for AI augmentation.\nA 4-step strategy: I facilitate the progression from \u0026ldquo;Given\u0026rdquo; (Why are we doing this?) to \u0026ldquo;Today\u0026rdquo; (What are the roles?), \u0026ldquo;How,\u0026rdquo; and \u0026ldquo;Where.\u0026rdquo;\nThe value matrix: We categorise capabilities based on strategic importance — defining what is truly \u0026ldquo;Core\u0026rdquo; to your competitive advantage and what is \u0026ldquo;Non-core.\u0026rdquo;\nHow are you navigating this? #You don\u0026rsquo;t need more spreadsheets or generic AI advice. You need a structured, facilitated process to map your company\u0026rsquo;s future in terms of skills and capabilities.\nMy workshop design moves your team from fear to an actionable \u0026ldquo;Map of Future Roles,\u0026rdquo; ensuring your human talent is focused on the work that actually drives your business forward. If you are currently grappling with this conundrum, let\u0026rsquo;s talk.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen and operating globally. We work with companies providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\nWe are experienced in working with data protection leaders and their teams in addressing troubled projects, programmes and functions. 
Feel free to book a call if you wish to hear more about how we can help.\n","date":"19 March 2026","permalink":"/deja-vu-at-the-office-why-yesterdays-sourcing-strategy-is-todays-ai-survival-guide/","section":"Blog","summary":"","title":"Déjà Vu at the Office: Why yesterday's sourcing strategy is today's AI survival guide"},{"content":"","date":null,"permalink":"/tags/governance/","section":"Tags","summary":"","title":"Governance"},{"content":"At Purpose and Means, we don\u0026rsquo;t settle for compliance for compliance\u0026rsquo;s sake. We partner with leaders across the globe to align digital regulation tightly with business purpose and strategy, ensuring it\u0026rsquo;s a living, breathing part of your organisation - not just a legal checkbox.\n","date":null,"permalink":"/","section":"Helping GRC leaders turn complexity into clear, actionable strategies","summary":"","title":"Helping GRC leaders turn complexity into clear, actionable strategies"},{"content":"","date":null,"permalink":"/tags/it/","section":"Tags","summary":"","title":"IT"},{"content":"","date":null,"permalink":"/tags/project-management/","section":"Tags","summary":"","title":"Project Management"},{"content":"","date":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"/tags/workshops/","section":"Tags","summary":"","title":"Workshops"},{"content":"","date":null,"permalink":"/tags/data-protection-leadership/","section":"Tags","summary":"","title":"Data Protection Leadership"},{"content":"When key safety and governance mechanisms like blocking and banning fail, the burden of protection is unfairly shifted onto the most vulnerable users, leading to serious and tangible real-world harm.\nLinkedIn\u0026rsquo;s blocking glitch over the weekend brought attention to this piece of platform functionality — with some people unaware it existed, and others very worried about their personal safety. 
Social media platforms like LinkedIn use a number of different kinds of restrictions to balance user autonomy and platform safety. They can be divided into two main categories: user-initiated actions and platform-enforced actions. Please note that platforms may offer one or more of these mechanisms to their users; the exact set varies by platform.\nUser-initiated actions #These are tools provided by companies to allow users to participate in the platform\u0026rsquo;s community, giving them mechanisms to mitigate harassment:\nBlocking is the primary tool for personal safety. It removes all direct interaction, preventing the blocked user from viewing profiles, sending messages, or interacting with the blocker\u0026rsquo;s content.\nMuting is a discreet, one-sided restriction. It removes a user\u0026rsquo;s content from the muter\u0026rsquo;s feed without notifying the other party. It\u0026rsquo;s used mainly to reduce friction and remove unwanted content but maintains the existing social connection.\nRestricting (soft blocking) is a tool that\u0026rsquo;s somewhere in between the first two — it allows the restricted user to believe they are still interacting normally, while their comments are hidden from others (unless approved) and their presence in direct messages is minimised (e.g. hiding read receipts/online status).\nUnfollowing/unfriending is the removal of a connection, which stops the flow of content from that user into your own feed without initiating the more severe restriction of blocking.\nPlatform-enforced actions #These are proactive and reactive measures that the platforms themselves can take to enforce their Community Guidelines, Terms of Service, and legal requirements:\nTemporary suspension/timeouts is a disciplinary measure that restricts account functionality (e.g. 
posting, messaging, commenting) for a set amount of time and is intended to correct minor or initial policy violations.\nPermanent ban/account termination is the ultimate consequence for serious or repeat violations and is an indefinite removal of access to the platform.\nShadowbanning/algorithmic de-prioritisation is a mechanism where the visibility of an account is restricted in searches, feeds, and recommendations without the user knowing, which also raises data protection concerns around transparency. Shadowbanning is supposed to curb the spread of misinformation, spam, or borderline-harmful content. Interestingly, many platforms deny this measure exists.\nFeature-specific bans/action blocks are where specific functionalities (e.g. \u0026ldquo;cannot comment for 24 hours\u0026rdquo; or \u0026ldquo;demonetisation\u0026rdquo;) are restricted rather than a user\u0026rsquo;s entire account, which allows the user to remain on the platform while being penalised for specific behavioural infringements.\nContent removal/takedowns is where specific pieces of content (e.g. posts, videos, images) that violate guidelines are deleted, often along with a formal warning or strike against the account.\nTechnical and jurisdictional enforcement #These are infrastructure-level controls used to maintain the integrity of bans and enforce legal or regional mandates. 
Although they are mainly there to protect the interests of the platform\u0026rsquo;s communities, be aware that the power to dictate how they are used has become a hot topic between private companies trying to curate safe spaces and some governments seeking to exert their influence or control.\nIP and device banning are technical measures that identify a user\u0026rsquo;s internet connection or physical hardware to prevent banned individuals from circumventing platform rules via secondary accounts.\nCommunity/group/server bans are a type of enforcement carried out by moderators or administrators at a more granular level (e.g. Subreddits, Discord servers, Facebook Groups) rather than across the entire platform.\nGeo-blocking/country-wide bans are high-level restrictions where platforms are made inaccessible within specific countries, normally because of a government mandate, censorship, or regulatory non-compliance.\nWhy does all this matter? #When the term \u0026ldquo;content moderation\u0026rdquo; is mentioned, for some it may sound like a harsh, technical process — but for many people around the world, tools like block, mute, and ban can mean the difference between participating in online communities safely and being driven offline entirely. These people are put at risk when these systems fail, as we saw over the weekend on LinkedIn. As with many aspects of data protection, context matters. Here are some of the groups (with example cases) who may have become concerned:\nSurvivors of domestic abuse — In the US, the National Network to End Domestic Violence documents how abusers use \u0026ldquo;stalkerware\u0026rdquo; and social media surveillance to maintain control. 
When blocking fails, the barrier between victim and abuser is erased.\nTargeted journalists — Journalists are often silenced by the huge volume of threats without the ability to block coordinated \u0026ldquo;troll armies.\u0026rdquo; Maria Ressa, CEO of Rappler, has faced endless, state-sponsored harassment campaigns.\nVictims of doxing — Users often share personal updates assuming their personal data is shielded by a block. The case of journalist Taylor Lorenz highlights how private information is exposed when digital boundaries are breached, resulting in real-world threats.\nMarginalised communities — Women of colour on social media face a disproportionate amount of racist abuse. When blocking tools fail, these platforms become hostile environments that essentially remove these voices. According to Amnesty International, women were abused on Twitter (X) every 30 seconds.\nPeople fleeing coercive control — For people leaving high-control groups or cults, cutting their online presence is a survival tactic. A system error that accidentally unblocks former group members can lead to renewed attempts at forced re-engagement and psychological abuse.\nHuman rights defenders — Activists operating under restrictive regimes rely on blocking to keep their personal networks hidden from state-aligned surveillance bots. A failure here can lead to the identification and persecution of their associates.\nVictims of image-based abuse (e.g. revenge porn) — The spread of non-consensual intimate imagery relies on the inability of victims to quickly mass-block or mass-report content before it goes viral.\nPublic figures facing stalkers — Beyond the \u0026ldquo;celebrity\u0026rdquo; aspect, many people deal with obsessive, parasocial stalkers. 
Blocking is the final line of defence to stop constant, inescapable messages.\nMental health support groups — Moderators of trauma or recovery forums rely on the ability to ban predators who infiltrate these spaces to target the vulnerable. Without effective banning, these support spaces are quickly sabotaged.\nVictims of corporate smear campaigns — When coordinated \u0026ldquo;astroturfing\u0026rdquo; networks target individuals with fake accounts and misinformation, blocking these networks is the only way to stop the echo chamber from destroying the victim\u0026rsquo;s professional reputation.\nTo conclude, blocking and banning are key mechanisms of digital governance for social media platforms — and a system glitch that disables them is not just a technical issue, it\u0026rsquo;s a public safety incident.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen and operating globally. We work with companies providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\nWe are experienced in working with data protection leaders and their teams in addressing troubled projects, programmes and functions. 
Feel free to book a call if you wish to hear more about how we can help.\n","date":"16 March 2026","permalink":"/linkedins-blocking-fail-technical-glitch-or-safety-incident/","section":"Blog","summary":"","title":"LinkedIn's blocking fail: technical glitch or safety incident?"},{"content":"","date":null,"permalink":"/tags/personal-data-breach/","section":"Tags","summary":"","title":"Personal Data Breach"},{"content":"Sometimes, the most complex data protection or GRC challenges are best solved not by traditional meetings but by building models of our reality.\nMany data protection and GRC teams still rely on linear methods to navigate the complexities of their work, and as their leader you are often none the wiser as to what is happening in the invisible architecture of their reality, which a dashboard, spreadsheet or 1:1 meeting does not reveal. And there is a limit to what words can capture when the challenges are abstract. To really solve a problem you sometimes have to hold it.\nThere is a unique, rhythmic sound to 10 kg of Lego bricks being poured out onto a table. It’s a dry, clattering sound, but the sound of potential!\nAs I sat there the other day, sorting through a huge bin bag full of bricks that I had bought from a local online classifieds listing, I found myself doing something unexpected - deconstructing history.\nI had bought the bag of bricks from a man whose son was now in his twenties and had moved out long ago. The man\u0026rsquo;s wife had recently died, and while they had held onto these bricks for sentimental reasons, the time had come to have a clear-out and move on. As I sorted the bricks and disassembled the occasional half-finished model buried in the pile, I reflected, because I saw the evidence of many a family project - a wing from a spaceship here, a section of a castle wall there.\nI pictured the mum and dad sitting on the floor with their son, just as my partner and I did with our own kids years ago. 
Used Lego bricks are not just mass-produced pieces of plastic; they are artifacts of a life shared.\nOther lots of Lego I have bought on the classified ads site have been from people who mistakenly bought the kits thinking they were toys - Lego® Serious Play® kits are very specific - and also, recently, over 10 boxes from a Copenhagen tech-startup that had just been acquired; most of the boxes were unopened.\nLongevity of Lego #This fascination with secondhand objects isn\u0026rsquo;t new for me. My house is full of various British vintage amplifiers and loudspeakers, and my album collection consists of many records bought secondhand over the past five decades.\nAnd then there are books. About 15 years ago, when I was studying for my CGEIT and CRISC certifications, I bought quite a few textbooks online. Many of them were former library books, some from New York, some from San Francisco, to name a couple of places. They still had the library stamps, the lending history, and the occasional margin note scribbled by someone who had previously borrowed the book. I couldn\u0026rsquo;t help but conjure up stories of these books in my hands living their previous lives.\nTo me, those books were knowledge carriers. In the same way, the bricks in that bin bag, which had clearly seen years of play, are story carriers. Buying them secondhand isn’t just about sustainability and price; I think it’s also about the philosophy of the work.\nComplex systems #Lego® Serious Play® has become an important and fun part of my work with Data Protection, Infosec, Risk, and Legal teams. These departments often deal with the invisible. Things like hidden data flows, abstract threat landscapes, and complex legal and compliance requirements. Words alone are sometimes inadequate when you are trying to map a personal data breach response plan or explain the nuances of data protection risk.\nWhen your colleagues sit down with handfuls of bricks, you can quickly notice things change. 
They are not building models of Stonehenge, they are building models of their reality.\nWhether it is a legal team mapping out the complexities of an emerging law or an infosec team visualising the potential attack surface of a remote workforce, Lego® Serious Play® allows them to:\nMove the problem from inside their heads onto the table where it can be seen, debated, and stress-tested.\nRapidly represent opportunities and challenges - co-creating with their hands is far quicker than trying to write everything down.\nHierarchy disappears because the brick models are the focus. The most junior analyst has the same opportunity to contribute to the model as the head of the department.\nOngoing recycling #When I wash, dry, and sort these used pieces, I am preparing them for their second act. They will soon be piled up on the tables of my upcoming in-person workshops, where participants will use them to build models that represent their own complex business challenges. And then at the end of the sessions, those models will be broken down again, the bricks returned to the container ready to be transformed into someone else\u0026rsquo;s story.\nFor my virtual workshops, I send out kits that participants can keep and hopefully reuse themselves.\n","date":"12 March 2026","permalink":"/piecing-together-your-teams-work-practices-brick-by-brick/","section":"Blog","summary":"","title":"Piecing together your team's work practices, brick by brick"},{"content":"Notes from the FTC’s latest conference about informational harms highlighting why data protection is a complex economic, societal, and product design issue that is far too critical to be left solely to the legal department.\nI run various education and training courses covering harms resulting from the processing of personal data, and at the Federal Trade Commission\u0026rsquo;s (FTC) last informational harms conference in 2017 I gleaned many horrific real-life cases to underpin the messages in my courses.\nWhen the FTC 
recently shared the video recording of their latest Consumer Injuries and Benefits in the Data-Driven Economy conference, which was held last month, I set aside some time to watch it over several sittings - it is over 7 hours long! Be aware that the recording includes an hour-long lunch break (i.e. silence) - not sure why they didn\u0026rsquo;t just edit that out.\nWhat stood out for me this time around was how little discussion there was about regulatory frameworks, GDPR fines, or the exact phrasing of privacy notices or data protection policies.\nFrom my perspective it was more around economics, behavioural psychology, cybersecurity, and sociology, and it reinforced my belief that data protection these days is about how we design products, structure markets, and protect society. In other words, way, way more than ticking legal and compliance boxes. The recording is available on YouTube here:\nIf you still believe data protection and privacy are primarily topics for your legal and compliance departments, then watch the video, or read my brief synopsis below and share it with your C-suite colleagues, product teams, technologists and economists (if you have them).\nData protection is also a competition issue #When we leave data protection to the legal department, they focus on articles and recitals, which are of course very important, but if you bring in economists, they see market distortion.\nAlgorithmic collusion: The FTC highlighted how sharing granular consumer data allows companies to engage in \u0026ldquo;algorithmic collusion.\u0026rdquo; Prices stop being genuinely competitive, and consumers face hyper-effective price discrimination.\nCompliance monopoly: Common mechanisms like cookie banners and consent interfaces can harm competition. These types of mechanisms often annoy consumers and add friction that entrenches incumbent platforms. 
Dominant bigtech firms leverage control over ecosystems in order to maintain their powerful positions in the market, which is tough on new entrants looking to compete.\nReal-world harms are physical, emotional, and societal #Many data protection professionals calculate the cost of personal data breaches from notifying individuals and paying potential fines, but the stories shared during the conference were about real life, affecting people\u0026rsquo;s lives:\nStories about medical identity theft corrupting health records and triggering ransomware attacks that literally disrupt hospital operations and endanger patient safety.\nData brokers scraping and selling public records (like property transactions and marriage registrations) directly facilitate real-world harms, including stalking and domestic violence.\nThere were conversations that touched on the algorithmic amplification of hate speech (e.g. Facebook’s role in Myanmar). This is severe societal-level harm, stretching far beyond the scope of data subject rights.\nProduct innovation and trust #Innovation in the data economy involves a fragile balance of behavioural science and product design.\nUsage-based insurance programmes (i.e., monitoring driving habits) can lower premiums and improve road safety, but they face massive consumer resistance due to privacy concerns. This was compared to the early political resistance when seatbelt laws were first introduced.\nWe often associate personalisation with targeted ads and customised interfaces, but things are much more sophisticated these days, with personalised phishing and deep fakes (like fake video calls from trusted contacts) that cause emotional and financial ruin, as well as lost employment opportunities.\nBehavioural manipulation due to continued informational asymmetry. 
Dark patterns and overwhelming data protection and privacy preferences mean that genuine \u0026ldquo;informed consent\u0026rdquo; is effectively impossible for the average person.\nChallenges and issues #The conference identified several barriers to addressing these issues, and none of them can be solved by bringing in teams of lawyers or re-writing the privacy policy:\nQuantifying harm is difficult: Clear definitions for how to quantify multi-dimensional, context-dependent privacy harms are lacking. How do you put a monetary value on the emotional toll of a deep fake or the loss of autonomy?\nData brokers: The data broker ecosystem is a black box because limited empirical data exists, and we cannot regulate or design against what we cannot see.\nThe pacing of change: Rapid technological innovation (especially in AI) increasingly outpaces existing regulatory frameworks, and researchers face huge challenges in acquiring actual data from platforms in order to conduct studies.\nIt is clear that while the \u0026ldquo;data-driven economy\u0026rdquo; creates many benefits for various groups of stakeholders, it also generates complex, systemic injuries. If your company assigns data protection solely to the legal department, you will lack the competences needed to navigate and address this complexity. 
Companies need economists to understand data markets, sociologists to understand societal impacts, behavioural psychologists to design ethical choices, and technologists to secure our identities.\nI think it is time to look beyond the law.\n","date":"9 March 2026","permalink":"/beyond-legal-20-getting-real-with-harms/","section":"Blog","summary":"","title":"Beyond legal #20: Getting real with harms"},{"content":"Workplace surveillance is rarely a simple top-down pyramid, but a recursive trap where \u0026ldquo;Dataism\u0026rdquo; judges the C-suite and the true \u0026ldquo;ultimate dashboard\u0026rdquo; sits not in the boardroom, but with the IT administrators.\nI’ve delivered various employee education sessions covering workplace surveillance over the years, and recently updated the material to take into account the vast amount of technological and societal change we’ve seen in the past year or so, bringing in some interesting cases. I also wanted to reflect the reality I believe exists in quite a few companies, while knowing this can vary depending upon location, local cultural norms, industry sector, etc.\nWe sometimes visualise workplace surveillance as a pyramid where the CEO sits at the top with a clear view of the bottom. But the reality is that surveillance is rarely a simple top-down model. Instead, companies build complex hierarchies of surveillance where the \u0026ldquo;watchers\u0026rdquo; are also the \u0026ldquo;watched.\u0026rdquo;\nWhen C-suite executives require workplace analytics, they are often driven by Dataism. They believe the dashboard reveals the truth about efficiency, and by accepting this logic they inadvertently validate the metric as the ultimate judge of value. 
If the truth of the company is found only in the data, then the executive’s own value must also be measured by that data. I think executives are not the “masters” of the algorithm, because they are also its subjects (or victims).\nOnce the norm of Dataism is established, it inevitably travels upward:\nManagers and supervisors are monitored to ensure they enforce protocols. Their ability to manage efficiently becomes an important KPI.\nCompanies use metadata to assess management performance, so if a department is flagged for, say, low engagement by a piece of software, the VP of that department is the one being judged.\nDoes the CEO have the “ultimate dashboard”? #In my career spanning five decades, I’ve been in quite a few CEO offices and I’m yet to visit one that resembles a control room or has banks of cameras. The CEO is usually too detached and busy to monitor raw surveillance feeds. So, where does the \u0026ldquo;ultimate dashboard\u0026rdquo; actually sit?\nI think in many cases, the person with the most granular view of a company, including the movements and messages of the C-suite, is a mid-level Systems Administrator or a Security Operations Center (SOC) analyst. This inverts the hierarchy, because a junior employee technically has surveillance power over the executive team. Obviously legal controls should prevent unauthorised disclosure of this powerful information, but a dangerous information asymmetry does exist. The CEO doesn\u0026rsquo;t see the data. They see a sanitised report filtered by the very people the data is supposed to measure.\nAnother interesting question: do CEOs make the ultimate decision to procure and implement the surveillance tools? Often, the answer is no. Most large companies use various platforms to help run their businesses, but they are also governed by those platforms\u0026rsquo; mechanisms, because the surveillance infrastructure is not always a strategic decision made in the boardroom. 
It inadvertently begins in procurement and gets implemented in the server room.\nFor example, the CTO procures Microsoft 365 or Zoom for communication, where \u0026ldquo;Productivity Scores\u0026rdquo; or \u0026ldquo;Attention Tracking\u0026rdquo; features are embedded in the platform\u0026rsquo;s architecture. The decision to surveil wasn\u0026rsquo;t made by the CEO; it was made by the platform vendor and enabled by a sysadmin. And of course CISOs buy tools for security.\nSo the CEO rarely signs a document that says, \u0026ldquo;Let\u0026rsquo;s spy on everyone.\u0026rdquo; They sign a budget for \u0026ldquo;Digital Transformation,\u0026rdquo; and the surveillance apparatus is built one piece at a time.\nOn a final note, years ago I worked with a company where it was revealed that the company’s CEO had made a request to the CISO to surveil another member of the C-suite - all behind the scenes, undocumented, and, yes, unlawful. This created an awkward dilemma for the CISO, what some might call a \u0026ldquo;governance crisis.\u0026rdquo; These days whistleblower laws exist in many countries, but are these sufficient to address such dilemmas?\nTo conclude, if you are a CEO, you don\u0026rsquo;t physically pull the levers of the dashboard; your IT admin does. And remember: you are often subject to the default settings of the platforms you procure or the vendors you hire.\n","date":"2 March 2026","permalink":"/the-myth-of-the-ceos-surveillance-dashboard/","section":"Blog","summary":"","title":"The myth of the CEO’s surveillance dashboard"},{"content":"Triggered by the recent Danish regulatory ruling against the use of Google in schools, \u0026ldquo;Googleization of education\u0026rdquo; is not just a legal compliance issue, but a key societal challenge that commodifies children\u0026rsquo;s data, threatens their rights and freedoms, and erodes traditional teacher autonomy in favour of algorithmic standardisation.\nAt the start of this month, Datatilsynet (the Danish 
Data Protection Supervisory Authority) delivered a decision criticising 51 municipalities for their use of Google products in the Danish public school system. Datatilsynet highlighted insufficient data protection measures and warned that the widespread use of Google’s suite, as well as its reliance on sub-processors outside the EU, violates the GDPR. Their press release (in Danish) can be read here.\nIf you look at the case purely through a compliance lens, you see a story about processing activities, international data transfers, and sub-processor agreements. But as I often mention in this blog post series, we must look Beyond Legal. Data protection is not primarily a topic for legal professionals because it has huge societal consequences - both positive and negative - and from my perspective, this case is more about the fundamental transformation of our education system, the digital rights of our children, and the erosion of the teaching profession itself.\nThe art of teaching #Teaching runs deep in my blood: over the past five decades, my brother, sister, mother, and grandfather have all dedicated their lives to the classroom as teachers and lecturers. I have huge respect for the profession, and it really saddens me that the traditional role of the teacher - the mentor, the nurturer, the inspiration for many - is potentially being compromised by technological interference.\nThis ties into a reflection I shared recently on LinkedIn regarding Sir Ken Robinson’s classic TED Talk about how \u0026ldquo;schools kill creativity.\u0026rdquo; It is 20 years since the late Sir Ken famously warned against the industrial, factory-model of education that processes students rather than nurturing their individual and unique talents. 
And through some follow-up exchanges within my network, and some ongoing studying I\u0026rsquo;ve been doing this month about the platform society, I just wanted to get my thoughts down in this post before the month ends.\nThe Googleization of education # Much has been written about this topic by others - people like Sonia Livingstone and Professor José van Dijck, and of course Jesper Graugaard (\u0026ldquo;Father to the Danish chromebook case\u0026rdquo;), who has brought so much work and awareness to the widespread adoption of Google’s digital services (like Google Classroom and Google Analytics) and hardware (such as Chromebooks) in educational settings. This adoption transforms classrooms into environments completely reliant on a single tech giant\u0026rsquo;s vertically integrated ecosystem.\nAs I mentioned in Beyond legal #17: Planting other trees in the forest, we can use van Dijck\u0026rsquo;s metaphor of the \u0026ldquo;Platformisation Tree\u0026rdquo; to understand this. Schools\u0026rsquo; entire educational infrastructure is becoming entangled in the deep, opaque roots of bigtech, which is part of a broader platformisation of education, where private tech companies, not educators, are controlling educational tools, data, and administration.\nThe loss of autonomy # There\u0026rsquo;s much to be concerned about, and when we synthesise the work of Livingstone, van Dijck, and Graugaard (among others) with the current reality of the classroom, this is way beyond GDPR violations:\nData protection \u0026amp; commercialisation: In Beyond legal #18: Data separation as a design strategy, I wrote about the danger of the \u0026lsquo;Oligopticon\u0026rsquo; - how distinct, partial data gazes merge into a 360-degree Panopticon. A child\u0026rsquo;s learning progress is one such partial gaze, and when lots of behavioural data is collected via a singular Google ID within a unified ecosystem, the walls between data silos collapse. 
Children, who are inherently vulnerable, become targets for data exploitation, and their education is turned into an economic resource.\nLoss of teacher autonomy: This is what hurts the most when I think of my family. We are seeing a fundamental change from teacher-led pedagogy to algorithm-driven learning and analytics. Algorithms dictate the pace, standardising and centralising curricula based on platform logic rather than human intuition.\nShift in educational values: Education should be about enabling critical citizenship and cultural growth. Unfortunately, what we are seeing these days is platformisation shifting the focus towards purely measurable skills and data-driven learning outcomes.\nSurveillance and continuous monitoring: The use of predictive analytics to track student performance creates an environment of constant surveillance. We risk trapping children in \u0026ldquo;educational filter bubbles,\u0026rdquo; where content is algorithmically personalised but severely limited, stifling the very creativity Sir Ken championed in his TED Talk back in 2006.\nEducational data as a valuable asset # What happens to children\u0026rsquo;s data when they leave school? Educational data is increasingly integrated into social profiles, used for future labour market selection, recruitment, and data brokerage. Premium certification services are emerging around this data, raising huge ethical questions about children’s rights.\nWhat can we learn from the rise of MOOCs and platforms like Coursera? While offering free content, they monetise certificates and unbundle traditional university services. This model reduces teacher autonomy and risks the privatisation of knowledge validation, pushing a global standardisation that bypasses national accreditation entirely.\nAre there better alternatives? # The Datatilsynet ruling highlights the conflict between public institutions and bigtech monopolies. 
As I mentioned in Beyond legal #17, the power imbalance in Data Processing Agreements (DPAs) often leaves municipalities with a \u0026ldquo;take it or leave it\u0026rdquo; ultimatum.\nIf we want to protect the United Nations Convention on the Rights of the Child (UNCRC) and fully realise the benefits of the GDPR, we need to recognise that legal enforcement only patches the gaps. We need stronger governance and better alternatives that prioritise the interests of the children and their teachers. A few thoughts here (also taken from the work of the people mentioned earlier):\nArchitectural data separation: Recalling Jaap-Henk Hoepman\u0026rsquo;s strategies from Beyond legal #18, we must proactively design for data separation. Educational data must be architecturally decoupled from commercial ad-tech ecosystems to preserve contextual integrity.\nOpen-source \u0026amp; FAIR principles: We should promote the development of open-source and open-data educational platforms that follow FAIR principles (Findable, Accessible, Interoperable, Reusable).\nBlended learning solutions: We must re-centre the classroom around human interaction, where digital tools assist rather than direct; this will help restore teacher autonomy.\nWe need to preserve public values in education: equality of access, democratic governance, and the freedom for children to learn and make mistakes without being permanently profiled. 
The classroom should be a sanctuary for creative growth and not just another branch on the platformisation tree.\n","date":"26 February 2026","permalink":"/beyond-legal-19-the-chrome-ification-of-education-and-the-algorithmic-classroom/","section":"Blog","summary":"","title":"Beyond legal # 19: The Chrome-ification of education and the algorithmic classroom"},{"content":"We often fear the single eye of \u0026lsquo;Big Brother,\u0026rsquo; but a real threat to our fundamental rights is the architectural loss of data separation, which is allowing the distinct, partial gazes of the \u0026lsquo;Oligopticon\u0026rsquo; to merge into a 360-degree Panopticon.\nI occasionally see the use of the Panopticon as a metaphor to warn against the \u0026ldquo;Big Brother\u0026rdquo; state and the total surveillance of the digital age. The Panopticon is Jeremy Bentham’s 18th-century prison design where a single watchman observes all inmates, creating a state of permanent visibility.\nIn 2018, whilst attending a conference in Philadelphia, I visited Eastern State Penitentiary, the former prison that once held Al Capone as a prisoner. It was a fascinating visit, walking around the ruins, and interestingly, key elements of the workings of the Panopticon were still in place. Aside from the obvious physical architecture, some of the mirrors that enabled the permanent visibility were still mounted on the walls. In the right-hand photo below you can see one such mirror (and obviously someone still gives them a good polish).\nThe Panopticon assumes a single, all-seeing eye. These days, the reality of the platform society is far more complex.\nAs I mentioned in my previous blog post, \u0026ldquo;Planting other trees in the forest,\u0026rdquo; we can look to thinkers like José van Dijck to begin to understand the nature of modern surveillance. 
Van Dijck (building on concepts from the late Bruno Latour) talks at length about the concept of the Oligopticon.\nUnlike the Panopticon, which relies on a single observer, Van Dijck describes the Oligopticon as a system of \u0026ldquo;partial gazes.\u0026rdquo; In this system, we are watched by many different entities, each seeing a narrow but precise view of us:\nYour bank sees your financial history.\nYour doctor sees your blood work.\nYour social media platform sees your social graph.\nAt first glance, you might think that these \u0026ldquo;fragmented observations\u0026rdquo; are safer than a single Big Brother, but Van Dijck alerts us to the fact that these partial gazes are often interconnected.\nIn the platform society, these distinct oligopticons - corporations, governments, and data brokers, to mention a few - constantly share and cross-reference data. The result is a distributed surveillance network where multiple partial views are joined together to form a comprehensive, holistic picture of the individual. This system is more pervasive than the Panopticon because it is everywhere, embedded in the sharing and \u0026ldquo;always on\u0026rdquo; culture in which many voluntarily participate, losing control over how their fragmented data is reassembled.\nMy point here is that the danger is not just data collection, it is data connection.\nSeparating data can save lives # The danger of connecting these \u0026ldquo;partial gazes\u0026rdquo; is not theoretical. History has shown us that when data silos collapse and information is cross-referenced, the result can be deadly.\nLast month, around the week of data protection day, I wrote a post on LinkedIn about a group of \u0026ldquo;data subversives\u0026rdquo; who operated during WW2. These individuals realised that administrative data was no longer just bureaucratic. In the hands of an oppressor seeking a \u0026ldquo;comprehensive view\u0026rdquo; of the population, it was a weapon. 
They took action to address this:\nWillem Arondeus led a group in Amsterdam that bombed the population registry in 1943. They understood that by destroying the physical records, they could prevent the Nazis from cross-referencing names with addresses and identities, effectively blinding the occupier.\nRené Carmille, a punch-card expert in France, programmed the census machines to physically jam or never punch \u0026ldquo;Column 11\u0026rdquo; - the column for religious affiliation.\nAdolfo Kaminsky and Paul Grüninger manipulated identity documents. Kaminsky, a French master forger, created false papers that allowed people to pass through the system unseen. Grüninger, a Swiss police commander, falsified dates on entry visas to save refugees who legally shouldn\u0026rsquo;t have been there.\nIrena Sendler in Poland took the opposite approach: she created a \u0026ldquo;shadow registry.\u0026rdquo; While smuggling children out of the Warsaw Ghetto, she wrote their real names on slips of paper and buried them in jars under an apple tree. She separated their true data from the official system to preserve their future.\nThese \u0026ldquo;subversives\u0026rdquo; fought against the efficiency of the connected gaze. They understood that separating data saves lives. They fought to keep the views fragmented, preventing the regime from forming the \u0026ldquo;complete picture\u0026rdquo; necessary for total control. If you want to read more about these brave individuals, take a look at my interactive infographic about the origins and history of European data protection and privacy.\nToday, we are rebuilding exactly what they tried to destroy. We are doing it not through military occupation, but through commercial integration.\nIn the graphic above, the risks are not just around data collection, but also data connection. 
When a bank, a social media company, or the state can cross-reference their data, the \u0026ldquo;narrow\u0026rdquo; view becomes a totalising one, and when the walls between the silos dissolve, we can quickly find ourselves in a constitutional crisis.\nM\u0026amp;A and partnerships # The move towards a more interconnected Oligopticon is driven by a market that seeks 360-degree customer views. When a bigtech company acquires a niche player, they are not just acquiring technology. They are buying a new \u0026ldquo;partial gaze\u0026rdquo; to add to their existing collection. A couple of examples in recent years caught the attention of the EDPB, which was concerned that combining datasets could negatively impact the fundamental rights of individuals by creating a profile of such depth that it would be impossible to escape:\nGoogle \u0026amp; Fitbit: When Google acquired Fitbit, it sought to merge its \u0026ldquo;Search Oligopticon\u0026rdquo; (what you are interested in) with a \u0026ldquo;Health Oligopticon\u0026rdquo; (how your body functions).\nApple \u0026amp; Shazam: Similarly, Apple’s acquisition of Shazam allowed the device manufacturer to integrate granular listening habits into its broader ecosystem, effectively merging the hardware and software gaze with the cultural gaze.\nAlso be aware that contractual partnerships act as the \u0026ldquo;glue\u0026rdquo; between unconnected oligopticons. Data sharing agreements between apps, brokers, and advertisers mean that even without a merger, the \u0026ldquo;partial gaze\u0026rdquo; of one company is legally available to another.\nAnd if you think this is too theoretical, here are some recent examples of what happens to our fundamental rights when the gazes connect.\n1. The neighbourhood Panopticon: Amazon Ring (source)\nThe partial gaze: The private home security camera. 
Historically, Ring footage was seen as \u0026ldquo;private property,\u0026rdquo; viewable only by the homeowner.\nThe connection: Amazon’s Ring created the \u0026ldquo;Neighbours\u0026rdquo; app and established contractual partnerships with over 2,000 police and fire departments.\nThe result: By aggregating millions of private views, Amazon created a massive, privately-owned surveillance network accessible by the state.\nFundamental rights impact: This erodes the Presumption of Innocence and Freedom of Assembly. It creates a society where simply walking down a street subjects you to a police lineup.\n2. The welfare Panopticon: The SyRI legislation (source)\nThe partial gaze: Distinct government agencies holding separate data: water usage, tax returns, and vehicle registrations.\nThe connection: The Dutch government used SyRI (System Risk Indication) to link these disparate databases to algorithmically predict welfare fraud.\nThe result: Innocent behaviours (e.g., low water usage combined with specific employment status) triggered fraud investigations. The state created a \u0026ldquo;glass house\u0026rdquo; for low-income citizens.\nFundamental rights impact: A Dutch court halted SyRI for violating Article 8 of the European Convention on Human Rights (the right to respect for private life). The court noted that the lack of transparency made it impossible for citizens to defend themselves against the \u0026ldquo;black box\u0026rdquo; of state power.\n3. The biological Panopticon: Post-Roe period trackers (source)\nThe partial gaze: Femtech apps (like Flo or Clue) where users track menstruation cycles. A strictly medical/personal gaze.\nThe connection: Following the overturning of Roe v. Wade, data brokers and law enforcement began seeking access to this data to infer pregnancy status.\nThe result: The \u0026ldquo;medical gaze\u0026rdquo; of the app merges with the \u0026ldquo;punitive gaze\u0026rdquo; of the state. 
A missed period in an app, combined with geolocation data near a clinic, creates an evidentiary trail for criminal prosecution.\nFundamental rights impact: This threatens the Right to Health and Bodily Integrity. When health data becomes testimonial evidence against the user, individuals avoid seeking medical care, fearing self-incrimination.\n4. The labour Panopticon: Workplace \u0026ldquo;bossware\u0026rdquo; (source)\nThe partial gaze: Traditional employment metrics (deadlines, clock-in times).\nThe connection: Tools like Hubstaff or Microsoft’s Productivity Score (in its initial form) capture keystrokes, mouse movements, screenshots, and webcam feeds.\nThe result: The distinction between \u0026ldquo;working\u0026rdquo; and \u0026ldquo;thinking\u0026rdquo; is erased. The employer sees not just the output, but the process, which intrudes upon the mental space of the worker.\nFundamental rights impact: This impacts Human Dignity and Labour Rights. It reduces the human worker to a data stream, eliminating the \u0026ldquo;unobserved space\u0026rdquo; necessary for creative thought and mental rest.\n5. The living room Panopticon: Smart TVs and ACR (source)\nThe partial gaze: The television as a one-way display device.\nThe connection: Modern Smart TVs use Automatic Content Recognition (ACR) to identify what you are watching and share that data with advertisers via contractual partnerships.\nThe result: The TV watches you back. It combines your viewing habits with your IP address (potentially linking to devices in your home) to build a profile of your political leanings and cultural interests.\nFundamental rights impact: This violates Article 7 of the Charter of Fundamental Rights of the European Union (Respect for private and family life, home and correspondence). The home is traditionally the ultimate refuge from the public gaze. 
ACR technology turns the living room into a market research lab without explicit, informed consent.\nBeyond Legal: Data separation as a design strategy # If we view the interconnected Oligopticon purely as a legal problem, then, if we\u0026rsquo;re not careful, the result will be yet another legal solution such as \u0026ldquo;better contracts\u0026rdquo; or \u0026ldquo;more granular consent forms.\u0026rdquo; This is where we need to move beyond compliance and towards architecture, and if you are familiar with the work of the Dutchman Jaap-Henk Hoepman, then you may also be aware of his \u0026ldquo;little blue book\u0026rdquo; (free download) or Privacy Is Hard and Seven Other Myths: Achieving Privacy through Careful Design. Hoepman emphasises that data protection cannot be an afterthought. It must be engineered, and he details (among many things) Separate as a fundamental design strategy.\nHoepman teaches us that we should process personal data in distributed datasets whenever possible, isolating distinct domains to prevent correlation:\nContextual integrity: By separating data, we respect the context in which it was given (healthcare vs. employment).\nRisk reduction: If one database is breached, the attacker gets a slice, not the whole pie.\nRené Carmille and Willem Arondeus understood this intuitively in the 1940s. If the architecture of the system is designed for total connection, it can easily be used for oppression.\nToday, the defence against the modern Panopticon cannot be left solely to lawyers. It requires Product Managers, Data Architects, and Engineers to embrace Hoepman’s strategy. From a data protection perspective, we must design for unlinkability.\nIf we allow these gazes to merge through shared unique identifiers and interoperable databases, we lose more than our privacy. 
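To make the Separate strategy a little more concrete, here is a minimal sketch of per-context pseudonymisation - my own illustration, not taken from Hoepman\u0026rsquo;s book, with a hypothetical function name and deliberately simplified key handling. The idea is to derive a different identifier for each domain, so that datasets from, say, healthcare and education cannot be joined on a shared key:

```python
import hmac
import hashlib

def context_pseudonym(user_id: str, context: str, secret_key: bytes) -> str:
    """Derive a context-specific pseudonym: records keyed on it can be
    linked within one domain, but not joined across domains."""
    message = f"{context}:{user_id}".encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()[:16]

# In practice the key would live in an HSM/KMS, never in source code.
key = b"example-key-material"

p_health = context_pseudonym("user-42", "health", key)
p_school = context_pseudonym("user-42", "education", key)

# Same person, but the identifiers are unlinkable across contexts...
assert p_health != p_school
# ...while remaining stable within a context, so longitudinal records still work.
assert p_health == context_pseudonym("user-42", "health", key)
```

Without the secret key there is no practical way to recompute or reverse the mapping, which is exactly the property that keeps a breach in one silo from becoming a 360-degree profile.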
Without that separation, we lose the freedom to be different people in different places, a freedom that is essential to human dignity.\n","date":"20 February 2026","permalink":"/beyond-legal-18-data-separation-as-a-design-strategy/","section":"Blog","summary":"","title":"Beyond legal #18: data separation as a design strategy"},{"content":"Whether you are a business leader trying to decouple from high-risk vendors, or a consumer trying to protect your family\u0026rsquo;s digital footprint, the strategy is the same. We must stop looking only at the shiny leaves of the apps we use and start understanding the roots. We must switch our mindset to \u0026ldquo;risk reduction.\u0026rdquo;\nIn my previous post, I wrote that the time has come for data protection and GRC leaders to shine. I suggested that total risk avoidance is currently not an option if you wish to take a European-centric technology strategy. And on a personal level, unless you plan to sit at home on a closed network, emailing your family members using a homelab email server, you will continue to be exposed.\nSo, if avoidance isn\u0026rsquo;t an option, where does that leave us? It leaves us with risk reduction and strategic decoupling, and to navigate this, we need a better map.\nThe \u0026ldquo;Platformisation Tree\u0026rdquo;\nThis week, as part of a course I\u0026rsquo;m on, I have been studying the work of Professor José van Dijck, particularly her book \u0026ldquo;The Platform Society\u0026rdquo; from 2018 and her more recent paper, \u0026quot;Seeing the forest for the trees: Visualizing platformization and its governance.\u0026quot;\nShe uses the metaphor of the \u0026ldquo;Platformization Tree\u0026rdquo; - this obviously inspired the graphic I\u0026rsquo;ve used for this post, though mine is quite different from hers - you\u0026rsquo;ll see this if you read her paper linked to above. 
It is an interesting visualisation for business leaders and consumers alike because it forces us to look beyond the \u0026ldquo;leaves\u0026rdquo; - the apps and interfaces we interact with daily - and stare into the tangled \u0026ldquo;roots\u0026rdquo; of the infrastructure.\nIt is in these roots that the complexity of our dependence on non-European tech lies. And it is in these roots that the \u0026ldquo;legal\u0026rdquo; view of data protection often fails to capture the full picture.\nThe consumer perspective # For the consumer, the sheer size and complexity of this tree can be difficult to take in. But I think we are seeing a shift, because initiatives are popping up in Europe to help people begin navigating this forest so they can make choices.\nIn Denmark, where I live, there is an initiative called \u0026ldquo;Danmark Skifter\u0026rdquo; (Denmark Switches). In other countries there are similar projects. I applaud them as a starting point. They raise awareness and offer actionable steps for people to reduce their dependency on bigtech and, thereby, reduce their own risk.\nThese initiatives have their critics, but I disagree. Following a risk reduction strategy is infinitely better than not attempting one at all, especially when risk avoidance is not an option right now.\nI think we must stop treating consumers as if they are too naive to understand complexity. In recent years, people have become capable of understanding that \u0026ldquo;risk\u0026rdquo; in data protection is not an abstract compliance score - it is about human rights.\nOur personal risk profile varies wildly depending on who we are, for example:\nAre you more susceptible based on your sexuality, ethnicity or religious beliefs?\nAre you involved in union activities or political movements?\nWho do you associate with?\nDo you, or a family member, have a criminal past?\nAre you in the media spotlight? 
Are you famous, a celebrity, or do you live next door to someone who is?\nThe amount of data you have provided over the years, combined with your frequency of use, creates a unique personal risk profile. Ultimately, it comes down to your personal risk appetite for yourself and your family. The conversations I\u0026rsquo;ve had over the past couple of years indicate to me that people can increasingly navigate these complexities and make choices, provided they can see the forest and the trees.\nThe Controller/Processor reality # When we move from the consumer branch down to the B2B roots, the transparency vanishes, especially in AdTech.\nData protection professionals have been well aware of the controller, processor, and sub-processor chains for many years. While sub-processors were not regulated under the 1995 EU Data Protection Directive, they came into sharp focus with the GDPR in 2016, specifically regarding the need for a processor to obtain the controller\u0026rsquo;s written authorisation. In theory, the law is clear: the controller is in charge. In practice, however, the power imbalance in the \u0026ldquo;Platform Society\u0026rdquo; is very significant.\nWhen dealing with the bigtech platforms, their Data Processing Agreement (DPA) is often a \u0026ldquo;take it or leave it\u0026rdquo; document. The nuance that often catches people out is the requirement for specific authorisation. This means the controller must approve a particular sub-processor for a particular processing operation.\nThis is where the \u0026ldquo;Beyond Legal\u0026rdquo; mindset is critical. You cannot simply read the contract; you\u0026rsquo;ve got to understand the technical reality.\nMany data protection professionals use tools like Exodus or Webbkoll, among others, to analyse potential data flows. These are good tools. 
They might flag a tracker or a potential flow to a US-based sub-processor, but remember the phrase above, \u0026ldquo;particular processing operation.\u0026rdquo;\nJust because a tool detects a library or a script does not mean that the processing operation is active in your company\u0026rsquo;s specific context. The flow might be dormant, or it might not be triggered by the specific user journey you have designed.\nThese tools are signals for deeper investigation, not a final verdict. They are the start of the conversation, not the end. Many of these tools are freely available online, so consumers can also make good use of them.\nNew trees in the forest # In my study group this week, there were some great conversations and ideas being thrown about in terms of European alternatives to the GAFAM platforms, but they often ended up down in the complex root system - how can Europe step up, how can it decouple itself? As mentioned earlier, it\u0026rsquo;s not possible right now, but with the spirit and enthusiasm I sense building, there is plenty of appetite to change this - even if, as one person suggested, it may take as long as a generation. And here is an infographic of the entire lecture - such an interesting topic:\n","date":"13 February 2026","permalink":"/beyond-legal-17-planting-other-trees-in-the-forest/","section":"Blog","summary":"","title":"Beyond legal #17: Planting other trees in the forest"},{"content":"Having a calm head, getting the right people to use methodical approaches, and truly understanding the types of risk we are talking about has never been more important.\nIn this post #16 of my Beyond Legal series, I want to cover a couple of areas: one is timely as companies try to make sense of current global events, and the second is a common frustration I hear from GRC leaders: they have a leader title, but they struggle to get buy-in from their peers.\nIn some companies, governance is still framed as review, approval, and constraint. 
It\u0026rsquo;s something that happens after strategy, after architecture, and after procurement.\nThese days, geopolitical events are forcing many European organisations to explore reducing dependencies on US vendors. It\u0026rsquo;s a feasibility question and a business issue (not just a tech issue) that cuts through operating models, product delivery, security dependencies, and the end-to-end processing and movement of data, and is no longer just a legal discussion about transfer mechanisms (SCCs or frameworks).\nAnd this is why I keep coming back to a line that I think matters a lot these days: GRC should be a design requirement for a company\u0026rsquo;s business model, and if you are a GRC leader, you have an opportunity now to be part of the important conversations that could be taking place in your company.\nConvergence is the new operating reality # In an earlier post on The convergence of AI Governance, Data Protection, and ESG, I stated that these disciplines can no longer be managed in silos, or as separate pieces of work. I repeat here a few phrases from the post:\nAI governance - \u0026ldquo;compliance is in code\u0026rdquo;\nTrust - evidence and certification rather than marketing promises.\nThe environmental footprint of digital is becoming measurable - \u0026ldquo;digital is physical\u0026rdquo;.\nData sovereignty and fragmented digital markets are now shaping IT architecture and business strategy.\nIf you are struggling with getting buy-in for your work, this convergence could be your way in. 
It\u0026rsquo;s going to allow you to stop talking about data protection, privacy or GRC as siloed specialist concerns and start talking about governed digital transformation as a foundation for growth, resilience, and market access.\nSovereignty is also about dependency\nYears ago, I shared a post on LinkedIn that was perhaps relevant back then in relation to GDPR, but I think things have certainly moved on in relation to mapping data flows across the data lifecycle and the kinds of questions to ask as part of your risk assessment. Fit for purpose back then, perhaps, but not for the challenges we face today.\nI\u0026rsquo;m currently seeing many articles in the tech media and on LinkedIn where tech leaders say \u0026ldquo;we need to reduce our dependency on US vendors,\u0026rdquo; or words to that effect, but I sense some may underestimate what dependency actually means.\nIt\u0026rsquo;s not just about where data is stored (residency). Dependency shows up in control planes and less obvious flows such as identity providers, key management, device management, monitoring and telemetry, vendor support access, backup flows, and embedded AI services where prompts and context flow in ways few people have mapped. And don\u0026rsquo;t forget the often invisible supply chain of your vendors, in other words, your vendors\u0026rsquo; vendors, and so on.\nI\u0026rsquo;m currently looking for a similar mapping approach that can be used to capture the hidden sovereignty risks inherent in modern, integrated architectures. The diagram above is a simple Excalidraw mock-up based on a few diagrams I\u0026rsquo;ve seen shared online. It hopefully moves beyond the simpler data lifecycle diagrams I made back around 10 years ago - this is work in progress, and any input you may have will be welcomed.\nThe answers to the questions many tech leaders are asking right now should not involve a knee-jerk vendor purge. For example: anything US is a no-go, cut it immediately. 
The right response is to conduct a methodical, business-facing assessment that answers: What would it take? What would it break? What is the impact on our business objectives? And critically: what type of risk are we actually talking about?\nI also think there are quite a few companies claiming they know their data flows, or at least that\u0026rsquo;s what they say on the stage when presenting at conferences. In practice, they may know fragments of the reality, and that is not enough for sovereignty decisions, because sovereignty risk often lives in the secondary flows, as I illustrate in my mock-up above: diagnostic logs, analytics tags, customer support tooling, cross-region failover, developer environments, and shadow AI usage.\n\u0026ldquo;Sovereignty Feasibility Assessment\u0026rdquo; # As I\u0026rsquo;ve written many times over the years, my business analysis education through BCS in London 20+ years ago gave me some very useful tools and techniques, including the concept of conducting a feasibility study or feasibility analysis rather than jumping right in and making brash decisions. I therefore think the same approach can be applied to today\u0026rsquo;s sovereignty challenges.\nSo if you lack business support or buy-in, don’t start by proposing a sweeping \u0026ldquo;sovereign cloud programme.\u0026rdquo; Instead, propose a time-boxed exercise, what I call a Sovereignty Feasibility Assessment, with a clear output: a set of scenarios and their business impacts. 
This requires getting the right people in the room - Enterprise Architecture, Security, Data/AI leadership, Procurement, Product, Operations, and ESG/Sustainability, to name a few.\nThe deliverable is a small number of realistic scenarios or options, for example:\nHarden and Segment: Reduce exposure via encryption and keys without replacing the vendor.\nMulti-Sovereign: Regional routing and localised control planes.\nDual-Stack: Replace specific domains while retaining incumbents for others.\nFull Decouple: Not fast, not cheap, but sometimes necessary.\nThe value is the explicit trade-offs that must be described in business language.\nBelow is a mock-up of a simplified scenarios matrix, which in reality would require actual quantification in terms of cost, time impacts, etc. Obviously, different scenarios exist depending on your business context.\nWe also really need to be clear about the type of risk we are referring to here - it\u0026rsquo;s sovereignty risk, and how this is assessed and eventually articulated to senior leadership is important. Since the goal is risk reduction via, say, \u0026ldquo;Decoupling,\u0026rdquo; the risks being treated are likely to be around:\nExtraterritorial access, i.e., the risk of foreign governments (e.g., the US CLOUD Act) accessing data regardless of where it is legally \u0026ldquo;resident.\u0026rdquo;\nVendor lock-in / sanctions, i.e., the risk of being cut off from critical infrastructure due to trade wars or policy changes.\nOperational resilience, i.e., the ability to operate autonomously without reliance on a global control plane.\nOnce you understand these risks, you can then link them to other organisational risks that you may have previously identified and mapped in a causal loop diagram showing the potential ripple effects (see this post that covers this concept) of these sovereignty risks.\nWhat happens to your business if you jerk your knee? # This is the part many governance narratives avoid. 
Reducing dependencies can strengthen resilience, but your company can quickly find itself in free-fall if you don\u0026rsquo;t proceed methodically and with a calm head. You need to understand the impacts, such as:\nThe impact on innovation Hyperscalers often provide best-in-class managed services and AI acceleration. Replacing them can reduce velocity and create talent challenges. Your teams may interpret or experience this as a step back, unless the migration is paired with architectural simplification.\nThe impact on resilience Putting all your eggs in one basket (over-dependence) creates systemic risk, i.e., single provider, single jurisdiction. Reducing concentration improves continuity and your ability to negotiate.\nThe market impact Sovereignty can be a positive commercial differentiator, especially when talking about \u0026ldquo;trust\u0026rdquo;, but it can also result in plenty of negative impacts.\nImpact on the operating model Orchestrating the right people to conduct this \u0026ldquo;sovereignty work\u0026rdquo; forces a company to ask some very fundamental questions that it wishes it had easy answers to: Who owns this data product? What is the authoritative source? Where is deletion actually enforceable? This is very much the “compliance is in code” idea that I referenced earlier in this post. And as I mentioned earlier, this is a business issue, especially these days when tech is tightly integrated into business processes and ownership sits in the business.\nWhere do ESG and CDR fit in? Corporate Digital Responsibility (CDR) is getting attention globally, but to be fair, it does have deeper roots in Europe. The European Commission’s approach to data and AI is increasingly framed around societal outcomes, not just economic growth.\nFor GRC leaders, CDR can be the bridge that binds sovereignty to ESG. 
It extends the Social and Governance pillars into the digital domain, for example, digital inclusion, fairness, transparency, bias prevention. It also touches Environmental concerns through sustainable ICT.\nMy point certainly isn’t to transform every vendor decision into an ESG initiative; rather, it’s to show that digital transformation is now part of how the company delivers on its broader obligations and long-term licence to operate in the markets it\u0026rsquo;s present in or wishes to enter.\nTo conclude, if you take one idea from this *Beyond Legal* post: stop trying to get buy-in through policy. Instead, achieve it through relevant dialogue and methodical practices.\nYou change the conversation and hold attention when you can present a lineage-based diagram of how the company’s data actually flows and all the vendor dependencies that go with it.\nWe are living in times that demand method, not knee-jerk reactions. Get the right competences around the table, map the lineage, and turn sovereignty into a set of feasible scenarios.\n","date":"23 January 2026","permalink":"/beyond-legal-16-grc-leaders-your-chance-to-shine/","section":"Blog","summary":"","title":"Beyond legal #16: GRC leaders - your chance to shine"},{"content":"AI governance is about to be stress-tested by agentic systems, and the companies that hold up won’t be the most centralised; they’ll be the ones that embed oversight locally without burning out their champions.\nNormally my posts in this blog spill over to LinkedIn but this one is the other way round. 
A couple of posts there about decentralised AI accountability have come over here.\nFor years, I’ve seen decentralised “local stewardship” models show up in information security, risk, and data protection, and now they’re appearing (at speed) in AI governance.\nI’ve seen this from both angles: years ago as a local security officer and local risk officer inside a large multinational, and later running global security and data protection programmes across other global companies. You learn quickly what good looks like, and what goes wrong. In fact, my own “data protection champions” model was built directly from those early experiences, and several clients have successfully adapted it in their companies.\nI know local champion networks tend to divide opinion. Some people swear by them; others have scars from implementations that quietly failed. My take: these networks can be powerful if they’re designed properly and tended to afterwards.\nA public-sector signal worth paying attention to\nRecently I came across the Texas Department of Transportation (TxDOT) AI Strategic Plan 2025–2027 and one element stood out: instead of trying to control every algorithm from a central HQ, TxDOT is building an “AI Champion Network.”\nIt’s essentially the classic Hub-and-Spoke model:\nThe Hub: sets strategy, standards, guardrails, safety expectations\nThe Spokes: embedded “AI Champions” in districts/business units who own execution locally\nThere’s a bigger lesson here than “government does governance too.” It suggests that the companies likely to succeed in the next 12–24 months won’t be the ones with the biggest compliance departments; they’ll be the ones that can weave governance into daily operations through embedded networks. 
So, if a government agency can move beyond an “Ivory Tower” governance model, there’s a good chance your company can too.\nYour local colleagues are not resisting change; they are tired out\nAfter sharing these thoughts in my initial LinkedIn post on the topic, a few comments triggered a further (and important) reflection: the risk of “champion fatigue.”\nWhen these networks struggle, it’s often framed as resistance, as if local colleagues are unwilling. But more often, it’s simply overload: people burning out under the weight of extra compliance tasks piled onto an already full role. I think that distinction changes everything.\nIf the problem is overload, the solution isn’t more pressure. It has to be better design.\nGovernance shouldn’t become a bottleneck at HQ, but it also can’t become a burden dumped onto the frontline (the first line of defence).\nWhy “local champions with checklists” won’t be enough for what’s coming\nMany companies are moving from GenAI (systems that speak) to Agentic AI (systems that act).\nWhen AI starts executing transactions, spending money, triggering processes, or writing and deploying code, governance shifts from “is this content acceptable?” to “is this system operating safely and as intended, end-to-end?”\nIn other words: a local champion armed with a checklist will struggle to provide what’s really needed.\nIn a trends radar I began late last year - AI Governance: 2026 and Beyond - two trends came through particularly strongly:\nAgentic AI governance: a shift from content moderation to operational oversight\nDecentralised accountability: the need for embedded risk owners, not just central policies\nI’ve seen external signals that reinforce these directions, including:\nIBM highlighting governance needs for agentic systems\nA WWT report pointing to “Ambassador Networks” as critical infrastructure in banking\nBoth are worth reading if you’re designing (or redesigning) your local AI governance network.\nThe 
design challenge: embed governance and reduce friction\nSo the question isn’t “Should we decentralise governance?” It’s:\nHow do we embed accountability locally without turning people into part-time compliance administrators?\nHow do we make the hub strong enough to provide clarity and safety, but light enough not to block delivery?\nHow do we design governance for systems that act, not just tools that generate text?\nDone well, hub-and-spoke governance can scale organisational competence as well as oversight.\nAnd that’s where I think the next wave of AI (and data protection) maturity will come from: not more centralised control, but better-designed networks that make governance part of how work gets done, without burning out the people we rely on.\n","date":"9 January 2026","permalink":"/avoid-the-ivory-tower-hub-and-spoke-ai-governance-without-exhausting-your-local-colleagues/","section":"Blog","summary":"","title":"Avoid the ivory tower: Hub-and-Spoke AI governance without exhausting your local colleagues"},{"content":"Fifteen posts in, Beyond Legal has explored why effective data protection (and AI governance) is less about legal interpretation and more about building real organisational capability across risk, engineering, operations, governance, communication, and measurement.\nWhen I started writing my Beyond Legal series of blog posts, I wanted to challenge a default setting in our field: data protection is often treated as a legal topic, owned by legal professionals, solved with legal artefacts.\nOf course, the laws themselves matter a huge amount. But succeeding in data protection isn’t just about legal interpretation. 
It also requires companies to fully understand how personal data is *actually processed* in their unique context, make sound decisions under uncertainty, and operationalise controls across their systems.\nNow that I’ve published 15 posts, here’s a brief summary of what I\u0026rsquo;ve covered so far, organised by theme rather than in chronological order, because the point of the series isn’t \u0026ldquo;yet another GDPR explainer”. It’s a journey of recognising and building the needed capabilities.\n1. The real problem isn’t that the GDPR is hard. It’s that the job is bigger than “legal”\nThis past year there\u0026rsquo;s been a lot of commentary in data protection highlighting frustrations around “GDPR complexity”, often blaming the regulation itself rather than looking inwards and questioning whether companies have the needed capabilities.\nPosts:\nSimplify the GDPR? Upgrade your competences instead\nBeyond legal #1: The data protection leader\u0026rsquo;s journey begins\nBeyond legal #3: Why data protection leadership is more complex than just \u0026lsquo;difficult\u0026rsquo; laws, and what it takes to succeed\nCore takeaway: when we frame data protection as primarily legal, we over-invest in legal outputs and under-invest in delivery capability.\n2. Risk competence is not optional. It’s the operating system\nData protection work is risk work, so a functioning and effective risk management system needs to be at the heart of a data protection leader\u0026rsquo;s work. But risk often gets reduced to vague language (“low risk”, “medium risk”) without shared methods, shared definitions, or decision discipline. 
The result is inconsistent and subjective decisions, a false sense of being in control, and either blanket blocking (\u0026ldquo;no!\u0026rdquo;) or rubber-stamping.\nPosts:\nBeyond legal #2: Why every data protection and AI governance leader needs SIRA competences in their toolkit\nBeyond legal #4: time to up your risk game\nCore takeaway: if you want data protection (and AI governance) to be credible, you need to be able to analyse and communicate risk in a way that stands up outside the legal function.\n3. Governance isn’t a pile of documents. It’s coordination.\nAnother theme: “governance” is widely misunderstood. Too often, it becomes a synonym for policies, templates, committees, and reporting. But governance is the practical reality of who decides, based on what, with which controls, and with what feedback loop.\nPosts:\nBeyond legal #5: Get to know your data management and data governance colleagues\nBeyond legal #6: The great governance misunderstanding\nBeyond legal #7: You can\u0026rsquo;t protect what you are not aware of\nCore takeaway: you can’t govern what you can’t see. Awareness of data, flows, systems, and ownership is a precondition for meaningful protection.\n4. If you don’t show up in engineering, you’re not in control.\nThis is where “beyond legal” becomes concrete and involves visiting the trenches, or engine room. If privacy or data protection considerations are not embedded into how products and systems are built and changed, the work will always be late, reactive, and labour-intensive\u0026hellip; and expensive.\nPosts:\nBeyond legal #8: Why your next hire should be a data protection engineer\nBeyond legal #9: Get to know your SDLC\nCore takeaway: scalable data protection is engineered. The most effective controls are designed and built into systems and delivery processes, not bolted on afterwards.\n5. Training and measurement\nA lot of companies can prove they did something (training delivered, DPIAs completed, RoPA entries updated). 
Fewer can show that the something mattered: improved behaviour, fewer incidents, better design decisions, reduced rework, improved time-to-safe-release.\nPosts:\nBeyond legal #10: The battle for minds - why your data protection training needs an advertising makeover\nBeyond legal #11: Stop counting activities and start measuring outcomes\nCore takeaway: if you want behavioural change, you need communication that competes for attention and metrics that reflect actual impact.\n6. When incidents happen, operations beat legal theatre.\nPersonal data breach response is the ultimate test of whether your “governance” is real and effective. In the moment, what matters is coordination, clarity, and rehearsed operational capability, not beautifully written policies.\nPost:\nBeyond legal #12: From headless chickens to coordinated data breach response - why operations beat legal theatre Core takeaway: legal advice is essential, but it can’t substitute for operational readiness.\n7. Third parties: contracts don’t create trust, practices do.\nVendor risk and third-party data protection are often approached by prioritising signatures on documents. But signatures don’t operate controls. Ongoing assurance does - and that requires relationships, transparency, and governance mechanisms that work after onboarding.\nPost:\nBeyond legal #13: From signed contracts to trusted partnerships Core takeaway: third-party governance is a living system, not a one-off legal transaction.\n8. RoPA, business analysis, and “data sovereignty” debates.\nTwo later posts push into a broader category. Our tools and concepts often fail when they’re trivialised or detached from the reality of daily operations.\nA RoPA that doesn’t reflect how processing actually takes place is a paper tiger and a wasted investment. 
Understanding and living up to data sovereignty requirements is complex and involves a huge amount of hard work: trade-offs, risk assessment, and system design.\nPosts:\nBeyond legal #14: Does your RoPA really show how your company processes personal data? (Why you need business analysis skills)\nBeyond legal #15: Data sovereignty and absolutism\nCore takeaway: good governance needs working models that reflect reality, and leaders who can navigate trade-offs.\nWhat’s still to come?\nMore posts will follow, continuing to map the competences needed for modern data protection and AI governance leadership, and to challenge the idea that data protection and privacy is primarily a job for lawyers. If you’ve been reading any of my posts, I would love to hear:\nWhich theme has been most relevant to your work?\nWhere is the biggest capability gap in your company right now (risk, engineering, operations, measurement, governance)?\nWhat should the next set of posts explore?\n","date":"6 January 2026","permalink":"/15-posts-into-beyond-legal-whats-been-covered-so-far/","section":"Blog","summary":"","title":"15 posts into “Beyond Legal”: what's been covered so far"},{"content":"Why absolutism fails unless the business is absolutist too — and what to do instead.\nIn light of geopolitical events that have had a huge impact on many companies in recent years, especially 2025, I\u0026rsquo;m hearing these phrases in some data protection circles:\n\u0026ldquo;We need sovereign data.\u0026rdquo;\n\u0026ldquo;We can’t use any US vendors.\u0026rdquo;\n\u0026ldquo;No US sub-processors.\u0026rdquo;\n\u0026ldquo;Localise everything.\u0026rdquo;\n\u0026ldquo;What are the EU alternatives?\u0026rdquo;\n\u0026hellip;or similar words or phrases. 
It sounds decisive, but it also sounds a little bit like the old pattern I’ve been trying to move beyond in this series: theatre - lots of bold statements, documents and controls on paper and in theory, but very little operational capability when reality hits.\nData sovereignty is real. AI infrastructure sovereignty is real. Tightening rules are real. But the absolutist response - blanket bans and black-and-white policies - often fails to reflect the reality that an absolutist sovereignty stance only works if the business is absolutist too.\nAbsolutism is relatively easy on a personal level - I do try to practise this myself in the tech choices I make. I can decide not to use a specific platform, accept the inconvenience, and move on.\nAt a corporate level, absolutism is an operating model redesign. It touches everything:\nmarket access and customer commitments\nproduct roadmap and delivery speed\ncost base, resilience, and talent\nvendor ecosystem, security tooling, AI capabilities\nsupport models, incident response, and audit evidence\nIf the business strategy remains “global scale, fast delivery, best-in-class tooling, predictable cost”, and the data and technology strategy becomes “sovereignty at all costs”, your entire business is at risk. You\u0026rsquo;ll soon be dealing with:\nexceptions,\nworkarounds,\nshadow tooling,\nand a fragmented architecture that increases risk through complexity.\nData sovereignty used to be discussed as a legal constraint: “where is the data stored?” That framing is now too small.\nWhat’s happening instead is a strategic re-architecture of digital markets driven by three connected trends. I outlined the broader landscape (and the trends radar diagram below) in a recent blog post:\nData sovereignty \u0026amp; fragmented digital markets Geography is becoming a design constraint. Digital markets are fragmenting into regional blocs with different rules and expectations. 
“Borderless” is no longer the default operating assumption.\nAI infrastructure as strategic sovereignty AI infrastructure choices - cloud region, data centre location, chip/GPU dependency, managed AI services - have moved from “technical optimisation” into the realm of:\nsovereignty and national security expectations\nresilience and outage tolerance\nsupply chain risk (including export controls and chip concentration)\nESG and energy footprint scrutiny\nIn other words: your infrastructure strategy is now a geopolitical and resilience strategy.\nTightening data sovereignty rules More localisation mandates, more restrictions, more overlap between data protection and privacy, national security, and AI governance. Compliance is harder, and enforcement is more serious. The response to these trends is not a ban list. The response is a capability.\nTry this: The sovereignty alignment test\nIf you want to adopt an absolutist sovereignty approach (e.g. “no US sub-processors, ever”), run this test first:\nIs my company also willing to adopt the business posture that comes with it?\nMarket: Are we willing to lose deals/markets where sovereignty isn’t achievable today?\nCost: Are we willing to pay for duplicated regional stacks and assurance overhead?\nProduct: Are we willing to accept slower delivery and regional feature divergence?\nVendors: Are we willing to reduce our choice of tools - especially for AI, security, and observability - until local alternatives emerge or mature?\nResilience: Are we ready for the complexity risk created by fragmentation (more moving parts = more failure modes)?\nTalent: Can we staff, operate, and secure multiple regional architectures properly?\nRoadmap: Do we accept this as a multi-year transformation rather than a policy announcement?\nIf your leadership can’t say “yes” to those trade-offs, an absolutist sovereignty policy is not a strategy. It’s a future exception register.\nAnd exceptions are not free. 
They become risk debt: often undocumented, usually poorly governed, and guaranteed to resurface when you least want them to, e.g. during an incident or an audit, to name a couple of examples.\n“Risk-based approach” does not mean eliminate all risk\nA risk-based approach is frequently misunderstood in sovereignty conversations. It does not mean:\n“remove all risks at any cost”\n“choose the most restrictive option by default”\n“treat any transfer or US linkage as automatically unacceptable”\nIt means:\nidentify risks in context,\nimplement appropriate controls (technical, organisational, legal),\naccept and manage residual risk consciously,\ndocument decisions and review them as things change.\nAn absolutist approach can have severe business impact: cost overruns, reduced capability, delivery delays, and ironically, a weaker security posture because the architecture becomes fragmented and harder to operate safely.\nRisk behaves like a system, where decisions create ripple effects and feedback loops - that is exactly why I built my causal loop diagram:\nSee this recent blogpost that explains the diagram.\nSovereignty without the drama\nBelow is my suggested and pragmatic approach. 
In line with everything I\u0026rsquo;ve said in this \u0026lsquo;beyond legal\u0026rsquo; blogpost series, it’s intentionally cross-functional: CISO/CTO/CPO, plus procurement, architecture, engineering, data governance, and risk.\nStep 1: Know your data (what personal data is involved, if any?)\nBefore debating processors, sub-processors, or regions, map:\nwhat personal data exists and where\nsensitivity (special category, children, location, biometrics, etc.)\npurpose and value-chain dependency\noverlooked personal data: logs, identifiers, telemetry, analytics events\nif AI is involved: training data, fine-tuning data, prompts/outputs, RAG datasets, evaluation datasets\nIf you have a RoPA or an expanded data map and it\u0026rsquo;s up-to-date, it will make this step much easier.\nOutcome: sovereignty conversations become data-class specific, not ideology-driven.\nStep 2: Make yourself aware of the legal requirements\nBased on the data you\u0026rsquo;re processing, you need clarity on:\nwhich jurisdictions apply (data subjects, entities, processing locations)\nsector rules (finance, health, telecoms, public sector procurement)\nwhat is hard localisation vs “allowed with safeguards”\nAI-specific obligations where relevant (model lifecycle constraints, transparency, logging)\nOutcome: you begin treating sovereignty as a documented set of constraints.\nStep 3: Quantify data volumes and data subjects\nhow much data (and growth rate)?\ncategories and volumes of data subjects?\nwhat is the concentration (one dataset vs. spread across systems)?\nwhat’s the worst-case exposure scenario?\nOutcome: you can prioritise. Not everything needs the same approach.\nStep 4: Assess controls already in place (technical, organisational, legal)\nSovereignty is not only about where the infrastructure sits. 
Controls often matter more:\nTechnical\nencryption in transit/at rest\nkey management model (and who holds the keys)\nsegmentation and tenant isolation\naccess governance (PAM/JIT/MFA/least privilege)\nmonitoring, logging, anomaly detection\ntokenisation/pseudonymisation\nbackup and DR geography controls\nrestricting admin/support access paths where feasible\nOrganisational\nownership and stewardship\nchange control for data flows and the AI model lifecycle\nincident response readiness across regions\nsupplier monitoring discipline\nLegal\nDPAs and R\u0026amp;Rs\nsub-processor transparency and approval workflow\naudit/assurance rights and evidence expectations\ncross-border transfer mechanisms where relevant\nOutcome: you discover where risk can be reduced without knee-jerk bans.\nStep 5: Nature of business and value chain information needs\nThis really is the \u0026lsquo;beyond legal\u0026rsquo; heart of sovereignty. Ask:\nwhat information must flow for fraud prevention, security operations, support, analytics, product improvement?\nwhere will localisation break core workflows?\nwhat is the commercial reality (public sector, regulated customers, procurement gates)?\nOutcome: sovereignty posture becomes aligned with business strategy, not clashing with it.\nStep 6: Choose your approach(es)\nA practical way to avoid absolutism is to define one or more sovereignty approaches by data class and workload (for example):\n\u0026ldquo;Sovereign-by-necessity\u0026rdquo; Highly regulated/high-risk datasets and workloads. Localise and ring-fence with strong assurance.\n\u0026ldquo;Multi-sovereign modular\u0026rdquo; Regional data planes with governed cross-region capabilities (such as privacy-preserving patterns, minimisation, aggregation). 
This is where some multinationals end up.\n\u0026ldquo;Global with controls\u0026rdquo; Low-risk datasets/workloads where global processing is acceptable with safeguards and documented residual risk.\nOutcome: you stop trying to force one rule onto all processing and start designing for reality.\nPlease note that Sovereign‑by‑necessity, multi-sovereign modular and global with controls are not formal legal categories.\nUS sub-processors are not automatically a show-stopper\n“AWS or Google anywhere in the chain = stop.”\nA US-based processor or sub-processor may increase certain risks (including government access considerations). But its presence alone does not justify a decisive conclusion without understanding:\nwhat data is involved (and its sensitivity)\nwhat the sub-processor actually does, i.e. the actual processing services it provides to you. This is where I see too much alarmism: the mere fact that a controller or processor lists a US vendor in its privacy policy tells you little on its own; interpreting it requires deep understanding of your unique business context\nwhat access paths exist (including support)\nwhat safeguards exist (encryption, key control, segmentation)\nwhat residual risk remains, and whether it is acceptable for this dataset and business purpose\nSometimes the answer will be: localise. Sometimes the answer will be: proceed with strong safeguards and documented residual risk.\nEither can be defensible. 
The indefensible position is making the decision before understanding the data and the controls.\nClosing: build sovereignty capability\nThe \u0026lsquo;beyond legal\u0026rsquo; lesson here is simple:\nSovereignty is governance (board-level direction and risk appetite).\nSovereignty is architecture (modular, multi-region, control-rich).\nSovereignty is business strategy alignment (markets, cost, capability trade-offs).\n","date":"5 January 2026","permalink":"/beyond-legal-15-data-sovereignty-and-absolutism/","section":"Blog","summary":"","title":"Beyond legal #15: Data sovereignty and absolutism"},{"content":"We\u0026rsquo;re not just another consultancy - we\u0026rsquo;re a catalyst for change.\nAt Purpose and Means, we don\u0026rsquo;t settle for compliance for compliance\u0026rsquo;s sake. We partner with leaders across the globe to align data protection tightly with business purpose and strategy, ensuring it\u0026rsquo;s a living, breathing part of your organisation - not just a legal checkbox.\nWhat makes us different?\nWe bring data protection to life: Using visual thinking and engagement-focused methods, we make data protection accessible and actionable from top to bottom.\nBeyond the legal bias: While we collaborate with legal teams, we break down silos, empowering every function of your business to embrace ethical, rights-based approaches to data.\nEthics at the core: We don\u0026rsquo;t just preach and teach compliance; we help you embed fairness and transparency, reflecting key principles of European data protection law.\nA transparent, bespoke approach\nProven track record: Trusted by global corporations, we deliver results that match the complexity and demands of today\u0026rsquo;s business environment.\nOn your schedule: With open calendar availability, you know when your needs can be met—no guesswork or delays.\nData protection isn\u0026rsquo;t just our work - it\u0026rsquo;s our passion.\nIf your current approach feels theoretical, disconnected, or 
stuck in a legal silo, let\u0026rsquo;s change that. Together, we\u0026rsquo;ll make data protection a dynamic force for your business.\nBook a call to hear more!\n","date":null,"permalink":"/about-purpose-and-means-data-protection-consultancy/","section":"Pages","summary":"","title":"About Purpose and Means"},{"content":" We help leaders bridge the gap between Data Protection, AI Governance, and ESG. We guide you to find the blind spots, align your teams, and turn compliance into a shared business value - without the bloat of a big consultancy.\nCurrently in your company, your data protection team protects data. Your CISO secures it. Your ESG Lead counts carbon. Your data team builds AI. But who connects them? When these functions operate in silos, you create regulatory risk, waste resources, and miss the \u0026lsquo;Governance\u0026rsquo; (G) in ESG. Our latest foresight radar confirms that this era is over: you don\u0026rsquo;t need a new department, you need alignment.\nBased on our analysis of over 50 emerging trends, technologies, and verified market signals, there are four important shifts that every leader working in data protection, ESG, and AI governance must prepare for:\n1. \u0026ldquo;Responsible AI\u0026rdquo; is no longer optional #We are seeing a rapid shift from voluntary \u0026ldquo;AI Ethics\u0026rdquo; to mandatory Institutionalised Governance (megatrend). Soon, boards will face fiduciary liability for AI failures. The \u0026ldquo;black box\u0026rdquo; excuse will no longer hold up in court or in the annual report.\nThe shift: We are moving from manual spreadsheets to Automated AI Governance Platforms. Just as you have a system of record for finance (ERP) and customers (CRM), you will need a system of record for AI. The action: Stop writing policy documents that no one reads. Start implementing Automated Compliance Tools and Fairness Toolkits that hard-code your values into the software pipeline. 2. 
The new currency is \u0026ldquo;Verified Trust\u0026rdquo; #Our foresight analysis identified a major gap in the market that is filling fast: Standards \u0026amp; Expectations. It is no longer enough to say you are trustworthy. You must prove it.\nThe signal: We are seeing a significant spike in the adoption of ISO 42001 in 2025. It appears that it is becoming the \u0026ldquo;badge of trust\u0026rdquo; for the AI era. The implication: Soon, uncertified vendors will simply be locked out of high-value supply chains (Finance, Health, Public Sector). Trust is becoming a procurement gate. 3. The \u0026ldquo;Green\u0026rdquo; strategy must include the \u0026ldquo;Digital\u0026rdquo; strategy #We identified an interesting blind spot: the physical footprint of AI. Stakeholders are waking up to the reality that GenAI is thirsty for water and hungry for power.\nThe looming crisis: By 2030, we face a massive E-Waste Crisis driven by the rapid turnover of AI chips. The solution: It\u0026rsquo;s time to implement AI Carbon Accounting. Developers need to see the energy cost of their code. Also, procurement needs to shift toward Circular AI Hardware - leasing and recycling compute power rather than buying and dumping it. 4. Sovereignty dictates architecture #The idea of a \u0026ldquo;borderless cloud\u0026rdquo; is fading. Data sovereignty rules are forcing a redesign of how data flows.\nThe reality: You can no longer just \u0026ldquo;put it in the cloud.\u0026rdquo; You must ask: Which cloud? Where is the data centre? Who holds the encryption keys? The tech response: We are seeing the rise of Sovereign Clouds and Data Routing Fabrics that automatically steer data based on jurisdiction. Architecture is becoming a geopolitical decision. Quantum and 6G #Looking beyond 2028, two massive technological waves will reshape the risk landscape:\nThe Quantum Threat: The \u0026ldquo;Harvest now, decrypt later\u0026rdquo; risk is real. 
Companies dealing with long-life data must start migrating to Quantum-Safe Cryptography. AI-Native 6G: By 2030, networks will not just carry data; they will negotiate it. AI Interfaces will allow devices to autonomously buy connectivity and energy, creating a new machine-to-machine economy that requires strict governance. The integrated future #The companies that will succeed in the next decade will not just be the companies with the smartest AI. They will be the companies with the most trusted AI. They will be the companies that have successfully merged their CISO, CDO, and ESG mandates into a unified \u0026ldquo;Digital Trust\u0026rdquo; strategy.\nHow we can help you #We help you align AI and data protection with your ESG goals so that your teams understand the part they need to play and the actionable steps they need to take.\nWe guide you through the following steps:\nSelect appropriate mapping methodology Identify synergies between data protection and ESG goals Allocate responsibilities with clear ownership across teams Develop meaningful metrics so employees understand their role Engage employees through contextual education and training Implement via operational procedures with triggers, monitoring, and improvement Outcomes You Can Expect # A cohesive framework that aligns data protection with ESG goals. Clear metrics that link data protection practices to measurable ESG outcomes. Educated and empowered employees who understand how to make the right decisions. Enhanced trust with stakeholders through transparency and accountability. Want to get started? #Ready to integrate data protection into your ESG strategy? Let\u0026rsquo;s work together to empower your organisation and drive measurable impact. 
Contact us today to get started.\n","date":null,"permalink":"/esg-and-data-protection/","section":"Services","summary":"","title":"AI, Data Protection and ESG"},{"content":"Equip your team with the skills and knowledge needed to perform their work.\nWe offer a wide range of custom courses that can be delivered live online, or in-person at your offices.\nOur course portfolio includes the following courses (contact tc@purposeandmeans.io to arrange a call to discuss your specific training needs):\nData Protection \u0026amp; Privacy # Preparing for a data protection audit How to make a remediation plan after an audit or assessment Unravelling the abstract elements of the DPIA Mapping key EDPB/Art.29 WP guidance against elements of your privacy program Design Patterns v Dark Patterns History and origins of Data Protection \u0026amp; Privacy Laws Data subjects – the importance of value sensitivity Anonymization techniques Technology considerations in the privacy program Modelling privacy threats and violations Countermeasures Privacy Engineering basics Privacy by Design principles Privacy concerns resulting from emerging technologies Data protection principles and concepts The importance of a purpose-driven data protection strategy aligned with the business strategy Unpacking a Privacy Program: why it\u0026rsquo;s important, what it consists of, and how the elements fit together Technology and privacy Data protection in Europe (GDPR primer) Establishing a privacy program step-by-step Joint controllership Transfer impact assessments The real value of the ROPA Data controller and data processor obligations Marketing \u0026amp; Employee Data # Privacy for Marketeers: Data about people – let\u0026rsquo;s start with the basics Privacy for Marketeers: Embedding privacy considerations into your projects Telematics – benefits and privacy concerns The importance of data classification Elevating data protection across the organisation through strong employee engagement Data about 
People – the broadness of personal data Avoid online deception Comparing and contrasting individual rights Ethical dilemmas – a quiz Risk \u0026amp; Governance # Data protection and privacy risk – linking to enterprise risk Processing of employee personal data – basic considerations Monitoring of employees – trends, risks and cases Cookies and trackers – what\u0026rsquo;s changing? Aligning data protection with ESG Data management for DPOs Quantitative risk measurement What do we mean by \u0026lsquo;rights and freedoms\u0026rsquo;? The challenges of combining and matching data Data protection risk – cause and effect (linking to ERM) AI Governance # EU AI Act – a primer Understanding ethical AI risks AI fundamentals – with a case AI fundamentals – a primer GenAI risk guidance: for employees International Frameworks # APEC framework overview India overview Japan overview China overview Singapore overview Australia overview New Zealand overview Brazil framework Mexico overview Colombia overview Peru overview Argentina overview Chile overview Saudi Arabia overview Israel overview ","date":null,"permalink":"/custom-education-and-training/","section":"Services","summary":"","title":"Bespoke Education and Training"},{"content":"Book a Call with Purpose and Means #It would be great to hear from you. To book a call, click the link below to access availability and choose a time that works for you. As many of our clients use Microsoft tools, we also make use of Outlook, Teams and Bookings.\nBefore you click: you will be redirected to an external Microsoft booking page (Microsoft Bookings, part of Microsoft 365). This page is hosted and operated by Microsoft, not Purpose and Means. 
When you book a call, Microsoft will collect your name, email address, and chosen time slot in order to confirm the booking and send you a calendar invite.\nFor more information on how Microsoft processes personal data, please refer to the Microsoft Privacy Statement.\nBook a call with Purpose and Means →\n","date":null,"permalink":"/book-a-call/","section":"Pages","summary":"","title":"Book a Call"},{"content":" \"Purpose and Means delivered a highly professional and exceptionally well-structured online privacy workshop for our dispersed European team. Despite the remote format, the methodology proved effective, engaging, and fully aligned with — if not exceeding — our expectations. The session successfully enhanced the knowledge of 20+ participants, providing clear and well-founded methodology on complex areas of privacy and data protection. It also contributed meaningfully to enabling collaboration and constructive discussion across the group. I was particularly impressed by the rigorous approach, and the dynamics around technically intricate matters in a manner that was both accessible and thought‑provoking. We confidently recommend Purpose and Means to any organisation seeking to strengthen its data protection competencies.\" — Telma Candeias, Head of Privacy Europe, Retail Broking, Howden The voices from your team #Formal maturity and readiness assessments are valuable but are often conducted using resources outside the team, e.g. assurance or audit teams, or external teams, or even as a Data Protection leader you may facilitate such sessions yourself.\nJust like many companies value the \u0026ldquo;voice of the customer,\u0026rdquo; there are so many opportunities to empower your own team members and elicit and capture their voice, their insights and opinions — after all, they are working at the coal face and encounter the issues head on. 
Often, problems stem from issues that fall outside traditional assessments - inter-department niggles, blockages where team members feel powerless against senior management peers, or gaps in your company\u0026rsquo;s operational procedures.\nPurpose and Means design and conduct these workshops according to your unique requirements. We use traditional workshop approaches but also make use of more creative methodologies such as LEGO® Serious Play®.\n\"We've worked with Purpose and Means for a few years now and they continue to provide us with education and training solutions that effectively engage our employees across the globe.\" — Rogelio Aguilar Alamilla, DPO and Head of DPO Services, BNP Paribas Training for your global teams #Purpose and Means partner with global corporations that span EMEA, LATAM, NA, and APAC, ensuring employees stay informed on local laws, global and local policies, cross-market developments, and emerging technologies.\nEach year, we collaborate with clients to prioritise key topics, tailoring our training to meet their specific needs. The result? Simplified compliance, reduced risk exposure, and empowered teams ready to tackle real-world challenges.\nWe create engaging course materials delivered via live webinars and compiled into interactive digital books and policy packs. These include videos, visually striking presentations, concise text, hands-on exercises, and quizzes that make learning practical and memorable.\nOur gamification features, such as leaderboards, drive participation and enable a culture of continuous learning, helping your organisation stay ahead.\n\"We've collaborated with Purpose and Means over the past couple of years where they have structured and facilitated regulatory impact analysis and improvement workshops for our global community of Data Protection Coordinators. 
We were impressed by their skill, pragmatism and structured approaches.\" — Dr Andreas Gabriel, Head of Group Data Protection, Everllence (former MAN Energy Solutions) Tell me and I forget, teach me and I remember, involve me and I learn #People are the cornerstone of any data protection programme. With constant change being the reality of doing business, leaders must bring diverse groups together — not just to explain upcoming changes, but to involve them in business analysis, prioritisation, and planning. This collaboration ensures the changes are embraced, the plans are realistic, and ownership is shared across the team.\nFor global companies, local nuances matter. Effective workshops often require bespoke approaches, not only due to regulatory differences but also to local cultural considerations. So for companies with global networks of Data Protection Coordinators or Privacy Stewards, how those groups are gathered, how the workshop is structured, and how the facilitation takes place will vary depending upon the jurisdiction. Being mindful of these nuances can make the difference between a successful business outcome and a mediocre one.\n\"Purpose and Means has been a valuable partner in delivering tailored education and training for our HR and Digital Marketing teams, while also enhancing organisational awareness on key data protection topics. Purpose and Means also help our Privacy Week events succeed with some great content.\" — Daugaile Syrusaite, Group DPO, CooperCompanies Prioritising critical processing activities #Operating globally demands a workforce skilled not only in core business areas but also in nuanced, context-specific data protection practices. We provide tailored education and training designed to meet these needs. 
By engaging directly with your teams, we identify their unique challenges - because generic solutions may tick a box but rarely build the expertise required to manage risks effectively.\nIn many companies, areas like HR, Digital Marketing, and Product Development carry elevated risks due to their complexity and to practices shaped by local laws, regulations, and diverse technologies.\nAs a small business, Purpose and Means offers the agility to analyse, adapt, and refine training plans in step with your changing business conditions. The outcome? Teams that are informed, confident, and equipped to safeguard critical processes while navigating global regulatory landscapes.\n\"Purpose and Means facilitated our employee engagement process and helped change the perception of privacy which aided our interactions with employees and management alike.\" — Unni Kathe Ottersland, Privacy Compliance Director EMEAP Your virtual communications department #Data protection leaders operate in fast-moving environments where unresolved issues can escalate, stakeholders demand quick responses, and staying ahead of constant regulatory and technological change is a must.\nSuccessful leaders recognise the importance of year-round employee engagement, creating activities tailored to job roles, addressing knowledge gaps, and building essential skills.\nWe work with you to develop an Employee Engagement Roadmap aligned with your company\u0026rsquo;s preferred channels and formats. From intranet portals to diverse materials and tailored news briefings for stakeholder groups, we ensure data protection awareness and capability are elevated across your workforce. The result? 
Enhanced compliance, proactive risk management, and a culture of informed decision-making.\n\"Purpose and Means conducted a series of full-day requirement elicitation workshops with our key stakeholder groups that allowed us to pinpoint and prioritise opportunities for improvement.\" — Adam Gell, Deputy Commissioner, Isle of Man Information Commissioner Structure and facilitation #Data protection is a team effort involving diverse stakeholders - each with distinct roles, obligations, and perspectives. Whether you\u0026rsquo;re a Data Protection Regulator, Data Controller, Processor, or Data Subject, success depends on aligning people, processes, information, and technology.\nWhen it’s time to enhance engagement and refine ways of working, our structured, facilitated workshops and interview-based approach help uncover hidden requirements and viewpoints.\nWe guide you in sorting, prioritising, and integrating what you discover, transforming complex issues into clear, actionable steps. The result? A focused strategy, a practical roadmap, and a solid business case to drive meaningful progress in your data protection initiatives.\n\"Purpose and Means' assessment of our current data protection practices allowed us to pinpoint and prioritise opportunities for improvement.\" — Lasse Lippert, General Counsel, Flying Tiger Copenhagen Stronger business alignment and buy-in #In many companies, data protection is seen as a purely legal concern, which can limit funding and buy-in, especially if the legal team\u0026rsquo;s size or data protection expertise is constrained. However, if data drives your business, it\u0026rsquo;s essential for data protection to align seamlessly with your business strategy and data touchpoints. 
It\u0026rsquo;s not just compliance — it\u0026rsquo;s a foundation for trust and value creation.\nWe address gaps by conducting a thorough maturity assessment of your current practices, analysing your business strategy and key programmes, and cultivating a purpose for data protection that resonates across your organisation. From there, we develop a strategy that tightly aligns data protection with your business\u0026rsquo;s data-driven goals.\nOur proven, repeatable approach emphasises strong stakeholder engagement and the use of visual tools, making it easier to secure buy-in and drive meaningful, sustained improvements.\n\"Purpose and Means' initial analysis and data protection strategy provided a solid foundation for growing Pandora's data protection capabilities.\" — Daniel Herman Røjtburg, Associate General Counsel, Pandora Elevate your approach to data protection #In many companies, data protection is viewed as a back-office responsibility - something for the legal team to handle quietly. But this perspective can stall progress, especially in a world where data fuels innovation and multi-stakeholder trust.\nTrue business growth requires integrating data protection into the heart of your strategy. It\u0026rsquo;s not just a safeguard — it\u0026rsquo;s a competitive advantage.\nOur approach starts with a deep dive into your operations that are dependent upon the processing of personal data. We evaluate your systems, examine your strategic goals, and identify opportunities to embed data protection as a core business enabler.\nFrom there, we develop roadmaps that transform data protection into a strategic powerhouse. 
Roadmaps that focus on work packages and deliverables, and roadmaps that communicate business outcomes.\nUsing interactive methods and intuitive tools, we ensure everyone - from top leadership to frontline teams - understands the value and gets behind the vision.\nBecause when data protection evolves into a business asset, it stops being a barrier and starts driving results.\n","date":null,"permalink":"/data-protection-client-cases/","section":"Pages","summary":"","title":"Client Feedback and Cases"},{"content":"date: 2026-01-01 showDate: false showReadingTime: false showBreadcrumbs: false\nRead our latest thinking on data protection, AI governance, GRC leadership, and emerging technology regulation.\n","date":null,"permalink":"/data-protection-and-tech-blog/","section":"Pages","summary":"","title":"Data Protection and Tech Blog"},{"content":"Building better programmes: a path to continuous improvement\nHow effective is your current programme in meeting its goals? Whether it\u0026rsquo;s data protection, ESG, or another key initiative, a maturity assessment provides a clear view of where you stand, and where you need to go.\nWe help organisations identify gaps, prioritise improvements, and build a roadmap for success.\nHow we do it # Discovery and context setting: Understand your programme\u0026rsquo;s scope, objectives, and key stakeholders. Define the assessment criteria based on industry standards and organisational priorities.\nAssessment methodology: Evaluate programme practices using existing best practice frameworks. Analyse programme alignment with business goals, regulatory requirements, and employee engagement.\nKey insights and reporting: Deliver a comprehensive report outlining strengths, weaknesses, and gaps. Provide actionable recommendations for immediate quick wins and long-term improvements. 
Provide management summaries and a \u0026lsquo;leadership intensive\u0026rsquo; workshop.\nRoadmap development: Prioritise actions based on criticality and business impact. Design a step-by-step plan to elevate your programme\u0026rsquo;s maturity, with clear milestones and deliverables.\nKey focus areas # Leadership and oversight: Evaluate your organisational structure to ensure appropriate leadership roles and clear reporting lines are in place. Identify opportunities to strengthen oversight and accountability within your programme.\nPolicies and procedures: Assess your current policies and procedures for relevance, effectiveness, and alignment with industry standards. Recommend updates or new measures to enhance operational efficiency and regulatory compliance.\nTraining and awareness: Review the design and delivery of your education and training programmes to ensure they effectively raise awareness and build competency. Propose interactive and role-specific learning solutions to reinforce best practices.\nTransparency and trust: Ensure communication about your programme is clear, consistent, and accessible to all stakeholders. Highlight how transparency can enhance trust and build stronger relationships with customers, employees, and partners.\nMetrics and continuous improvement: Establish measurable indicators of success to track progress and demonstrate accountability. Build a framework for regular reviews, ensuring ongoing refinement and alignment with organisational goals.\nOutcomes you can expect # A clear understanding of your programme\u0026rsquo;s strengths, weaknesses, and areas for improvement. Actionable recommendations aligned with your company\u0026rsquo;s risk appetite. Improved operational practices that align with business objectives and stakeholder expectations. A roadmap for building a mature, scalable, and effective programme. Interested? #Ready to elevate your programme\u0026rsquo;s maturity and unlock its full potential? 
Contact us today to get started on a bespoke assessment.\n","date":null,"permalink":"/data-protection-maturity-assessments/","section":"Services","summary":"","title":"Data Protection Programme Maturity Assessments"},{"content":"A comprehensive framework for building trust, driving growth, and ensuring ethical data practices\nThese days, data protection is no longer just a legal requirement – it\u0026rsquo;s a strategic imperative. Our comprehensive framework provides companies with the knowledge, tools, and guidance to cultivate a clear data protection purpose and implement a robust strategy that aligns with business objectives, builds trust with stakeholders, and enables sustainable growth. Moving beyond a purely legal perspective, this framework transforms data protection from a \u0026ldquo;necessary evil\u0026rdquo; into a business growth enabler and a source of competitive advantage.\nKey Benefits # Elevate Data Protection to a Strategic Level: Integrate data protection into your company\u0026rsquo;s core business strategy, ensuring that it\u0026rsquo;s not just an afterthought but a fundamental consideration in all personal data related decision-making processes.\nBuild and Maintain Stakeholder Trust: Demonstrate a genuine commitment to data protection, building trust with employees, customers, partners, and other stakeholders. Address the concerns that keep your CEO awake at night – not just regulatory fines, but the erosion of customer trust and reputational damage.\nDrive Business Growth and Innovation: Unlock the potential of personal data while ensuring ethical and responsible data use, enabling innovation and creating new business opportunities. Use data protection as a competitive differentiator, attracting and retaining customers who value trust and transparency.\nProactively Manage Data Protection Risks: Identify and mitigate key data protection risks, going beyond generic compliance to address the real-world impact on individuals. 
Implement reporting mechanisms that align data protection metrics with business needs, giving you insight into how to manage daily operations and drive business value.\nImprove Stakeholder Engagement and Collaboration: Engage key stakeholders across the company, including senior management, digital marketing, HR, IT, and legal, by communicating the value of data protection in their terms and building a shared sense of responsibility.\nCreate a Sustainable Data Protection Programme: Develop a practical, actionable framework that integrates seamlessly into your company\u0026rsquo;s daily operations. Through adequate investment, implement effective policies, tailored procedures, and meaningful metrics.\nStaying Ahead: By incorporating foresight and horizon scanning into your work you\u0026rsquo;ll be able to proactively monitor ever-evolving regulations and emerging technologies.\nAdapt to Emerging Technologies: Address the unique challenges and opportunities presented by emerging technologies like AI, ensuring that data protection is built in from the start.\nIntegrate with ESG and Social Responsibility: Align data protection initiatives with broader ESG and CSR objectives, demonstrating a commitment to ethical and sustainable business practices.\nKey Features # The \u0026ldquo;Why\u0026rdquo;: Define the key elements of a data protection strategy. 
Understand the key triggers that make data protection purpose and strategy essential, including the limitations of relying solely on enforcement and the critical role of trust in today\u0026rsquo;s marketplace.\nThe \u0026ldquo;What\u0026rdquo;: Define the key work packages involved in the formulation of your data protection strategy, including gap analysis, alignment with business objectives, consideration of unique organisational factors, and the identification of key enablers across various domains.\nThe \u0026ldquo;How\u0026rdquo;: Follow a step-by-step guide to cultivating a data protection purpose and implementing a strategy, including stakeholder engagement, maturity assessments, prioritisation, and the development of a detailed roadmap.\nKey Deliverables: Utilise the key deliverables of a data protection strategy, including easy to follow one-pagers, detailed plans, an outcomes roadmap focused on business impact, and actionable insights for improving your data protection posture.\nThe Importance of Purpose: Embrace the need for a clear and compelling \u0026ldquo;data protection purpose\u0026rdquo; that resonates with leadership and employees, moving beyond compliance to a value-driven approach that promotes ethical data practices and builds trust.\nTarget Audience #This framework approach can easily be applied to other disciplines aside from data protection, for example AI Governance, Information or Cyber Security, to name a couple:\nData protection leaders Privacy officers Compliance Officers Chief Information Security Officers (CISOs) Chief Marketing Officers (CMOs) Chief Information Officers (CIOs) Chief Technology Officers (CTOs) Risk Officers Senior management Anyone involved in data processing and risk management Value Proposition #This Data Protection Purpose and Strategy Framework provides a roadmap for organisations to transform their approach to data protection, moving from a reactive, compliance-driven mindset to a proactive, value-driven 
approach. By aligning data protection with business objectives, building trust with stakeholders, and fostering a culture of ethical data practices, companies can unlock the full potential of data while mitigating risks and achieving sustainable growth. This framework is not just about avoiding fines – it\u0026rsquo;s about building a better business.\nInterested? #Let\u0026rsquo;s transform how your organisation views data protection. Feel free to get in touch to arrange a no obligation call to discuss your current situation and challenges with a view to making tangible improvements.\n","date":null,"permalink":"/data-protection-purpose-and-data-protection-strategy/","section":"Services","summary":"","title":"Data Protection Purpose and Strategy"},{"content":"date: 2026-01-01 showDate: false showReadingTime: false showBreadcrumbs: false\nData Protection Services and Digital Compliance Services | Purpose and Means\nKey issues we address #Many consultancies promise to do it all. We don’t.\nOur focus is sharp and deliberate: tackling the challenges we’re best equipped to solve. 
The following issues matter, and when addressed the right way, they can have a huge positive impact on your work.\nRisk measurement #Quantify, build trust, and make smarter compliance decisions.\nVirtual communication support #Keep your stakeholders informed, engaged, and on the same page.\nHorizon scanning and analysis #Spot tomorrow’s challenges today.\nData protection purpose and strategy #Align your data protection work with your business strategy to elevate funding, buy-in and support.\nProgramme maturity assessments #Identify the strengths and weaknesses of your data protection programme and formulate a prioritised remediation roadmap and detailed plans.\nESG and data protection #Identify and align data protection practices with your company\u0026rsquo;s ESG goals.\nRapid analysis workshops #From planning to lessons learned, our workshops are designed to move the needle.\nCustom education and training #Equip your team to think critically, adapt swiftly, and act confidently.
Because technology regulation involves abstract concepts, invisible risks, and complex stakeholder relationships, LSP is particularly effective for making these issues concrete and manageable.\nLSP can be used in a number of different workshop settings and contexts, dependent upon workshop objectives. Below are a couple of examples — a half-day workshop and a full-day workshop (split over two half-days). Both are titled From insight to action: Defining your data protection priorities.\nThe compact approach #This is a focused, 4-hour online strategy session designed specifically for data protection professionals. It moves beyond standard meetings or assessments to map the current reality of your key data topics.\nThis workshop is designed for data protection leaders and their teams, or clusters of embedded cross-functional data protection responsibles — typically looking to improve collaboration or to reset and align on strategy and priorities for the coming period.\nUsing LSP allows us to facilitate deep dialogue and rapid problem-solving. By building 3D models of complex challenges, participants unlock insights and communicate nuances that words alone cannot capture.\nAnd just to clarify — this is not an icebreaker. It is a facilitated, neuroscience-based process for navigating complexity and aligning teams.\nThe real time view. The focus of this session is \u0026ldquo;real time identity.\u0026rdquo; We do not spend time discussing how things should be according to policy, or how we hope they will be in five years. We focus entirely on the right now — creating a safe environment for colleagues to build the messy reality of their daily work: the strengths, the bottlenecks, and the workarounds. Only by visualising the actual gap between policy and practice can we effectively bridge it.\nKey workshop outcomes:\nVisualise complexity: Turn abstract compliance issues into tangible objects that can be discussed, connected, and managed strategically. 
Candid collaboration: The method levels the playing field, ensuring 100% participation and allowing team members to share perspectives they might not voice in standard meetings. Identify hidden gaps: By visually mapping the landscape of pressures impacting their work, hidden risks and resource gaps become apparent. Concrete priorities: Move rapidly from insight to action. The session concludes with agreed-upon, actionable priorities for the coming period. This is a lean-forward, highly interactive online session. Participants receive a small pack of LEGO® bricks in the post prior to the workshop. We use MS Teams for dialogue and Miro as our digital canvas to capture results.\nThe 4-hour agenda at a glance:\nFoundation \u0026amp; warm-up: We begin by onboarding teams to the methodology, using quick, engaging exercises to establish the \u0026ldquo;hand-mind connection\u0026rdquo; and the rules of engagement. Your data protection REALITY: Working in small breakout groups, participants build individual models representing specific data protection topics from two perspectives: the internal reality (strengths/weaknesses) and external perceptions (stakeholders/regulators). Your data protection LANDSCAPE: We expand the view to identify the agents — the specific trends, regulatory pressures, technological gaps, or stakeholders impacting the topic right now. These are mapped visually to show their proximity and impact. Priorities (planning): Analysing the visual landscape they have created, the workgroups define key strategic conclusions and agree on top 3–5 concrete priorities for the period ahead. The session closes with a knowledge-sharing plenary.\nThe \u0026ldquo;split\u0026rdquo; approach (two half-days, or one full day) #This longer session can be held across a full working day, or — as we recommend — split across two half-days, providing participants a long break from their screens, time to reflect and recharge. 
The longer time allows us to work more closely with you and dive deeper into mapping your data protection landscape.\nHalf day 1 focus: Mapping the Reality (The \u0026ldquo;What\u0026rdquo;). We spend the entire session building the foundation, defining the current identity of the selected data protection topics, and mapping the surrounding landscape of pressures. The day ends with a completed visual map of the LEGO® models on Miro. Half day 2 focus: Analysing \u0026amp; Planning (The \u0026ldquo;So What \u0026amp; Now What\u0026rdquo;). We return to the maps to define the connections (the system), analyse the implications, and develop concrete strategic actions. Our LSP workshops can be held in-person, online, or hybrid. The online version is particularly attractive for dispersed teams — small LEGO® kits are posted to all participants in advance, and on the day we come together using MS Teams and Miro.\nWe firmly believe data protection, AI governance, and GRC in general need new thinking and different approaches to address the many challenges leaders have faced over the years. LSP is a good starting point for doing things differently.\nWant to know more? Feel free to book a call where we\u0026rsquo;ll explain more about the LSP approach and how it can align with your requirements.\nLEGO® SERIOUS PLAY® explainer #The following explainer provides the gist of what the LSP approach is all about.\n","date":null,"permalink":"/facilitated-lego-serious-play-workshops/","section":"Services","summary":"","title":"Facilitated LEGO® SERIOUS PLAY® Workshops"},{"content":"We create a lot of content for various clients. On this page, we\u0026rsquo;ve collected examples of content created in the past that show different content types, styles, and formats.\nYou are welcome to copy for your own personal use. 
For commercial use — if you wish to use it in your own work context — please email us about credit/attribution.\nEarly Origins of AI A timeline of key milestones in the history of AI, from philosophical foundations to the recent rise of mainstream adoption.\nEU AI Act 5 Minute Explainer A visual summary of the EU AI Act, outlining its risk-based approach, key compliance requirements, and categories of regulated AI systems.\nEU AI Act Pre-Project Game Plan A pre-project game plan for EU AI Act compliance, emphasising strategic planning, team onboarding, and impact analysis.\nEU AI Act Mind Map A visual mapping of the structure and chapters of the EU AI Act, detailing its main regulatory sections, referenced articles, and annexes.\nHistory and Origins of Employee Monitoring An historical timeline of employee monitoring, tracing its evolution from factory systems through CCTV, GPS tracking, AI analytics, and biometric monitoring.\nTechnologies that Facilitate Employee Monitoring A summary of the main technologies used in employee monitoring, including computer monitoring, video surveillance, GPS tracking, and biometrics.\nHistory of Tracking Consumers — Pre Cookies An overview of consumer tracking before cookies, covering survey research, credit card tracking, database marketing, and pre-web analytics.\nHistory and Origins of Tracking Consumers Illustrates the evolution of consumer tracking with cookies, from adoption through rising privacy concerns, regulatory actions, and third-party cookie restrictions.\nDPIA on a Page A high-level overview of DPIA essentials, explaining purpose, key triggers, assessment criteria, team roles, and integration into business processes.\nk-anonymity, l-diversity \u0026amp; t-closeness Explains the principles, pros, and cons of these data anonymisation techniques and the types of attacks they mitigate or leave vulnerable.\nEU Fundamental Rights and Freedoms of Individuals An overview of the main categories of fundamental rights and 
freedoms protected under the EU Charter, including dignity, equality, and justice.\nEDPS Sponsored Resolution on GenAI Summarises the EDPS-sponsored resolution from the 45th Global Privacy Assembly on responsible design, data protection, and regulatory advancement for generative AI.\nThe SolarWinds Hack on a Page Recaps the SolarWinds hack, describing the supply chain attack's method, attribution to Cozy Bear, and impact on over 250 organisations.\nThe SolarWinds Hack: Sequence of Events Diagrams the sequence of events and key actors in the SolarWinds hack, from initial reconnaissance through public disclosure and attribution to Russia's SVR.\nSchrems Timeline Presents the timeline of the Schrems legal challenges against EU-US data transfer mechanisms, from Safe Harbor through to the EU-US Data Privacy Framework.\nEDPB Decision Bans Behavioural Advertising Explains the EDPB's urgent decision banning behavioural advertising by Meta in the EEA and its industry-wide implications for data protection.\nAmazon Logistique Case Analyses the Amazon Logistique case, detailing a €32 million GDPR fine for unlawful and excessive employee monitoring practices in French warehouses.\nAmazon Logistique Case: Sequence of Events Traces the steps from handheld scanner deployment through employee complaints, GDPR investigation, and the €32 million CNIL fine.\nGDPR on a Page A high-level overview of the GDPR, outlining compliance obligations, security, lawful basis, data subject rights, enforcement, and penalties.\n23andMe Personal Data Breach A summary of the 23andMe data breach, describing how vulnerabilities led to compromised genetic data and targeted attacks against specific groups.\nUK-US Data Bridge Explains the UK-US Data Bridge, which extends EU-US DPF protections to UK-US data transfers under strict compliance and supervisory oversight.\nCalifornia Delete Act Outlines new requirements for data brokers to register, provide deletion mechanisms, and enhance privacy rights for 
California residents.\nCrypto Scam: Sequence of Events Depicts the sequence of events in a crypto scam, illustrating how victims are manipulated through false investment opportunities and staged recoveries.\nKnow the Terms: Deletion, Erasure and Destruction Data deletion, data erasure, and data destruction are often misused and misunderstood. A quick overview of what you need to know.\nEvolution of European Data Protection (Landscape) Chronologically illustrates the evolution of European data protection, mapping landmark laws and major societal events from 1950 to 2025.\nEvolution of European Data Protection (Portrait) Portrait version of the European data protection evolution infographic, mapping landmark laws and major societal events from 1950 to 2025.\nData Protection Day Event Kit A tongue-in-cheek \"Plan B\" kit for Data Protection Day events, featuring themed attire, suggested giveaways, speaker topics, and engagement objectives.\n","date":null,"permalink":"/free-infographics-explainers-and-one-pagers-for-data-protection-and-grc-leaders/","section":"Services","summary":"","title":"Free Infographics, Explainers and One-Pagers for Data Protection and GRC Leaders"},{"content":"History and origins of employee monitoring | Purpose and Means\nHistory and origins of employee monitoring #Employee monitoring is a hot topic, and during the pandemic we saw this contentious issue, and its associated technologies, entering employees\u0026rsquo; homes. To understand what’s happening today, we must step back and trace the origins and evolution of employee monitoring over past centuries.\nEMPLOYEE MONITORINGTRACKING USERSEXPLAINER\nTim Clements\n6/6/2024 3 min read\nEmployee monitoring is a hot topic, and during the pandemic we saw this contentious issue, and its associated technologies, entering employees\u0026rsquo; homes.
To understand what’s happening today, we must step back and trace the origins and evolution of employee monitoring over past centuries.\nAs you can see from the timeline, in the early days the focus was primarily on efficiency and productivity, influenced by Frederick Taylor\u0026rsquo;s principles of scientific management. This era saw the introduction of time clocks and, later, the use of observation and checklists to measure employee performance.\nThe mid-20th century saw a shift towards more sophisticated monitoring with the advent of video surveillance and, in the latter half of the century, computer monitoring. Technology evolved rapidly at the turn of the 21st century, accelerating employee monitoring practices with the introduction of software to track digital activities, GPS for location tracking and, eventually, advanced analytics and AI in the modern workplace.\nHere are brief descriptions of the milestones on the timeline:\nLate 1700s: Factory Systems begin The onset of the Industrial Revolution introduces the factory system, where structured oversight of workers is necessary to manage and synchronise labour with mechanical processes. This period marks the beginning of formal workplace supervision.\nMid-1800s: Thomas Affleck\u0026rsquo;s monitoring of slaves Agriculturalist Thomas Affleck publishes \u0026ldquo;Plantation Record and Account Book,\u0026rdquo; which provides detailed frameworks for monitoring enslaved workers on Southern plantations. The guide includes meticulous record-keeping techniques aimed at maximising agricultural output and labour efficiency under harsh conditions.\n1880s: Early Scientific Management Frederick Taylor begins developing the management theories that would lay the foundation for scientific management.
His early observations and experiments focus on optimising manual labour tasks and workflows, which he would later formalise.\n1911: Frederick Taylor publishes \u0026ldquo;The Principles of Scientific Management\u0026rdquo; Taylor formalises his theories in a seminal publication that argues workers\u0026rsquo; efficiency can be maximised through scientific analysis of labour processes. His work introduces time studies, standardisation, and systematic planning as methods to enhance productivity and control.\n1924-1932: Hawthorne studies Conducted at Western Electric\u0026rsquo;s Hawthorne Works in Cicero, Illinois, these studies initially aimed to examine how physical conditions, such as lighting and work hours, affected worker productivity. Surprisingly, researchers found that attention to employees significantly influenced their performance—a phenomenon dubbed the \u0026ldquo;Hawthorne effect.\u0026rdquo;\n1930s-1960s: Time clocks spread and Henry Ford\u0026rsquo;s assembly line The proliferation of mechanical time clocks allows for more precise tracking of employee attendance and hours worked. Simultaneously, Henry Ford revolutionises manufacturing with the assembly line, which standardises production and incorporates monitoring at every stage to ensure efficiency.\n1970s: Rise of CCTV The adoption of Closed-Circuit Television (CCTV) in workplaces marks a significant advancement in surveillance technology, extending beyond security to include monitoring employee behaviour and performance.\n1980s: Computer monitoring begins The widespread introduction of personal computers in workplaces leads to new forms of electronic monitoring.
Employers begin using software to monitor computer usage, including application and email surveillance.\n1990s: Email surveillance becomes common As email becomes an essential tool for corporate communication, companies increasingly monitor email traffic to safeguard proprietary information and ensure compliance with company policies.\n2000s: GPS tracking and telematics in company vehicles The adoption of GPS tracking devices and telematics technology in company vehicles allows employers to monitor routes, driving habits, and operational efficiency, enhancing fleet management and worker safety.\nRyne Spaulding v. United Parcel Service, Inc. (2009) This case highlights the debate over GPS monitoring in company vehicles, with the court upholding the employer\u0026rsquo;s right to use GPS for business efficiency and safety monitoring.\n2010s: Integration of AI and advanced analytics Businesses start leveraging artificial intelligence and sophisticated analytics to monitor employee productivity and behaviour, and to predict future workforce needs and outcomes.\nCity of Ontario vs. Quon (2010) This landmark legal case addresses the extent of workplace privacy, with the US Supreme Court ruling that the city\u0026rsquo;s review of text messages on a police officer\u0026rsquo;s city-issued pager was reasonable and did not violate the Fourth Amendment.\n2020s: Biometrics and remote monitoring technologies surge The use of biometric data for monitoring purposes, such as fingerprint scanning and facial recognition for access control and timekeeping, becomes more prevalent. The COVID-19 pandemic also accelerates the adoption of remote monitoring tools, including software to oversee employees working from home.\nBarclays and H\u0026amp;M cases (2020) These cases illustrate the potential pitfalls of invasive monitoring practices.
Barclays retracts an employee monitoring system after backlash, and H\u0026amp;M faces a hefty fine for, among other things, processing personal data about its employees without adequate transparency.\nAmazon France case (2024) Amazon is fined by the CNIL for excessive surveillance of warehouse workers, sparking significant debate over privacy and surveillance in the workplace.\n","date":null,"permalink":"/history-and-origins-of-employee-monitoring/","section":"Learnings","summary":"","title":"History and Origins of Employee Monitoring"},{"content":" The history and origins of tracking consumers and the use of cookies | Purpose and Means The history and origins of tracking consumers and the use of cookies #Cookies hit the headlines again last week with Google\u0026rsquo;s announcement that it had halted its phase-out of third-party cookies. We encounter them every day whilst browsing the web, and they\u0026rsquo;ve been around for decades, but to understand their purpose, we need to trace their history and origins.\nCOOKIESTRACKING USERSEXPLAINER\nTim Clements\n7/31/2024 7 min read\nCookies hit the headlines again last week with Google\u0026rsquo;s announcement that it had halted its phase-out of third-party cookies. We encounter them every day whilst browsing the web, and they\u0026rsquo;ve been around for decades, but to understand their purpose, we need to trace their history and origins.\n1. Pre-cookies #Before the advent of the World Wide Web, tracking consumers involved various methods that leveraged both direct and indirect means to collect data. Here\u0026rsquo;s a detailed timeline and history of consumer tracking techniques before the web era:\nEarly 20th Century: beginnings of consumer research #1920s: The rise of consumerism leads to the early stages of market research.
Companies begin using surveys and focus groups to understand consumer preferences and behaviours.\nMail surveys: Businesses send out questionnaires to consumers, incentivising responses with small rewards. This method helps gather data on purchasing habits and preferences.\n1930s-1940s: evolution of market research # Gallup organisation: Founded in 1935 by George Gallup, it popularises opinion polling. Gallup\u0026rsquo;s methods include conducting interviews and surveys to gather consumer opinions and trends.\nNielsen Ratings: Established by Arthur Nielsen in the 1940s, Nielsen Ratings begin tracking radio and later television viewership to provide data on audience sizes and preferences.\n1950s-1960s: expansion of data collection # Credit card tracking: The introduction of credit cards allows banks and retailers to collect data on consumer purchases. This data is used to analyse spending patterns and target marketing efforts.\nLoyalty programmes: Retailers start offering loyalty programmes to track repeat customers. These programmes use punch cards or stamps to record purchases and reward loyal customers with discounts or gifts.\n1970s: computerisation and data analytics # Point-of-Sale (POS) systems: The adoption of computerised POS systems in retail stores marks a significant advancement. These systems record transaction data, which can be analysed to track consumer buying habits.\nDirect mail marketing: Businesses use mailing lists to send targeted advertisements and promotions to specific consumer segments.
Mailing lists are compiled from various sources, including customer purchases and survey responses.\n1980s: database marketing # Customer databases: Companies begin creating detailed customer databases, compiling information from various sources such as sales records, warranty cards, and customer service interactions.\nGeo-demographic segmentation: Marketers use demographic and geographic data to segment consumers into distinct groups for targeted marketing. This method leverages census data and other public records to categorise consumers based on location, age, income, etc.\n1990s: pre-web advanced techniques # Telemarketing: With the proliferation of telephone services, telemarketing becomes a popular method for reaching out to consumers. Companies use call lists to directly market products and gather data on consumer interests.\nEarly data brokers: Firms specialising in collecting and selling consumer data emerge. These data brokers compile information from various sources, including public records, credit reports, and purchase histories, to create comprehensive consumer profiles.\n2. Cookies \u0026amp; more #1994 # Lou Montulli, a programmer at Netscape, develops ‘cookies’ to enable e-commerce applications, specifically to help websites remember users\u0026rsquo; shopping carts. On October 27th, 1994, AT\u0026amp;T places the first banner ad on hotwired.com, running for 3 months.\nMontulli remembered a software trick from an old operating systems manual he’d read a few years earlier, a technique for passing information back and forth between the user and the system.
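That state-passing technique is easy to sketch in modern terms. Here is a minimal illustration using Python's standard http.cookies module; the cart_id name and its value are invented for the example:

```python
from http.cookies import SimpleCookie

# Server side: attach a small piece of state to the response so the
# browser stores it (here, a hypothetical shopping-cart identifier).
response_cookie = SimpleCookie()
response_cookie["cart_id"] = "A1B2C3"
response_cookie["cart_id"]["path"] = "/"
print(response_cookie["cart_id"].OutputString())  # cart_id=A1B2C3; Path=/

# Browser side: on each later request to the same site, the stored value
# is echoed back in the Cookie header, letting the server re-associate
# the visitor with their shopping cart.
request_cookie = SimpleCookie()
request_cookie.load("cart_id=A1B2C3")
print(request_cookie["cart_id"].value)  # A1B2C3
```

The whole design rests on that one idea: the browser stores an opaque token and returns it unchanged, so the server can recognise a returning visitor.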
For some reason, the small piece of data exchanged had been called a “magic cookie.”\n1995 # Netscape Navigator is the first web browser to support cookies.\nThe Internet Engineering Task Force (IETF) begins discussions on standardising cookies.\n1996 # DoubleClick becomes one of the first ad-tech companies to use cookies for tracking user behaviour across different websites to serve targeted ads.\n1998 # Public awareness of cookies increases. The New York Times publishes ‘Cookies May Annoy But They Don\u0026rsquo;t Hurt.’ Wired runs similar articles.\nIn the US, the FTC publishes ‘Privacy Online: A Report to Congress’, which mentions cookies.\n2002 # The EU’s ePrivacy Directive is introduced, requiring websites to inform users about cookies and give them the opportunity to refuse them, focusing on the protection of online privacy.\n2009 # The ePrivacy Directive is amended to require websites to obtain consent from users before storing cookies, reinforcing privacy protections.\n2010 # The ‘Do Not Track’ header is proposed by the FTC as a mechanism for users to signal their preference not to be tracked across websites; it meets a mixed reception and limited implementation by websites and browsers.\n2013 # Device and browser fingerprinting emerge as more sophisticated tracking methods that do not rely on cookies but instead use unique attributes of a user’s device and browser.\n2017 # Apple introduces Intelligent Tracking Prevention (ITP) in its Safari web browser to limit tracking.\n2018 # The EU\u0026rsquo;s GDPR comes into effect, imposing strict requirements on how websites obtain and document user consent, which also apply to cookies and other tracking technologies.\n2019 # Planet49’s use of pre-ticked boxes to obtain user consent for cookies during an online lottery is challenged and brought before the CJEU for clarification.\nIn the same year, Firefox turns its Enhanced Tracking Protection (ETP) privacy feature on by default.\n2020 # Google announces plans to phase out third-party
cookies in Chrome by 2022, later extended to 2024.\nIn the same year, Amazon is fined €35M by the CNIL for violating cookie rules.\n2021 # Amazon is fined €746M by the Luxembourg CNPD for processing personal data for targeted advertising without proper user consent.\nAt the end of 2021, the CNIL fines Google €150M and Facebook €60M for cookie violations.\n2022 # Microsoft is fined €60M for not giving Bing users a way to reject cookies.\n2024 # Google’s intention with its Privacy Sandbox was to phase out third-party cookies by using techniques like differential privacy, k-anonymity, and on-device processing. It should also help limit other forms of tracking, like fingerprinting, by restricting the amount of information sites can access.\nOn 22 July, Google announced it is halting its phase-out of third-party cookies in its Chrome browser while reworking its Privacy Sandbox initiative to strike a balance between consumer privacy and sustainable advertising.\nThe demise (and reprieve) of third-party cookies #The digital landscape has undergone a major transformation as third-party cookies are being phased out by the major web browsers. This shift is driven by increasing concerns over user privacy and the need for greater transparency in how personal information is collected and used.\nGoogle announced its plan to phase out third-party cookies in Chrome by the end of 2024. This change is part of Google’s Privacy Sandbox initiative, which aims to develop new technologies that replace third-party cookies while preserving privacy and supporting advertising needs. Features like the Topics API and FLEDGE are part of this initiative, designed to enable targeted advertising without compromising user privacy.\nOther browsers, such as Firefox and Apple\u0026rsquo;s Safari, have already implemented measures to block third-party cookies by default.
Firefox\u0026rsquo;s Enhanced Tracking Protection and Safari\u0026rsquo;s Intelligent Tracking Prevention have set the stage for a more privacy-focused web browsing experience.\nThe demise of third-party cookies marks a significant shift in online advertising and user tracking. Advertisers and website owners must adapt to these changes by exploring alternative methods for tracking and personalisation that comply with new privacy standards.\n2017 # Safari introduces Intelligent Tracking Prevention (ITP) Apple’s Safari browser launches ITP to reduce cross-site tracking by limiting the lifespan of cookies and blocking third-party trackers. This initial move sets the stage for other browsers to follow suit in enhancing user privacy.\n2018 # Firefox enhances tracking protection Mozilla introduces Enhanced Tracking Protection (ETP) in Firefox, which blocks known trackers by default. This development marks a significant step toward user privacy by preventing third-party cookies from tracking users across websites.\n2019 # Google announces Privacy Sandbox Google announces the Privacy Sandbox initiative aimed at developing web standards that enhance privacy while still supporting online advertising. This initiative represents Google\u0026rsquo;s strategy to eventually replace third-party cookies with more privacy-preserving alternatives.\n2020 # Safari updates its Intelligent Tracking Prevention to block all third-party cookies by default, further tightening its restrictions on cross-site tracking and enhancing user privacy.\nFirefox rolls out Total Cookie Protection within its ETP, isolating cookies to the site where they were created, thus preventing cross-site tracking and enhancing privacy.\n2021 # Google announces a two-year delay to its original plan to phase out third-party cookies by 2022, pushing the deadline to the second half of 2023.
This extension aims to give developers and advertisers more time to adapt to the new standards.\n2022 # Google delays the phase-out of third-party cookies again, setting a new deadline for the second half of 2024. This decision reflects the need for more thorough testing and a smoother transition for the advertising industry.\n2023 # Google begins extensive testing of Privacy Sandbox features, such as the Topics API and FLEDGE, with a limited number of users. These trials aim to evaluate the effectiveness of these new technologies in replacing third-party cookies.\n2024 # Google announces that it will, for the third time, delay the deprecation of third-party cookies on its Chrome browser. “We recognize that there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and will continue to engage closely with the entire ecosystem,” the company said in a blog post.\nThe decision to postpone once again was made in response to heightened regulatory scrutiny – particularly from the UK’s antitrust enforcer, the Competition and Markets Authority (CMA), which had been closely monitoring the proposed transition. The CMA had flagged 39 unique “concerns” about Google’s plan to sunset third-party cookies.\nJuly 2024 # Google halts its effort to phase out third-party cookies in Chrome, choosing instead to enhance its Privacy Sandbox initiative to balance privacy with sustainable advertising. Anthony Chavez, Google\u0026rsquo;s Vice President of Privacy Sandbox, confirmed that cookies will remain, but privacy-preserving alternatives will continue to be developed. Google plans to introduce a feature in Chrome allowing users to make informed privacy choices. The decision to keep cookies has disappointed some regulators, like the UK\u0026rsquo;s Information Commissioner\u0026rsquo;s Office (ICO).
However, industry groups like the Digital Advertising Alliance support Google\u0026rsquo;s move, emphasising the need for privacy-enhancing advertising approaches.\n","date":null,"permalink":"/the-history-and-origins-of-tracking-consumers-and-the-use-of-cookies-copy/","section":"Learnings","summary":"","title":"History and Origins of Tracking Consumers and Cookies"},{"content":"These days, waiting until risks or opportunities are right in front of you is too late. Horizon scanning helps you take control — spotting the signals of change early, analysing their impact, and planning strategically to navigate what\u0026rsquo;s next.\nInstead of reacting to change, you can anticipate it.\nWhat Is Horizon Scanning? #Horizon scanning is a structured process for identifying and analysing the emerging trends, risks, and opportunities that could affect your organisation. It\u0026rsquo;s about looking beyond the immediate and asking:\nWhat\u0026rsquo;s coming? How will it impact us? What do we need to do about it? This isn\u0026rsquo;t just a box-ticking exercise — it\u0026rsquo;s the foundation for smarter, future-proof decisions.\nWhy does it matter? # Anticipate risks: Stay ahead of regulatory changes, technological advancements, and societal shifts. Spot opportunities: Identify innovations and trends that could give you a competitive edge. Strategic confidence: Plan with clarity, knowing you\u0026rsquo;ve considered the bigger picture. Who is it for? 
#Horizon scanning is ideal for compliance leaders, risk managers, and strategic decision-makers navigating complex environments like:\nData protection and privacy AI and emerging technologies Ethics and sustainability Security and regulatory change If you\u0026rsquo;re facing complexity and uncertainty, this is how you stay ahead.\nOur approach #Step 1: Scan the landscape We identify and prioritise the key forces shaping your world — regulations, technology, social change, and beyond.\nStep 2: Analyse the impact We evaluate what these changes mean for your organisation, cutting through the noise to focus on what matters most.\nStep 3: Plan your response We help you develop actionable strategies so you can confidently navigate uncertainty and position your organisation for success.\nThe outcome #With horizon scanning, you\u0026rsquo;re not just reacting — you\u0026rsquo;re driving the agenda. You\u0026rsquo;ll have a clear roadmap for the future, backed by insight and foresight, so you can move forward with confidence.\nDemo radars #Here are a couple of demo radars. Feel free to zoom in and out, and click on some of the entries.\nLet\u0026rsquo;s work together #Ready to start scanning the horizon? Contact us today to build a smarter, more proactive strategy for what\u0026rsquo;s next.\nHow we can help you # We can get you started by facilitating the end-to-end process and then maintaining and updating your radars on an ongoing basis, with regular interaction with you and your team. Alternatively, we can help you get started by facilitating the end-to-end process and then handing responsibilities over to you, providing support afterwards where needed. ","date":null,"permalink":"/horizon-scanning-and-analysis/","section":"Services","summary":"","title":"Horizon Scanning and Analysis"},{"content":"How to stop boring your employees to death: unleash real results with human-centred data protection training.
| Purpose and Means\nHow to stop boring your employees to death: unleash real results with human-centred data protection training. #To create truly effective data protection training, embrace a human-centred approach that prioritises understanding learners\u0026rsquo; needs, engaging their emotions, and building a culture of responsibility rather than simply enforcing compliance.\nEMPLOYEE AWARENESSEDUCATION AND TRAININGVIRTUAL COMMS. SUPPORTGOVERNANCE\nTim Clements\n4/7/2025 2 min read\nThere is still too much generic education and training material that may tick a box but only superficially equips employees with the knowledge and skills to take actionable steps in the context of their daily work.\nAt Purpose and Means we really do focus on the needs of employees (including leadership) and the positive experience they need to have whilst engaging with our materials.\nTo achieve this, the development and delivery of education and training must be fundamentally human-centred. We use a comprehensive methodology to establish a streamlined framework that prioritises learner engagement and long-term behavioural change.\nThe process commences with Discovery. We conduct targeted research - surveys, interviews, role analyses - to understand the learners: their existing knowledge, specific pain points, and daily workflows. At the same time, we\u0026rsquo;ll conduct Context Analysis, encompassing cultural norms, policy frameworks, and technological infrastructure. This dual assessment forms the basis for relevant and impactful training, which is why this initial phase is so important.\nFor example, we helped a MedTech client a few years ago establish a year-long programme where Purpose and Means acted as their \u0026ldquo;virtual communications and change department.\u0026rdquo; During Discovery, we identified a number of pain points, including data protection-related knowledge gaps within their sales and marketing teams.
The sales teams were interacting with hospitals and healthcare professionals but struggled to convey the data protection-related aspects, and the marketing colleagues were not fully aware of the opportunities (and not just the constraints) when working with the legal requirements. Generic materials would not have helped either stakeholder group.\nNext, Sensory \u0026amp; Narrative Design is critical. We\u0026rsquo;ll define clear, human-centred learning objectives that translate laws and regulations into behavioural outcomes and connect data protection and GRC to ethical values. We\u0026rsquo;ll then cultivate a compelling narrative, illustrating the human impact of, say, personal data breaches and emphasising the learners\u0026rsquo; role in protecting personal data. We\u0026rsquo;ll design various engaging learning experiences that cater to diverse learning styles, incorporating visual aids, interactive discussions, simulations, and case studies that will resonate. Then we develop high-quality, highly visual, actionable learning materials that are clear, concise, and readily accessible.\nDelivery \u0026amp; Refinement is crucial for optimising training effectiveness. Prior to implementation, we\u0026rsquo;ll conduct pilot testing with representative learners, gathering feedback and analysing performance data. We\u0026rsquo;ll then iterate on the training design, making necessary adjustments and material updates based on the pilot results. This iterative process ensures the training is continuously refined and tailored to the specific needs of the learners.\nFinally, we\u0026rsquo;ll focus on Embedding \u0026amp; Long-Term Impact. We\u0026rsquo;ll facilitate education and training with empathy, encouraging open dialogue and active participation. Beyond the education and training sessions, we then implement post-training reinforcement strategies: regular communications, accessible resources, ongoing support channels, and periodic refresher sessions.
We\u0026rsquo;ll then work with you to monitor and evaluate the long-term impact of the training on data protection behaviours, incident rates, and company culture.\nDoes this resonate? Feel free to get in touch to arrange a no-obligation call to discuss your communication and change management needs.\n","date":null,"permalink":"/how-to-stop-boring-your-employees-to-death-unleash-real-results-with-human-centred-data-protection-training/","section":"Learnings","summary":"","title":"How to Stop Boring Your Employees: Human-Centred Data Protection Training"},{"content":" Leadership explainer: k-anonymity, l-diversity and t-closeness | Purpose and Means Leadership explainer: k-anonymity, l-diversity and t-closeness #There are many anonymisation techniques that companies can choose from, depending upon the nature of their business, business sector, jurisdiction, risk, complexity of processing, and maturity/internal competences, to mention a few factors. All of these techniques are complex.\nDE-IDENTIFICATION TECHNIQUESEXPLAINER\nTim Clements\n8/8/2024 3 min read\nTo begin to explain the techniques to other colleagues, including senior leadership, we need to simplify and present the topics in plain terms. In this blog post I explore three techniques that are closely related: k-anonymity, l-diversity and t-closeness. In future posts I\u0026rsquo;ll cover other techniques, also in simple terms.\nThis one-page explainer is a useful reference if you need to provide your steering committee or board with an overview of some of the more technical aspects of your data protection work.\nWhy use these techniques? #When data needs to be shared or used in a way that doesn\u0026rsquo;t require the identification of individuals, reducing risks to individuals\u0026rsquo; rights \u0026amp; freedoms.
Also, they allow companies to continue processing data outside of the scope of data protection laws like GDPR.\nThe threat of an attack #An individual or entity may attempt to gain, or accidentally gain, unauthorised access to sensitive information in a dataset, resulting in identity theft, discrimination, or unauthorised exposure of personal data, among other things.\nTypical attacks may involve one or more of the following:\nSingling Out The ability to identify a single individual within a dataset based on unique or rare combinations of attributes.\nLinkability Data from one dataset can be linked with data from another dataset, leading to the identification of individuals or inference of additional information.\nInference The ability to deduce or infer sensitive information about an individual based on patterns, distributions, or lack of diversity within a dataset.\nHomogeneity Occurs when all individuals in an anonymised group share the same sensitive attribute, allowing attackers to easily infer that attribute.\nBackground knowledge Uses external information or prior knowledge about an individual to re-identify them or infer sensitive data in an anonymised dataset.\nKey terms #**Strong identifiers** uniquely identify an individual on their own, e.g. government-issued ID, email address, passport number, credit card number.\n**Weak identifiers** are more general and may belong to more than one individual, e.g. birthdate, gender, postcode, though there are exceptions that may lead to re-identification.\nQuasi identifiers are weak identifiers that, when combined, can lead to re-identification of an individual, e.g. age, gender, ethnicity, location.\nA proxy is an attribute or piece of data that indirectly represents or can stand in for another piece of data, so even when direct identifiers are removed, proxies can still reveal sensitive information, e.g. 
headdress, tattoos, jewellery in images.\nk-anonymity # Objective k-anonymity prevents unique identification in a dataset by making each individual indistinguishable from at least \u0026lsquo;k\u0026rsquo;−1 others based on quasi-identifiers (e.g. age, gender, postcode).\nHow it works For every combination of quasi-identifiers, at least \u0026lsquo;k\u0026rsquo; records share the same values, ensuring that attempts to re-identify an individual can only narrow down to a group of at least \u0026lsquo;k\u0026rsquo; people. This may be achieved through suppressing and/or generalising attributes.\nPros k-anonymity offers a basic level of privacy protection, making it more difficult for external parties to identify a single individual’s record.\nCons\nVulnerability to homogeneity attacks arises when all records within a group share the same sensitive attribute, allowing an attacker to easily infer that attribute despite the group’s size.\nBackground knowledge attacks occur when an attacker uses external information to re-identify individuals, even in a k-anonymous dataset.\nl-diversity #Builds upon k-anonymity.\nObjective l-diversity enhances k-anonymity by ensuring that within any group of records sharing the same quasi-identifiers, there is a diversity of sensitive attributes.\nHow it works Within each anonymised group, the sensitive attribute (e.g. illness, salary) must have at least \u0026lsquo;l\u0026rsquo; different values. 
This ensures that even if an attacker knows an individual belongs to a certain group, they cannot easily infer the sensitive attribute due to the variation within that group.\nPros l-diversity reduces the risks of homogeneity within groups, thereby strengthening the privacy protections offered by k-anonymity.\nCons\nInsufficient protection against skewness attacks occurs when the distribution of sensitive attributes within a group is skewed, allowing an attacker to make accurate inferences.\nSemantic similarity is a limitation of l-diversity, as it doesn’t account for cases where sensitive values are different but semantically similar, potentially weakening privacy.\nt-closeness #Builds upon l-diversity.\nObjective t-closeness enhances privacy protection by ensuring that the distribution of sensitive attributes within any anonymised group closely mirrors the distribution of those attributes across the entire dataset.\nHow it works t-closeness requires that the difference in the distribution of a sensitive attribute within a group and the entire dataset does not exceed a specified threshold \u0026lsquo;t\u0026rsquo;. This prevents attackers from making confident inferences about an individual’s sensitive attributes, even if they know the individual\u0026rsquo;s group.\nPros t-closeness overcomes the limitations of k-anonymity and l-diversity by taking into account the overall distribution of sensitive data, offering a stronger defence against various types of attacks. 
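To make the three measures concrete, here is a minimal, self-contained Python sketch on a toy dataset. The records and function names are purely illustrative, and t-closeness is approximated here with total variation distance rather than the Earth Mover's Distance used in the original formulation:

```python
from collections import Counter, defaultdict

# Toy dataset (illustrative only): age band and postcode prefix are the
# quasi-identifiers; 'illness' is the sensitive attribute.
records = [
    {"age": "30-39", "postcode": "29**", "illness": "flu"},
    {"age": "30-39", "postcode": "29**", "illness": "asthma"},
    {"age": "30-39", "postcode": "29**", "illness": "flu"},
    {"age": "40-49", "postcode": "28**", "illness": "diabetes"},
    {"age": "40-49", "postcode": "28**", "illness": "diabetes"},
]

def groups_by_quasi_identifiers(rows, quasi_ids):
    """Bucket records that share the same quasi-identifier values."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[tuple(row[q] for q in quasi_ids)].append(row)
    return buckets

def k_anonymity(rows, quasi_ids):
    """k = size of the smallest bucket: each person hides among at least k records."""
    return min(len(b) for b in groups_by_quasi_identifiers(rows, quasi_ids).values())

def l_diversity(rows, quasi_ids, sensitive):
    """l = fewest distinct sensitive values found in any bucket."""
    return min(
        len({r[sensitive] for r in b})
        for b in groups_by_quasi_identifiers(rows, quasi_ids).values()
    )

def t_closeness(rows, quasi_ids, sensitive):
    """Worst-case gap between a bucket's sensitive-value distribution and the
    overall distribution, measured as total variation distance (lower is better)."""
    overall = Counter(r[sensitive] for r in rows)
    n = len(rows)
    worst = 0.0
    for b in groups_by_quasi_identifiers(rows, quasi_ids).values():
        local = Counter(r[sensitive] for r in b)
        tvd = 0.5 * sum(abs(local[v] / len(b) - overall[v] / n) for v in overall)
        worst = max(worst, tvd)
    return worst

qids = ["age", "postcode"]
print(k_anonymity(records, qids))             # 2: the smallest bucket has 2 records
print(l_diversity(records, qids, "illness"))  # 1: one bucket is all 'diabetes'
print(t_closeness(records, qids, "illness"))
```

The l-diversity result of 1 is exactly the homogeneity problem described above: every record in the second bucket shares the same illness, so group membership alone reveals the sensitive attribute.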
Cons\n*Complexity and data utility* - achieving t-closeness can be challenging and may require significant data manipulation, potentially reducing the dataset’s utility for analysis.\n*Overhead* - it may introduce more computational and design complexity, making it harder to implement at scale compared to k-anonymity and l-diversity.\n","date":null,"permalink":"/leadership-explainer-k-anonymity-l-diversity-and-t-closeness/","section":"Explainers","summary":"","title":"Leadership Explainer: k-Anonymity, l-Diversity and t-Closeness"},{"content":"","date":null,"permalink":"/learning/","section":"Learnings","summary":"","title":"Learnings"},{"content":"A journey through time and innovation\nThis comprehensive interactive timeline visually chronicles the evolution of Artificial Intelligence, from its philosophical roots to its modern-day breakthroughs in machine learning and generative AI. It highlights key historical milestones, influential figures, technological advancements, and periods of both progress and challenge in the field of AI.\nThis learning resource can be supplemented with knowledge checks and quizzes, and can be integrated with a number of learning management systems (LMS). If your organisation is a non-profit or educational institution, we normally allow you to embed the interactive infographic into your web pages or intranet pages for free. Email us for more information.\nThis resource is best viewed on a desktop computer or tablet. Although it can be viewed on a smartphone, the interface may appear cluttered.\nAbout this resource #The timeline covers the following eras:\nEarly Origins (384 BC to 1943)\n384–322 BC: Aristotle lays foundational work for objects, logic, and scientific methods. 1642: Blaise Pascal invents the mechanical calculator. 
1763: Bayes\u0026rsquo; Theorem is published, marking the beginning of statistical methods foundational to AI. 1818: Mary Shelley\u0026rsquo;s Frankenstein introduces AI in literature, later joined by Samuel Butler\u0026rsquo;s Erewhon (1872). 1843: Ada Lovelace describes the first computer algorithm (Note G), suggesting machines could manipulate symbols. 1920: Karel Čapek\u0026rsquo;s play R.U.R. introduces the word \u0026ldquo;robot,\u0026rdquo; expanding AI concepts into popular culture. 1943: Warren McCulloch and Walter Pitts create a computational model for neural networks. Golden Years of Technological Advances (1950 to 1972)\n1950: Alan Turing publishes \u0026ldquo;Computing Machinery and Intelligence\u0026rdquo; and proposes the Turing Test. 1951: Marvin Minsky \u0026amp; Dean Edmonds build SNARC, a network of 3,000 vacuum tubes simulating 40 neurons. 1955: John McCarthy coins the term \u0026ldquo;Artificial Intelligence\u0026rdquo; and organises the Dartmouth Summer Research Project. 1959: Arthur L. Samuel demonstrates machines can learn from past errors using the game of checkers. 1961: \u0026ldquo;Unimate,\u0026rdquo; the first industrial robot, is deployed on a General Motors assembly line. 1964: ELIZA, an early NLP program, is developed at MIT by Joseph Weizenbaum. 1972: Karen Spärck Jones devises Inverse Document Frequency (IDF), underlying most modern search engines. AI Winter #1 (1973 to 1974)\n1973: Scepticism about AI\u0026rsquo;s capabilities leads to reduced funding. 1974: Ray Kurzweil\u0026rsquo;s company introduces Optical Character Recognition (OCR) commercially. Rejuvenation (1980s)\n1980s: \u0026ldquo;Expert systems\u0026rdquo; are adopted by corporations around the world. 1986: David Rumelhart and James McClelland develop ideas around parallel distributed processing and neural network models. AI Winter #2 (1987)\n1987: The LISP machine market collapses. Rebirth (1995 to 1997)\n1995: A semi-autonomous vehicle navigates a 4,501 km journey. 
1997: IBM\u0026rsquo;s Deep Blue becomes the first computer to beat reigning world chess champion Garry Kasparov. AI Enters the Home (1999 to 2002)\n1999: Sony launches AIBO, a four-legged autonomous robot for consumers. 2002: iRobot releases Roomba, demonstrating practical household AI applications. Breakthroughs and AI Becomes Mainstream (2004 to 2018)\n2004: DARPA challenges stimulate advancements in autonomous vehicle technology. 2011: Introduction of AI-powered voice assistants like Siri, Google Now, and Cortana. 2012: Google\u0026rsquo;s X lab demonstrates 70% better results in detecting and recognising cats from YouTube videos using unsupervised learning. 2016: Google\u0026rsquo;s AlphaGo defeats Go champion Lee Se-dol, a landmark achievement in AI\u0026rsquo;s strategic capabilities. 2018: Generative AI — featuring GANs and variational autoencoders — excels in producing realistic images, text, and music from vast datasets. ","date":null,"permalink":"/origins-and-history-of-ai/","section":"Learnings","summary":"","title":"Origins and History of AI"},{"content":"European Data Protection and Privacy #Origins, history and evolution\nThis interactive learning resource can be supplemented with knowledge checks and quizzes, and can be integrated with a number of learning management systems (LMS).\nIf your organisation is a non-profit or educational institution, we normally allow you to embed the interactive infographic into your web pages or intranet pages for free. Email us for more information.\nThis resource is best viewed on a desktop computer or tablet. 
Although it can be viewed on a smartphone, the interface may appear cluttered.\nAbout this resource #The diagram illustrates a timeline of key events and developments that shaped data protection and privacy in Europe, from early historical roots to modern challenges.\nEarly Roots: The timeline begins with rudimentary notions of privacy in the 13th century (chimneys enabling privacy, Justices of the Peace Act 1361), followed by early legislative efforts in the 18th and 19th centuries, such as Sweden\u0026rsquo;s 1776 Access to Public Records Act, France banning the publishing of private facts in 1858, and Norway\u0026rsquo;s 1889 Criminal Code prohibiting the dissemination of personal or domestic information.\n20th Century Developments: The 20th century saw major catalysts for data protection laws, including:\nMassive privacy abuses under Fascism (1930s, Dehomag D11) and Communism (Stasi 1950–90, Securitate 1948–89), as well as WWII atrocities. The 1948 UN Universal Declaration of Human Rights establishing fundamental rights. The widespread use and reliance on technology highlighting new vulnerabilities. Revelations about global surveillance (e.g. NSA, GCHQ) underscoring the need for protection. Emergence of European Data Protection Frameworks:\nThe 1950 CoE European Convention on Human Rights was a foundational step. Individual countries began enacting data protection laws, starting with the 1970 Hesse Data Protection Act (Germany) and the 1973 Sweden Data Act. The Council of Europe Resolutions 73/22 \u0026amp; 74/29 (1973/74) and the 1978/79 enactment of general data protection laws in seven European countries marked significant progress. International guidelines and frameworks emerged with the 1980 OECD Guidelines and the 1981 CoE Convention 108. The 1984 UK Data Protection Act and the 1995 EU Data Protection Directive further solidified national and regional legal frameworks. 
Challenges and Modern Era: The late 20th and early 21st centuries presented new challenges and legislative responses:\nThe emergence of early hacking (phreaking). Technological advancements brought phenomena like the Morris worm, the growth of the internet (WWW, Google), and cyber threats like the \u0026lsquo;I love you\u0026rsquo; virus. The 2000 EU e-Commerce Directive and the 2000 Charter of Fundamental Rights of the EU adapted laws to the digital age. The 2002 EU ePrivacy Directive addressed electronic communications. Major terrorist atrocities like 9/11, the London and Madrid bombings, and massive personal data breaches heightened concerns. Figures like Snowden, Hanff, and Schrems played crucial roles in revealing surveillance and advocating for privacy rights. Frameworks like US-EU Safe Harbor and the Privacy Shield Framework attempted to govern transatlantic data transfers. The culmination of these efforts led to the 2016 EU GDPR (enforced 2018), followed by continued discussions on EC adequacy decisions and the recurring theme of ensuring robust privacy. Significant political events like Brexit and global crises like Covid-19 also impacted data protection discourse. 
The resource concludes by emphasising that the core principle underlying all privacy and data protection efforts is the protection of fundamental rights and freedoms of individuals — human rights.\n","date":null,"permalink":"/origins-history-and-evolution-of-european-data-protection-and-privacy/","section":"Learnings","summary":"","title":"Origins, History and Evolution of European Data Protection and Privacy"},{"content":"","date":null,"permalink":"/pages/","section":"Pages","summary":"","title":"Pages"},{"content":"Last updated: April 2026\nWho We Are #Purpose and Means is a sole proprietorship registered in Denmark (CVR number: 18895692) and is the data controller for personal data processed via this website for conducting its business of providing organisations and individuals with various products and services.\nPurpose and Means uses a Quality Management System (QMS) for conducting its training activities, approved and audited by APMG International based in the UK.\nPurpose and Means is owned by Timothy Clements.\nOur Take on Data Protection #Purpose and Means is based in Denmark and is subject to laws and regulations including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. As we also operate globally, we are subject to a range of applicable data protection and privacy laws. 
Being a very small business, we are not able to fully comply with every single law, but we regularly make ourselves aware of key nuances between the GDPR and other laws, especially where client demand necessitates greater focus.\nPurpose and Means sees data protection as a key part of doing business and respects key principles including lawfulness, fairness and transparency, data minimisation, and purpose limitation.\nWe do not have a Data Protection Officer (DPO) appointed because in our business context, we do not meet the criteria specified within the GDPR.\nIn our processing context we do not assess our processing activities to pose a \u0026lsquo;high risk\u0026rsquo; to the rights and freedoms of individuals.\nPlease do contact us if you have any questions, concerns, or feedback about how Purpose and Means processes personal data.\nContact Us #Purpose and Means\nEsthersvej 21\n2900 Hellerup\nDenmark\nEmail: tc@purposeandmeans.io\nData Collection #Active Data Collection #Active data collection is when you knowingly provide personal data to Purpose and Means. We actively collect data from you when you:\nComplete a web form Register for an education or training course Sign up for a newsletter Request information about a product or service via email Passive Data Collection (Website Technical Data) #When you visit our website, certain technical data is processed automatically to deliver the website to your device and keep it secure. This can include:\nYour IP address The date and time of your visit and requests The pages and files you request Device and browser information (such as user agent, language, and approximate device type) Diagnostic and security information (e.g., error logs and activity that may indicate abuse) Website Infrastructure and Third-Party Data #Our website is a static site hosted on a Virtual Private Server (VPS) managed by Hetzner Online GmbH in Germany. 
There are no third-party content delivery networks, cookie consent tools, advertising trackers, or analytics services used on this website.\nWhen you visit our website, your browser connects only to our own server. No third parties receive your technical data as part of a standard website visit.\nThe only exception is pages that embed YouTube videos (used within our self-hosted interactive learning content). If you visit such a page, your browser will connect to YouTube\u0026rsquo;s servers. YouTube (Google LLC) will receive your IP address and device/browser information and may set cookies in accordance with their own privacy policy.\nCookies #Our website uses no cookies by default. Zero cookies are set when you visit purposeandmeans.io.\nThe only exception is if you watch an embedded YouTube video. In that case, YouTube may set cookies in accordance with Google\u0026rsquo;s privacy policy.\nLawful Bases for Processing #1. Private Individuals # Consent — to receive a marketing newsletter from Purpose and Means no more than once a month Legitimate interests — to conduct profitable commerce providing our portfolio of products and services Compliance with legal obligations — in line with Danish tax and financial laws Performance of a contract — prior to, or actual purchasing of Purpose and Means products and services 2. Employees of Purpose and Means Clients # Legitimate interests — to conduct profitable commerce providing our portfolio of products and services What Data is Collected? 
#Provided Data # First name and surname Postal address Email address Opinions about products and services provided Derived Data # Attendance levels (for courses) Purchase history Levels of understanding (knowledge checks, quizzes) Inferred Data # Propensity to purchase other products or services (for manual recommendations) Website Technical Data #When you browse our website (without filling in a form), we may process:\nIP address Device and browser information (user agent) Request metadata (pages/files requested, timestamps) Limited referrer information (typically indicating you came from purposeandmeans.io) Data Retention #Personal data is securely retained and backed up by Purpose and Means in Denmark, and replicated to servers in Germany managed by Hetzner Online GmbH (see \u0026lsquo;Data Transfers\u0026rsquo;).\nInvoices #For accounting purposes we use Dinero, a Danish company that stores invoice information. This data is retained for 5 years to comply with Danish financial and tax laws.\nEmail #Purpose and Means uses Microsoft Outlook as an email and calendar application. Personal data received by email is retained and replicated by Microsoft. 
Unless there are specific purposes to continue processing, this personal data is retained for 3 years.\nComplaints and Complaint Log #Personal data related to a complaint will be retained for 2 years after the complaint is resolved.\nData Subject Requests #From completion of the request, personal data will be retained for 5 years, in line with recommendations issued by Datatilsynet.\nConsent Records #Records of consent will be retained for 2 years or until no longer necessary, in line with recommendations issued by Datatilsynet.\nWebsite and Server Logs #Technical logs generated when you visit our website are retained for 30 days for security, troubleshooting, and performance purposes, then deleted or anonymised.\nData Usage #Consent # Providing you with a promotional newsletter Legitimate Interests # Dealing with your business-related requests and enquiries Improvement of our products and services through feedback requests Promotion of Purpose and Means products and services through testimonials Determining levels of understanding of our training courses (quizzes, knowledge checks) Security and anti-fraud activities Performance of a Contract # Prior enquiries, registration and purchase of a product or service Transfer of necessary data to third-party training instructors (email address) Legal Obligation # Complying with Danish financial and tax laws Website Operation and Security #We use technical data from website visits for the following purposes:\nTo deliver and render the website To maintain security and prevent abuse To troubleshoot and improve reliability Data Transfers #Within the EU/EEA #Website hosting: purposeandmeans.io is hosted on a VPS with Hetzner Online GmbH in Germany. All website files are served directly from this server. 
No CDN or third-party delivery network is used.\nCloud storage: We store personal data in a Nextcloud solution hosted in Germany at Hetzner Online GmbH.\nOutside the EU/EEA #Email and calendar: We use Microsoft Office 365 including Outlook and Teams for email and video conferencing. Personal data received by email is retained and replicated by Microsoft in accordance with their Data Processing Agreement.\nYouTube (embedded videos): Some learning pages embed YouTube videos. If you interact with these pages, your browser connects to Google\u0026rsquo;s servers in accordance with Google\u0026rsquo;s privacy policy.\nData Deletion #Data about you is deleted when there is no longer a purpose to retain it. Retention is determined by the purposes of processing described in the \u0026lsquo;Data Usage\u0026rsquo; section and the retention periods in \u0026lsquo;Data Retention.\u0026rsquo;\nFor example, if you withdraw consent for our newsletter, your email address in relation to that activity is deleted, unless it is also retained for another purpose such as a financial transaction.\nWebsite technical data (such as access logs) is deleted or rotated in line with the retention periods described above.\nYour Rights #Rights Applying to All Processing # Right to access — request confirmation that we are processing your personal data and request a copy Right to rectification — request correction of inaccurate or incomplete data Right to restrict processing — request that we limit use of your data Right to not be subject to automated decision-making including profiling Right to lodge a complaint with a Supervisory Authority — as we are a Danish company, complaints should be made to Datatilsynet (though they recommend first complaining to the Data Controller) Additional Rights Depending on Lawful Basis # Right to withdraw consent — applies where consent is the lawful basis (typically for our marketing newsletter) Right to erasure — applies where the lawful basis is consent, 
performance of a contract, or legitimate interests Right to data portability — applies where the lawful basis is consent or performance of a contract Right to object — applies to direct marketing conducted under legitimate interests How to Exercise Your Rights #Please send an email detailing your request to tc@purposeandmeans.io with the subject line \u0026lsquo;Data Subject Request.\u0026rsquo; We may request proof of identity and/or context in order to process your request.\nInformation Security #We recognise that information security is a vital part of data protection. While no data transmission can be completely secure, we implement a variety of physical, technical, and procedural measures to protect personal data from unauthorised access, use, disclosure, alteration, or destruction.\nOur website is a static HTML site with no database, no login page, and no user accounts. HTTPS is enforced across all connections. Security headers including HSTS, Content Security Policy, X-Frame-Options, X-Content-Type-Options, and Referrer Policy are implemented. 
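For readers curious what such a header set can look like in practice, here is an illustrative sketch of the relevant directives for an nginx server block. The specific values (and the assumption of nginx itself) are examples for orientation, not a description of our production configuration:

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
```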
Zero cookies are set by default.\nResidual Risks You Need to Be Aware of (Website Use) #When you visit pages that embed YouTube videos, the following residual risks apply:\nThird-party visibility of visits: YouTube (Google LLC) can see your IP address and device/browser information and may infer that you visited purposeandmeans.io.\nPossible impact to you: reduced anonymity online and reduced confidentiality of browsing habits.\nCross-site correlation: Google may be able to correlate your visit with other sites using technical signals, depending on their practices.\nPossible impact to you: your visit may contribute to profiling and inferences about your interests.\nInternational processing: YouTube resources are served from Google\u0026rsquo;s global infrastructure, which may involve processing outside the EU/EEA.\nPossible impact to you: your technical data may be processed in jurisdictions with different legal protections.\n","date":null,"permalink":"/privacy-notice-information-about-how-purpose-and-means-processes-personal-data/","section":"Pages","summary":"","title":"Privacy Notice"},{"content":"From challenges to solutions: facilitated workshops for rapid results #These days businesses need quick, effective decisions, but navigating complexity can often feel overwhelming. That\u0026rsquo;s where our Rapid Analysis Workshops come in. These structured, facilitated sessions help digital leaders break down challenges, clarify goals, and create actionable strategies - all in record time.\nWhat We Offer #Our workshops are tailored to address a wide range of business scenarios, enabling teams to collaborate, align, and move forward. Each session is designed to:\nEnable clear, focused discussions. Build consensus among stakeholders. Deliver tangible outcomes that drive results. Workshop Types #Assumptions and dependencies workshop # Purpose: Identify key assumptions and interdependencies that allow you to plan your projects and programmes. 
Outcome: Improved collaboration through shared understanding of constraints and dependencies. Business Impact: Ensures smoother project execution by aligning on critical unknowns early and tracking them until validated. Purpose cultivation workshop # Purpose: Align teams on the \u0026ldquo;why\u0026rdquo; behind initiatives and develop customer-centric narratives that enable buy-in to your digital compliance projects and programmes. Outcome: A clear, shared purpose that drives creativity and focus in project execution. Business Impact: Strengthens stakeholder buy-in and ensures alignment with organisational goals. Risk workshop # Purpose: Identify, assess, categorise and prioritise risks to create a proactive mitigation strategy. Outcome: Clear action plans for risk prevention or reduction. Business Impact: Safeguards against financial, operational, or reputational risks, boosting resilience. Planning and estimating workshop # Purpose: Develop accurate timelines and allocate resources effectively. Outcome: A detailed project plan with aligned stakeholder expectations. Business Impact: Reduces scope creep and ensures predictable delivery. Project and programme initiation workshop (kick-off workshop) # Purpose: Set clear objectives, scope, and roles for new initiatives or programmes. Outcome: Cohesive teams with aligned priorities and understanding of the mission. Business Impact: Accelerates project kick-off with minimal ambiguity and maximum momentum. Issue review and resolution workshop # Purpose: Address specific challenges or roadblocks that hinder progress. Outcome: Collaborative resolution strategies and clear next steps. Business Impact: Restores productivity, reduces delays, and maintains team morale. Business impact analysis workshop # Purpose: Evaluate the ripple effects of changes, disruptions, or new initiatives. Ideal for regulatory challenges including DSA, DMA, EU AI Act, NIS2, EU Data Act, GDPR, CRA, DORA, EMFA, PLD, and CSRD, among others. 
Outcome: A comprehensive understanding of business impacts and actionable insights. Business Impact: Enhances decision-making by highlighting risks and opportunities. AI workforce adaptation workshop # Purpose: Help companies assess, categorise, and strategically adapt their workforce in response to AI-driven transformation. Outcome: A clear roadmap for AI-driven workforce transformation with a custom workforce matrix. Business Impact: Future-proofed workforce, smarter talent allocation, competitive advantage, stronger employee retention, and better AI integration. Process improvement workshop # Purpose: Identify inefficiencies in current workflows and design optimised processes. Outcome: Streamlined operations with clear, measurable improvements. Business Impact: Increases efficiency, reduces costs, and enhances team productivity. Lessons learned workshop # Purpose: Reflect on completed projects to extract actionable insights for future initiatives. Outcome: Documented successes, failures, and key lessons to improve future performance. Business Impact: Builds a culture of continuous improvement and knowledge sharing. Our approach # Structured facilitation: we guide discussions to keep sessions productive and focused on outcomes. Tailored frameworks: every workshop is customised to your specific needs and objectives. Actionable deliverables: at the end of each session, you\u0026rsquo;ll have a clear plan, roadmap, or set of recommendations to implement immediately. Why choose our rapid analysis workshops? # Speed to impact: tackle complex challenges efficiently and emerge with clear, actionable solutions. Collaborative environment: engage stakeholders in structured, meaningful discussions. Structured facilitation: benefit from proven methodologies and a facilitator who keeps you on track. Workshops can be in-person, virtual, or hybrid depending upon your geographies and travel budget.\nTransform challenges into opportunities. 
Let\u0026rsquo;s design a workshop that drives the results you need. Contact us today to get started.\n","date":null,"permalink":"/purpose-and-means-rapid-analysis-workshops/","section":"Services","summary":"","title":"Rapid Analysis Workshops"},{"content":"Risk alignment and risk measurement #Align organisational risks, quantify risk, build trust, and make smarter compliance decisions.\nWhat we do #We help compliance leaders quantify compliance risk in financial terms, move from subjective assessments to data-driven decisions, build structured frameworks that align teams and leadership, and communicate risks clearly to secure leadership buy-in and budget.\nWhy it matters #Compliance failures trigger ripple effects beyond penalties - reputation loss, operational downtime, and increased regulatory scrutiny. By leveraging data, we help you measure what is truly at stake and prioritise where to act first.\nHow we work #We assess your risk landscape, develop data-driven insights using structured estimation and scenario planning, and help you articulate risk in business terms so executives understand the stakes.\nLet us work together #Contact us at tc@purposeandmeans.io to build a smarter approach to risk measurement.\n","date":null,"permalink":"/risk-measurement/","section":"Explainers","summary":"","title":"Risk Alignment and Measurement"},{"content":"Say goodbye to boring training: the power of interactive scenarios | Purpose and Means\nSay goodbye to boring training: the power of interactive scenarios #Branching scenarios replace passive learning with immersive, real-world decision-making experiences, ensuring employees truly understand policies, ethics, and AI governance in a practical, engaging way.\nEMPLOYEE AWARENESSDATA PROTECTION DAYEDUCATION AND TRAININGVIRTUAL COMMS. SUPPORT\nTim Clements\n2/26/20254 min read\nI want you to think back to the last training session you attended. How memorable was it? 
Did you spend hours staring at your screen, passively absorbing information you barely remembered the next day? If so, I know exactly how you feel. This approach to training is rarely effective: it\u0026rsquo;s uninspiring and inefficient.\nNow, imagine a different kind of learning experience, one where you are put in real-world dilemmas, where you have to make choices, and see the direct consequences of your decisions. This is the power of branching scenarios.\nWhy traditional education and training falls short #Most corporate training is designed to check a box. I’ve sat through those long lectures myself, skimmed through endless slides, often just full of bullet points. If it\u0026rsquo;s in-person training, the room may be full, and the bigger the class, the fewer questions get asked. But when real-world situations arise in the workplace, such as a personal data breach, an ethical dilemma, or a decision about AI governance, you don’t have time to search your memory bank for that one slide from three months ago.\nBranching scenarios change this dynamic by transforming passive learning into active problem-solving. Instead of dumping information on you and hoping some of it sticks, these scenarios force you to think critically, make decisions, and see the results of your choices in real time.\nTraining tailored to your corporate reality #No two companies are the same. The challenges you face are unique to your industry, your company culture, and your specific regulatory environment. That’s why generic training modules are often a poor fit. Branching scenarios can be designed around the actual dilemmas you\u0026rsquo;re likely to encounter.\nFor example, in a financial services context, you could navigate a scenario where you must decide whether to flag a suspicious transaction for potential money laundering. 
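Under the hood, a branching scenario is simply a small decision graph: each node holds a situation, each choice is an edge to the next node, and the path you take is the story you experience. As a rough sketch in Python (the node names, choices, and outcome text here are illustrative only, not taken from any real training module), the money-laundering example might be modelled like this:

```python
# Minimal sketch of a branching scenario as a decision graph.
# Node names, choices, and outcome text are illustrative, not from
# any real client scenario or authoring tool.

SCENARIO = {
    "start": {
        "prompt": "A customer wires a large sum through several new accounts. What do you do?",
        "choices": {
            "flag": "flagged",     # escalate to compliance
            "ignore": "ignored",   # carry on as normal
        },
    },
    "flagged": {
        "prompt": "Compliance reviews the transaction and files a report. Well done.",
        "choices": {},  # terminal node
    },
    "ignored": {
        "prompt": "Months later, regulators trace the funds back to your branch.",
        "choices": {},  # terminal node
    },
}

def play(scenario, decisions):
    """Walk the graph following a list of decisions; return the path of nodes visited."""
    node, path = "start", ["start"]
    for decision in decisions:
        choices = scenario[node]["choices"]
        if decision not in choices:
            raise ValueError(f"'{decision}' is not a valid choice at '{node}'")
        node = choices[decision]
        path.append(node)
    return path
```

A real authoring tool layers media, scoring, and LMS export on top, but the underlying structure is this simple: because every choice is an edge to a specific next situation, a wrong turn can lead to a realistic consequence node rather than a generic "incorrect" message.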
In a tech company, a branching scenario could focus on ethical AI governance, guiding you through the nuances of algorithmic bias and data protection concerns.\nRather than being told what you should do, branching scenarios let you experience, or at least get a sense of, what happens when you make the wrong choice, and that lesson is far more memorable than any slide deck.\nLearning from real-world events #Training is most effective when it’s relevant. Branching scenarios can be built around actual events, such as personal data breaches, security hacks, or regulatory violations, to ensure you see the real consequences of decisions made in the workplace.\nFor example, you can take any case involving a company that may have come under the scrutiny of a Supervisory Authority and re-enact the sequence of events that the investigation reveals. Or it could be a company that\u0026rsquo;s recently suffered a cyber incident due to an employee clicking on a phishing email. A branching scenario could replicate that situation, allowing you to choose whether to open an email, report it, or ignore it. If you choose incorrectly, you witness the full-scale impact of a cyberattack, and all the consequences that could arise. During Data Protection Week a month ago, I developed a branching scenario for a client that had recently updated its policy for incident reporting and used this very example.\nLearning from the misfortune of others, especially if you know deep down that the same incident could have happened to your company, is something I encourage my clients to invest in.\nThis kind of training doesn’t just educate, it sticks. When you personally experience the fallout of your decisions (even in a simulated environment), you are far more likely to internalise the lesson and apply it in the real world.\nBridging the gap between policy and practice #How often do you update your policies? 
Even if it\u0026rsquo;s regularly, how confident are you that your employees fully understand how those policies apply to their daily work, in their contexts?\nBranching scenarios can be used to support the roll-out of new policies, processes, or procedures by embedding your employees in realistic situations where they must apply those policies. Rather than simply being told about a new AI Acceptable Use policy, for example, a scenario could require your employees to determine the correct use of GenAI tools depending upon the type of data. This ensures that they not only understand the new policy but also practise applying it in a risk-free environment.\nPerfect for AI governance, data protection, and beyond #Few areas of corporate training require as much nuance and critical thinking as AI governance and data protection. These are fields where the right decision isn’t always clear-cut, and employees must navigate grey areas where the wrong choice can have significant ethical, legal, or financial consequences.\nBranching scenarios are particularly valuable in these areas because they encourage employees to think through complex situations and weigh up conflicting priorities. Should a machine learning model prioritise accuracy over fairness? What should an employee do if they suspect customer data is being misused? These are not questions you can answer by trying to recall slides of endless bullet points. Employees need practice making decisions in scenarios before they face them in the reality of their workplace.\nIntegration with your corporate ecosystem #Branching scenarios are not only engaging but also highly adaptable to your company’s existing systems and branding. The graphical interface can easily be aligned with your company\u0026rsquo;s corporate visual identity, ensuring a seamless look and feel that integrates effortlessly with your existing internal training materials. 
In the graphic accompanying this post, I\u0026rsquo;ve shown a few variations in terms of style, but in reality, there are many options.\nAlso, the finished branching scenario can be added to your Learning Management System (LMS) via an LTI connection, embedded into the SharePoint pages on your intranet, or accessed directly from our hosting provider. This flexibility allows you to deploy training in a way that best suits your company’s needs while ensuring easy accessibility for employees.\nThe future of training is interactive #The days of monotonous training sessions should be behind us. We don’t need more slides, more lectures, or more memorisation drills. Your employees need training that is engaging, immersive, and applicable to their daily work.\nSo, the next time you\u0026rsquo;re planning an employee training programme, ask yourself: do your employees really need another PowerPoint presentation? Or is it time to step into the future with branching scenarios?\nInterested? Feel free to get in touch to arrange a no obligation call to discuss your requirements, where I\u0026rsquo;ll also happily share some demos of branching scenarios.\nAs a side note, I first came across this approach to employee engagement back in the late 90s whilst working on a project for a Danish pharma giant who were publishing their groundbreaking Social Report online. We jointly developed a number of dilemmas supporting their policies in this area. 
One that stood out was a dilemma around what to do if you discovered a colleague secretly consuming alcohol at work!\n","date":null,"permalink":"/say-goodbye-to-boring-training-the-power-of-interactive-scenarios/","section":"Learnings","summary":"","title":"Say Goodbye to Boring Training: The Power of Interactive Scenarios"},{"content":"","date":null,"permalink":"/services/","section":"Services","summary":"","title":"Services"},{"content":" Technologies that facilitate employee monitoring | Purpose and Means Technologies that facilitate employee monitoring #There is no end of solutions on the market that contain a vast array of technologies to facilitate employee monitoring. From time tracking software that monitors task durations to sophisticated AI that analyses behavioural patterns, these technologies are diverse.\nTRACKING USERSEMPLOYEE MONITORINGEXPLAINER\nTim Clements\n6/20/2024 4 min read\nThere is no end of solutions on the market that contain a vast array of technologies to facilitate employee monitoring. From time tracking software that monitors task durations to sophisticated AI that analyses behavioural patterns, these technologies are diverse. For example, keyloggers track every keystroke, providing insights into user behaviour. Network monitoring oversees network traffic to secure data and prevent unauthorised access, yet it must be managed carefully to avoid overreach.\nOften integrated into sophisticated packages, these technologies automate employee monitoring practices that have been around for over a hundred years. See this blog post to get an understanding of the history and origins of employee monitoring.\nUnderstanding these tools\u0026rsquo; capabilities and risks is crucial for their ethical and legal application. 
To help guide you, we\u0026rsquo;ve produced this infographic summarising some of the key technologies available; there are many more, and they are evolving at a rapid pace.\nComputer and internet monitoring #Keystroke logging: captures every keystroke made on a computer, allowing employers to see what employees type, which can include emails, documents, and chat messages.\nWebsite and application tracking: monitors which websites and applications employees access during work hours to ensure they are using company resources appropriately.\nUser Behaviour Analytics (UBA): utilises advanced algorithms to assess patterns of user activity on company networks to identify anomalies that could indicate insider threats or security risks.\nEmail monitoring: involves scanning and analysing inbound and outbound emails for content compliance, security risks, and productivity assessment.\nNetwork monitoring: observes data flow over a company’s network to detect unauthorised access, data breaches, and to ensure network security and efficiency.\nSoftware utilisation tracking: measures how employees use software applications to manage licensing compliance and to optimise software investments based on actual usage.\nVideo surveillance # CCTV: Closed-Circuit Television (CCTV) is a type of video surveillance system that uses cameras to monitor and record activities in specific areas for security and monitoring purposes.\nLive monitoring: real-time observation using video cameras to ensure workplace safety, security, and compliance with company policies.\nMotion detection cameras: cameras that start recording only when motion is detected, helping to save storage space and focus monitoring on periods of activity.\nThermal imaging cameras: used to detect heat patterns, which can be used in specific industrial settings for safety and also to track human movement.\nGPS and location tracking # Vehicle tracking: utilises GPS devices installed in company vehicles to monitor their 
locations and movements to optimise fleet management.\nMobile device tracking: tracks the location of company-provided mobile devices to ensure that employees are at their assigned locations during work hours.\nGeofencing: sets up virtual boundaries that trigger alerts when an employee enters or leaves a specific geographical location, used for ensuring compliance with operational constraints.\nAsset tracking: utilises technologies like GPS and RFID chips to monitor and manage the location and status of valuable assets, such as equipment or merchandise, in real-time to enhance security and operational efficiency.\nDriver behaviour monitoring: employs GPS and other sensor technologies to analyse driving patterns, such as speed, braking, and cornering, to ensure safe driving practices and optimise fleet performance.\nBiometric monitoring # Fingerprint scanners: commonly used at entry points or on time clocks to verify an employee’s identity based on their unique fingerprint.\nFacial recognition systems: used for security access and timekeeping by analysing facial features to confirm an individual\u0026rsquo;s identity.\nVoice recognition: utilises voice patterns to authenticate identity, often used in secure communication systems.\nHand geometry: measures the shape and size of an employee’s hand for access control, typically in high-security areas.\nBehavioural biometrics: analyses behavioural patterns like typing rhythm, mouse movements, and walking patterns for continuous verification of authenticated users.\nPhone call monitoring \u0026amp; recording # Call recording: records calls to or from company phones, often used in customer service and sales environments to monitor quality and compliance.\nVoice analytics: uses advanced algorithms to analyse spoken content for tone, stress, and emotional sentiment to improve customer interactions and employee training.\nAutomated call scoring: uses pre-defined criteria to automatically evaluate the quality of calls handled 
by employees, aiding in performance reviews and training.\nSentiment analysis: applies natural language processing tools to detect emotional tones in voice communications, which can inform customer service strategies.\nPredictive analytics: uses data from recorded phone calls to forecast trends and patterns in employee performance, customer satisfaction, and operational efficiency, helping managers make informed decisions and improve future interactions.\nEmail and messaging archiving # Keyword monitoring: scans emails and messages for specific keywords that may indicate compliance issues, security threats, or other concerns.\nAttachment control: monitors and regulates files attached to emails to prevent the spread of malware and the leakage of sensitive information.\nVoice analytics: uses advanced software to analyse tone, stress levels, and spoken content to gauge performance and compliance.\nConversation threading: organises emails and messages into threads, making it easier to review the full context of conversations for audits and compliance checks.\nComprehensive archiving: this involves the systematic storage of electronic records, such as emails, chat messages, and documents, to ensure that all communications are retained securely over time. In employee monitoring, comprehensive archiving helps organisations maintain detailed records of employee communications for various purposes including historical reference, performance review, and legal compliance. It ensures that data is accessible for audits or investigations, providing a reliable resource for understanding past activities.\nCompliance review: this refers to the regular examination of stored communications and other monitored data to ensure that the organisation\u0026rsquo;s activities are in line with legal standards, industry regulations, and internal policies. 
Compliance reviews are critical in employee monitoring as they help organisations verify that the monitoring practices themselves, as well as employee actions, adhere to relevant laws and regulations (such as GDPR or HIPAA). These reviews aid in identifying and mitigating risks associated with non-compliance, which can include legal penalties and reputational damage.\nDo you find these infographics useful? We research and produce one-pagers, explainers and similar infographics for a wide range of topics, and have found they are a popular addition to our clients\u0026rsquo; employee engagement activities, supporting the education and training programmes we provide.\nInterested? Book a discovery call to hear more about how Purpose and Means can support your own employee engagement activities, putting original, contextual and highly visual material in front of your company\u0026rsquo;s employees.\n","date":null,"permalink":"/technologies-that-facilitate-employee-monitoring/","section":"Learnings","summary":"","title":"Technologies That Facilitate Employee Monitoring"},{"content":" Making data protection part of the conversation #Empowering communication. Inspiring engagement. Delivering results.\nEmployee and management engagement isn\u0026rsquo;t just about information — it\u0026rsquo;s about connection. At Purpose and Means, we specialise in helping digital leaders cut through noise, craft clear messages, and cultivate a culture of common understanding around data protection, AI, and their importance in achieving organisational goals.\nWhat we do #Our Virtual Communications Support services are tailored to the unique needs of data protection leaders. Whether you\u0026rsquo;re launching a new initiative or maintaining momentum, we provide tools and strategies to ensure your messages resonate and drive action.\nWhy it matters #Engagement isn\u0026rsquo;t just about sharing information — it\u0026rsquo;s about building a deeper understanding of your mission and goals. 
We help you transform data protection into a core part of your organisational culture, driving better decisions and stronger alignment.\nOur approach includes # Channel analysis and roadmapping: We analyse your available communication platforms and design a strategic engagement roadmap, ensuring consistent and impactful outreach over the year. Regular alignment and planning: Weekly or fortnightly calls to refine plans, approve content, and stay aligned with your evolving needs. Content creation and delivery: News updates: Regular, engaging posts crafted with sector-specific insights and accompanied by visuals. Portal management: Establishing registers for FAQs, event logs, and issue tracking on platforms like SharePoint. Dynamic video content: Filmed interviews and short animations that humanise your message and bring real-world relevance to data protection topics. Baseline metrics and ongoing measurement: We measure impact, track engagement, and provide monthly updates to ensure your communication is hitting the mark. Example initiatives # Data protection awareness events: From Data Protection Day/Week/Month to themed engagement weeks, we create experiences that educate and inspire employees. Bespoke visual content: Short animations and one-page explainers contextualised to employees\u0026rsquo; roles, making complex topics accessible and actionable. Interactive workshops: Full-day sessions to align teams on critical issues, such as the impacts of new AI and data laws, turning challenges into actionable roadmaps. Let\u0026rsquo;s work together #Ready to transform communication into connection? Let\u0026rsquo;s work together to engage your teams and align your organisation with purpose. 
Contact us today to start the conversation.\n","date":null,"permalink":"/virtual-communication-support-for-effective-employee-engagement/","section":"Services","summary":"","title":"Virtual Communication Support for Effective Employee Engagement"},{"content":" Virtual communications support that cuts through the noise | Purpose and Means Virtual communications support that cuts through the noise #The field we work in can feel overwhelming. It’s full of regulations, acronyms, and technical terms that intimidate some people or send them running for the hills. If your employees don’t understand it, they won’t act on it.\nEMPLOYEE AWARENESSDATA PROTECTION DAYDATA PROTECTION LEADERSHIPVIRTUAL COMMS. SUPPORT\nTim Clements\n1/17/2025 4 min read\nJanuary is typically one of the busiest months of the year for me and that\u0026rsquo;s because clients are making final preparations for Data Protection Week, a huge opportunity for them to engage their employees on the latest data protection, AI and all the other related issues.\nThis year is no different, except the topics are more diverse than ever:\nAvoiding AI deception online\nBeing prepared for data breaches\nGenAI prompt engineering\nData protection\u0026rsquo;s role in ESG\nSafe use of GenAI\nLatest update on tracking technologies\nStrategic horizon scanning\nAwareness of various new laws in different jurisdictions\nAnd honestly, I love it. The variety keeps me on my toes. It’s why I started Purpose and Means years ago: to help leaders communicate their complex topics to top management and employees so they actually get the concepts.\nVirtual communications support isn’t just about making noise. It’s about making an impact.\nHere’s how I do it.\nBreaking down the complex\nThe field we work in can feel overwhelming. It’s full of regulations, acronyms, and technical terms that intimidate some people or send them running for the hills. 
If your employees don’t understand it, they won’t act on it.\nMy role is to explain the complex, turning overwhelming information into resources that are clear, engaging, and practical.\nTake the typical 20-page policy document. Dry. Dense. Ignored.\nI transform that into something people want to interact with. Whether it’s:\nA short, interactive video to explain key concepts.\nAn interactive quiz to test their knowledge.\nA dilemma branching scenario that lets employees explore decision-making in real-world situations.\nOr my newest product: a digital interactive book or policy pack, where employees can click, navigate, and interact with dynamic content, all in one place.\nIt’s all about creating tools that employees can use in their day-to-day work.\nTailored to your needs\nEvery company\u0026rsquo;s situation is unique (obviously). Some clients need an urgent response to a specific issue, say, a new regulation in their sector. Others are looking for long-term support, like building awareness of AI risks or improving their ESG communications.\nOne size does not fit all. That’s why my work is always customised to the client’s needs. For example:\nIf you’re training your digital marketing or UI/UX team, I’ll go deep into the nuances of requirements and what they mean in their specific context.\nIf frontline employees are the target audience, I’ll create content that’s practical and easy to digest, again, in their context.\nI really do try to avoid generic stuff that only ticks a box. 
The format matters, and so does the tone.\nIt’s not just about what you want to say, it’s about what your employees need to hear.\nAnd that’s the difference between information that sits on a server and communication that actually sticks.\nWhy virtual communications support matters\nWhy invest in virtual communications support in the first place?\nBecause most data protection leaders are overwhelmed, and/or lack the knowledge and competences to put together a stakeholder engagement plan and execute it throughout the year.\nAnd your workplaces are overwhelmed too. Your employees already have overflowing inboxes, packed calendars, and too many priorities. One of the biggest challenges is ensuring your messages cut through the flood of other messages they are bombarded with from your peers, all of which look, sound and feel the same - same PowerPoint template, same generic corporate-speak.\nYou are competing for your employees\u0026rsquo; attention, so your material needs to stand out: it needs to be eye-catching, hard to avoid.\nIf you want to engage them, and I mean truly engage them, you need to meet them where they are, and that may involve tailoring content to whether they\u0026rsquo;re Gen X, Millennials or Gen Z - or even Boomers II.\nThat means:\nCommunicating in formats they’re familiar with.\nKeeping it concise.\nMaking it visually engaging and interactive.\nVirtual communications support ensures you’re not just checking a compliance box, you’re building a workforce that understands the “why” behind their actions.\nThe work I love\nWhat I love most about this work isn’t just being challenged to think differently and be creative. It’s the variety. One day, I’m crafting a branching scenario about AI ethics. The next, I’m designing an interactive policy pack on ESG. Then I’m producing an interactive video to explain the latest tracking technologies.\nAnd the best part? Knowing that this work has a real impact. 
When an employee tells me a scenario helped them understand how to respond to a data breach, or when a client says their team finally gets the importance of an updated policy, I know I’ve done my job.\nA partner for long-term success\nPurpose and Means isn’t just about delivering content. It’s about building a relationship with you and your company, a partnership based on understanding your goals and challenges.\n**My goal is to make you look good.**\nI want to help you build trust with your employees, enable engagement, and create a culture where data protection isn’t a burden, it’s a shared responsibility.\nSo, whether you’re planning for Data Protection Week, tackling new regulations or emerging tech, or looking for fresh ways to educate your workforce, let’s work together.\nWith Purpose and Means, you’re not just getting a service, you’re getting a strategic partner who’s passionate about turning complex topics into simple, engaging solutions.\nHow can I help you create a cadence of employee and management engagement throughout this year? Feel free to book a no obligation call to discuss your requirements.\n","date":null,"permalink":"/virtual-communications-support-that-cuts-through-the-noise/","section":"Learnings","summary":"","title":"Virtual Communications Support That Cuts Through the Noise"},{"content":" What is a Data Protection Strategy and why your company needs one? | Purpose and Means What is a Data Protection Strategy and why your company needs one? #Discover the essential role of a data protection strategy for business success. 
Learn why it\u0026rsquo;s more than just compliance and how it safeguards your future.\nDATA PROTECTION LEADERSHIPDATA PROTECTION PURPOSE AND STRATEGYGOVERNANCE\nTim Clements\n3/24/2025 3 min read\nThe trust factor that keeps CEOs awake at night #When it comes to your company\u0026rsquo;s data protection activities, the right strategy can mean the difference between cultivating customer trust and watching it evaporate overnight.\nLet me be clear: A data protection strategy isn\u0026rsquo;t just some fancy document gathering dust in your digital filing cabinet. It\u0026rsquo;s the backbone of your business\u0026rsquo;s future survival in a climate where consumers are becoming increasingly discerning about who they trust with their personal data.\nWhat exactly IS a Data Protection Strategy? #In its simplest form, a data protection strategy is a roadmap that acknowledges where your business currently stands regarding data protection, and plots a course to where you need to be. It\u0026rsquo;s not a one-size-fits-all template that you can copy/paste from a competitor. It\u0026rsquo;s as unique as your business.\nYour strategy should account for your specific geography, industry requirements, technology infrastructure, business objectives, and – most critically – how dependent your success is on the trust equation between you and your customers.\nThink of it as your company\u0026rsquo;s commitment to processing personal data responsibly, backed by concrete policies, procedures, and practices that align with your business goals.\nBeyond compliance: The real reason you need this strategy #Forget about regulatory fines and penalties for a moment. Yes, they exist. Yes, they can be substantial. 
But here\u0026rsquo;s the truth most law firms won\u0026rsquo;t tell you: enforcement penalties aren\u0026rsquo;t what should keep you up at night.\nWhat should worry you is the loss of trust.\nThe reality for many companies is that trust is the most delicate and valuable currency you possess. Consumers aren\u0026rsquo;t just asking about your products anymore; they\u0026rsquo;re asking about your values. They\u0026rsquo;re asking how you handle their information. And they\u0026rsquo;re making purchasing decisions based on the answers.\nLook at Apple. Their \u0026ldquo;What happens on your iPhone stays on your iPhone\u0026rdquo; campaign wasn\u0026rsquo;t just clever marketing. It was a strategic business differentiator that recognised a fundamental shift in consumer expectations. People are prepared to pay more for products and services that align with their values – including data protection.\nThe hidden costs of not having a strategy #When your data protection work is perceived as merely a \u0026ldquo;necessary evil\u0026rdquo; or a legal checkbox exercise, you\u0026rsquo;re missing tremendous opportunities for business alignment and growth.\nWithout a proper strategy:\nYou\u0026rsquo;ll remain reactive instead of proactive – always putting out fires rather than preventing them.\nYou\u0026rsquo;ll struggle with disjointed efforts across departments because data protection will be seen as \u0026ldquo;that legal thing\u0026rdquo; rather than a business enabler.\nYou\u0026rsquo;ll waste resources on generic policies and procedures that don\u0026rsquo;t fit your company\u0026rsquo;s unique culture or operations.\nYou\u0026rsquo;ll miss opportunities to align with marketing, product development, and customer service initiatives that could actually benefit from strong data protection principles.\nYou\u0026rsquo;ll continue fighting an uphill battle for budget and executive buy-in.\nThe most damaging cost, however, is the lost opportunity to transform data protection from a 
perceived business hindrance into a powerful competitive advantage.\nBuilding your reputation on trust #Consumers are changing. Annual surveys consistently show that people expect companies to take charge of social issues alongside environmental concerns. Data protection has joined that list of expectations.\nWhen trust is broken through a data breach or mishandling of information, the consequences extend far beyond regulatory penalties. You lose:\nCustomer loyalty\nMarket reputation\nBusiness opportunities\nVendor relationships\nCompetitive advantage\nIn contrast, businesses that embrace data protection as a strategic imperative create a trust-based relationship with customers that translates into long-term loyalty and advocacy.\nCreating a strategy that works #An effective data protection strategy isn\u0026rsquo;t created in isolation. It must align with your overall business objectives and involve key stakeholders from across your company.\nYour strategy should include clear policies, contextual ways of working, meaningful reporting metrics, and appropriate governance structures. It should address behaviour and culture, ensuring the right mindset and competencies exist throughout your company.\nMost importantly, it should transform data protection from an isolated compliance function into an integrated business enabler that supports growth while protecting what matters most.\nThe bottom line #The question isn\u0026rsquo;t whether your business needs a data protection strategy. The question is whether you want to be perceived as a trustworthy \u0026ldquo;guardian\u0026rdquo; of personal information or a company that treats it as an afterthought.\nThese days, as consumers increasingly vote with their wallets based on company values, the answer to that question could determine your business\u0026rsquo;s long-term future.\nThe time to develop your data protection strategy isn\u0026rsquo;t after a breach occurs. 
It\u0026rsquo;s now – before your reputation and customer trust are put to the test.\nDoes this resonate? Take a look at a couple of client cases and also our service page for Data Protection Purpose and Strategy and then feel free to get in touch to arrange a no obligation call to discuss your current situation and challenges with a view to making tangible improvements.\n","date":null,"permalink":"/what-is-a-data-protection-strategy-and-why-your-company-needs-one/","section":"Learnings","summary":"","title":"What Is a Data Protection Strategy and Why Your Company Needs One"},{"content":" Why Branching Scenarios are the ultimate tool for employee engagement | Purpose and Means Why Branching Scenarios are the ultimate tool for employee engagement #Branching scenarios transform employee education by creating interactive, real-world decision-making experiences that boost engagement, improve retention, and drive meaningful behavioural change.\nEMPLOYEE AWARENESSEDUCATION AND TRAININGAI, DATA PROTECTION AND ESGVIRTUAL COMMS. SUPPORT\nTim Clements\n2/7/2025 3 min read\nEmployee engagement isn’t just a buzzword, it’s a necessity for companies looking to boost performance, improve decision-making, and build a culture of continuous learning. Nowadays, traditional education and training methods often fall flat, failing to keep employees interested or drive real behavioural change.\nEnter branching scenarios, a powerful, interactive education method that immerses employees in real-world challenges, requiring them to think critically, make decisions, and see the consequences of their actions. Whether you’re educating employees on ESG (Environmental, Social, and Governance) policies, data protection, leadership skills, or customer service, branching scenarios offer a high-impact, engaging learning experience.\nSo why are branching scenarios so effective, and how can they transform employee education?\n1. 
Realistic, decision-based learning #One of the key advantages of branching scenarios is their real-world application. Unlike passive learning methods (such as lectures or static e-learning courses), branching scenarios place employees in dynamic, decision-making roles.\nFor example, imagine a Data Protection Officer (DPO) faced with an ESG-related data protection challenge. Instead of reading about best practices, they navigate a realistic simulation where each decision, whether to suggest the company conduct a Data Protection Impact Assessment (DPIA), advise on an AI bias issue, or keep compliance efforts minimal, affects the outcome.\nBy mimicking real-life complexities, employees engage deeply and retain more knowledge compared to traditional training.\n2. Immediate feedback and consequence awareness #Many employees don’t just want to know the right answer, they want to understand why it’s the right answer. Branching scenarios give immediate feedback, showing employees how their decisions impact compliance, ethics, business outcomes, and stakeholder trust.\nFor example, in a scenario about Green IT and Data Storage, an employee might need to balance data minimisation and purpose limitation against environmental impacts. Instead of simply stating \u0026lsquo;this is wrong\u0026rsquo;, the scenario demonstrates the consequences: increased carbon footprint, GDPR non-compliance, and ESG reputational risks.\nThis type of cause-and-effect learning is far more effective than passive training because employees experience the results of their choices firsthand.\n3. High engagement through interactivity #Employee engagement suffers when learning is passive and repetitive. 
PowerPoint slides, long policy documents, and traditional e-learning modules often result in low retention rates because they fail to hold employees\u0026rsquo; attention.\nBranching scenarios, on the other hand, turn learning into an interactive experience, one that feels like a game rather than a lecture. Employees must:\nMake decisions under realistic constraints\nExplore multiple outcomes based on their choices\nProblem-solve and strategise, rather than memorise facts\nThis keeps employees engaged and motivated, making learning more immersive and impactful.\n4. Safe space for learning from mistakes #One of the biggest barriers to effective decision-making in the workplace is the fear of making mistakes. Employees often hesitate to take action because they worry about the consequences.\nBranching scenarios provide a safe, simulated environment where employees can make mistakes, explore different approaches, and learn without real-world risks.\nFor example, in a scenario about AI bias in hiring, an HR manager may initially ignore an AI-driven discrimination issue. When the scenario reveals that this choice leads to GDPR fines and reputational damage, the employee experiences the mistake without real-world consequences.\nThis kind of fail-safe learning environment helps employees develop better critical thinking and decision-making skills over time.\n5. Aligns with business goals \u0026amp; compliance needs #Branching scenarios don’t just enhance learning, they can be strategically designed to align with key business objectives.\nFor instance, companies focused on ESG compliance can use branching scenarios to:\nTrain employees on ethical AI practices\nTeach sustainable data storage habits\nImprove transparency in ESG reporting\nBy integrating compliance-driven learning into engaging, decision-based training, businesses can ensure that employees are both informed and motivated to act responsibly.\n6. 
Customisation for different roles \u0026amp; departments #One-size-fits-all training rarely works because different employees face different challenges in their roles.\nBranching scenarios offer customised learning paths, allowing employees to experience role-specific situations that apply directly to their job functions.\nFor example:\nA customer service agent may navigate a scenario on handling consumer data responsibly.\nAn IT administrator may explore a cybersecurity breach scenario and decide how to respond.\nA sustainability officer may manage a scenario on balancing carbon footprint reduction with business needs.\nBy personalising training experiences, branching scenarios make learning relevant and valuable to each employee.\n7. Supports data-driven learning \u0026amp; improvement #One of the biggest advantages of digital learning is the ability to track progress and measure effectiveness. Our solution can integrate seamlessly into your LMS via LTI, ensuring all learner data and performance metrics flow directly into your system.\nBranching scenarios capture data on employee decision-making, allowing companies to:\nIdentify common knowledge gaps\nAssess decision-making trends\nRefine training programmes based on real user data\nFor example, if most employees struggle with AI transparency requirements, the company can adjust future training to focus more on that topic.\nThis data-driven approach ensures that training is continuously improving and addressing the most critical knowledge gaps.\nIf your company wants to move beyond checkbox compliance and create a true culture of learning, it’s time to start using branching scenarios in your training programmes.\nInterested and want to know more? 
Feel free to book a no-obligation call to discuss your requirements.\n","date":null,"permalink":"/why-branching-scenarios-are-the-ultimate-tool-for-employee-engagement/","section":"Learnings","summary":"","title":"Why Branching Scenarios Are the Ultimate Tool for Employee Engagement"},{"content":"Pioneering Women in AI #A journey through time and innovation\nThis interactive educational resource shows how women have shaped the development of AI from Ada Lovelace\u0026rsquo;s pioneering algorithms in the 1840s, through generations of mathematicians, coders, and ethics leaders who pushed boundaries in computing, programming, robotics, fairness research, and modern AI governance.\nDespite historical under-recognition, women such as Ada Lovelace, Grace Hopper, Margaret Hamilton, Fei-Fei Li, Joy Buolamwini, and Joelle Pineau have been crucial to developing AI\u0026rsquo;s capabilities and ensuring its positive impact on society.\nThis learning resource can be supplemented with knowledge checks and quizzes, and can be integrated with a number of learning management systems (LMS). If your organisation is a non-profit or educational institution, we normally allow you to embed the interactive infographic into your web pages or intranet pages for free. Email us for more information.\nThis resource is best viewed on a desktop computer or tablet. Although it can be viewed on a smartphone, the interface may appear cluttered.\nAbout this resource #This interactive timeline showcases the significant contributions of women to AI, computing, and related fields across nearly two centuries — from Mary Somerville in the late 18th century to contemporary leaders shaping ethical AI today.\nMary Somerville (1780–1872): Mathematician and science writer who mentored Ada Lovelace. Mary Shelley (1797–1851): Author of Frankenstein, exploring scientific ethics and creators\u0026rsquo; moral duties. 
Ada Lovelace (1815–1852): Created the first computer program, working on Babbage\u0026rsquo;s Analytical Engine. Rózsa Péter (1905–1977): Established vital theoretical foundations in recursive function theory. Grace Hopper (1906–1992): Created the first compiler and championed the COBOL programming language. Mary Kenneth Keller (1913–1985): First person to earn a PhD in computer science in the US. Hedy Lamarr (1914–2000): Co-developed frequency-hopping spread spectrum technology, the foundation for Wi-Fi, Bluetooth, and GPS. Betty Holberton (1917–2001): Invented breakpoints for debugging and wrote the first sort-merge generator. Katherine Johnson (1918–2020): NASA mathematician whose trajectory calculations were essential for the Apollo moon missions. Kay McNulty (1921–2006): One of the original six ENIAC programmers, helping establish foundational principles of software engineering. Frances Spence (1922–2012): One of the original six ENIAC programmers, pioneering computer programming during WWII. Marlyn Meltzer (1922–2008): Original ENIAC programmer who laid early software engineering foundations. Ruth Teitelbaum (1924–1986): Original ENIAC programmer who later trained the next generation of computer programmers. Evelyn Boyd Granville (1924–2023): Second African-American woman to earn a PhD in mathematics; developed programs for NASA\u0026rsquo;s Apollo missions. Jean Bartik (1924–2011): Original ENIAC programmer who helped develop early software methodologies for BINAC and UNIVAC I. Jean Sammet (1928–2017): Created the FORMAC programming language and helped design COBOL. Gladys West (1930–2026): Her mathematical modelling of Earth\u0026rsquo;s shape became the essential foundation for GPS. Annie Jean Easley (1933–2011): NASA mathematician who developed essential software for the Centaur rocket stage. Karen Spärck Jones (1935–2007): Developed inverse document frequency (IDF), revolutionising information retrieval and powering modern search engines. 
Margaret Boden (1936–Present): Advanced understanding of how AI systems can simulate human creativity and consciousness. Donna Haraway (1944–Present): Known for \u0026ldquo;A Cyborg Manifesto,\u0026rdquo; a foundational text in posthumanist theory and techno-feminism. Sherry Turkle (1948–Present): Studies the psychological and social implications of human-computer interaction. Lucy Suchman (1949–Present): Shifted human-computer interaction by showing that human activities are improvisational rather than strictly planned. Shoshana Zuboff (1951–Present): Introduced the concept of \u0026ldquo;surveillance capitalism.\u0026rdquo; Radia Perlman (1951–Present): Known as the \u0026ldquo;Mother of the Internet\u0026rdquo; for inventing the Spanning Tree Protocol (STP). Nonny de la Peña (1957–Present): Pioneer of immersive journalism known as the \u0026ldquo;Godmother of Virtual Reality.\u0026rdquo; Cynthia Dwork (1958–Present): Transformed data protection by developing differential privacy. Lisa Nakamura (1963–Present): Pioneered the study of internet identity and how race, gender, and power dynamics manifest in virtual environments. Martha Wells (1964–Present): Author of the Murderbot Diaries, exploring AI consciousness, free will, and autonomy. Maja Matarić (1965–Present): Develops socially assistive robots for education, healthcare, and people with special needs. Rose Luckin (1966–Present): Specialises in using AI to develop personalised learning technologies. Latanya Sweeney (1966–Present): Exposed vulnerabilities in de-identified data and introduced the k-anonymity privacy framework. Cynthia Breazeal (1967–Present): Transformed human-robot interaction by developing machines capable of recognising and responding to human emotions. Cordelia Schmid (1967–Present): Developed algorithms enabling computers to recognise objects, actions, and scenes in images and video. Daphne Koller (1968–Present): Advanced probabilistic models in AI and co-founded Coursera. 
Nuria Oliver (1970–Present): Develops innovative, data-driven solutions for global development and healthcare. Regina Barzilay (1970–Present): Applies NLP and deep learning to medical imaging for early cancer detection. Ayanna Howard (1972–Present): Known for developing assistive robotics for children with disabilities. Safiya Noble (1972–Present): Author of Algorithms of Oppression, analysing how search algorithms perpetuate racial and gender biases. Virginia Eubanks (1972–Present): Author of Automating Inequality, investigating how algorithmic decision-making reinforces social inequality. Cathy O\u0026rsquo;Neil (1972–Present): Author of Weapons of Math Destruction, pioneering the concept of algorithmic harm. Nnedi Okorafor (1974–Present): Pioneered the African futurism literary movement. Fei-Fei Li (1976–Present): Created the ImageNet database, a catalyst for the deep learning revolution. Kate Crawford (1976–Present): Co-founder of the AI Now Institute, critically examining the social implications of AI. Ruha Benjamin (1978–Present): Studies the \u0026ldquo;New Jim Code,\u0026rdquo; analysing how structural inequalities are embedded into technological design. Rana el Kaliouby (1978–Present): Pioneered algorithms enabling machines to recognise human emotions. Kashmir Hill (1981–Present): Investigative journalist exploring the ethical implications of digital surveillance. Timnit Gebru (1983–Present): Leading researcher in AI ethics, widely recognised for work on algorithmic bias and large language model risks. Anca Dragan (1986–Present): Focuses on enabling machines to understand and predict human intentions for safer human-robot cooperation. Alice Xiang (1988–Present): Research focuses on algorithmic fairness, transparency, and mitigating bias in automated systems. Joy Buolamwini (1989–Present): Exposed racial and gender bias in facial recognition systems and founded the Algorithmic Justice League. 
","date":null,"permalink":"/women-in-ai/","section":"Learnings","summary":"","title":"Women in AI"},{"content":"AI Governance, Data Protection, and ESG (Environmental, Social, and Governance) should no longer be siloed departments. They are converging into a single operational imperative. If you are a Data leader, you are also an ESG leader. If you are an Infrastructure leader, you are also a Tech risk officer.\nFor the past few years, AI has, in some companies, been defined by speed: how fast can they build, how fast can they deploy, how fast can they disrupt. But our latest foresight analysis suggests that the era of \u0026ldquo;move fast and break things\u0026rdquo; is coming to an end.\nAs we move into 2026, a new megatrend is dominating the horizon: Convergence.\nAI Governance, Data Protection, and ESG (Environmental, Social, and Governance) should no longer be siloed departments. They are converging into a single operational imperative. If you are a Data leader, you are also an ESG leader. If you are an Infrastructure leader, you are also a Tech risk officer.\nBased on our latest radar developed and published this month, tracking over 50 technologies and signals, here are the four shifts that will define the next five years (an interactive version of the radar is available below).\n1. Compliance is in code\nThe days of vague \u0026ldquo;AI Ethics\u0026rdquo; principles on a website are gone. With the Institutionalisation of AI Governance, boards now face fiduciary liability for algorithmic failures.\nThe shift we see on the radar is purely operational. It is the move from policy to platform.\nThe insight: You cannot manage 2026-era regulation with spreadsheets.\nThe tool: We are seeing the rapid adoption of Automated AI Governance Platforms and Compliance Tools. These systems hard-code legal requirements into the development pipeline. If a model doesn\u0026rsquo;t pass the fairness check, it doesn\u0026rsquo;t deploy.\n2. 
Trust is the new procurement gate\nPerhaps the clearest signal in our research is the rise of Standards \u0026amp; Expectations. Trust is moving from a sentiment to a certificate.\nThe signal: We are seeing a significant increase in ISO 42001 adoption.\nThe reality: Within the next couple of years, a lack of certification will become a barrier to entry. Major buyers in finance, health, and the public sector will lock out vendors who cannot prove their governance.\n3. Digital is physical\nOur radar revealed a key megatrend, Scrutiny of AI’s Footprint. GenAI is a heavy industry. It consumes vast amounts of water and energy, and creates physical waste.\nThe crisis: By 2030, we face a potential E-Waste Crisis as millions of AI chips hit end-of-life.\nThe strategy: ESG strategies must cover the digital supply chain. This means implementing AI Carbon Accounting to see the true cost of compute, and mandating Circular Hardware procurement to ensure your old servers are recycled, not dumped.\n4. Geography is destiny\nData Sovereignty \u0026amp; Fragmented Digital Markets is a megatrend that points to a future where data sovereignty dictates IT architecture.\nThe shift: As geopolitical tensions rise, data laws are becoming borders.\nThe tech response: We are moving toward a \u0026ldquo;multi-sovereign\u0026rdquo; architecture. Companies are deploying Sovereign Clouds and smart Data-Localisation \u0026amp; Routing Tools that automatically keep German data in Germany and Canadian data in Canada. 
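To make the routing idea concrete, here is a minimal, hypothetical sketch of such a data-localisation rule. The region names and the route_storage_region function are illustrative assumptions for this post, not a reference to any specific vendor product or cloud region:

```python
# Hypothetical sketch: choose a storage region from a data subject's
# country, so personal data never leaves its jurisdiction.
# The rule table below is illustrative only.

RESIDENCY_RULES = {
    'DE': 'eu-germany',   # German data stays in Germany
    'FR': 'eu-france',
    'CA': 'ca-central',   # Canadian data stays in Canada
}

# Unknown jurisdictions fail closed to a sovereign default region
# rather than falling back to a global bucket.
DEFAULT_REGION = 'eu-germany'

def route_storage_region(country_code: str) -> str:
    return RESIDENCY_RULES.get(country_code.upper(), DEFAULT_REGION)
```

In a real deployment the rule table would be driven by policy metadata and legal review rather than hard-coded, but the pattern, a deterministic mapping from jurisdiction to storage location, is the core of these routing tools.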
Your infrastructure strategy is now a geopolitical strategy.\n2030 and beyond\nOur radar also picked up two faint but critical signals on the outer rim:\nQuantum is ticking: Companies with long-life data (e.g., pension records) must start their Post-Quantum Cryptography migration now, not in 2029.\nThe 6G economy: By 2030, AI-Native 6G will allow devices to autonomously negotiate their own connectivity and energy usage, creating a machine-to-machine economy that needs entirely new governance rails.\nTo conclude, the companies that will succeed in the next decade won\u0026rsquo;t just be the companies with the most powerful AI. They will be the companies with the most governed AI.\nThey will be the companies that successfully merge their CISO, CDO, and ESG mandates into a unified strategy. They will use technology not just to innovate, but to prove they can be trusted.\nThe radar below was made using the FIBRES tool - worth looking at if you wish to enhance your foresight capability. Need help in establishing foresight capabilities in your company? Get in touch to hear more about our service offering. And for information about how we can help your company build stronger collaboration across your existing silos, take a look at our AI, data protection and ESG service page.\n","date":"22 December 2025","permalink":"/the-convergence-of-ai-governance-data-protection-and-esg/","section":"Blog","summary":"","title":"The convergence of AI Governance, Data Protection, and ESG"},{"content":"This post introduces a work-in-progress Causal Loop Diagram, evolving from my earlier \u0026ldquo;ripple effects\u0026rdquo; model, that uses the QSEM tool to visualise the messy, interconnected reality of systemic data protection risks and feedback loops.\nIf you have worked in data protection for more than a week, you know that risk is rarely a straight line. You don’t just have a compliance gap that leads neatly to a fine. 
You have a messy, interconnected ecosystem where a decision made in product development today can trigger an investigation by a Supervisory Authority three years later, which cuts the budget for the very tools needed to fix the original problem.\nI have been trying to move away from flat, linear risk registers and visualise this reality.\nSome of you may remember the \u0026ldquo;ripple effects\u0026rdquo; diagram and post I published earlier this year. The diagram was actually made 2-3 years ago with close input from Jason Cronk. But as I dug deeper, I realised that a simple ripple didn\u0026rsquo;t capture a critical element of our work: feedback loops.\nWhat you see below is the evolution of that work. It is a work-in-progress Causal Loop Diagram (CLD) representing the systemic risks of processing personal data under frameworks like the GDPR. Click to enlarge the diagram.\nBefore diving into the map, credit where it is due. This diagram was created using the excellent QSEM (Qualitative System Dynamics structuring and analysis modeling) tool, developed by Adam Hulme.\nIf you are trying to structure complex qualitative problems, I highly recommend exploring it. It allows you to move beyond static flowcharts and model dynamic relationships between variables.\nWhat am I looking at?\nThis is a Causal Loop Diagram. Unlike a standard process diagram that shows how something happens sequentially, a CLD shows why things behave the way they do over time.\nOne important thing to note is that the diagram does not currently reflect the type of risk that data protection is fundamentally about: risks to the rights and freedoms of individuals. That will come in a future iteration.\nIt represents the data protection risk ecosystem of a typical global company operating in the EU. 
It maps the inputs (root causes), the operational engine of failure, the triggering of different risk types, and, importantly, how the financial consequences feed back into the system to either fix the problem or make it worse.\nHow to interpret the diagram\nDon\u0026rsquo;t let the initial complexity overwhelm you. Here is how to break down the elements:\n1. The coloured sub-systems\nTo make the map readable, I have grouped related factors into coloured sub-systems:\nPurple (top-left): Root causes \u0026amp; triggers. The foundational pressures, such as strategy, tech debt, and culture, that create risk.\nRed (centre): Core operational failures. Where daily processes break down, leading to gaps and personal data breaches.\nDark blue (top-right): Regulatory \u0026amp; legal consequences. The hard risks involving regulators and courts.\nOrange (bottom-right): Market \u0026amp; strategic consequences. The soft (but equally damaging) risks involving reputation and customers.\nLight blue (bottom-left): Financial impact. Where all roads eventually lead, the bottom line and the subsequent budget decisions.\n2. The connections (+ / -)\nThe arrows show causality, and the symbols on them indicate the direction of change:\nA blue plus sign (+) means \u0026ldquo;changes in the same direction.\u0026rdquo; If A increases, B increases. (e.g., More personal data breaches lead to more negative media).\nAn orange minus sign (-) means \u0026ldquo;changes in the opposite direction.\u0026rdquo; If A increases, B decreases. (e.g., More investment in security leads to fewer operational gaps).\n3. The metrics (n and %)\nYou will see numbers at the bottom of each node, for example: n=27 (35%).\nIt\u0026rsquo;s important to note that this diagram is a qualitative tool for understanding structure, not a quantitative tool for calculating financial exposure.\n\u0026lsquo;n\u0026rsquo; (in-degree): within this specific map structure, this indicates connectivity, i.e. 
how many other variables are directly influencing this node.\n\u0026lsquo;%\u0026rsquo; (relative weight): This is a structural metric calculated by the QSEM tool based on the map\u0026rsquo;s current configuration, indicating the relative centrality or potential structural impact of that node within this specific model.\nDo not read these as real-world probabilities or financial data. They are indicators of systemic connectivity within this map. To work with real-world probabilities and estimate financial losses, I recommend looking at the FAIR model.\nWhat story does the diagram reveal at the moment (remember, it\u0026rsquo;s still a work in progress)?\nBy tracing the flows, a clear narrative emerges about how data protection risks materialise.\nThe drivers: It starts in the top-left (purple). Pressures like Aggressive Data Monetisation and Legacy IT/Tech Debt put immense pressure on the system. When combined with a Weak Data Protection Culture, these pressures pour into the centre (red), increasing the Operational Compliance Gap Likelihood.\nThe triggering of risk: Once that operational gap widens, failures become inevitable: personal data breaches, ignored DSRs, or failed data transfers. These failures ripple outward to the right:\nThey trigger Regulatory Scrutiny (dark blue), leading to fines or crippling processing bans.\nSimultaneously, they fuel Negative Media (orange), eroding customer trust and market share.\nThe feedback loops: All these consequences converge in the bottom-left (light blue) as Aggregated Financial Losses. This is where the system reaches a decision point, modelled by two key feedback loops:\nThe vicious cycle: Financial losses create Pressure to Cut Operational Costs. If this pressure leads to divesting from data protection tools and resources, and deprioritising culture, the orange arrows show the path looping back to the start, increasing tech debt and weakening culture. 
The problem just made itself worse.\nThe corrective loop: Alternatively, if the financial pain, regulatory fear, or reputation loss is severe enough, it triggers Executive Board Attention. This attention forces Investment in Data Protection, creating a balancing loop (the blue arrow going straight up) that reduces the operational gaps.\nEmbracing the mess\nWhen you look at this diagram, your first thought might be, \u0026ldquo;That looks like a complete mess.\u0026rdquo;\nIt is messy because the reality we manage is messy.\nIf I showed you a neat, linear list of ten risks on a spreadsheet, I would be lying to you about how the world works. Real organisational risk is spaghetti.\nThis diagram is not something you throw up on a slide for a 5-minute board update (unless they specifically request a deep-dive into systemic root causes).\nBut as a Data Protection Leader, I hope you find this useful. It should help you identify where the real levers are. It helps explain why hiring two more data protection analysts won\u0026rsquo;t fix a problem that is rooted in a ten-year-old legacy IT strategy and a sales-first culture.\nThis is an ongoing project. The QSEM tool has much more analysis functionality, so this diagram will evolve. I will share further progress in the future.\nInterested to know more about this approach to mapping risk, or would you like to discuss your requirements and hear how Purpose and Means can help? Book a call here.\nContext\nFor further context, I have provided descriptions of the factors that are shown on the diagram:\nRoot causes \u0026amp; triggers\nAggressive data monetisation strategy A business strategy focused on maximising revenue from personal data (e.g., behavioural advertising, selling insights) that often outpaces data protection controls and ethical considerations.\nLegacy IT systems \u0026amp; tech debt Outdated infrastructure and fragmented databases that were not built for GDPR. 
They make it technically difficult to erase data, secure it properly, or find it for access requests.\nEvolving EU legal \u0026amp; regulatory environment The constantly shifting baseline of compliance. Includes new CJEU rulings (like Schrems), new guidelines from the EDPB, and relevant laws like the EU AI Act or Data Act.\nHigh-risk third-party vendor reliance Heavy dependence on data processors (e.g., cloud providers, payroll services) in jurisdictions or sectors with lower security standards. Under GDPR, you own their risks.\nWeak data protection culture A lack of genuine commitment from leadership (\u0026ldquo;tone from the top\u0026rdquo;). Employees prioritise speed/profit over compliance because they don\u0026rsquo;t believe privacy matters to the company.\nVolume/complexity of data processing The sheer scale, geographic spread, and sensitivity of data being handled. The more complex the processing (e.g., AI profiling), the harder it is to control.\nPressure to cut operational costs The immediate corporate reaction to financial losses or market downturns, leading to budget freezes across non-revenue generating departments like compliance.\nTone from the top / C-suite attention When data protection risks become so severe they hit the Board agenda. This is usually a reactive state driven by fear of personal liability or massive company damage.\nInvestment in data protection (budget) Budget allocated specifically to data protection headcount (DPOs, legal counsel), training programmes, and technical security tools (encryption, access controls).\nCore failures\nOperational compliance gap likelihood The increasing probability that daily operations are failing basic GDPR requirements (e.g., data minimisation, retention periods, lawful basis documentation).\nPersonal data breaches \u0026amp; security incidents A failure of technical or organisational measures (Art. 
32) leading to accidental or unlawful destruction, loss, alteration, or unauthorised disclosure of data.\nInefficient response to DSRs Failure to fulfil Data Subject Rights (access, erasure, rectification) within the one-month deadline due to bad processes or inability to find the data.\nCross-border data flow disruptions Legal inability to transfer personal data to third countries because transfer mechanisms (like SCCs) are deemed invalid or challenged by privacy activists (e.g. Schrems) or regulators.\nDPbDbD failure in new product dev Inability to implement \u0026ldquo;Data Protection by Design and Default\u0026rdquo; (Art. 25) during the innovation phase, meaning new products are launched non-compliant.\nRisks \u0026amp; ripple effects\nRegulatory scrutiny \u0026amp; investigations Regulatory risk: Being targeted by Supervisory Authorities for audits, formal inquiries, or demanding information based on complaints or breach reports.\nRegulatory enforcement: GDPR fines Regulatory/financial risk: The issuing of administrative fines by Supervisory Authorities.\nRegulatory processing bans Regulatory/operational risk: Orders from Supervisory Authorities under Art. 58(2) imposing a temporary or definitive ban on processing. This is often more damaging than a fine.\nOperational paralysis/interruption Business continuity risk: The immediate halt of core business functions because a processing ban stops you from using essential data or systems.\nData subject claims \u0026amp; class actions Legal risk: Lawsuits filed by individuals or collective redress bodies (like consumer unions) seeking compensation under GDPR Art. 
82.\nLegal costs \u0026amp; litigation settlements Legal/financial risk: The direct costs of defending lawsuits (external counsel fees, court costs) and the payouts required to settle claims.\nNegative media \u0026amp; scrutiny Reputational risk: Public exposure of data protection failures by journalists or advocacy groups (like NOYB), damaging the company\u0026rsquo;s public image.\nErosion of customer trust \u0026amp; brand Reputational risk: The measurable decline in consumer confidence and brand valuation resulting from perceived negligence with personal data.\nLoss of market share \u0026amp; customers Market risk: Existing customers leaving for competitors they trust more, and an inability to acquire new customers due to a toxic reputation.\nDelayed innovation \u0026amp; competitive loss Strategic risk: Being slower to market than competitors because products have to be redesigned to fix data protection flaws that weren\u0026rsquo;t caught early.\nM\u0026amp;A transaction failure/devaluation Strategic/financial risk: A merger or acquisition collapsing, or the purchase price being drastically reduced, because due diligence reveals massive hidden data protection liabilities.\nAggregated financial losses Financial risk: The total bottom-line impact combining fines, legal costs, lost revenue, operational downtime, and reduced company valuation.\n","date":"12 December 2025","permalink":"/the-messy-reality-of-data-protection-risk/","section":"Blog","summary":"","title":"The messy reality of data protection risk"},{"content":"As we gear up for Data Protection Week 2026, our diverse schedule of interactive sessions, ranging from Lego® Serious Play® to AI ethics, shows that data protection has evolved far beyond a legal box-ticking exercise.\nIf you still think data protection is strictly the domain of legal professionals, a look at our schedule for Data Protection Week 2026 might change your mind. 
The week centres around Data Protection Day (or Data Privacy Day if you prefer to use the more US-centric term) that occurs each year on 28 January.\nWhile compliance is the baseline, the real work happens in culture, design, and human behaviour. During the week around 28 January 2026, companies are moving away from dry legislative updates and asking Purpose and Means to facilitate sessions that drive real engagement.\nHere is a taste of what we have lined up so far:\nBuilding culture with Lego®: We are running a half-day workshop for a UK financial services company where the data protection leader gathers her team online across 30+ European entities. Instead of PowerPoint, we are using Lego® Serious Play® to establish a cohesive team identity and visualise what \u0026ldquo;working well\u0026rdquo; actually looks like.\nReal Lives, Real Stories: Exploring the events and cases that contributed to, or triggered, the need for specific data subject rights (right of access, right to be forgotten, right to data portability, etc.)\nNavigating AI ethics: We are helping employees at a US medical devices corporation address risks in the workplace and in their personal lives with a targeted session on recognising and avoiding AI deception.\nEmbedding Privacy by Design: We are facilitating an interactive Miro session for a US healthcare company, ensuring privacy considerations are baked into the architecture of a fictitious digital health app. 
I\u0026rsquo;m hoping to get the scope signed off this week so I can begin developing the Miro board.\nStrategic horizon scanning: We\u0026rsquo;ll be guiding a European financial services company through a strategic review of the significant shifts of 2025 and helping them prepare for what’s coming in 2026.\nDon\u0026rsquo;t limit awareness to one week (let alone one day)\nI always say to data protection leaders that this annual event is a tremendous opportunity for them to kick off their strategy or plan for the coming year, and ideally it\u0026rsquo;s the start of a cadence of employee engagement activities that should run throughout the year - not just on one day.\nWe still have a limited number of slots available during Data Protection Week. However, data protection or privacy culture isn\u0026rsquo;t built in five days.\nWhy not schedule a session before or after the rush? Whether you need to develop a compelling theme, deliver a keynote, or run an interactive workshop, Purpose and Means is ready to help you take data protection beyond the legal department and into the heart of your business.\nContact us today to book a call to get some ideas for your session.\n","date":"9 December 2025","permalink":"/data-protection-week-2026-beyond-the-dry-legal-updates/","section":"Blog","summary":"","title":"Data Protection Week 2026: Beyond the dry legal updates"},{"content":"Your Record of Processing Activities (RoPA) is a solid foundation. Now, let’s bring it to life by swapping legal questions for operational essentials.\nIf you have spent the last few years building a Record of Processing Activities (RoPA), you know it is a monumental task. Gathering information from across your company, categorising it, and getting it signed off is a significant achievement in governance. 
It creates a necessary inventory of obligations.\nBut as we look at the \u0026ldquo;Processing activities\u0026rdquo; box on the left side of my data protection ecosystem diagram below, ask yourself: does your RoPA show how the business actually processes personal data?\nA legal lens sees that box as a list of liabilities to be managed. A data protection leader with Business Analysis skills, or with access to business analysis competences in the company, sees it differently: they see a dynamic, complex system of user journeys and value streams.\nTo move from \u0026ldquo;compliant on paper\u0026rdquo; to \u0026ldquo;living and breathing in practice,\u0026rdquo; a logical add-on for a Data Protection Leader is to embrace the tools and techniques of a Business Analyst.\nThe standard approach to the RoPA is the questionnaire. It is an efficient way to cast a wide net, asking stakeholders about \u0026ldquo;Legal Basis,\u0026rdquo; \u0026ldquo;Retention Periods,\u0026rdquo; and \u0026ldquo;Data Subjects.\u0026rdquo;\nBut, as we well know, most business stakeholders are not legal experts. When they fill out these spreadsheets, they aren\u0026rsquo;t trying to hide things - they are simply trying to interpret legal terminology through the lens of their daily grind. They may share information with you about the official process, but they often leave out the small, necessary workarounds they use to get the job done because they don\u0026rsquo;t realise those workarounds are data processing activities.\nThis is a gap in translation. And it\u0026rsquo;s where Business Analysis comes in.\nBusiness Analysis is underrated\nI often say that among the best courses I have ever done in my career was my Business Systems Analysis Diploma from the BCS (British Computer Society) in the UK over 20 years ago. 
It included attending a series of certification courses in Edinburgh, Manchester, Bristol and London and studying for a long series of exams, and then finally some interviews at BCS, but boy did I learn a lot, coming away with a huge backpack of tools and techniques that I use to this day.\nWhile it wasn\u0026rsquo;t a data protection course, it gave me the foundation for my understanding of how businesses actually work. It taught me that a business isn\u0026rsquo;t just a collection of contracts and policies; it is a living ecosystem of processes designed to deliver value.\nIn the SFIA 9 framework, Business Analysis falls under Business Situation Analysis (BUSA) and is described as:\n\u0026ldquo;Investigating business situations to define recommendations for improvement action.\u0026rdquo;\nSFIA suggests the activities may include, but are not limited to:\nPlanning for business situation analysis\nEstablishing the investigative approach\nEngaging with relevant stakeholders\nReviewing the strategic context, including the organisation’s vision, mission, objectives, strategy and tactics and external business environment\nDefining problems and analysing root causes\nIdentifying potential changes to address problems or to take advantage of opportunities\nGaining agreement to conclusions and recommendations.\nIncidentally, if you have been reading this \u0026ldquo;Beyond Legal\u0026rdquo; series of blog posts you may remember this skill was also highlighted in an early post covering Why every data protection and AI governance leader needs SIRA competences in their toolkit.\nThis skill is the missing link in the centre of my ecosystem diagram - the red text labelled \u0026ldquo;Aligned.\u0026rdquo; It is the bridge between the Data Protection Strategy and the Business Purpose.\nHere is how applying a business analysis mindset to your work elevates your data protection programme:\n1. 
Elicitation\nLegal professionals often have to interrogate to establish facts. Business Analysts use elicitation to understand flows. It is a subtle but powerful shift in tone.\nInstead of asking, \u0026ldquo;List all data categories used in the event registration system,\u0026rdquo; a business analyst may ask, \u0026ldquo;Walk me through the delegate\u0026rsquo;s experience. When they sign up, what happens next? Oh, you ask about dietary needs? That’s great service. How do we make sure the catering team gets that info without seeing everyone\u0026rsquo;s home addresses or potential religious beliefs?\u0026rdquo;\nBy being curious about the business purpose, you uncover the data protection risks (such as special categories of personal data) naturally. You become a valued colleague in solving a business problem (catering logistics) rather than a \u0026ldquo;necessary evil\u0026rdquo; asking for a form to be filled in.\n2. \u0026ldquo;As-Is\u0026rdquo; and \u0026ldquo;To-Be\u0026rdquo;\nDocumenting the journey from \u0026ldquo;As-Is\u0026rdquo; to \u0026ldquo;To-Be\u0026rdquo; is a core domain of the Business Analyst.\nEmployees often create workarounds (just look at the contributing factors in some personal data breaches), using personal emails, local spreadsheets, or unsanctioned tools, not because they want to be non-compliant, but because they are trying to be efficient.\nA Business Analyst doesn\u0026rsquo;t just ban these practices. They map the \u0026ldquo;As-Is\u0026rdquo; reality, acknowledge the friction the employee is trying to solve, and design a \u0026ldquo;To-Be\u0026rdquo; process that is both compliant and efficient.\nYou don\u0026rsquo;t just close a gap - you improve the employee experience. That is how you help establish the \u0026ldquo;Motivated management \u0026amp; employees\u0026rdquo; shown at the bottom of my ecosystem diagram.\n3. 
Requirements Definition (REQM)\nOnce you understand the flow, you need to help the technical teams build the controls. This brings us to another key skill: Requirements Definition and Management (REQM).\nEngineers and Product Owners demand clear requirements. They struggle with vague principles. When we tell them to \u0026ldquo;implement Data Protection by Design and by Default,\u0026rdquo; we aren\u0026rsquo;t giving them a spec.\nUsing REQM skills, you act as the translator:\nData Protection Principle: \u0026ldquo;Data Minimisation.\u0026rdquo;\nFunctional Requirement: \u0026ldquo;The \u0026lsquo;Export to CSV\u0026rsquo; function must default to \u0026lsquo;Summary View\u0026rsquo; (5 fields). The \u0026lsquo;Full Data Dump\u0026rsquo; option must be disabled for all users except System Admins.\u0026rdquo;\nCall to action\nYou have already built the foundation with your policies and your RoPA. Now, consider upgrading your toolkit to bring your data protection work to life.\nHost \u0026ldquo;Discovery Sessions\u0026rdquo;: Instead of sending a reminder to update the RoPA, ask a department head for a 30-minute \u0026ldquo;process walkthrough\u0026rdquo; of their critical revenue stream.\nLearn the basics of diagramming: Learning to draw a simple data flow diagram is one of the highest ROI skills a data protection leader can acquire. It turns text-heavy policies into visual logic.\nValidate, don\u0026rsquo;t just verify: Verifying confirms a control exists on paper; validating confirms the process actually delivers value and compliance.\nWhen you understand the business process as well as the business owner does, you are no longer just protecting the data; you are protecting the value it creates.\n","date":"8 December 2025","permalink":"/beyond-legal-14-does-your-ropa-really-show-how-your-company-processes-personal-data-why-you-need-business-analysis-skills/","section":"Blog","summary":"","title":"Beyond Legal #14: Does your RoPA really show how your company processes personal data? 
(Why you need business analysis skills)"},{"content":"How mastering vendor management turns “third-party risk” into “third-party value” in an age of AI and Data Sovereignty.\nIf you look closely at the bottom right corner of my data protection ecosystem diagram below, you will see a cluster of icons that represents both the greatest risk/opportunity and the greatest complexity for many companies: Contractors, Contracts, 3rd Parties, and International Data Transfers.\nIn the earlier posts of this blog series, I focused heavily on the internal mechanics of companies. For example, building a risk culture, engineering data protection into your SDLC, and preparing teams for incident response, essentially \u0026ldquo;getting the house in order.\u0026rdquo;\nBut the reality of any \u0026ldquo;Data Fuelled Business\u0026rdquo; (as shown in the centre of the above diagram) is that no company is an island. To scale, innovate, and remain competitive, companies rely on an often-complex ecosystem of third parties, using SaaS providers for efficiency, cloud hosts for scalability, or specialised agencies for marketing operations. Your company\u0026rsquo;s data doesn’t just sit in your databases; it may be processed in places where you have no clue as to where, by whom, how, or when.\nFor years, the industry standard for handling this complexity has been the Data Processing Agreement (DPA), and the important work of our legal colleagues. They draft agreements that define liability, set the boundaries of processing, and navigate the complexity of cross-border transfer mechanisms. 
But remember, a contract should not be seen as a \u0026ldquo;static promise\u0026rdquo;; it needs to reflect the dynamic reality of data protection.\nTo ensure the DPA your legal colleagues drafted is worth more than the paper it is written on, and to ensure the data is actually safe once it leaves your direct control, you need to expand your competences beyond the legal domain by embracing the operational world of Procurement and Supply Chain Operations.\nAs I\u0026rsquo;ve referenced in earlier posts, I recommend the SFIA skills framework, and in this context, it\u0026rsquo;s time to add Sourcing (SORC) and Supplier Management (SUPP) to your competence profile.\nBefore I look into these competencies, I want to quickly cover the importance of mindset when dealing with third parties.\nI recall managing a compliance project on behalf of Group Finance at a major global corporation several years ago. The project was having big problems with a major vendor. Traditionally, the relationship had been managed on a strict \u0026ldquo;them and us\u0026rdquo; basis. Meetings were filled with aggro, accusations of breach of contract, and defensive posturing. It was exhausting, with lawyers, detached from the technical reality, in and out of meetings, and more importantly, it wasn\u0026rsquo;t moving my project forward.\nThen, the tone changed. We made a conscious decision to stop using the contract as a weapon and start treating the vendor as part of the team. We shifted the language from \u0026ldquo;compliance and penalties\u0026rdquo; to \u0026ldquo;partnership and problem-solving.\u0026rdquo;\nIt was a breath of fresh air. The wall came down. Once they felt treated as partners rather than \u0026ldquo;that lot,\u0026rdquo; their transparency increased, their willingness to go the extra mile improved, and the project succeeded, thankfully. 
Of course, every vendor relationship has its nuances and challenges from time to time.\nThis is the essence of Supplier (or Vendor) Management. It is not about being soft, it is about being effective.\nThe lifecycle view\nTo understand how to operationalise this partnership, let’s look at the lifecycle of a typical vendor relationship.\nThis diagram illustrates why legal skills alone are insufficient. The \u0026ldquo;Enter into a contract\u0026rdquo; phase is just one brief moment on the timeline. The real work happens before and after.\nSelection \u0026amp; Due Diligence: This is the entry point. This is where Sourcing (SORC) skills come into play. By performing strong due diligence here, you prevent problems downstream. You aren\u0026rsquo;t just checking if they can sign the contract, you are checking if they can actually protect the data.\nThe Contract: This formalises the selection but doesn\u0026rsquo;t guarantee the outcome.\nThe Loop (Process \u0026amp; Monitor): Notice how the large green arrows form a cycle? This is the \u0026ldquo;Process on documented instructions, and monitor\u0026rdquo; phase. This is where the relationship lives, often for years. This is the domain of Supplier Management (SUPP). It requires constant engagement, not a one-time signature.\nThe Exit (Delete or Return): Every relationship eventually ends. A diligent Supplier Manager plans for the off-boarding before the on-boarding even happens, ensuring data is safely returned or destroyed rather than lingering in a forgotten cloud bucket.\nNavigating the power dynamic\nOne of the areas that legal frameworks struggle with, but Supplier Management excels at, is the reality of power imbalance, because you will rarely find yourself in a partnership of equals. Here are a couple of realities you may be familiar with:\n1. The BigTech reality\nSometimes, no matter how much muscle your company has, you have to bow to the giants. 
When you contract with the major players (AWS, Microsoft, Google) or massive SaaS platforms (e.g. Salesforce), you cannot simply redline their DPA. You accept their terms, or you don\u0026rsquo;t use their service.\nThe Supplier Management approach: Here, the skill is not negotiation, but adaptation. You might accept the Shared Responsibility Model. Since you cannot change their contract, you prioritise securing your own configuration, encryption, and access controls. You focus on how you use the tool, rather than what the paper says.\n2. The niche innovation reality\nAt the other end of the spectrum, you may need to work with a small startup, perhaps a niche AI provider or a specialised scientific tool. They have the innovation you need, but they lack the resources to meet your standard Global Data Protection Requirements. They have no CISO, no ISO 27001, and no legal team.\nThe appreciative approach: A lawyer might say, \u0026ldquo;They are non-compliant - do not sign.\u0026rdquo; A Supplier Manager might say, \u0026ldquo;They are high-potential, let’s raise them.\u0026rdquo; Instead of bombarding them with requirements they cannot meet, you enter a \u0026ldquo;working at risk\u0026rdquo; period. You help them secure their environment. You accept the risk because the business value is high, and you mitigate it through close collaboration. This builds loyalty and eventually turns a risky vendor into a secure partner.\nData sovereignty\nLet\u0026rsquo;s talk about the rising tide of Data Sovereignty and Residency.\nWe are currently experiencing a global fragmentation of data laws. It is no longer just about \u0026ldquo;EU to US\u0026rdquo; data transfers. 
Countries across the Middle East, Asia-Pacific, and South America are enacting localisation laws requiring certain data to remain within national borders.\nInstead of viewing these laws as mere bureaucratic hurdles, sourcing skills - or working closely with your sourcing and procurement colleagues - will allow you to embrace these complex requirements.\nYou don\u0026rsquo;t just ask, \u0026ldquo;Is it secure?\u0026rdquo; You ask, \u0026ldquo;Where does it live, and can you guarantee it stays there?\u0026rdquo;\nBy working with procurement early, you can select vendors that offer \u0026ldquo;Sovereignty as a Service\u0026rdquo; - providers with local data centres or granular geographic controls. You ensure that your supply chain respects the local expectations of your customers, enabling trust in those markets.\nSub-processors\nWithin that \u0026ldquo;Green Loop\u0026rdquo; of monitoring in the earlier diagram, one of the most critical operational tasks is tracking Sub-processors.\nIf you\u0026rsquo;re using a Data Processor, your vendor is rarely the final destination for your personal data. They rely on their own ecosystem of hosting providers, support teams, and analytics tools.\nUnder the GDPR, as a Data Controller, your DPA gives you the right to be notified of sub-processor changes and to authorise or object to them. But operationally, these notifications often end up in a generic inbox that nobody checks. If your vendor quietly switches their storage provider from a secure local data centre to a low-cost provider in a high-risk jurisdiction, your risk profile changes overnight. You haven\u0026rsquo;t changed anything, but your data is suddenly exposed. Supplier Management involves maintaining visibility into this chain. It means verifying that your vendor manages their vendors with the same rigour you apply to them.\nVerification\nFinally, there is the element of verification. In our original ecosystem map, you see \u0026ldquo;Audit\u0026rdquo; hovering over the ecosystem. 
In the context of third parties, this requires the SFIA competencies Audit (AUDT) and Quality Assurance (QUAS).\nHistorically, the \u0026ldquo;right to audit\u0026rdquo; clause in a contract was a threat. (\u0026ldquo;Mess up, and we will send in the auditors.\u0026rdquo;)\nThese days, you\u0026rsquo;ll be lucky to get such a clause into your contract. Instead, companies still use Audit and QA competences to validate trust, but typically by reviewing a third-party assurance report, such as a SOC 2. It’s about Trust but Verify, with an emphasis on the Trust. When you understand a vendor\u0026rsquo;s internal controls through a SOC report, or certifications, you can stop pestering them with 300-question spreadsheets. You can rely on their validated standards, reducing the burden on both teams. This respects their time and yours.\nTo conclude, your legal colleagues provide the framework (the contract). Your job is to provide the functioning reality (the relationship). Whether you are adapting to the rigidity of a BigTech player or uplifting a small startup, navigating complex data sovereignty laws, or managing AI integration, the answer is not in better legal drafting. The answer is in better relationships. 
That is how you turn the weakest link into a chain of trust.\n","date":"3 December 2025","permalink":"/beyond-legal-13-from-signed-contracts-to-trusted-partnerships/","section":"Blog","summary":"","title":"Beyond Legal #13: From signed contracts to trusted partnerships"},{"content":"We often confuse a schedule with a strategy, but while a roadmap tells you when a project finishes, only a Game Plan explains how you will succeed.\nIn our profession of data protection, GRC and AI governance, certain project/programme artefacts are essential for planning and communicating our work.\nWhen a new programme kicks off, two of the initial documents I produce for stakeholder engagement are a Roadmap and a Game Plan, and I really enjoy producing them.\nWhile roadmaps are useful tools for showing the direction of travel, relying on them as your primary engagement tool is a common mistake. A roadmap tells you when things might happen, but it rarely explains how we will achieve our objectives, why we are doing it, or who needs to do what.\nTo truly lead a programme, especially one that requires behavioural change outside of the legal department, you don’t just need a map. You need a Game Plan.\nA roadmap is a schedule. A Game Plan is a strategy on a page, and I make it as visual as possible - it\u0026rsquo;s a great excuse to get Procreate started up on my iPad.\nIn my experience leading complex security and data work, I’ve found that while detailed A3 plans are essential for underpinning the work, the Game Plan is the narrative tool that aligns the business.\nA successful Game Plan visualises the five pillars of the project: What, Why, When, Who, and How. I\u0026rsquo;ve found it to be a useful way to summarise the business case.\nLooking at the following examples from a security programme I managed a while back (click the images for a larger view), you’ll notice they look nothing like a standard Excel project tracker. Here is why this approach works:\n1. 
It visualises the \u0026ldquo;Why\u0026rdquo; (the target) #Legal and security projects often fail because they are viewed as abstract compliance exercises. A Game Plan anchors the project in a tangible goal.\nIn the Asset Classification example, the target isn\u0026rsquo;t just \u0026ldquo;compliance\u0026rdquo;, it’s providing a tool to identify and protect highly confidential assets.\nIn Security Awareness, the goal isn’t \u0026ldquo;send 5 emails\u0026rdquo;, it’s \u0026ldquo;Employees adopt secure behaviors and support each other.\u0026rdquo;\n2. It humanises the \u0026ldquo;Who\u0026rdquo; #Roadmaps often ignore the human element. A Game Plan puts people front and centre.\nLook at the People Focus Game Plan. It uses the metaphor of climbing a mountain to show the journey of the Project Team, HR, and Unions.\nIt explicitly lists Key Stakeholders, not just as a list of names, but as active participants in the project.\n3. It articulates that the \u0026ldquo;How\u0026rdquo; is hard (risks \u0026amp; challenges) #A roadmap usually assumes a happy path where task B follows task A. A Game Plan is realistic.\nThe DLP (Data Loss Prevention) Game Plan explicitly lists \u0026ldquo;Challenges\u0026rdquo; and \u0026ldquo;Lessons Learned from the PoC.\u0026rdquo;\nThe Corporate Security Policies plan highlights \u0026ldquo;Risks,\u0026rdquo; such as stakeholder pushback or approval delays.\nBy visualising the friction points (using warning signs or jagged lines), you aren\u0026rsquo;t being negative, you are building trust by showing you understand the landscape.\n4. 
It narrates the \u0026ldquo;When\u0026rdquo; (phases, not just dates) #Instead of a rigid Gantt chart, a Game Plan shows the flow.\nThe Physical Security plan uses a simple timeline but pairs it with visual metaphors of construction and barriers.\nThe Asset Classification plan moves from \u0026ldquo;Pilot\u0026rdquo; to \u0026ldquo;Rollout,\u0026rdquo; showing the logical progression of maturity rather than just arbitrary deadlines.\nMoving beyond legal #This Game Plan concept sits at the heart of Purpose and Means. If we want to challenge the notion that data protection is primarily a task for legal professionals, we have to change how we communicate.\nA 50-page Project Initiation Document written in legalese will not inspire an engineer to classify their data, nor will it convince a sales leader to adopt new security behaviors.\nBut a Game Plan, one that uses visual storytelling to connect the \u0026ldquo;What\u0026rdquo; to the \u0026ldquo;Why\u0026rdquo;, can bridge that gap. It turns a compliance requirement into a shared mission.\nThis is probably why it\u0026rsquo;s one of the most popular deliverables that I am asked about, and asked to produce by data protection leaders.\nInterested to know more, or need a Game Plan for your project or programme? Get in touch to discuss your requirements.\n","date":"27 November 2025","permalink":"/know-your-game-plan-from-your-roadmap/","section":"Blog","summary":"","title":"Know your Game Plan from your Roadmap"},{"content":"When a financial services Data Protection Leader needs to unite 37 entities across multiple countries for a critical 2026 planning session, we\u0026rsquo;re including some elements of play.\nData Protection Day 2026 is just round the corner and I\u0026rsquo;m currently preparing a few client events. Among them is a half‑day, virtual “From issues to action” workshop for a financial services client. 
Their European data protection leader is bringing together colleagues from across 37 legal entities, from the UK in the west to Turkey in the east, to do something formal assessments rarely achieve: to make the real work visible and turn it into a 2026 roadmap people create themselves and believe in.\nWhy run a workshop when you already have an assessment? Assessments are useful. They do offer a snapshot of capabilities and compliance gaps, and they often come packaged in colourful heatmaps and maturity scores. But what I have found is that when an assessment is based on a generic framework, especially from one of the larger consultancies, it tends to tell the story of the framework and not of the company being assessed. It doesn\u0026rsquo;t always reveal the subtle differences between an entity in one country and another, and it can sometimes underplay the realities of local regulators, legacy platforms, offshore support models, and the everyday work in the trenches by teams who are at the sharp end daily.\nAssessments rarely identify and uncover what I call the \u0026ldquo;invisible architectures\u0026rdquo; of a company. All the dynamics, emotions, habits, incentives, and power structures that move things forward in one entity and stall them in another. They may explain why DPIAs take a month in one team and three months in another, or why information security and data protection teams work aligned and in harmony in one place, but against each other in another.\nThe workshop structure\nThe graphic above outlines the typical flow of the session, though the workshop at the end of January will have some subtle variations that I\u0026rsquo;m currently discussing with my client. In four hours, we’ll move from individual sense‑making to shared priorities to an actionable plan:\nIndividual elicitation. Participants begin with quiet thinking. Each participant lists the issues on their plate - workflows, risks, frictions, stakeholder challenges - without group influence. 
This step is gold dust: it reduces conformity and reveals what people actually experience.\nGroup clustering and prioritisation (affinity mapping). In workgroups, the participants sort the individual issues and cluster them. What’s working well? What needs improvement? They\u0026rsquo;ll then use a MoSCoW board (Must, Should, Could, Won’t) to prioritise across entities, so local realities inform group choices.\nRoot cause exploration. A short primer on Root Cause Analysis (yes, the Ishikawa or fishbone diagram) helps teams move beyond symptoms. “Our DPIAs are cumbersome” becomes “We have unclear ownership, late engagement, and tool friction” - three different problems requiring different actions.\nSolutions and responsibilities. Teams frame options, owners, and first steps. A quick refresher on planning basics, e.g. work breakdown structures, milestones, and dependencies, keeps things practical.\nProduce a target “to‑be” picture and a high‑level schedule. We close with short presentations that combine the pieces into a 2026 roadmap: themes, outcomes, owners, and critical path.\nWhat\u0026rsquo;s in it for the data protection leader?\nVisibility that isn’t theoretical or generic. You\u0026rsquo;ll get a consolidated view of issues grounded in lived experience, not just framework language.\nA prioritised portfolio. Must/Should/Could/Won’t status across entities, with owners and first steps.\nA 2026 roadmap. Themes (for example: DPIA flow redesign, data discovery automation, vendor due diligence overhaul), milestones, and a first 90‑day sprint.\nA motivated and energised team - these workshops bring together your people, who will get to know each other in ways your traditional status calls can\u0026rsquo;t achieve.\nIf you’re leading a distributed data protection function and want your 2026 plan to reflect reality, not just the latest framework, get in touch to discuss options. 
We’ll help you transform issues into action, and action into a roadmap your teams can deliver.\n","date":"10 November 2025","permalink":"/from-issues-to-action-a-playful-practical-workshop-for-data-protection-day-2026/","section":"Blog","summary":"","title":"From issues to action: a playful, practical workshop for Data Protection Day 2026"},{"content":"In this twelfth post of my \u0026ldquo;beyond legal\u0026rdquo; series, I\u0026rsquo;m addressing what many data protection professionals dread: that call you get after you\u0026rsquo;ve just finished your third glass on a Friday night. Some breach response plans look impressive in binders gathering dust, but fall apart the moment someone accidentally shares a database containing millions of customer records, or when ransomware locks down your production systems at 2am on a Saturday morning.\nAn uncomfortable truth about some personal data breach response plans is that they\u0026rsquo;re actually a legal notification checklist masquerading as an operational readiness artefact. They tell you what to report, when to report it, and who to notify. What they rarely address is how to contain the breach, how to investigate what actually happened, how to communicate effectively under pressure, and how to prevent the same incident from happening again.\nUnfortunately, this theatre of preparedness gets exposed when an actual personal data breach occurs. 
The plan that looked comprehensive during the annual review becomes worthless when you\u0026rsquo;re trying to work out which systems are compromised, whether customer personal data is still being exfiltrated, and how to coordinate response across legal, IT, security operations, communications, and business continuity teams, all while the clock ticks down to the regulatory notification deadline.\nThree further uncomfortable truths\nI begin with three statements that some data protection leaders may find provocative, but I think they need to hear them:\nTruth #1: Your personal data breach response plan is probably just a legal notification template\nOpen your incident response documentation right now. How many pages cover notification requirements (e.g., in GDPR, we\u0026rsquo;re talking Art. 33 and 34) versus operational containment, forensic investigation, or crisis communications? If it\u0026rsquo;s heavily weighted towards the former, you don\u0026rsquo;t have a personal data breach response capability, you have a compliance document.\nTruth #2: The plan falls apart at 2am on a Saturday\nImagine it\u0026rsquo;s 2:15am on a Saturday. Your security operations centre detects unusual database access patterns that suggest data exfiltration. Who do you call? Do you have their actual mobile numbers? Do they know their role without reading a 50-page document?\nRemember, data breaches don\u0026rsquo;t conveniently occur between 9 and 5, Monday to Friday. They don\u0026rsquo;t respect Christmas holidays, Easter breaks, or summer holidays. A couple of data protection professionals on a course I was teaching at a conference in Brussels a few years ago shared that they had participated in breach response on Christmas Eve, during Ramadan, and over Chinese New Year. 
If your team members don\u0026rsquo;t have specific clauses in their employment contracts covering out-of-hours response obligations, and appropriate compensation reflected in their salaries, you\u0026rsquo;re setting yourself up for failure when you need people most. I am continually amazed by the number of data protection leaders who do not have this in their contracts.\nTruth #3: You discover your plan is worthless precisely when you need it most\nCompanies that have never tested their response plans under realistic conditions suffer the same pains: gaps appear everywhere. The forensics team can\u0026rsquo;t access logs because retention policies weren\u0026rsquo;t configured properly. The communications team doesn\u0026rsquo;t understand the technical details well enough to draft customer notifications. Legal wants to delay disclosure while regulators expect transparency. Meanwhile, every hour of delay increases both the harm and the regulatory consequences.\nLessons from recent high-profile cases\nUnless you have suffered harm yourself, or know someone who has, the past couple of years have provided some great lessons in what happens when data breach response is treated as a compliance exercise rather than an operational capability. Here are three high-profile cases that continue to be talked about in the media.\nJaguar Land Rover\nIn September 2025, Jaguar Land Rover experienced a significant cyberattack that caused extensive operational disruptions, including production halts and supplier interruptions. The company confirmed that personal data had been compromised during the incident. While estimates of financial losses circulated, it is important to note that the reported figure of GBP 1.9 billion remains unconfirmed by the company.
Regulatory bodies launched investigations focusing on the effectiveness of incident detection and response capabilities, with particular attention to supply chain and vendor-related risks.\nMarks \u0026amp; Spencer\nMarks \u0026amp; Spencer was hit by a ransomware attack in April 2025, which disrupted both online order fulfilment and in-store contactless payments. The attackers gained access to customer contact details, birth dates, and order histories, after initiating the breach through a sophisticated spear-phishing attack against the help desk. The incident demonstrated that strong breach response involves more than technical measures. It highlighted the critical role of staff awareness, social engineering prevention, and the challenge of distinguishing genuine support requests from deceptive attacks.\nHarrods\nIn September 2025, Harrods notified customers of a personal data breach resulting from a vulnerability in a third-party IT service provider. The personal data of approximately 430,000 individuals was affected, though no payment information was compromised. This incident highlights the importance of factoring vendor risks into incident response planning, as Harrods, acting as the data controller, remained liable even though the breach originated from a processor.\nWhat\u0026rsquo;s the common thread here? These breaches weren\u0026rsquo;t primarily legal failures, they were operational challenges involving third-party risk management, detection capabilities, forensic investigation, crisis communications, business continuity, and sustained coordination across multiple disciplines.
Legal expertise was essential, but it wasn\u0026rsquo;t sufficient.\nSymptoms of a broken data incident and data breach response capability\nHaving been employed in large global corporations for many years and providing consultancy and training for similarly large companies, I have noted some of the telltale signs that a company\u0026rsquo;s incident and breach response capability exists primarily on paper:\nThe \u0026ldquo;plan\u0026rdquo; is actually a document, not a capability. You may have comprehensive documentation, but if it\u0026rsquo;s never been tested then you do not have a plan. Aside from testing it in anger, you should regularly use different approaches such as tabletop exercises, simulations, and red-team testing.\nRoles are job titles, not trained individuals. Your plan specifies that \u0026ldquo;the CISO\u0026rdquo; or \u0026ldquo;the DPO\u0026rdquo; will do X, Y, Z, but those individuals have never actually practised their roles together or worked through simulated scenarios.\nYou\u0026rsquo;re focused on notification deadlines, not containment speed. The primary concern is \u0026ldquo;can we meet the 72-hour notification requirement?\u0026rdquo; rather than \u0026ldquo;can we stop the data exfiltration within the first hour?\u0026rdquo; This is a classic case of putting the interests of your company ahead of the rights and freedoms of affected people.\nThird-party incidents are an afterthought. As Harrods learned, many breaches occur at vendors, yet some response plans barely address how you\u0026rsquo;ll coordinate when you don\u0026rsquo;t directly control the compromised systems.\nBreach response exists in isolation. Your personal data breach response plan is standalone, with no integration into existing IT incident management, IT service continuity, business continuity planning, or enterprise crisis management frameworks.\nRisks to individuals are assessed generically. When assessing whether to notify individuals under, say, GDPR Art.
34, the analysis consists of vague statements like \u0026ldquo;high risk to privacy\u0026rdquo; rather than specific assessment of fundamental rights impacts and concrete harms.\nThe competencies you need\nAs I\u0026rsquo;ve done with all posts in my \u0026ldquo;beyond legal\u0026rdquo; blog series, I want to anchor competencies in the SFIA skills framework because these are measurable, definable competencies you can recruit, develop, or access when needed. There are quite a few that I suggest should be in the response team in addition to dedicated data protection and privacy specialists:\nIncident Management (USUP) - End-to-end coordination of response activities. This is operational command under pressure, not administration.\nSecurity Operations (see SFIA\u0026rsquo;s skills for infosec) - Real-time monitoring, threat detection, and technical response. Containing threats, isolating compromised systems, and preserving forensic evidence.\nDigital forensics (DGFS) - Forensic investigation involving log analysis, malware analysis, network forensics, and attack reconstruction.\nBusiness Continuity Planning (COPL) - Maintaining critical business functions during and after an incident.
The M\u0026amp;S incident, which affected online orders and contactless payments, required sophisticated business continuity response.\nStakeholder Relationship Management (RLMT) - Managing relationships with regulators, customers, vendors, partners, and investors during a crisis.\nCommunication (COMM) - Crisis communications that translate technical incidents into clear, honest, actionable messages for different audiences.\nRisk Management (BURM) - As I discussed back in post #4 about risk, understanding the \u0026ldquo;ripple effects\u0026rdquo; of incidents across operational, legal, financial, and reputational risk domains.\nSupplier Management (SUPP) - When incidents occur at third parties, coordinating investigation, enforcing contractual obligations, and managing response despite not controlling the compromised systems.\nNotice what\u0026rsquo;s absent? While legal competencies remain essential for notification and regulatory engagement, they represent one piece of a much larger operational puzzle.\nWho should lead the data breach response team? (Probably not you, unless\u0026hellip;)\nHere\u0026rsquo;s an uncomfortable question for data protection leaders: Should you actually lead the breach response team?\nThe honest answer for many is: probably not, unless you have significant operational experience managing incidents under extreme pressure.\nLeading data breach response requires operational experience, cross-functional leadership, pressure resilience, the ability to meet people where they are rather than expecting them to understand GDPR articles, and organisational credibility. When you call someone at 3am, they need to take you seriously.\nIf you don\u0026rsquo;t have this profile, that\u0026rsquo;s not a failure, it\u0026rsquo;s self-awareness. 
Many exceptional data protection leaders are brilliant at policy development and regulatory interpretation, but don\u0026rsquo;t have the operational background for incident command.\nIn those cases, identify someone who does. Often, this is someone from IT with extensive incident management experience, ideally a situation manager, a senior IT incident manager, the CISO, or someone who\u0026rsquo;s led major incident response for IT outages. They understand how to work under pressure, coordinate across functions, and drive resolution.\nYour role as data protection leader becomes strategic advisor to the response team lead, providing expertise on assessment of risks to individuals\u0026rsquo; rights and freedoms, regulatory notification requirements, data protection implications of containment decisions, and communication content.\nKepner-Tregoe offers specific training for people who\u0026rsquo;ll lead response teams. Many years ago I booked this training course for myself, but the week after, the company I was working for had large global layoffs and all training was cancelled - a course I still need to put on my ToDo list!\nBuild on what already exists\nDoes your company already have an IT incident management or security incident management process?\nI would think that for most large companies, the answer is yes. IT teams have been managing IT and security incidents for years: system outages, security vulnerabilities, service disruptions. They have established processes, trained personnel, escalation paths, and a track record of working under pressure, probably based around an industry-standard framework like ITIL or COBIT.\nRather than starting from scratch to build a standalone \u0026ldquo;data breach response\u0026rdquo; capability, integrate with what already exists.
By recognising the dependency you have on your IT and security incident management colleagues, you can build on their existing capabilities, reduce friction, build relationships, and ensure consistency.\nThe key is ensuring that their processes include specific considerations for personal data breaches: triggers for involving data protection expertise, assessment frameworks for risks to individuals\u0026rsquo; rights and freedoms, regulatory notification decision points, and data subject communication requirements.\nAt the same time, you need to recognise the dependencies you have with other groups in your company:\nIT Service Continuity - When systems must be taken offline for investigation, service continuity plans kick in. The JLR incident demonstrated how production systems and business operations can be disrupted. Disaster Recovery is a critical component of IT Service Continuity, specifically dealing with the recovery of the IT infrastructure and systems after a significant, often catastrophic, event. This includes system backups and restoration, infrastructure failover, and data centre recovery strategies.\nBusiness Continuity Planning (BCP) - The M\u0026amp;S incident affected customer-facing operations. When a breach impacts critical business functions, business continuity plans provide the framework for maintaining operations.\nCrisis Management - Not every breach becomes a crisis, but some do. Understand your company\u0026rsquo;s criteria for declaring a crisis and the escalation path. When a breach escalates to crisis status, it triggers executive leadership involvement and dedicated crisis communications.\nAssessing risks to individuals\nOne of the most critical, and most commonly botched, aspects of personal data breach response is assessing the risk to individuals\u0026rsquo; rights and freedoms.
Too often, this devolves into vague statements: \u0026ldquo;There is a risk to privacy\u0026rdquo; or \u0026ldquo;Identity theft could occur.\u0026rdquo;\nAs I discussed in post #4 about risk management, understanding ripple effects and being specific about risk is essential. When a breach occurs, you need to assess risks with precision, grounded in the Charter of Fundamental Rights of the EU and concrete harms to individuals.\nRead the Charter, and then consider which specific rights might be impacted:\nArt. 7 (Respect for private and family life) - Could the breach expose intimate details, family relationships, private correspondence, or health information?\nArt. 8 (Protection of personal data) - What categories of personal data were compromised? Beyond immediate exposure, what secondary uses might occur?\nArt. 21 (Non-discrimination) - Could the exposed data lead to discriminatory treatment? If special category data was compromised, individuals face risks of discrimination in employment, insurance, housing, or access to services.\nBut read all 54 articles because the processing of personal data can play a part in all of them, so you need to assess practical, concrete harms: financial harm (identity theft, fraudulent transactions), physical harm (safety risks if location data or protected addresses exposed), reputational harm (damage to professional standing), psychological harm (distress and anxiety), and loss of control (inability to exercise rights over data).\nThis specific assessment directly informs:\nThe need to notify individuals if there is a high risk to their rights and freedoms.\nMitigation measures - the actions you will instruct individuals to take to reduce harm.\nCommunication content - explaining actual risks in clear language.\nRegulatory notification - demonstrating thorough understanding of impact, what happened, how, why, when and what you\u0026rsquo;re doing about it.\nIn case you\u0026rsquo;ve not come across it, here\u0026rsquo;s a
reminder of the Charter of Fundamental Rights of the EU (sorry about the text size):\nTesting your plan\nSome companies claim to test their response capability, but scratch the surface and you\u0026rsquo;ll find theatrical exercises where everyone knows their lines and complications are smoothed over.\nReal testing is uncomfortable. It reveals gaps, creates friction, and generates real learning, which is exactly the point.\nMake testing realistic: Run drills at 2am on Saturday. Give participants only the information they\u0026rsquo;d actually have at each stage. Create genuine time pressure and conflicting priorities. Test handoffs between IT incident management, business continuity, and crisis management. If you\u0026rsquo;re testing ransomware response, actually isolate test systems in a controlled environment. Many years ago I participated in a war game exercise where a TV studio had been set up with live news broadcasts, interviews, etc. After 2 days of being locked in the exercise (held in a nice hotel, so it wasn\u0026rsquo;t all bad), we were finally let out, and it really did feel that we had all been involved in an actual breach/crisis.\nShare lessons learned: After each exercise and every real incident, conduct thorough After-Action Reviews. Document what happened versus what should have happened. Share findings broadly, brief leadership, share sanitised case studies across departments, incorporate insights into training. Don\u0026rsquo;t silo lessons within the incident response team.\nApply the improvements: Track implementation of corrective actions with specific deadlines. Close technical gaps (e.g. improve logging, strengthen access controls), process gaps (e.g. clarify escalation paths, revise playbooks), and competency gaps (e.g. recruit or train missing skills). 
As discussed in post #11 about measuring outcomes, measure whether your capability is actually improving.\nEstablish a cadence of continuous testing: quarterly tabletop exercises, annual out-of-hours technical drills, and rigorous post-incident reviews for every event, no matter how minor.\nConclusion\nHere\u0026rsquo;s the question I want you to honestly answer: If a significant personal data breach occurred right now, or at 23:18 on New Year\u0026rsquo;s Eve, would your response be characterised by coordinated professionalism or chaotic improvisation?\nIncident response capability isn\u0026rsquo;t built by lawyers drafting notification templates. It\u0026rsquo;s built by integrating with existing operational capabilities, ensuring the right person leads response (which may not be the data protection leader - it depends), establishing employment contracts that reflect out-of-hours obligations with appropriate compensation, coordinating seamlessly with IT service continuity and crisis management frameworks, assessing risks to individuals with specificity grounded in fundamental rights, and testing realistically while sharing and applying lessons learned.\nThe cases I mentioned earlier - JLR, M\u0026amp;S, Harrods - demonstrate how even significant companies can struggle when data breach response is treated as compliance documentation rather than operational readiness.\nYour response capability is the ultimate test of whether your data protection function is genuinely embedded in your company\u0026rsquo;s operations or merely performing compliance theatre.
When the breach alarm sounds, there\u0026rsquo;s no hiding behind policy documents, no time for debates about legal interpretation, and no opportunity to wish you\u0026rsquo;d done things differently.\n","date":"5 November 2025","permalink":"/beyond-legal-12-from-headless-chickens-to-coordinated-data-breach-response-why-operations-beat-legal-theatre/","section":"Blog","summary":"","title":"Beyond legal #12: From headless chickens to coordinated data breach response - why operations beat legal theatre"},{"content":"I\u0026rsquo;ve been designing and facilitating workshops, and practising business analysis, for decades. After developing a radar covering the future of this field, a few things stood out, prompting me to update my personal development plan for 2026.\nWhy your foresight needs structure\nWhen some people view a trends radar, they see a collection of interesting ideas; perhaps one or two things will stand out. A truly effective radar, like ones powered by FIBRES, isn\u0026rsquo;t just a colourful diagram. It\u0026rsquo;s a structured sensemaking system. Every individual dot, every connection, every link serves one purpose: to trace the journey from a raw piece of evidence to actionable strategic foresight.\nIn addition to the \u0026lsquo;Horizon Scanning for Beginners\u0026rsquo; blog series I published a few months ago, I want to take a deeper look at some key terms and concepts, which in reality are the essential building blocks, and describe how they really connect. I\u0026rsquo;m talking Signals, Signal Clusters, Technologies, Trends, and Megatrends.
And to prove it’s not just theory, I\u0026rsquo;ll share a radar I made last week and explain how I deployed these concepts in mapping a topic that is close to my heart, the Future of Business Analysis and Workshop Facilitation.\nThe discipline I formalised 25 years ago\nTo give a bit of context, although I had been working for many years before this, \u0026lsquo;running\u0026rsquo; meetings and workshops was always quite ad hoc for me, and it was only when I was thrust into an awkward position in a high-maturity major projects organisation that I realised I had to up my game in this area. I took a dedicated workshop facilitation course that I think ranks among the best hands-on training courses I\u0026rsquo;ve ever done. More on this later.\nThe foresight hierarchy\nLet\u0026rsquo;s begin by looking at the strategic flow as I work with it, because understanding the current is the first step to navigating it. As I say, it\u0026rsquo;s how I work with it; I\u0026rsquo;m sure many have different approaches and foresight purists may disagree.\n1. Signals\nSignals are the concrete, observable manifestations happening right now. They are indicators of change: a product launch, a disruptive pilot project, a groundbreaking case study, a new regulation making headlines, or early academic findings. For example, \u0026ldquo;Miro announces AI co-facilitator features for workshops (2025).\u0026rdquo;\nIn FIBRES, each signal is a discrete data point, logged with its own link, verifiable source, and timestamp. It\u0026rsquo;s the evidence. A signal can be added from different types of content such as a web page, a document, an image, a podcast.\n2. Signal clusters\nWhen disparate signals start to align and point in the same direction, you\u0026rsquo;re not just seeing random events anymore. You\u0026rsquo;re observing a Signal Cluster – an emerging pattern.
For example, \u0026ldquo;AI-assisted tools\u0026rdquo; (grouping potent signals like Miro AI, Excalidraw AI, Atlassian Intelligence).\nSignal clusters transform raw noise into the clarity of pattern recognition.\n3. Trends\nTrends are the core of the radar. They represent the insights you must track, the narratives you need to communicate, and the shifts you must act upon. Each trend is supported by one or more signal clusters, is often underpinned by enabling technologies, and can be linked to a broader megatrend. For example, AI-Augmented Business Analysis:\nLinked to cluster: AI-assisted tools\nLinked to technology: Generative AI\nLinked to megatrend: Human–AI Collaboration\n4. Technologies \u0026amp; Methods\nTechnologies are often the enablers that bind multiple trends together, so they aren\u0026rsquo;t just entries on the radar. They answer the fundamental question: What makes this trend not just possible, but inevitable? For example, \u0026ldquo;Generative AI\u0026rdquo; isn\u0026rsquo;t a trend in itself, it\u0026rsquo;s the tech enabling trends like AI-Augmented Business Analysis, Augmented Visual Facilitation, and Algorithmic Sensemaking.\nBy intentionally keeping technologies off the primary radar (but still within the FIBRES network), my radar maintains focus and there\u0026rsquo;s a network view that reveals the web of how these technologies underpin disparate trends. This is systems thinking in action.\n5. Megatrends\nAt the top of the structure are Megatrends. These are long-term, system-level forces shaping everything else. They aren’t things you can directly “do”, but they provide the strategic context for trends.
For example, Human–AI Collaboration, Playful \u0026amp; Ethical Organisations, Future-Ready Enterprises.\nMegatrends give the radar coherence and a higher-level narrative and context, connecting clusters of trends and telling a story about transformation.\nHow the links work together\nSo how do these layers connect and work together? In FIBRES, this is known as the Network view, and in my mind, it\u0026rsquo;s behind the scenes, showing the mechanics behind the radar. At first sight, it may appear slightly confusing, but once you begin to navigate around the network view, you can begin to appreciate the interconnectedness of the structure. In the image below, the diagram on the left shows the overall view of the network for my topic The future of business analysis and workshop facilitation, and on the right is the expanded view you reach by selecting a signal cluster, which will then show the cascade of signals that drive the trend.\nAnd then, once at that level, you are able to get information about each signal, linking back to the original source (e.g. a web page, press release, etc.) and see all the linkage to other signals, signal clusters, technologies and trends. The image below shows one signal from the signal cluster AI-assisted tools:\nMy radar\nIn my own radar, I’ve mapped 25 trends across eight thematic sectors:\nPeople and skills\nProcess and practice\nTools and technologies\nMethods and frameworks\nMindsets and culture\nEnvironment and context\nGovernance and standards\nFuture enablers\nEach trend on the radar links back to relevant real-world signals such as:\nInclusive collaboration\nMiro AI\nAtlassian Intelligence\nLEGO Serious Play\nOECD Strategic Foresight Toolkit\nI\u0026rsquo;m plotting the trends across four horizons: 2025, 2026, 2027 and 2028, but I could easily have chosen now, next and beyond. Or from the perspective of adoption, ease of implementation, etc.
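The layered structure walked through above, signals grouped into clusters, clusters supporting trends, and trends framed by megatrends and enabled by technologies, can be sketched as a minimal data model. This is a hypothetical illustration of the linkage only, not the actual FIBRES schema; all class names and fields are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the signal -> cluster -> trend linkage.
# Class names and fields are illustrative, not the FIBRES data model.

@dataclass
class Signal:
    title: str    # the observable piece of evidence
    source: str   # verifiable link back to the original content
    logged: str   # when the signal was captured

@dataclass
class SignalCluster:
    name: str
    signals: list[Signal] = field(default_factory=list)

@dataclass
class Trend:
    name: str
    clusters: list[SignalCluster] = field(default_factory=list)
    technologies: list[str] = field(default_factory=list)  # enablers, kept off the radar
    megatrend: str = ""  # strategic context

# Build the example used in this post
miro = Signal("Miro announces AI co-facilitator features for workshops", "miro.com", "2025")
ai_tools = SignalCluster("AI-assisted tools", [miro])
trend = Trend(
    "AI-Augmented Business Analysis",
    clusters=[ai_tools],
    technologies=["Generative AI"],
    megatrend="Human-AI Collaboration",
)

# Walk the network from a trend back to its underlying evidence
evidence = [s.title for c in trend.clusters for s in c.signals]
print(evidence)
```

Walking from a trend back to its signals, as in the last lines, mirrors what the network view does: every insight on the radar stays traceable to its evidence.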
Also, remember that I chose to plot trends only, but I could also show megatrends, technologies, signal clusters and the signals themselves. A lot depends on your context and the objective of the radar. I prefer to keep this clear and simple; less is more.\nI\u0026rsquo;ve also found choosing a narrow scope helps, instead of trying to map trends across a broad scope where things may get slightly diluted.\nThis structure allows me to explore not just what’s happening, but why it matters, and how it connects to wider transformation themes. This layered foresight approach helps:\nBuild evidence-based narratives instead of relying on intuition.\nSpot early signals of change and link them to real impact areas.\nTranslate foresight into strategy, by aligning trends with technologies and megatrends.\nCollaborate better, as everyone sees the same “map of meaning”.\nHere\u0026rsquo;s an embed of the live radar that you can interact with to view the trends. As a FIBRES user I am able to log in to the radar and see much more behind the scenes, including the network view and all the signals, and of course make changes and new radars. You can also view it in a new window by clicking here.\nWhen viewed as a whole, the radar shows a field in rapid transformation where the roles of analyst and facilitator appear to be converging into a single, adaptive discipline. For me, five shifts stand out.\n1. Human–AI collaboration\nTrends such as AI-Augmented Business Analysis and Facilitation Bots point to a near-term (2025–2026) shift from automation to collaboration with intelligent systems. Tools like Miro AI and Zoom AI Companion show how AI is becoming a thinking partner, interpreting intent, summarising discussion, and generating drafts. The analyst’s skillset will move from data handling to sensemaking and validation.\n2.
Hybrid and immersive facilitation\nSignals from Microsoft Mesh and Accenture’s VR onboarding show that collaboration is no longer bound to rooms or screens. By 2030, hybrid and immersive facilitation will blend physical and virtual presence, requiring new literacies in behavioural design, inclusion, and spatial thinking.\n3. Play and cognitive diversity\nThe radar highlights a growing emphasis on creativity, inclusion, and embodied learning. From LEGO® SERIOUS PLAY® to EY’s neurodiversity initiatives, companies are rediscovering play as a serious tool for innovation and team insight. Facilitators will increasingly act as creative catalysts, designing playful, psychologically safe spaces for diverse minds to collaborate.\n4. Ethics inside the workshop\nTrends such as Ethical Facilitation Frameworks and Behavioural Data Ethics in Collaboration Tools reveal that governance and facilitation are merging. As tools capture more behavioural data, professionals must ensure lawfulness, transparency and fairness in collaborative spaces. By 2027, ethical awareness will be an expected professional competence, not an optional principle.\n5. Foresight and continuous learning\nSignals from OECD and UNESCO show that foresight and learning are becoming structural capabilities. Business analysis will evolve from documenting requirements to anticipating change and embedding learning directly into workflows.\nFinal reflection\nAs someone who’s been practising in this field for decades, I’m struck by how much has evolved since that facilitation workshop training 25 years ago.\nI develop and use the radars for client work, and assist clients in developing their own as part of their strategic decision to formalise foresight work.
For example, earlier this week I completed a radar for a client covering Intelligent Video Surveillance.\nThe radar I\u0026rsquo;ve shared in this post doesn’t just map external change, it provides a baseline for assessing my own development path. It’s helped me identify where to invest in new competences for 2026, blending facilitation, foresight, and data-enabled collaboration, which has already shaped my next step. In a couple of weeks, I’ll be attending an advanced four-day facilitation training workshop here in Denmark, and I’m so looking forward to getting hands-on with much of what this foresight work has revealed to me.\nPurpose and Means can help you establish your in-house practice, or can develop and host your radar using FIBRES. To learn more, take a look at my service description and the numerous blog posts I\u0026rsquo;ve written on this topic that are available below.\nReady to start your foresight work? Book a call to discuss your requirements.\n","date":"30 October 2025","permalink":"/how-to-build-a-future-radar-that-actually-works/","section":"Blog","summary":"","title":"How to build a Future Radar that actually works"},{"content":"In this eleventh post of my \u0026ldquo;beyond legal\u0026rdquo; series, I\u0026rsquo;m addressing perhaps the most forgotten-about competency in data protection leadership: the ability to measure what actually matters.
Many data protection functions are stuck in a rear-view mirror mindset, counting compliance activities instead of driving forward-looking performance that demonstrates real business value.\nAs I was thinking about and preparing this post, I remembered reading about something country music artist Jelly Roll said a year or two ago when he won \u0026ldquo;New Artist of the Year\u0026rdquo; at the CMA Awards: \u0026ldquo;The windshield is bigger than the rear view mirror for a reason, because what\u0026rsquo;s in front of you is more important than what\u0026rsquo;s behind you.\u0026rdquo;\nThis epitomises the performance measurement revolution that data protection desperately needs. Too many data protection leaders and their teams are stuck looking backwards, counting DPIAs completed, training modules finished, and incidents that already happened, to name a few examples. Meanwhile, they\u0026rsquo;re missing the critical forward-looking indicators that could prevent problems and demonstrate strategic value.\nWhen busy doesn\u0026rsquo;t mean effective\nI\u0026rsquo;ve participated in quite a few client data protection team meetings and heard updates like:\n\u0026ldquo;We completed 7 DPIAs this quarter\u0026rdquo;\n\u0026ldquo;Training compliance is currently at 64%\u0026rdquo;\n\u0026ldquo;Our RoPA has 312 entries and was last viewed 9 months ago\u0026rdquo;\nThese are all activities. They tell you what happened, not whether it mattered. It\u0026rsquo;s like a restaurant boasting about how many meals they served without mentioning whether customers enjoyed them or came back.\nThe problem with activity-based metrics is threefold:\nThey\u0026rsquo;re backwards-looking because you\u0026rsquo;re measuring things that already happened and can\u0026rsquo;t be changed.
If your RoPA wasn\u0026rsquo;t touched for 9 months, that information helps no one make better decisions today.\nEmployees game the metrics: When you measure training completion rates, people click through modules without engaging. When you measure the number of DPIAs, some teams split single assessments into multiple entries to hit targets.\nThey miss business impact: A 100% training completion rate means nothing if personal data is shared with a third party without a lawful basis, or the product team still maintain that tracking consumer sentiment does not involve processing personal data. High activity doesn\u0026rsquo;t equal high effectiveness.\nLag versus lead indicators\nBefore looking at windshield reporting, it\u0026rsquo;s important to understand the difference between lag and lead indicators. This distinction separates reactive compliance from proactive data protection leadership.\nLag indicators measure outcomes that have already occurred. They\u0026rsquo;re historical, often called \u0026ldquo;results metrics,\u0026rdquo; and tell you what happened after the fact. They\u0026rsquo;re important for accountability and learning from experience, but they can\u0026rsquo;t help you prevent problems or exploit opportunities in real-time.\nLead indicators are predictive measures that signal future performance. They track activities, behaviours, or conditions that influence future outcomes.
Most importantly, they\u0026rsquo;re actionable: you can intervene to change direction before problems occur.\nOn a personal level, if you\u0026rsquo;ve ever tried to lose weight, you\u0026rsquo;ll know that simply weighing yourself each day (lag indicator) will rarely help you shed the kilos unless you are also mindful of the steps or bike rides you accomplish daily (lead indicator) and the calories you consume each day (lead indicator).\nLet me illustrate this with a couple of data protection examples.\nLag indicator #1: Marketing campaign launch delays due to data protection reviews.\nThis measures business impact after the friction has already occurred and is useful for understanding the business costs of current processes or identifying high-friction areas, but it\u0026rsquo;s too late because delays have already frustrated your colleagues in marketing and potentially cost revenue.\nInstead, these lead indicators may be more appropriate:\nPercentage of marketing campaigns with data protection consultation in planning phase\nAverage cycle time for routine data protection reviews\nNumber of self-service tools adopted by business units\nLag indicator #2: Data subject request (DSR) response rate.\nThis measures whether you met the required deadlines after requests were submitted and is useful for internal compliance reporting or identifying process bottlenecks.
The issue is that it is \u0026ldquo;after the event\u0026rdquo;: it misses any breakdown in trust with data subjects, which may trigger regulatory scrutiny if a complaint is made.\nInstead, these lead indicators may provide you with the essential early warning signs so you can work to address things before they go badly wrong:\nAverage data discovery time across key systems\nNumber of projects where DSR requirements were prioritised as non-functional requirements\nQuality score of data mapping accuracy (validated through sample audits)\nThe rear-view mirror analogy\nIn my experience, data protection reporting often falls into what I call \u0026ldquo;rear-view mirror monitoring.\u0026rdquo; You\u0026rsquo;re required to constantly look at what\u0026rsquo;s behind you. In other words, the things that have already happened, such as completed tasks, incidents, DPIAs completed, etc. This has several disadvantages, including:\nBy the time problems show up in your metrics, it\u0026rsquo;s too late to prevent them. You\u0026rsquo;re always playing catch-up.\nHigh completion rates and low incident numbers give you a false sense of security. All appears fine until a major incident reveals the gaps.\nData protection risks don\u0026rsquo;t conveniently announce themselves through your current metrics. They accumulate slowly through small violations, cultural drift, and emerging threats. This is where key risk indicators (KRIs) come into play; they are a form of lead indicator.\nI first encountered this concept during my IBM days over 20 years ago when as a project manager I was required to use their \u0026ldquo;7 keys\u0026rdquo; methodology. The distinction between looking backward and looking forward transformed how I approached project monitoring and reporting, and, later, I adapted it to data protection performance management.
I will make a separate post about this methodology because it deserves it!\nWindshield reporting\nWindshield (or windscreen as people living in the UK might say) reporting focuses on anticipating change and providing early warnings. Instead of only measuring what happened, you\u0026rsquo;re identifying what\u0026rsquo;s likely to happen and what you can do about it.\nIn data protection terms, this means:\nPredictive indicators: Instead of only counting incidents after they occur, track leading indicators that predict future problems - unusual data access patterns, delayed vendor risk assessments, increasing numbers of \u0026ldquo;urgent\u0026rdquo; projects bypassing data protection gates or checkpoints in your portfolio management processes.\nStakeholder health monitoring: Rather than only measuring training completion, assess stakeholder engagement and behavioural change - are people proactively asking data protection questions, are they embedding data protection considerations into projects from the start?\nRisk analysis: Instead of static risk heat maps, track how risks are trending - which processing activities are becoming more complex, which vendors are showing compliance drift, which business units are struggling with new technologies?\nAs I have done in all previous posts in this \u0026ldquo;beyond legal\u0026rdquo; series, I suggest you consider looking at the SFIA skills framework to understand the competences you may need to either acquire yourself or bring onboard.
The key SFIA competence you need to make the necessary changes is the Performance Management (PEMT) skill, but you\u0026rsquo;ll also require:\nData analytics (DAAN): Moving beyond basic reporting to statistical analysis that identifies patterns, trends, and predictive indicators.\nBusiness Intelligence (BINT): Designing dashboards and reporting systems that show actionable insights rather than meaningless numbers.\nQuality management (QUMG): Establishing quality metrics and continuous improvement processes that will drive real performance increases.\nMake lead indicators actionable\nNow for something extremely important: the key to effective lead indicators is ensuring they drive action. Each lead indicator should have:\nClear thresholds: When does the indicator signal the need for intervention? For example: \u0026ldquo;When privileged access reviews are more than 30 days overdue for more than 20% of high-risk systems, initiate immediate remediation protocol\u0026rdquo;\nDefined responses: What specific actions should be triggered? For example: \u0026ldquo;When DPIA quality scores drop below 7/10 for any business unit, deploy additional training and assign dedicated support\u0026rdquo;\nResponsible parties: Who monitors the indicator and who acts on it? For example: \u0026ldquo;Security Operations monitors access patterns weekly and the Data Protection Team investigates anomalies within 48 hours\u0026rdquo;\nEstablishing windshield monitoring and reporting takes time, and requires buy-in and interest from a wide range of stakeholders, but the dividends are worth it. Once up and running, you might want to consider taking things a step further by monitoring combinations of lead indicators that tell a more complete story.
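The threshold-response-owner pattern described above is simple enough to sketch in a few lines of code. Everything below — the indicator names, threshold values, and response texts — is an illustrative example inspired by the examples in this post, not a recommended standard:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LeadIndicator:
    """A lead indicator with a threshold test, a defined response, and an owner."""
    name: str
    owner: str                          # who monitors the indicator and acts on it
    breached: Callable[[float], bool]   # clear threshold: when is intervention needed?
    response: str                       # defined response: what action is triggered?

# Hypothetical indicators mirroring the worked examples above.
indicators = [
    LeadIndicator(
        name="privileged_access_reviews_overdue_pct",
        owner="Security Operations",
        breached=lambda pct: pct > 20.0,  # >20% of high-risk systems overdue
        response="Initiate immediate remediation protocol",
    ),
    LeadIndicator(
        name="dpia_quality_score",
        owner="Data Protection Team",
        breached=lambda score: score < 7.0,  # quality score below 7/10
        response="Deploy additional training and assign dedicated support",
    ),
]

def evaluate(readings: dict[str, float]) -> list[str]:
    """Return the responses triggered by this reporting period's readings."""
    actions = []
    for ind in indicators:
        value = readings.get(ind.name)
        if value is not None and ind.breached(value):
            actions.append(f"{ind.owner}: {ind.response} ({ind.name}={value})")
    return actions

print(evaluate({"privileged_access_reviews_overdue_pct": 26.0,
                "dpia_quality_score": 8.2}))
```

The point of the sketch is that each indicator carries its threshold, response, and responsible party together, so a breached reading always maps to a named action and a named owner rather than just a red cell on a dashboard.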
For example, you could build a model that predicts, say, the increased likelihood of transfer mechanism violations:\nLead: Increasing complexity of international data transfers (new jurisdictions, new data categories)\nLead: Decreasing completion rates of vendor risk assessments\nLead: Rising number of urgent projects bypassing standard review processes\nObviously, you need to walk before you can run.\nMaking metrics meaningful\nRemember that quote from Jelly Roll that opened this post? It\u0026rsquo;s not just about forward-looking versus backward-looking metrics. It\u0026rsquo;s about people.\nYou will succeed or fail based on whether people find your reporting useful, credible, and actionable, which means:\nDesign for your audience: Your CDO cares about different metrics than your CISO, who cares about different metrics than your CMO. Create tailored views that speak to each stakeholder\u0026rsquo;s priorities and decision-making needs.\nMake it visual and accessible: Dense spreadsheets and complex reports get ignored. Use dashboards, infographics, and storytelling to make performance data engaging and understandable.\nFocus on trends and context: A single metric means nothing without context. Show trends over time, compare against benchmarks, and explain what the numbers mean for business decisions.\nEnable action: Every metric should point towards a decision or action. If a stakeholder looks at your performance data and doesn\u0026rsquo;t know what to do with it, the metric is useless.\nHere\u0026rsquo;s a suggested implementation roadmap:\nFinally, here are some things to watch out for:\nMetric overload: Don\u0026rsquo;t try to measure everything. Focus on the vital few metrics that drive the most important decisions. Start small and grow.\nPerfection paralysis: Start with imperfect metrics that provide directional guidance. You can refine and improve over time.\nOne-size-fits-all reporting: Different stakeholders need different views of performance.
Customise your reporting for maximum impact. Don\u0026rsquo;t assume you know what your stakeholders need.\nStatic benchmarks: Your performance standards should evolve as your capabilities mature and your business context changes.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"26 October 2025","permalink":"/beyond-legal-11-stop-counting-activities-and-start-measuring-outcomes/","section":"Blog","summary":"","title":"Beyond legal #11: Stop counting activities and start measuring outcomes"},{"content":"Data Protection Day, often seen through a legal lens, is a tremendous opportunity for data protection leaders to ignite a year-long, company-wide culture of data stewardship that goes way beyond compliance.\nData Protection Day, observed annually on 28 January, often triggers a flurry of activity in companies across the world. Instead of viewing it as a one-off, why not reframe it as your strategic launchpad for your data protection agenda for 2026? It\u0026rsquo;s a huge opportunity for data protection leaders to set out their stall to establish, or maintain, a broader data protection mindset that resonates across your company.\nFor many data protection leaders, 28 January is a date in the calendar when they do \u0026ldquo;something\u0026rdquo; every year. What I have seen over the years is that some forward-thinking leaders understand that a single day barely scratches the surface. We\u0026rsquo;re already seeing a trend where companies use the entire week surrounding 28 January, and some even dedicate the full month of January to immersive data protection engagement.
This extended period is typically part of a broader data protection strategy and goes way beyond just ticking a compliance box.\nYour partner in strategic engagement\nAt Purpose and Means, we believe that effective data protection education and awareness is much more than baking cakes and circulating flyers. It\u0026rsquo;s got to be engaging, relevant, and far from dry. We specialise in challenging conventional approaches, which is why November, December and January are often our busiest months, helping our clients plan and prepare their events.\nOur knowledge and experience extend beyond delivering content. We partner with you to:\nDevelop an appropriate theme: Moving beyond generic \u0026ldquo;compliance\u0026rdquo; to a theme that will resonate and align with your company culture and business goals.\nSelect topics that will create interest and impact: Identifying the most critical areas for your company, from basic awareness to role-specific challenges.\nSource speakers for live presentations: Bringing in experts who can grab your employees\u0026rsquo; attention and inform, ensuring your message is not just heard, but internalised.\nCreate bespoke content: Our content development goes far beyond standard presentations.\nTailored content for targeted impact\nOur content is not just a re-hash of last year\u0026rsquo;s material that you may have used:\nCompany-wide data protection awareness: Engaging your workforce with relevant and up-to-date insights presented in a highly visual and interactive manner that sticks. Forget the yawn-inducing bullet points - think interactive modules, gamification, and scenario-based training that makes data protection relevant to everyone\u0026rsquo;s daily work. At past events we\u0026rsquo;ve occasionally maxed out MS Teams capacity with 1,000+ participants.\nContextual, role-based education: Digital marketers need to understand data protection by design and by default in campaigns, not just GDPR articles.
HR professionals require specific guidance on employee data processing, far beyond generic policy statements. IT teams need to dive deep into security controls and data lifecycle management, grounded in practical application. We develop tailored education programmes for departments like HR, Digital Marketing, IT, Product Development, Vendor Management, Legal, Finance and Sales (to name some examples), ensuring the education is directly applicable to their functions.\nWorkshops with your data protection team: This is where internal expertise is tapped into. We facilitate rapid analysis workshops designed to flush out hidden issues, identify critical gaps in your current data protection ways of working, and collaboratively build an action plan for the rest of 2026. These aren\u0026rsquo;t just brainstorming sessions; they are strategic planning initiatives leading to tangible outcomes.\nThe time to act is now\nData Protection Day 2026, and the strategic month of January, is just around the corner. Our calendar for January is slowly filling up with forward-thinking companies keen to kick off their year with purpose. Don\u0026rsquo;t let this opportunity to set a powerful and engaging data protection agenda for 2026 slip away.\nBook a call with Purpose and Means to discuss your needs. Let\u0026rsquo;s start planning now!\n","date":"23 October 2025","permalink":"/data-protection-day-2026-as-your-strategic-launchpad/","section":"Blog","summary":"","title":"Data Protection Day 2026 as your strategic launchpad"},{"content":"Our versatile \u0026ldquo;From Issues to Action\u0026rdquo; workshop approach has many uses. For example, to proactively address regulations like the EU AI Act, DORA, and NIS2, to uncover internal challenges, assess practice maturity, or integrate stakeholder viewpoints. All this can feed into your 2026 operational plans and budgets.\nThese days, GRC leaders are acutely aware of the ever-evolving regulatory landscape.
With major legislation like the EU AI Act, DORA, and NIS2 at the top of their agendas, alongside existing frameworks such as GDPR, the pressure is on to ensure strong compliance and effective risk management.\nFor some, the instinct might be to react to each regulation in isolation, but to truly prepare for 2026 and beyond, using a proactive, structured approach will not just save you time upfront; it will also uncover hidden challenges, align teams, and optimise resources.\nThis is where our \u0026ldquo;From Issues to Action\u0026rdquo; workshop approach offers a powerful solution. Designed for versatility and adaptability, it provides a collaborative framework that goes beyond simple compliance checklists, allowing you to go deep into the core of your GRC operations.\nHow is your team doing?\nIn my experience, the biggest hurdles aren\u0026rsquo;t always going to be external laws and regulations. Internal inefficiencies, communication breakdowns, or misaligned priorities within teams often go unnoticed until things go badly wrong.\nOur workshop is ideal for bringing these \u0026ldquo;under the surface\u0026rdquo; issues to light. Through preparation techniques like creating \u0026ldquo;Rich Pictures\u0026rdquo; (visual representations of current states), participants can visually describe relationships, processes, and even emotions, providing a holistic view of their challenges. The workshop itself enables a shared understanding, allowing your team to identify, group, and prioritise key issues. By then engaging in Root Cause Analysis, the workshop doesn\u0026rsquo;t just list symptoms; it surfaces the fundamental reasons why something isn\u0026rsquo;t working or needs improving, paving the way for sustainable solutions rather than quick fixes.\nImpact analysis and planning for emerging regulations (EU AI Act, DORA, NIS2, etc.)\nThe beauty of this structured approach is in its versatility and adaptability.
While it\u0026rsquo;s great for internal operational improvements, it\u0026rsquo;s equally potent for navigating new regulations. Instead of viewing each new law as a separate challenge, the workshop can be customised to conduct integrated impact analysis. Teams can use the framework to:\nIdentify specific requirements: Break down complex regulatory texts into actionable items relevant to their department or team.\nAssess current state gaps: Using Rich Pictures and issue identification, pinpoint where existing practices will not live up to new regulatory demands.\nPrioritise and plan actions: Develop a clear \u0026ldquo;to-be\u0026rdquo; state, defining solutions, assigning responsibilities, and co-creating roadmaps that feed directly into your 2026 operational plans and budgets.\nAssessing practice maturity\nProper GRC resilience requires continuous improvement across multiple dimensions. Our \u0026ldquo;From Issues to Action\u0026rdquo; framework facilitates a comprehensive practice maturity assessment, examining:\nWays of working: Are your processes and procedures efficient, clearly defined and understood?\nTools and technology: Are your existing systems adequate to support evolving GRC needs?\nPeople and organisation: Do your teams have the right skills and structures to meet future demands?\nInformation and data: What changes may need to be made to reports, contracts, metrics, or the overall data and information needs of your company?\nBy analysing these pillars, the workshop helps you identify strengths to build upon and weaknesses to address, and define the work packages needed to accomplish this.\nIntegrating diverse stakeholder viewpoints\nGRC is rarely the sole responsibility of one department. Data protection, security, and AI governance touch every part of a company. Our workshop structure has \u0026ldquo;cross-functional collaboration\u0026rdquo; built-in.
By bringing together diverse stakeholders, from legal and IT to business units, the process ensures that all viewpoints are aired, conflicts are openly discussed, and solutions gain broad support.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"7 October 2025","permalink":"/grc-leaders-how-our-workshop-approach-can-help-you-plan-for-2026-and-beyond/","section":"Blog","summary":"","title":"GRC leaders: how our workshop approach can help you plan for 2026 and beyond"},{"content":"Most data protection training fails not because employees don\u0026rsquo;t care about the subject, but because it\u0026rsquo;s often designed by lawyers for lawyers. It\u0026rsquo;s time to learn from the masters of human psychology - advertisers - and start moving employee engagement away from compliance checkboxes into a creative challenge.\nIn this tenth post of my \u0026ldquo;beyond legal\u0026rdquo; series, I\u0026rsquo;m addressing what might be the most overlooked competency in data protection leadership. It’s the ability to change people\u0026rsquo;s behaviour and not just inform them about policies. As a data protection leader, you’ve built and refined your governance framework (post #6), integrated it with SDLC processes (post #9), and hired that data protection engineer (post #8). But none of it matters if your workforce still logs personal data in plain text, unlawfully deploys tracking pixels, or writes your privacy notices in legalese.\nTraditional data protection training is broken.\nIt’s generic, expensive and treats engagement as an information transfer issue when it\u0026rsquo;s actually a behavioural change challenge.
And the evidence is everywhere. Completion rates look impressive on dashboards, and everyone seems to display certification post-nominals on their LinkedIn profiles, yet real-world violations continue.\nThe root causes are multiple. Generic e-learning modules that people click through whilst checking emails, the same tired PowerPoints that put people to sleep, conference rooms bursting at the seams with people struggling to pay attention, or lazy trainers simply reading from their speaker notes.\nI\u0026rsquo;ve always believed we need to learn from sectors that excel at capturing attention and driving behaviour change. That\u0026rsquo;s why for years, I\u0026rsquo;ve looked to advertising - an industry that\u0026rsquo;s mastered the art of cutting through noise, creating memorable messages, and most importantly, getting people to act.\nThe attention economy reality\nIn corporate environments, you need to realise what you\u0026rsquo;re up against. Your carefully crafted data protection training is competing for attention with:\nIT security awareness (e.g. phishing simulations, password policies, social engineering training)\nHR initiatives (e.g. DEI training, performance management, wellness programmes)\nBusiness updates (e.g. quarterly results, strategy changes, new product launches)\nOperational training (e.g. new tools, process changes, compliance requirements)\nPersonal distractions (e.g. social media, family concerns, career development)\nUnfortunately, most of your colleagues have already been conditioned to view \u0026ldquo;compliance training\u0026rdquo; as something to endure, not engage with. They\u0026rsquo;ve learned to hack the system: click through slides, pass the quiz on the second attempt, get the certificate, move on.\nThe advertising industry solved this exact problem decades ago. They learned that in an attention-saturated environment, you need to be different, memorable, and focused on genuine value exchange.
They learned that changing behaviour requires understanding psychology rather than just presenting information.\nLearning from the masters: what advertising gets right\nDave Trott, one of Britain\u0026rsquo;s most influential advertising strategists for many years, has written many books about creative thinking. On YouTube, you\u0026rsquo;ll find many videos of his lectures and presentations.\nHe has a great way to illustrate how we compete for attention, and the need to be different. In the image below, the zeros (\u0026lsquo;0\u0026rsquo;s) represent messages, communications - it could be a training session, a town hall, or just an email. In the \u0026lsquo;current\u0026rsquo; column, which one is your data protection message? And that\u0026rsquo;s the problem: employees struggle to remember because everything looks the same. So you need to ensure that the \u0026lsquo;X\u0026rsquo; is your message, every time.\nYour message has got to stand out, be memorable, be unmissable, and you need to make this happen sooner rather than later, otherwise your messages will never be different from the rest.\nDave Trott also developed a simple but powerful engagement model: Impact, Communicate, Persuade. I\u0026rsquo;ve been applying this framework to data protection training for years, and I like to think the results speak for themselves in the work I do with corporate clients.\nImpact: Cut through the noise with something unexpected. Advertising creative directors know that if your message looks and feels like everything else, it becomes invisible. The same principle applies to data protection training. If your GDPR awareness session looks identical to every other compliance module, people switch off.\nI once worked with a financial services client who replaced their annual \u0026ldquo;Data Protection Refresher\u0026rdquo; with something called \u0026ldquo;The Privacy Escape Room.\u0026rdquo; Teams had to solve real data protection scenarios to unlock each stage.
Participation went from reluctant compliance to genuine enthusiasm, and more importantly, after the event, people began to take responsibility themselves rather than simply forwarding anything that was “data protection” to the central team.\nCommunicate: Deliver one clear, memorable message per interaction. Advertisers learned long ago that trying to say everything says nothing. Yet most data protection training tries to cover retention policies, international transfers, consent management, and incident reporting in a single session.\nInstead, focus on one key behaviour change. \u0026ldquo;This week, we\u0026rsquo;re learning about the EU Charter of Fundamental Rights in the context of DPIAs.\u0026rdquo; That\u0026rsquo;s it. Master that topic, then move to the next one.\nPersuade: Design for behaviour change, not knowledge transfer. The goal isn\u0026rsquo;t to educate people about GDPR articles so they can master the language of your data protection world. It\u0026rsquo;s to change what they do on Tuesday morning at 0930 when facing a real decision about personal data.\nThe CREATE method: understanding your audience first\nI\u0026rsquo;m also a big fan of Ilse Crawford\u0026rsquo;s design approach. Ilse Crawford is a British design leader who\u0026rsquo;s transformed everything from luxury hotels to healthcare spaces by starting with deep human understanding rather than assumptions. Inspired by her design philosophy, I developed my own approach for data protection called CREATE. Here’s a brief explanation, which I constantly refine and update:\nCollaborate: Get out of your office and actually talk to the people you\u0026rsquo;re trying to influence. I\u0026rsquo;m constantly amazed by how many data protection leaders design training programmes without ever having a proper conversation with a front-line developer, customer service rep, or marketing coordinator.\nAsk them questions: When do you make decisions about personal data?
What stops you from following our procedures? What would actually be helpful versus what we think you need? Where do you see the key risks in your work?\nResearch: Understand the real barriers to behaviour change in your specific context. Is it lack of knowledge, competing priorities, unclear procedures, or something else? You can\u0026rsquo;t design effective interventions without diagnosing the actual problem, and the root cause of that problem.\nOne client discovered that their marketing team wasn\u0026rsquo;t ignoring data protection requirements out of defiance. It was simply that they couldn\u0026rsquo;t find the DP review process in their project management tool. The solution wasn\u0026rsquo;t more training about data protection principles. We ended up refining how this was embedded in their operational procedures, and the tool itself.\nExplore: This is where creativity becomes essential, especially when budgets are tight. Look beyond your industry for inspiration. How do retailers train staff? How do airlines ensure safety compliance? How do restaurants maintain food hygiene standards? An area where employee behaviour is almost second nature, and it\u0026rsquo;s all around us, is health and safety. Your H\u0026amp;S colleagues worked hard for years to embed their practices into the fabric of your company. How did they do that? Talk to them and get inspired.\nI\u0026rsquo;ve also borrowed techniques from cooking shows (step-by-step demonstrations), sports coaching (practice scenarios with immediate feedback), and, a personal favourite, children\u0026rsquo;s television (repetition, visual cues, memorable characters).\nActivate: Start small and test different approaches. Just like advertisers A/B test creative concepts, you should pilot different training formats with specific teams before rolling out company-wide.\nTransform: Create systems that reinforce new behaviours beyond the initial training moment.
This might include peer networks, visual reminders in workflows, or integration into existing performance management processes.\nEvolve: Continuously adapt based on what actually works, not what you think should work.\nSFIA competencies you need (or need access to)\nAs with previous posts, I want to anchor this in measurable competencies and I’m still using the SFIA skills framework. The skills required for effective employee engagement certainly go well beyond traditional legal or compliance expertise:\nLearning design and development (TMCR): Adult learning principles, instructional design, and understanding how people actually absorb and retain new behaviours. This isn\u0026rsquo;t about presenting information; it\u0026rsquo;s all about facilitating genuine behaviour change.\nOrganisational change management (CIPM) and Organisational change enablement (OCEN): Psychology of behaviour change, resistance management, and understanding how new practices become embedded in organisational culture.\nUser experience analysis (UNAN): Designing experiences that people actually want to engage with rather than endure. This includes understanding journey mapping, pain points, and motivation triggers.\nAlso, while not explicitly in SFIA, you need access to creative and design thinking competencies, as well as communication for message crafting, storytelling, and understanding how to cut through noise in over-communicated environments. You need to compete with every other corporate message for attention and win, whether that\u0026rsquo;s through internal marketing teams, external agencies, or developing these skills yourself.\nLow-budget, high-impact approaches\nThe good thing about applying advertising principles to employee engagement is that creativity often trumps budget. Some of my most successful interventions have cost virtually nothing; here are a few examples:\nStorytelling over bullet points: Replace policy documents with case studies that people can relate to.
\u0026ldquo;Sarah in marketing faced this exact dilemma last week\u0026hellip;\u0026rdquo; is more engaging than \u0026ldquo;Article 6 requires lawful basis for processing.\u0026rdquo;\nVisual thinking: Create simple infographics, decision trees, or hand-drawn sketches that people can reference quickly. A well-designed one-page visual guide has more impact than a 50-page policy document.\nPeer-to-peer learning: Identify natural champions in each department and train them to cascade learning to their colleagues. People trust their immediate peers more than corporate communications.\nIntegration over interruption: Embed data protection considerations into existing meetings, tools, and processes rather than creating separate training events that compete for calendar space.\nGamification: This doesn\u0026rsquo;t mean building expensive apps. Simple competitions, progress tracking, or team challenges can drive engagement without significant investment.\nMeasure what matters\nMany companies get this spectacularly wrong. They measure training success by completion rates, quiz scores, and certificate distribution. These metrics tell you nothing about whether behaviour has actually changed.\nAdvertisers measure brand recall, purchase intent, and actual sales conversion. Similarly, you should look to measure:\nIncident reduction: Are people making fewer data protection-related mistakes?\nProcess adoption: Are new procedures actually being followed?\nQuality improvement: Are privacy notices getting better? Are DPIAs more thorough?\nProactive behaviour: Are people asking data protection-related questions before problems occur?\nPeer influence: Are champions spreading good practices without prompting?\nFinally, remember that employee engagement is ultimately about narrative. People don\u0026rsquo;t change behaviour for abstract compliance reasons. Change occurs when they understand how it connects to something they care about. This is an area I love.
Changing legally-oriented statements to narratives that people embrace and can relate to.\nThe advertising industry learned long ago that people buy emotionally and justify rationally. The same principle applies to data protection training. People need to feel why data protection matters before they\u0026rsquo;ll consistently act on what they\u0026rsquo;ve learned.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"2 October 2025","permalink":"/beyond-legal-10-the-battle-for-minds-why-your-data-protection-training-needs-an-advertising-makeover/","section":"Blog","summary":"","title":"Beyond legal #10: The battle for minds - why your data protection training needs an advertising makeover"},{"content":"If you are spending most of your time reviewing contracts and updating RoPAs instead of influencing how your software and products get built, you need to get your hands dirty with your SDLC.\nIn the last post, I suggested it would be worth your while recruiting a data protection engineer. Let’s say you did that, and we fast-forward to the engineer’s first day. That person is now sitting at their desk, laptop open, ready to go. But have you cleared the way so they can add value from day 1?\nMany data protection teams only get involved in projects after the coding is done. The privacy notices get reviewed, the RoPA gets updated, but meanwhile one of the software development teams deploys a feature that stores indefinite user session data. 
And by the time anyone with data protection competences gets involved, it\u0026rsquo;s the same old story about retrofitting controls into a system that was never designed for them: the costs involved, the impact on the promised go-live date, and all the expectations that have been set with leadership.\nSo someone decides to ignore your concerns and “accept the risk” (which never gets documented anyway).\nYou hired the data protection engineer to influence how software gets built, so if you’re not careful, you may have just hired an expensive compliance reviewer who\u0026rsquo;ll spend their time writing post-mortem reports about data protection violations that could have been prevented.\nThis is why integrating data protection into your Software Development Life Cycle (SDLC) isn\u0026rsquo;t just a nice-to-have; it\u0026rsquo;s the foundational competence that determines whether your data protection engineer becomes a valued enabler or an expensive (and soon-to-be) frustrated pen pusher.\nWhat SDLC integration actually means SDLC integration gives you the opportunity to embed data protection requirements, security controls, and compliance evidence into every stage of software development, from requirements elicitation through to production monitoring. And the good thing for data protection leaders is that it\u0026rsquo;s a repeatable approach that ensures data protection becomes an engineering deliverable and not a legal afterthought or bolt-on.\nI believe there are too many companies that still treat data protection as something that happens \u0026ldquo;around\u0026rdquo; software development, which often manifests in wonky implementations. Common examples include long-winded privacy notices that are far from clear and concise, RoPAs that are too high level, rarely updated and siloed from the rest of the company, and clunky cookie banners that create user friction and fatigue. 
And then the actual data processing, the retention and deletion mechanisms, the access controls, the dataflow controls - these are engineering decisions that should get baked into code. If your data protection competences aren\u0026rsquo;t influencing these decisions, you\u0026rsquo;re not doing data protection; you\u0026rsquo;re doing “compliance theatre.” It really is the difference between bolt-on compliance and built-in protection.\nThe SFIA competencies you need As with previous posts, I want to anchor this in the SFIA framework because these are measurable and definable skills that you can recruit for or develop from within your company:\n**DevOps:** Your data protection engineer needs to understand continuous integration (CI) and continuous delivery (CD) pipelines, automated testing, and infrastructure as code. They can\u0026rsquo;t influence what they can\u0026rsquo;t access. In SFIA, a specific view has been developed to support DevOps.\nSystems Design (DESN): Translating functional and non-functional requirements, including DPIA outputs and legal requirements, into architectural patterns and technical specifications that developers can implement.\n**Functional testing (TEST)** and Non-functional testing (NFTS): Data protection controls need to be tested like any other functionality. For example, can users export their data, does data erasure actually work, are consent preferences persisted correctly? 
Also testing to evaluate performance, security, scalability and other non-functional qualities against requirements.\nRequirements definition and management (REQM): Converting abstract legal obligations into specific, testable user stories that can be prioritised into sprint backlogs.\nProgramming/software development (PROG): Understanding enough about code to review pull requests, design secure APIs, and spot data protection concerns.\nPortfolio management (POMG): Understanding how data protection requirements flow through your company’s project portfolio governance and approval processes.\nAs a data protection leader, if you don’t have access to people with these competences, you\u0026rsquo;ll remain dependent on others to translate your requirements, creating friction and misunderstanding at every stage.\nGovernance integration Before we look deeper into sprint-level integration, I just want to highlight an important upstream competence that many data protection leaders overlook - project portfolio and project governance integration. Data protection considerations must be addressed at the project initiation stage. You should not wait to think about this during development sprints.\nEvery project initiation document (PID), or project charter, should include mandatory data protection requirements sections. These gates should form part of the criteria for whether a project gets approved, for example, whether the project budget includes addressing these requirements, as well as triggering required data protection resource involvement. 
The considerations could include:\nWhat categories of personal data will be processed?\nWhat are the volumes of data?\nWhat\u0026rsquo;s the nature and risk of the intended processing?\nAre there cross-border transfer implications?\nWhat are the retention and deletion requirements?\nIs a DPIA required?\nWhat data protection engineering patterns will be required?\nI also suggest your company’s project portfolio management processes include a data protection team member on approval boards. This person should have voting authority and the mandate to prevent projects from proceeding without appropriate data protection considerations. I\u0026rsquo;ve seen too many situations where data protection teams are consulted on projects that are already 80% designed, resulting in last-minute bolt-ons being applied that are often ineffective and expensive.\nIntegration sprints To get a bit more practical, here\u0026rsquo;s an example of a 5-week implementation plan with data protection considerations for each stage:\nWeek 1: Preparation\nDays 1-2: Governance integration\nReview your project initiation template (PID) and add mandatory data protection fields\nIdentify which approval boards need data protection representation\nDraft an easy-to-understand one-pager aimed at project managers and their teams\nDays 3-4: Sprint planning Add these data protection-related questions to your sprint planning template:\nWhat personal data does this feature collect, process, or store?\nWhat is the legal basis for this processing, and if consent, how will preferences be managed?\nHow long do we need to retain this data, and how will deletion be implemented?\nAre there any automated decision-making or profiling aspects?\nWhat third parties will have access to this data?\nDoes this create cross-border data flows?\nDays 5-7: Team education\nRun a half-day workshop with your development teams covering contextual data protection by design and by default patterns\nIntroduce threat modeling using a 
framework like LINDDUN\nProvide developers with a “data protection anti-patterns” guide covering common mistakes like logging identifiers, indefinite data retention, reusing data indiscriminately, or covering up personal data breaches\nWeek 2: Automated testing implementation\nDays 8-10: Basic data protection test suite Start with automated tests that validate core functionality, such as:\nCan the system export a user\u0026rsquo;s complete data profile via API?\nDoes user deletion cascade properly through all related data?\nDoes the user’s data persist in cache, backups, or third-party integrations after deletion?\nAre all processing activities traceable and matched against RoPA entries?\nDays 11-12: Security scanning integration\nImplement automated secret scanning in your CI/CD pipeline to catch API keys, database credentials, and hardcoded personal data\nAdd Static Application Security Testing (SAST) rules specifically for data protection violations including personal data in logs or unencrypted sensitive data storage\nDays 13-14: Data classification automation\nImplement automated data discovery tools that can identify personal data in databases, APIs, and log files\nCreate alerts for when new personal data categories are detected in systems\nPeriodic manual validation complements automated tools, as many DLP/discovery tools still generate false positives/negatives.\nWeek 3: Developer workflow integration\nDays 15-17: Pull request data protection checklist Add these mandatory questions to your PR template:\nDoes this code collect, process, or store personal data? If yes, document the data categories, purpose of processing and legal basis\nHave you implemented appropriate retention periods and deletion mechanisms?\nDoes this code log any personal data? If yes, justify why it\u0026rsquo;s necessary and ensure proper redaction\nAre there any hardcoded personal data, API keys, or credentials? 
(Should be caught by automated scanning)\nIs data minimisation respected, and could this feature work with less data?\nDoes this create any automated decision-making that affects users?\nDays 18-19: Privacy-aware code libraries\nCreate reusable components for common privacy and data protection patterns: Consent management and preference persistence\nData export APIs with standardised formats\nSecure logging that automatically redacts personal data\nRetention policy enforcement with automated deletion jobs\nPseudonymisation utilities for analytics datasets\nDays 20-21: Architecture review process\nEstablish data protection review checkpoints in your architecture decision records\nCreate templates that require architects to address data flows, storage locations, encryption requirements, and access controls\nWeek 4: Production monitoring and continuous improvement\nDays 22-24: Monitoring implementation\nSet up monitoring dashboards for unusual data access patterns\nImplement automated alerts for potential data protection violations such as bulk data exports, failed deletion jobs, or processing inconsistencies\nCreate automated reports for data subject request SLAs and completion rates\nMake use of User Behaviour Analytics (UBA) to identify anomalous access to personal data\nExpand monitoring to cover emerging threats, such as shadow IT or unsanctioned data flows\nDays 25-26: Incident response integration\nUpdate incident response procedures to include data protection-specific actions\nEstablish automated personal data breach detection rules in your SIEM system\nCreate runbooks for common data protection incidents like accidental personal data exposure\nIntegrate with change management triggers, so any significant new processing automatically initiates a review\nDays 27-28: Evidence generation\nImplement automated compliance reporting that provides evidence of control effectiveness\nSet up audit trails for all data access, modification, and deletion activities\nCreate 
automated DPIA monitoring that alerts when processing activities change significantly\nWeek 5: Retrospective\nDays 29-30: Evaluation and planning Run a comprehensive retrospective focused specifically on data protection integration:\nWhat worked well?\nWhich gates prevented actual data protection violations?\nWhat automated tests caught issues before deployment?\nHow did developers respond to data protection requirements in their workflow?\nWhat created friction?\nWhere did data protection requirements slow down development unnecessarily?\nWhich data protection and privacy patterns were difficult to implement or understand?\nWhat approval processes created bottlenecks without adding value?\nWhat needs refinement?\nWhich tests produced false positives or missed real issues?\nHow can we make privacy and data protection patterns easier for developers to adopt?\nWhat additional automation would reduce manual data protection reviews?\nKey metrics to track:\nNumber of data protection-related pull request rejections\nMean time to fulfil data subject requests\nData protection incidents caused by code defects\nDeveloper compliance with data protection checklists\nThree common pitfalls to avoid\nPitfall 1: Data protection theatrics\nAdding data protection checkboxes to workflows without enforcing them. If developers can ship code that fails data protection tests, then all you are doing is creating theatre.\nPitfall 2: Over-engineering\nTrying to solve every data protection problem in the first sprint. Focus on your high risk data flows first, perfect the patterns, then expand systematically.\nPitfall 3: Governance bypass\nAllowing \u0026ldquo;urgent\u0026rdquo; projects (or leadership \u0026ldquo;pet\u0026rdquo; projects) to skip data protection reviews. 
Every exception becomes a precedent that undermines your entire integration effort.\nConclusion Compared with earlier posts in this series, this post covers a more “in the engine room” set of skills, and data protection leaders need at least to get an understanding of what is happening behind the scenes. It also represents a fundamental shift from reactive compliance to proactive engineering. Instead of your data protection team being the people who explain why you can\u0026rsquo;t do something, they become the people who help you do it correctly from the start.\nIt transforms conversations from \u0026ldquo;Legal says no!\u0026rdquo; to \u0026ldquo;Here\u0026rsquo;s a privacy-preserving analytics pattern that gives you the insights you need while meeting our retention requirements.\u0026rdquo; That\u0026rsquo;s the difference between being seen as a necessary evil and being valued as a strategic enabler.\nYour data protection engineer shouldn\u0026rsquo;t be reviewing privacy notices and updating RoPAs. They should be reviewing system architectures, designing secure APIs, ensuring that controls are tested in every deployment, and most importantly, sitting on approval boards where technical architecture decisions get made.\nAs a data protection leader, ask yourself this question: can you influence your company’s next product or software release from project initiation through to production monitoring? If not, you need to onboard the competences mentioned above pretty quickly.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. 
Take a look at some of our client cases to get a sense of what we do.\n","date":"23 September 2025","permalink":"/beyond-legal-9-get-to-know-your-sdlc/","section":"Blog","summary":"","title":"Beyond legal #9: Get to know your SDLC"},{"content":"Move beyond generic privacy notices, compliance bolt-ons and theatre. Bring in a data protection engineer who embeds data protection considerations like minimisation, retention \u0026amp; deletion, monitoring and usable controls into your products and services.\nThe next competence I’m recommending in my ‘beyond legal’ series is obvious and unavoidable. It’s data protection engineering. Not as another tickbox, but as the pragmatic and knowledgeable facilitator who turns legal requirements and governance decisions into meaningful solutions that people can actually use and trust. This is especially relevant in companies where data protection is treated as administration rather than design.\nWhy data protection engineering, and why now? As I have harped on about so many times over the years, too many companies treat data protection as a legal issue you solve with paperwork. That dominant mindset unfortunately manifests in many painfully familiar scenarios: opaque, long-winded privacy notices no one reads, clunky cookie consent mechanisms that frustrate users, and template-driven DPIAs and RoPAs that align with supervisory authority guidelines but are often ineffective and of no practical use, to name a few examples. These “legal solutions” create theatre. Loads of documents, lots of generic education, but very little real reduction in risk.\nInstead of asking “are we compliant and how do we prove it?” you might want to flip that question to: “how do we build products that behave correctly from the start?” It’s all about getting things right up front, from the outset, rather than doing the bare minimum and then trying to fix things afterwards.\nWhy now? 
In the past few months I have participated in a few client meetings where the key challenge has been around retrospectively fixing existing business solutions that had zero data protection considerations from the outset. Data protection professionals were not part of the original solution design process aside from a few last-minute reminders - remember the privacy notice, remember to add the solution to the RoPA, update the DSAR intake registry, etc. All afterthoughts.\nIt’s also timely because there’s a general perception that data protection is, on the whole, ‘difficult’ and complex to navigate. I’m not suggesting a data protection engineer will transform things overnight, but a positive effect will be realised if you gather the right people who are qualified to operationalise the often abstract legal requirements.\nThere is also ‘privacy engineering’, which I believe originated in the US. I know NIST has huge resources about the discipline and it seems we’re playing catch-up slightly in Europe, but ENISA published a good guide a few years back, and a preliminary opinion by the EDPS is also worth a read. In my own network I know a few good privacy and data protection engineers who work outside of big tech, and these seem to be primarily located in Belgium and the Netherlands for some reason. To get a full understanding, I can only recommend you take a look at both NIST and ENISA\u0026rsquo;s material as it contains a wealth of information. What I particularly like about NIST\u0026rsquo;s approach is its alignment with quality - a term we rarely hear much about in data protection currently. 
Also, and on a high level, NIST places focus on three primary engineering objectives:\nPredictability: The capability for individuals and system operators to reliably understand how personal data will be processed by a system, so that the system\u0026rsquo;s behaviour matches people’s expectations.\nManageability: The degree to which individuals and system operators can control or influence how personal data is processed, including allowing people to request access, rectification, or erasure of data about themselves as appropriate.\nDisassociability: The ability for personal data to be processed in a way that limits unnecessary association with individuals, allowing data to be used, shared, or analysed without routinely linking back to identities unless necessary.\nI really think these are worthy objectives, which unfortunately many companies do not yet fulfil. A quick question if you are a data protection leader: Hand on heart, how close are you to focusing on similar objectives in your data protection practice?\nI see a data protection engineer as a multi-disciplinary facilitator who can comfortably sit between colleagues from legal, product, UX, software engineering, EA and security, and translate legal requirements into architecture, design and operational controls that actually work at scale.\nDespite everything I write in this ‘beyond legal’ series, I want to repeat that legal teams are indispensable in data protection. They understand and interpret applicable laws and regulations, explain them to the rest of us in business terms, and establish and manage the legal risk framework, among many other things. 
But legal alone can’t design the solutions needed to implement retention and deletion requirements in legacy systems, or embed pseudonymisation into a data model, or embed data protection considerations in a product’s UX so it’s usable instead of threatening.\nIn my world, ‘unity’ is realised when a collection of the needed competences regularly work together at different times depending on the context, risk and difficulty of what’s needed. So, legal, business tech and data protection engineering should work as equal partners in, say, solutioning an intrusive piece of martech to be deployed across multiple jurisdictions. Legal explains the “what,” engineering designs the “how,” and business tech or product makes the “how” part of the user experience.\nSo, data protection engineers certainly can be a great help, but unfortunately there is not an abundance of people with such competences on the job market at the moment. They are a rare talent these days.\nCommon areas where they can add value Data protection engineering is not about drafting longer notices or inventing more wonky compliance artefacts. It’s about making data protection a quality attribute of a product or service, helping translate risk into technical trade-offs. 
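To make one of those engineering tasks concrete: ‘embed pseudonymisation into a data model’ can start as a small, well-reviewed utility. Here is a minimal sketch in Python assuming a keyed-hash (HMAC) approach; the function name, key handling and example values are my own invention for illustration, not a recommended production design.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a deterministic keyed token.

    HMAC-SHA256 returns the same token for the same input and key, so
    records can still be joined across datasets, while re-linking tokens
    to people requires access to the key (held separately from the
    analytics environment).
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Deterministic: same key and identifier always yield the same token.
key = b"example-key-held-outside-analytics"
t1 = pseudonymise("jane.doe@example.com", key)
t2 = pseudonymise("jane.doe@example.com", key)
assert t1 == t2

# Rotating the key unlinks new tokens from historical ones.
assert t1 != pseudonymise("jane.doe@example.com", b"rotated-key")
```

Note that the output is still personal data under the GDPR: this is pseudonymisation, not anonymisation, because whoever holds the key can re-link tokens to individuals.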
On a more technical level, an engineer brings these skills and knowledge areas to a data protection team:\nTurns DPIA and risk outputs into engineering requirements and acceptance criteria that developers can implement and test.\nThreat-models features early to find data protection and privacy risks in data flows, business logic and edge cases.\nDesigns architectures that minimise data collection, isolate sensitive data, and make deletion and retention possible.\nBuilds developer libraries, CI/CD (continuous integration and continuous delivery/deployment) checks and policy-as-code that make data protection the default path in development workflows.\nDesigns usable transparency and control interfaces so people can make meaningful choices without being bullied or shamed.\nProvides monitoring and remediation tooling so you can detect misuse, investigate quickly, and fix systemic causes rather than paper over incidents (have you ever done that?).\nParticipates in incident response and forensics that lead to systemic change, not just legal notifications.\nAs I have done in all my earlier posts, I want to anchor competences using the SFIA skills matrix but unfortunately ‘data protection engineering’ does not (yet?) feature in the framework. I’m not sure whether it’s a US v UK thing because SFIA originates in Britain. 
If you recognise the shortcomings in your data protection work and you’re building capability, the following skills are relevant and map to the SFIA framework, so you can use this as a guide to who to engage with, recruit or develop:\nSystems design (DESN) - translate outputs from DPIAs and other risk assessments into modular, testable architecture.\nInformation security (SCTY) - design encryption, key management and access models aligned to data protection requirements and risks.\nInfrastructure design (IFDN) and Infrastructure operations (ITOP) - ensure consistent controls across cloud/on-prem/hybrid.\nRequirements definition \u0026amp; management (REQM) - turn DPIA outputs into clear acceptance criteria for backlogs.\nProgramming / software development (PROG) - implement privacy-preserving patterns and libraries.\nDeployment (DEPL) and Release Management (RELM) - embed privacy gates into CI/CD and automate runtime enforcement.\nUser experience analysis (UNAN) - design and develop usable and meaningful user interfaces and control flows that actually work and avoid dark patterns.\nData management (DATM) - design lifecycle, minimisation and retention enforcement into schemas and flows.\nData science (DTSD) - apply anonymisation and statistical reasoning to safe analytics.\nQuality assurance (QUAS) and Audit (AUDT) - provide technical evidence of implemented controls.\nLearning \u0026amp; development (TMCR) - develop and run contextual education, training and awareness.\nSafety assessment (SFAS) and Safety engineering (SFEN) - especially relevant for AI product safety.\nThreat modelling (THIN) - a proactive process to identify and address potential data protection risks and violations within a system or application. Note: this competence does not appear in the base SFIA framework, but I have seen it in some frameworks that have been adapted by specific sectors.\nAs I mentioned earlier in the post, I see data protection engineers as facilitators. 
They’ll reduce friction for product teams by offering workable, low-friction technical solutions and by translating legal constraints into (sometimes) innovative alternatives. They’ll prototype lower-risk options, embed data protection into developer workflows, and make it easy for teams to do the right thing, rather than force them to implement clunky and wonky compromises.\nThe result? If allocated correctly, their involvement will reduce the need for last-minute legal rewrites, lower rework costs and, most importantly, reduce the likelihood of real-world harm.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"18 September 2025","permalink":"/beyond-legal-8-why-your-next-hire-should-be-a-data-protection-engineer/","section":"Blog","summary":"","title":"Beyond legal #8: Why your next hire should be a data protection engineer"},{"content":"Many data protection leaders view their RoPA merely as a compliance mechanism, often leading to outdated, siloed information and missed opportunities. A strategically positioned RoPA can be your company\u0026rsquo;s indispensable data map, vital for understanding your data landscape.\nThis is the 7th post in my \u0026ldquo;Beyond Legal\u0026rdquo; series, where I challenge the view held by many that data protection is solely the domain of legal professionals. So far, I\u0026rsquo;ve explored the need to upgrade competences, understand why a data protection leader\u0026rsquo;s role is complex, why some leaders need to elevate their risk game, and importantly, why engaging with your data management and data governance colleagues will be highly beneficial. 
In the last post I covered the \u0026lsquo;great governance misunderstanding\u0026rsquo;.\nIf these posts have resonated with you, you may share my viewpoint that to really lead data protection in a company, you need to move beyond theoretical compliance. It means getting your hands dirty, understanding the operational realities of data within your company. And the absolute first, non-negotiable step on that path? Knowing what data you have, where it lives, how it moves, and who touches it. Yes, we are talking Records of Processing Activities (RoPA). This is seen from a GDPR perspective, but if you\u0026rsquo;re a global player you\u0026rsquo;ll need something more extensive than what Article 30 dictates, and it will make good sense to do so. Many companies have seen benefits in investing in data maps that are more dynamic and have a wider footprint across the organisation. Typically these companies have greater maturity in both data protection and data management.\nBut hang on: before you, or your external law firm, decide to start making that RoPA, how do you know whether something similar doesn\u0026rsquo;t already exist in your company, or whether key processing information already resides in different repositories?\nI think you should ask this question no matter what kind of tool you are thinking of investing in, and the question will be answered by following the structured approach further down this post.\nThe perception of RoPAs #For too long, the RoPA has been viewed as a necessary evil, or a necessary-burden type of \u0026lsquo;document\u0026rsquo;, mandated by GDPR Article 30, often drafted in a spreadsheet and then quickly forgotten about. Or you may have invested in an automated solution that offers lots of other features. This fancier RoPA still gets forgotten about or reluctantly updated a few times a year, or even just once a year. 
This perception profoundly limits its strategic value and, worse, relegates data protection to a siloed (often legal) function.\nI think there\u0026rsquo;s a huge opportunity to change things for the better if you just involve the right competences. Your RoPA should be the nerve centre of your data protection function or office, informing so many other aspects of your work.\nAs I always do in these posts, I want to align competences with the SFIA skills framework. The competencies required here are less about interpreting GDPR articles and more about detailed Data Management (DATM), Business Situation Analysis (BUSA), Records Management (RMGT) and Supplier Management (SUPP), and there\u0026rsquo;s a fair amount of planning, follow-up and reporting, so I also suggest Project Management (PRMG) competences are needed.\nThe symptoms of a troubled RoPA #The state of your RoPA is a good indicator of how structured and controlled you were in establishing it in the first place. 
If corners were cut, and the right people weren\u0026rsquo;t involved at the beginning, the symptoms are usually quite obvious:\nYour RoPA is not regularly maintained and is out of date.\nResponsibility for its maintenance falls solely on you and your department, reinforcing the idea that it\u0026rsquo;s \u0026ldquo;just a privacy thing.\u0026rdquo;\nInformation within your RoPA is sometimes duplicated, often in slightly different forms, in other repositories across your company.\nGovernance for your RoPA is non-existent.\nYou do not have an operational budget allocated for the RoPA\u0026rsquo;s maintenance, improvement, or integration.\nYou are only using a fraction of its functionality, probably just the bare minimum for compliance.\nIt is not fully integrated with key data systems in your company, making it difficult to pull real-time, accurate information.\nYou can\u0026rsquo;t locate the original business case because there probably wasn\u0026rsquo;t one in the first place.\nIf any of these symptoms sound familiar, you are far from being alone, but as I wrote above, it is possible to rescue the situation. That begins with you acknowledging how things are. Yes, these are hard truths.\nFoundations of a RoPA that is fit for purpose #To ensure your RoPA is fit for purpose in your company, in your business context, don\u0026rsquo;t be tempted to purchase the latest tool on the market, or the tool someone in your network recommends to you. Just because a tool works in company A or B, it does not mean it will work well in yours. I suggest working through these steps to avoid making costly mistakes and being stuck with the wrong tool for years to come:\nGather the right competences to conduct the following steps, and bring together the key stakeholders across the company. 
Be sure to include, as a minimum: Data management\nRecords management\nSourcing/procurement\nCompliance\nRisk\nInfosec\nFinance\nEnterprise architecture\nLegal\nConduct a feasibility study including functional and non-functional requirements elicitation\nIf you don\u0026rsquo;t have the necessary competences, get a Business Analyst on board to help you\nAlign with your company\u0026rsquo;s technology strategy, architectural standards, etc\nScan the market for possible solutions\nFormulate a draft business case that sets out the different options you\u0026rsquo;ve considered\nConduct a structured vendor selection process\nUpdate the business case and request approval\nBy completing these tasks you\u0026rsquo;ll increase the likelihood that you\u0026rsquo;re making the right choice, and that you can justify the needed investment. The first step is extremely important.\nBe sure to understand whether the tool allows you to set relationships across various elements. This is highly relevant to how the records of processing activities are recorded and accessed, so let\u0026rsquo;s get the question I hear asked so often out of the way. How detailed should a RoPA be? There is very little guidance from the data protection regulators about this and I personally find their Excel templates a waste of time and misleading.\nEstablishing a useful, dynamic, and compliant RoPA requires a delicate balance. Too much detail can make it unmanageable. Too high level, and it becomes meaningless. Here’s my suggestion for how to achieve that balance while remaining compliant and in control.\nThe principles Purpose Limitation, Data Minimisation and Lawfulness (i.e. applying one of the six lawful bases) are your key references when determining the level of granularity. 
Define processing activities based on purpose of processing and be aware it\u0026rsquo;s not just a simple copy/paste of your business processes.\nTo illustrate this, I\u0026rsquo;ll provide a couple of examples.\n\u0026ldquo;Employee Payroll Management\u0026rdquo; is too high level. This can be broken down as follows:\nPurpose: Calculation and disbursement of salaries and wages\nDescription: To accurately calculate and pay employees their regular remuneration, including basic salary, overtime, bonuses, and commissions.\nPurpose: Statutory deductions and contributions\nDescription: To accurately deduct and remit mandatory contributions to government bodies, such as income tax, national insurance/social security, and pension contributions as required by law.\nThere will be a few more purposes - I won\u0026rsquo;t list them here to save time and space.\nAnother example:\n\u0026ldquo;Recruitment Process\u0026rdquo; is too high level. This can be broken down applying the principles I mentioned above as follows (just a couple of examples):\nPurpose: Candidate sourcing and application management\nDescription: To identify potential candidates, receive and acknowledge applications, and manage the initial pool of applicants for a specific role or general talent pipeline.\nPurpose: Candidate assessment and selection\nDescription: To evaluate candidates\u0026rsquo; suitability for a position through interviews, tests, and background checks, and to make informed hiring decisions.\nAlso resist the temptation to specify a purpose like \u0026ldquo;Processing salary data in System A\u0026rdquo; and \u0026ldquo;Processing tax data in System B.\u0026rdquo;\nGoing beyond Article 30 #Many would question why going beyond a legal requirement is a good idea. Surely it\u0026rsquo;s a waste of time and resources? As I mentioned earlier there are tremendous opportunities to get greater control of your company\u0026rsquo;s processing by establishing a data map. 
This certainly should not be the preserve of a legal department because it\u0026rsquo;s an enterprise tool encompassing data management. An effective RoPA should be part of a data map and go deep, capturing a wealth of operational detail that GDPR only hints at:\nData classification \u0026amp; categories: Beyond just identifying \u0026ldquo;personal data,\u0026rdquo; specify whether it\u0026rsquo;s special category data (health, biometric, etc.). Also include data by source – is it provided by the data subject, observed (e.g., website analytics), derived (e.g., credit scores), inferred (e.g., political leaning), or proxy data? This level of detail is vital for understanding risk and obligations.\nVendors \u0026amp; third parties: This section is expansive, moving beyond mere names:\nProcessing Role: Are they a processor, joint controller, or independent controller?\nContact Info: Not just legal, but operational and technical key contacts.\nAgreements: Links to Data Processing Agreements (DPAs) or other contracts.\nBreach notification requirements: Specific requirements and thresholds for third parties\nServices, assets/applications, SLAs: What services do they provide, which of your assets do they touch, and what are the performance expectations?\nVisual data lineage: Moving beyond static spreadsheets, your RoPA becomes a powerful visual tool where data flow diagrams illustrate:\nData across the lifecycle: Where data originates, where it goes, and its transformations.\nVendors: Clearly showing which third parties are involved at each stage.\nData subjects: Who is impacted by each flow.\nAssets: Which technical systems or databases are used.\nInternational transfers: Where data crosses jurisdictional borders.\nExisting controls: What technical and organisational measures are in place. Visualisations are incredibly powerful for spotting gaps, understanding complex relationships, and communicating impact.\nComprehensive contact information: Extend beyond vendors. 
Include any internal group or external individual connected with processing, incident response, or external support: third-party forensics firms, external legal services, specialist data protection and privacy experts. When a personal data breach occurs, every second counts, and knowing exactly who to call, and what their role is, is invaluable.\nPrivacy notice repository: Maintain an overview of all privacy notices, including version control, linked directly to the relevant processing activities. Ideally, it should also be linked to your consent record management tool for a holistic view of transparency and choice (when consent is the applicable lawful basis).\nBreach notification requirements: Beyond general regulations, detail specific sector requirements, variations across jurisdictions (e.g., GDPR vs. APEC countries, or US states), and requirements to notify other parties (controllers, processors). Document varying data subject thresholds for notification. This proactive planning is key for responding to personal data breaches.\nAssets: For each key asset (database, application, system) involved: state its purpose, what business function it supports, and whether it\u0026rsquo;s in-house or vendor-managed.\nControl catalogue: Document controls deployed for each activity:\nResponsibilities \u0026amp; ownership: Who owns the control, who is responsible for its operation.\nTypes of controls: Technical, organisational, physical.\nStatus \u0026amp; effectiveness: Is it implemented? How effective is it?\nRoPA service continuity: What if your RoPA itself is compromised or inaccessible? This critical repository needs its own continuity plan. If your internal systems are paralysed, how will you access this information? 
The same consideration applies to your data breach response plan – it needs to be accessible even in a crisis.\nBusiness benefits of a \u0026ldquo;Strategic RoPA\u0026rdquo; #The true value of a fit-for-purpose RoPA isn\u0026rsquo;t just about avoiding fines. It\u0026rsquo;s about enhancing your business. This is where you, as a data protection leader, must \u0026ldquo;sell\u0026rdquo; the value to the rest of the company, moving beyond the perception that it\u0026rsquo;s \u0026ldquo;only for legal.\u0026rdquo; I believe there are many opportunities that should feature in your benefits case:\nRisk identification: Gain strong visibility into your company\u0026rsquo;s processing operations, enabling proactive identification and mitigation of data protection and security risks.\nUnderstand impact of change: See, on a process and technical level, who and what will be affected by a proposed change, simplifying communication to the various parties across the company.\nReveal redundancy: Identify redundant elements that could be decommissioned, leading to cost savings in systems, storage, and effort.\nSpot security vulnerabilities: Visual data flows and asset inventories highlight potential security weak points.\nEnsure business alignment: Assess if you have the right elements in place to meet business needs and achieve the right balance for strategic goals and objectives.\nCommon language: Establish a common processing language across the company, accommodating jurisdictional variations (e.g., controller/processor in EU, business/service provider in California). This enables clarity and reduces misunderstandings.\nAvoid a silo RoPA #The real danger here is when a data protection leader (perhaps inadvertently) downplays the importance of this operational data, reducing data protection to a standalone, siloed function. 
This can happen by insisting on standalone, often expensive, \u0026ldquo;privacy management tools\u0026rdquo; before adequately understanding existing data management capabilities and integrating with them (see my suggested steps outlined above).\nThis \u0026ldquo;privacy tool first\u0026rdquo; approach, without a thorough understanding of your existing data landscape and the capabilities of your data management colleagues, often leads to isolation. Data protection becomes a niche concern, separate from the broader business and IT strategy, resulting in many of the \u0026ldquo;troubled RoPA\u0026rdquo; symptoms I mentioned earlier. Data has intrinsic value, and downplaying its importance by siloing its protection is detrimental to the entire business.\nBuilding bridges # Establishing the RoPA, or building its required functionality into existing tools, gives you the opportunity to build and maintain relationships across your company, so it is not just a compliance exercise. It\u0026rsquo;s also a tremendous opportunity for you to educate your colleagues in relevant and contextual data protection concepts and principles.\nA few years ago, I worked with a client whose data protection leader had gamified the establishment and population of their RoPA. She had made it into a game with lots of interaction and competition between the different departments that were in scope. This really helped her get the organisation interested and on board. In my view that\u0026rsquo;s a good example of innovation in data protection when you bring people in with a creative mindset.\nBy co-creating a visual and operationally relevant view of data, you\u0026rsquo;ll break down silos. 
You\u0026rsquo;ll move data protection from a perceived constraint to an enabler of business value and strategic growth.\nWithout it, you’re driving blind, always on the back foot, and not in control.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"12 September 2025","permalink":"/beyond-legal-7-you-cant-protect-what-you-are-not-aware-of/","section":"Blog","summary":"","title":"Beyond legal #7: You can't protect what you are not aware of"},{"content":"The failure to differentiate between governance and management is not an academic debate. It is the root cause of companies finding themselves in crises, asking themselves \u0026ldquo;How on earth did we end up here?\u0026rdquo;\nI so wish I could have thought up that diagram at the time as I\u0026rsquo;ve since used it on numerous occasions to explain the \u0026ldquo;why, what and where\u0026rdquo; of governance, as well as the concept of key assets. I think it shows something extremely critical, and that is that governance of any key asset cannot sit in isolation. In the diagram you can see governance of financial assets, HR assets and Information and IT assets, among others. Now remember, this diagram is over 20 years old, from when AI governance was not \u0026ldquo;a thing.\u0026rdquo; Towards the end of this post you\u0026rsquo;ll find another diagram that shows where AI governance and (as featured in post #5) data governance belong.\nA key takeaway here is that you can\u0026rsquo;t just implement AI governance without doing some as-is analysis of existing governance structures. 
This is an upfront task: you cannot design and implement a governance framework for AI without knowing how it needs to slot in, where overlaps must be avoided, and so on.\nThese days many companies are struggling on parallel tracks with taming the power and expectation of AI, and the ongoing challenge of data protection, and unfortunately the same confusion has re-emerged. You are no doubt aware of the flood of AI Governance certifications and frameworks. On LinkedIn, leaders share insights about \u0026ldquo;governing their data.\u0026rdquo; But what they\u0026rsquo;re almost always describing is management rather than governance (if you look carefully enough).\nIt\u0026rsquo;s not just a case of semantics. To use a naval analogy, it’s the difference between steering the ship and stoking the engine. And unfortunately right now, I get the impression that many ships are sailing full steam ahead with no one on the bridge asking where they\u0026rsquo;re going.\nThe boardroom versus the engine room Let\u0026rsquo;s look at what was drummed into me back in 2012, because I find it\u0026rsquo;s crystal clear:\nGovernance is the domain of the board and executive leadership. Its purpose is to Evaluate, Direct, and Monitor:\nEvaluate: They assess stakeholder needs, the business environment, and strategic options. They ask things like: \u0026ldquo;What are our goals? What is our risk appetite? What are our ethical red lines?\u0026rdquo;\nDirect: They set the strategic direction through prioritisation and decision-making, allocating resources to align with that vision. They say, \u0026ldquo;This is the direction we will take. These are our priorities.\u0026rdquo;\nMonitor: They oversee the company\u0026rsquo;s performance and compliance against the direction they set. They ask, \u0026ldquo;Are we achieving our objectives? Are we operating within our stated principles?\u0026rdquo;\nManagement is the responsibility of the operational layers of the company. 
Its purpose is to Plan, Build, Run, and Monitor.\nPlan, Build, Run: They take the strategic direction from the governance body and create the plans, build the solutions, and run the day-to-day activities to achieve the company\u0026rsquo;s objectives.\nMonitor: They monitor the performance of processes and services, reporting results back up to the governance body.\nIn simple terms, governance asks \u0026ldquo;Are we doing the right things?\u0026rdquo; while management asks \u0026ldquo;Are we doing things right?\u0026rdquo; Governance is about stewardship whereas management is about execution. (Of course, the actual terms will vary depending upon the governance framework you\u0026rsquo;re using).\nData protection governance In the data protection space, this distinction is also important.\nData Protection Governance is when the board might discuss (obviously depending on context):\n\u0026ldquo;How does our approach to B2B client data protection support our brand promise of being a trustworthy partner?\u0026rdquo;\n\u0026ldquo;What is our corporate risk appetite concerning the use of personal data for new product development?\u0026rdquo;\n\u0026ldquo;Are we investing sufficiently in data protection to use it as a competitive advantage in our markets?\u0026rdquo;\n\u0026ldquo;How do we respond to global geopolitical events that impact our DEI initiatives?\u0026rdquo;\nWhereas Data Protection Management is more around:\nOperationalising a process to handle data subject requests\nImplementing a tool to support GDPR Art. 30 requirements for a RoPA\nEnsuring contextual education and training has ongoing focus across all relevant groups of employees\nEmbedding triggers in operational processes and procedures to prompt DPIA considerations\nBoth are essential. 
But a company that excels at processing data subject requests without board-level discussion around data ethics is a ship with a highly efficient engine that could be sailing in circles.\nThe certification trap Now, let’s talk AI, where the speed and scale of AI development have exacerbated the governance/management confusion and made things quite dangerous, especially when you consider that some of the large global certification bodies are already getting it wrong. A controversial statement, I know, but the potential impact is that hundreds, if not thousands, of companies around the world are implementing \u0026ldquo;AI governance\u0026rdquo; that will ultimately fail.\nFor example, I believe it is incorrect to label training on AI risk management, covering things like bias testing, model validation, and threat modelling, as \u0026ldquo;AI Governance.\u0026rdquo; This is worrying, and a critical error, because that is AI Management. It\u0026rsquo;s the \u0026ldquo;doing things right.\u0026rdquo; They are all extremely important activities, but they are not governance.\nProper AI Governance occurs when the board and C-suite tackle fundamental questions that have no easy technical answer:\nEvaluate: \u0026ldquo;Should our company use generative AI in roles that were previously human-centric, like customer support or therapy? What are the ethical implications for our customers and society?\u0026rdquo;\nDirect: \u0026ldquo;We will not develop systems that create deepfakes for political advertising, regardless of legality or profitability. This is our ethical boundary.\u0026rdquo;\nMonitor: \u0026ldquo;Is our use of AI creating unforeseen societal impacts? Are the outcomes aligning with the values we set out at the beginning?\u0026rdquo;\nWhen a certification course teaches you how to implement a fairness toolkit for a machine learning model, it\u0026rsquo;s teaching you management. 
When a board debates whether to deploy that model in a high-risk scenario like credit scoring or hiring in jurisdictions where strong AI regulation exists, that is governance.\nWhat many companies are unfortunately doing is creating a false sense of security that allows the execs to believe governance is \u0026ldquo;handled\u0026rdquo; by a technical team, when in fact, no one is steering the ship.\nThe competency to govern As in previous posts in this series, I want to anchor specific non-legal competences because this really is the point behind my \u0026ldquo;beyond legal\u0026rdquo; series. What I have described so far is not just a theoretical model. It\u0026rsquo;s reflected in professional skills frameworks including the SFIA framework that I\u0026rsquo;ve referenced often in this series so far. SFIA defines a specific high-level skill called Governance (GOVN).\nIt\u0026rsquo;s worthwhile taking a look at the description for GOVN, as it includes:\n\u0026ldquo;Directs the definition, implementation and monitoring of the governance framework to meet organisational obligations under regulation, law, or contracts.\u0026rdquo;\n\u0026ldquo;Provides leadership, direction and oversight for governance activities. Integrates risk management into frameworks, aligning with strategic objectives and risk appetite.\u0026rdquo;\n\u0026ldquo;Secures resources required to execute activities to achieve the organisation’s governance goals with effective transparency.\u0026rdquo;\n\u0026ldquo;Provides assurance to stakeholders that the organisation can deliver its obligations with an agreed balance of benefits, opportunities, costs and risks.\u0026rdquo;\nThis is the language of direction-setting, strategic alignment, and stakeholder influence. It is distinct from management skills like Project Management or Business Process Improvement. The board will not be concerning themselves with wrangling or preparing data, or packaging models.\nGovernance requires a different mindset. 
A mindset that is focused on long-term value, ethics, and accountability, not just project tasks, or how to conduct a fundamental rights assessment.\nAs you may have gathered from my posts over the years, I like to visualise, and a model I often use in my workshops illustrates that corporate governance should be the encompassing framework, with governance of corporate assets within that. Management executes within those frameworks and provides feedback (typically through monitoring and reporting) that allows the governance body to re-evaluate and re-direct as needed. My own model resembles the diagram below, but to give credit where it\u0026rsquo;s due, you\u0026rsquo;ll find this diagram in \u0026ldquo;Defining organizational AI governance\u0026rdquo; authored by Matti Mäntymäki, Matti Minkkinen, Teemu Birkstedt \u0026amp; Mika Viljanen. You can download the article here: https://link.springer.com/article/10.1007/s43681-022-00143-x#Sec2\nAgain, it\u0026rsquo;s an excellent diagram that speaks volumes.\nThe failure to differentiate between governance and management is not an academic debate. It is the root cause of companies finding themselves in crises, asking themselves \u0026ldquo;How on earth did we end up here?\u0026rdquo; Mind you, the answer could well be that the managers were expertly executing a strategy that the governors never consciously set!\nConclusion It\u0026rsquo;s time for many governance leaders to look at themselves in the mirror and recognise their strengths and shortcomings, especially if they see their roles as being very challenging or difficult, or have never established governance in a company before. Acknowledging that you require help from others will avoid failure later on. As I say often, data protection and AI governance are both team sports.\nSo, if you are up for it, I challenge you to ask this in your own company. Who is doing the governing for data and AI? 
Who is asking the difficult, strategic \u0026ldquo;should we\u0026rdquo; questions?\nIf the answer is \u0026ldquo;a legal assistant\u0026rdquo; or \u0026ldquo;our lead data scientist,\u0026rdquo; then it\u0026rsquo;s pretty certain that you don\u0026rsquo;t have a governance function, you have managers. And, as mentioned earlier, while their work is vital, they are managing the \u0026ldquo;how.\u0026rdquo; The responsibility for the \u0026ldquo;what\u0026rdquo; and the \u0026ldquo;why\u0026rdquo; belongs much closer to the boardroom.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"3 September 2025","permalink":"/beyond-legal-6-the-great-governance-misunderstanding/","section":"Blog","summary":"","title":"Beyond legal #6 : The great governance misunderstanding"},{"content":"Data protection leaders must evolve beyond a purely legal and compliance-based role by familiarising themselves with the principles of data management and data governance. 
To succeed, they must collaborate with technical and business teams, shifting their narrative from risk avoidance to one that enables innovation and builds trust as a competitive advantage.\nIn this fifth post of my beyond legal series, I will outline some more essential non-legal competencies a data protection leader needs, explain how to engage with the teams that hold these skills, and also provide a few examples of the kind of narratives required to change the tone of the conversations you\u0026rsquo;ll need to have with them.\nMany years ago I worked with a bloke whose most-used expression was \u0026ldquo;\u0026hellip;we\u0026rsquo;re building on sand.\u0026rdquo; He was head of the server build team in a large financial services company and very mindful that his work was highly dependent on having a firm foundation in place. He was quality-conscious, and would often use this expression in project team meetings, especially when we would get status on open issues in the project issues log. He was 100% correct.\nI think of that expression often when hearing data protection leaders taking a minimal compliance-based approach to data protection or AI governance. Attempting to place any kind of framework on top of poor data practices is like building on sand.\nThe data protection leaders who understand this tend to be more quality-conscious and have a higher likelihood of succeeding.\nBreaking free from the \u0026ldquo;Office of No\u0026rdquo; As I allude to often in my posts, in many companies the data protection team is perceived as the team that says \u0026ldquo;no\u0026rdquo; to innovation, a necessary evil, a cost centre only focused on avoiding fines.\nThe good news is, this view is changing, but if this label persists in your company, it can be highly detrimental, especially if you\u0026rsquo;re working in a data-driven or data-dependent company. 
You are going to have to change that perception and become a data protection leader who is seen as a strategic partner who understands that well-managed data is non-negotiable.\nThe competencies you need to embrace\nLet\u0026rsquo;s begin with some definitions, and I\u0026rsquo;m sure from a data protection perspective you\u0026rsquo;ll immediately see some connections. In this post I am turning to DAMA rather than SFIA because I personally feel DAMA is the primary reference when talking all things data.\nThe DAMA definition of data management is: \u0026ldquo;Data Management is the development, execution, and supervision of plans, policies, programs, and practices that deliver, control, protect, and enhance the value of data and information assets throughout their life cycles.\u0026rdquo;\nIn plain English, this means data management is the professional, end-to-end discipline of actively managing data as a valuable business asset to make it useful, safe, and increasingly profitable.\nDAMA\u0026rsquo;s definition of data governance is \u0026ldquo;\u0026hellip;the exercise of authority and control (planning, monitoring, and enforcement) over the management of data assets,\u0026rdquo; which, in plain English, is about using power and rules to manage your company\u0026rsquo;s data effectively, ensuring it\u0026rsquo;s well-organised, kept safe, used correctly, and accessible to those who need it.\nNow, it\u0026rsquo;s worth mentioning at this point the difference between governance and management and the need to differentiate the two. 
Mark Thomas wrote a great post about this a while back that really does highlight the misuse and lack of understanding of the word governance by some organisations.\nIn a nutshell:\nData Management is the doing, the hands-on work of managing the data lifecycle.\nData Governance is the ruling, the leadership, decision-making, and oversight that ensures the work is done right.\nI will also cover this misuse, building on Mark\u0026rsquo;s post, in the coming month or two.\nWhy these competencies are non-negotiable for data protection leaders Compliance will be achieved if you focus on good data practices, but it\u0026rsquo;s not the only objective. Understanding data management and data governance allows a data protection leader to see the upstream processes that create downstream risks. You can\u0026rsquo;t effectively conduct a DPIA if you don\u0026rsquo;t understand the data life cycle or the rules governing its use. You will continually struggle to fulfil data subject requests, and you will always struggle to maintain your RoPA - to name a few examples.\nSo much of what we need to achieve in data protection is dependent on data management (DM) and data governance (DG). I think the clue is in the word data :-)\nTo gain credibility you must speak the language of the teams If you haven\u0026rsquo;t already done so, you will need to build good relations with the DM and DG teams, and a data protection leader who only speaks in riddles of legal articles will not be taken seriously by data engineers, data scientists, and product managers.\nDiscussing \u0026ldquo;data retention schedules\u0026rdquo; or \u0026ldquo;data stewardship models\u0026rdquo; shows you understand their world. This builds trust and turns you into a valued collaborator rather than someone who is more interested in dotting \u0026lsquo;i\u0026rsquo;s and crossing \u0026rsquo;t\u0026rsquo;s. I therefore highly recommend familiarising yourself with a framework like DAMA. 
I\u0026rsquo;m not suggesting you become certified but there are some good YouTube videos that give you the basics, and enough knowledge that you can begin to join the dots of what you need to achieve from a data protection perspective.\nA downside I often see of a strongly legal approach is that it is reactive, e.g. responding after a new law or regulation is passed or a complaint is lodged. Once you can see the connections between your world and the world of DG and DM, you can begin to become more proactive and less reactive. In other words, you are supporting principle 1 of Privacy by Design (PbD), and mastering these competencies will allow you to embed PbD (or, from a European perspective, the broader discipline of data protection by design and by default) into the data protection framework you need to establish. In your own work, get things right the first time, and prevent future legal issues and expensive re-work. This is also at the heart of quality management - a term that is unfortunately rarely mentioned in legally-oriented data protection circles.\nMoving from an Ivory Tower to the Trenches\nA data protection leader is a hub, not an island. Your success really does depend on your ability to connect with and influence those with deep technical and business expertise. Now I acknowledge every company is different and the differences are very important to understand, but in many cases you\u0026rsquo;ll find some of the roles I\u0026rsquo;m mentioning below in your company, albeit with slightly different titles, and these are listed in no particular order of importance. Again, it\u0026rsquo;s just a few examples I\u0026rsquo;m listing here.\n**Chief Data Officer or Head of Data:** This individual is your natural ally because they are focused on leveraging data as an asset. Your work can help them do it responsibly and you need to convince them of that.\n**IT and systems architects:** They design and manage the infrastructure where data lives. 
Engage with them on data life cycle management, implementation of security controls and pseudonymisation techniques, for example.\n**Product development teams:** Build their trust and you might get a seat at the table during the ideation phase, rather than the night before go-live. Help them innovate with privacy-enhancing features.\n**Marketing and analytics teams:** They normally have the big budgets, which is an excellent reason to get to know them, and they are at the sharp end of data collection and use, deploying emerging, risky technologies; when things go wrong, it goes wrong big time. Help them understand the difference between insightful personalisation and intrusive surveillance.\nAs part of your own personal transformation, you\u0026rsquo;ll need to get in the habit of:\nBecoming a problem-solver: Don\u0026rsquo;t just identify problems. Ask, \u0026ldquo;What business outcome are you trying to achieve? Let\u0026rsquo;s find a compliant way to get there.\u0026rdquo;\nUsing their tools and techniques: Participate in their agile sprints, join their project kick-offs, and learn to read (and eventually make) a basic data flow diagram. Show up in their environment.\nBeing a translator: Act as the bridge between technical jargon, business goals, and legal requirements. Translate \u0026ldquo;legalese\u0026rdquo; into actionable steps for the teams, rather than vague, abstract requirements.\nYour new narratives - changing the conversation Your language shapes your company\u0026rsquo;s data protection culture. Get rid of the fear-based narratives you may have used in the past and adopt the language of *strategy and trust*. 
Here are a few examples, but of course the most effective ones will be your own, in your company context.\nFrom \u0026ldquo;compliance police\u0026rdquo; to \u0026ldquo;innovation enabler\u0026rdquo; Old Narrative: \u0026ldquo;You can\u0026rsquo;t do that because the GDPR says\u0026hellip;\u0026rdquo; New Narrative: \u0026ldquo;I see what you\u0026rsquo;re trying to build. My job is to help you do that in a way that builds consumer trust and avoids future problems. Let\u0026rsquo;s map out the data flows and find a solution together.\u0026rdquo;\nFrom \u0026ldquo;data as a liability\u0026rdquo; to \u0026ldquo;trust as a competitive advantage\u0026rdquo; Old Narrative: \u0026ldquo;Every piece of data we collect increases our risk profile.\u0026rdquo; New Narrative: \u0026ldquo;Our commitment to ethical data handling is our market differentiator. Consumers choose us because they trust us. Our strong data governance isn\u0026rsquo;t a cost, it\u0026rsquo;s an investment in our brand reputation and customer loyalty.\u0026rdquo;\nFrom \u0026ldquo;doing no more than we need to do\u0026rdquo; to \u0026ldquo;future-proofing the business\u0026rdquo; Old Narrative: \u0026ldquo;We\u0026rsquo;re doing just enough to be compliant with the GDPR.\u0026rdquo; New Narrative: \u0026ldquo;The regulatory and consumer landscape is constantly evolving. I want to align us around strong data practices so we aren\u0026rsquo;t just addressing today\u0026rsquo;s laws; we\u0026rsquo;re creating a resilient data ecosystem that can adapt to whatever comes next.\u0026rdquo;\nTo conclude, it\u0026rsquo;s clear the role of data protection leader has fundamentally changed. Legal expertise gets you a seat at the table, but if you want to succeed as the leader and sit at the head of the table, you need to stop building on sand.
You need to understand and embrace those in your company who also have data in their titles - they will ultimately help you succeed.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"28 August 2025","permalink":"/beyond-legal-5-get-to-know-your-data-management-and-data-governance-colleagues/","section":"Blog","summary":"","title":"Beyond legal #5: Get to know your data management and data governance colleagues"},{"content":"To be effective and gain strategic influence, data protection leaders must move beyond the vague \u0026lsquo;privacy risk\u0026rsquo; definition and learn to articulate data protection failures in the language and currency of the business.\nData protection leaders need to move beyond the narrow, compliance-driven definition of risk and learn to articulate data protection failures in the language of their business peers by connecting them to potential impacts that will command the board\u0026rsquo;s attention. This will elevate their status from \u0026lsquo;necessary evil\u0026rsquo; to strategic voice. In this fourth post, I want to explore the limitations of the traditional view of risk in data protection, introduce the concept of ripple effects, and provide some pointers for articulating risk in terms the C-suite understands and acts upon.\nAs in previous posts, I want to align with the SFIA skills framework; Risk Management (BURM) is the primary skill mentioned here.
There are also various risk methods and models available online, and I urge you to dig deep to get a good understanding of the breadth and depth of this discipline.\nThe dangerous myth of the compliance problem A couple of years ago, a financial services company in Belgium contacted me after reading one of my posts on LinkedIn about risk. They shared a problem that was affecting the perception of their data protection team, which was often presenting \u0026ldquo;high risks\u0026rdquo; to their data protection board with a strong focus on potential GDPR fines. In rare cases they managed to convince the board to approve huge budgets for projects to implement expensive technical controls, but generally they built a reputation for overplaying the severity of the risks.\nOften the conversation with the board stalled. Why? Because the way they presented risk was abstract, a purely compliance concern that was disconnected from the day-to-day business of the company, its business strategy or P\u0026amp;L statement (depending on who was at the meeting).\nVague risk definitions In our data protection profession in Europe, we are plagued with confusing definitions that seem to have crept into many data protection leaders\u0026rsquo; vocabulary. The primary one is privacy risk. In a European context, particularly under GDPR, \u0026lsquo;privacy\u0026rsquo; is an interesting word. If you search for it in the GDPR text, you’ll actually find just one reference, and that’s in relation to the ePrivacy Directive, if I’m remembering correctly. Yet, in our day-to-day work, we see \u0026lsquo;privacy\u0026rsquo; and \u0026lsquo;data protection\u0026rsquo; used almost interchangeably.\nBut when you dig into the definitions of \u0026lsquo;privacy risk,\u0026rsquo; things get even more confusing. There’s a lot of inconsistency.
Take, for example, the definition from the IAPP, probably the largest global privacy organisation.\nHere’s their definition: \u0026ldquo;A formula to calculate the impact of a new project on the privacy of the consumer base that will use the new systems; to evaluate the risk, one must consider the likelihood of the threat occurring multiplied by the potential impact if the threat occurs.\u0026rdquo;\nThey even acknowledge that it may be hard to quantify, so they suggest comparing projects as a way to understand privacy risk. Now, I’ll let you make up your own mind about how helpful that is. Personally, I don’t find it very clear or useful.\nThen there’s the NIST (National Institute of Standards and Technology) definition: *\u0026ldquo;The likelihood that individuals will experience problems resulting from data processing and the impact should they occur.\u0026rdquo;*\nAgain, this is quite high-level and vague. What exactly do we mean by \u0026lsquo;experiencing problems\u0026rsquo;? And what are those problems? This leaves a lot open to interpretation.\nI think this lack of a clear, consistent definition of \u0026lsquo;privacy risk\u0026rsquo; is a real issue in our industry. We throw the term around, but we don’t always fully understand what it means or have a concrete way to measure it.\nData protection laws start with human rights (in Europe at least) Open your copy of the GDPR and it\u0026rsquo;s there in Art 1(2): \u0026ldquo;This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.\u0026rdquo;\nSo at its heart, we need to pay close attention to risks to the rights and freedoms of individuals. This is our non-negotiable starting point.
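Both definitions quoted above boil down to the same arithmetic: likelihood multiplied by impact, i.e. an expected loss. Here is a minimal sketch of that calculation; the scenarios and figures are invented purely for illustration and are not taken from the IAPP, NIST or any real enforcement data:

```python
# Illustrative sketch only: hypothetical scenarios and invented figures,
# not drawn from IAPP, NIST or any real dataset.
scenarios = [
    # (scenario, estimated annual likelihood, impact in EUR if it occurs)
    ("Marketing database breach", 0.10, 250_000),
    ("Unlawful profiling complaint upheld", 0.05, 400_000),
    ("Sub-processor incident", 0.02, 1_000_000),
]

# 'Risk' per the likelihood-times-impact formula: expected annual loss.
for scenario, likelihood, impact in scenarios:
    expected_loss = likelihood * impact
    print(f"{scenario}: expected annual loss EUR {expected_loss:,.0f}")

total = sum(likelihood * impact for _, likelihood, impact in scenarios)
print(f"Total expected annual loss: EUR {total:,.0f}")
```

Even a toy model like this forces the assumptions (which scenarios, how likely, how costly) into the open, which is precisely what the vague definitions leave implicit.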
Now you may have come across some data protection leaders whose primary focus is protecting the interests of their company without mentioning rights and freedoms. Protecting the company is also important, but it often drives a minimal, compliance-based approach to data protection.\nYou may see a similar focus from your local supervisory authority, where emphasis is given to articles and requirements (the what) with little help on the how. From a risk perspective, one exception is the UK ICO, which published a helpful cause-event-harm model and various materials including examples, e.g. linking a failure (poor security) to an event (a breach) to individual harm (anxiety, financial loss). It\u0026rsquo;s well worth seeking out that document. The CNIL are also quite advanced with their PIA tool and substantial documentation.\nSo in the above ICO example, the security failure is the first domino to fall. But what happens after it falls? Focusing only here is like reporting on an earthquake\u0026rsquo;s epicentre without mentioning the resulting tsunami.\nRipple Effects: How one data protection failure creates waves across the business This is the central argument of this post. A single data protection event triggers a chain reaction of consequences across multiple risk domains, aka ripple effects.\nAlthough the diagram above may at first appear confusing, let me walk you through an example scenario.\nTrigger: A processing violation, a data breach, or a customer complaint\nFirst ripple (compliance risks): This is the obvious one.
Consequences: supervisory authority attention and potential investigation.\nSecond ripple (business risks): fines or penalties, customer churn, loss of business, expended resources. Consequences: financial losses.\nThird ripple (legal risks): lawsuits, vendor disputes. Consequences: litigation and/or breach of contract.\nFourth ripple (operational risks): ban or suspension of data processing, poor quality data. Consequences: business process disruption, processing errors.\nFifth ripple (reputational risks): media attention, customer complaints. Consequences: erosion of trust, brand or reputational damage, strained partner relationships.\nIn reality, there will be a multitude of scenarios to map that are unique to your business and influenced by various factors, not least your risk appetite and risk tolerances, all of which must be documented in your risk policy.\nStop saying \u0026ldquo;High Risk\u0026rdquo; and start saying \u0026ldquo;€5M in potential lost revenue.\u0026rdquo; The reason data protection is often siloed is that data protection leaders and their teams don\u0026rsquo;t speak the language of the business. Instead, they expect the rest of the company to understand their world of articles, recitals and RoPAs. Mastering risk management also involves addressing common challenges in risk measurement and communication. Here are a few I\u0026rsquo;ve come across during initial client investigations, along with my normal recommendations:\nStop the subjective measurement: Challenge the vague \u0026ldquo;low, medium, high\u0026rdquo; heat maps. They lack credibility and are easily dismissed. They may look pretty, but they are weak and almost meaningless. Bias often creeps in, risks are exaggerated and nobody can truly measure what\u0026rsquo;s at stake.\nStop treating symptoms: Explain that a risk register listing \u0026ldquo;outdated RoPA\u0026rdquo; is tracking an issue, not a risk.
The risk is the operational disruption or regulatory fine that results from it. To address something like this you need to perform root cause analysis on the issue itself - there are many reasons why a RoPA becomes outdated, and this will be addressed in a future blog post!\n**Communicate in business terms:** Emphasise the need to translate risk into concrete business impacts. Instead of \u0026ldquo;high risk,\u0026rdquo; frame it as \u0026ldquo;a 10% increase in customer churn\u0026rdquo; or \u0026ldquo;a potential contract breach with our largest shipping partner.\u0026rdquo;\n**The power of data-driven budgets:** When quantifying risk, use publicly available historical data (e.g. GDPR enforcement trackers, GDPRhub, etc.) to build realistic, scenario-based financial models for risk. Do not rely on gut feelings.\nTo conclude, effective data protection is about safeguarding the entire company and the multiple groups of people who interact with it (employees, consumers, partners, students, patients, etc., depending on your business context). It isn\u0026rsquo;t only about avoiding fines. Risk to individuals is the moral and legal starting point, but the ripple effects are what demonstrate the full business impact. Here are three things you can action immediately:\nMap your own ripples: take a recent data protection issue and map its potential consequences across various domains (e.g. operational, reputational, or financial). Present this to your boss or a key stakeholder in the business to get their reaction.\nLearn their language: get out and about in your company to build relationships with different teams to understand the company\u0026rsquo;s broader risk appetite and key business objectives.
Digital marketing, HR or product development are always great starting points.\nQuantify: move towards data-driven, scenario-based risk assessments that speak in your company\u0026rsquo;s operating currency and align with its business objectives.\nWhen you can show your key stakeholders how awkward legal wording in a privacy notice could ultimately impact shareholder value, you cease to be the necessary evil and may start to be perceived as an indispensable strategic advisor!\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"26 August 2025","permalink":"/beyond-legal-4-time-to-up-your-risk-game/","section":"Blog","summary":"","title":"Beyond legal #4: time to up your risk game"},{"content":"Distinguishing between difficulty and complexity is crucial. Learn how a multi-disciplinary approach, beyond legal expertise, is essential to establish and run an effective data protection function or office.\nIn this third post of my \u0026ldquo;beyond legal\u0026rdquo; blog series, I\u0026rsquo;m taking a slight detour by focusing on two important words: complex and difficult, because for many leaders, being responsible for data protection in your company isn\u0026rsquo;t just difficult, it\u0026rsquo;s extremely complex. (Simplification is another word we\u0026rsquo;re reading a lot about these days and yes, it\u0026rsquo;s often associated with complex and difficult, but I\u0026rsquo;ll cover that later in this post.)\nYou\u0026rsquo;re going to need to recognise the distinction between the complex and the difficult, and acknowledge that you must involve a diverse set of competences and capabilities.
That really is the key if you want to build and maintain a robust and future-proofed data protection function or office.\nSo what\u0026rsquo;s the difference?\n\u0026ldquo;Difficult\u0026rdquo; describes work that requires significant effort, expertise, or resources to achieve. It might be challenging, arduous, or require specialised skills, and the path to success, while steep, is often discernible. To overcome difficulty, you need to make use of specific competences such as knowledge, skills, and behaviours (e.g. legal analysis, project management, communication).\n\u0026ldquo;Complex\u0026rdquo; describes a system or challenge where a multitude of interconnected parts interact in non-linear ways that could, if you are not careful, result in unpredictable outcomes. It could involve an intricate system where a change in one area can have unforeseen ripple effects elsewhere. This is common in large corporations where business systems involve layers of capabilities that, on a basic level, you might categorise as ways of working, technology, people and organisational structures, and data and information.\nHow does this play out in the everyday reality of data protection? I want to illustrate this with three common scenarios and will, when possible, reference the SFIA skills framework (from now on it will be SFIA 9 rather than SFIAplus).\nAnd whilst we\u0026rsquo;re mentioning SFIA, the framework also includes \u0026ldquo;\u0026hellip;generic attributes, business skills and behavioural factors\u0026rdquo; and among these is complexity (COMP), so explore this and the other attributes further if you want to look seriously at your own role and start crafting a professional development plan.\n1. The laws and regulations: complex principles, difficult application\nToo many people transpose the GDPR into checklists; if only it were that simple.
Principles like \u0026ldquo;lawfulness, fairness, and transparency,\u0026rdquo; \u0026ldquo;accountability,\u0026rdquo; \u0026ldquo;purpose limitation,\u0026rdquo; \u0026ldquo;data minimisation,\u0026rdquo; and concepts like \u0026ldquo;data protection by design and by default\u0026rdquo; are inherently complex. They demand abstract thinking, ethical consideration, and the ability to apply broad ideas to a wide set of technologies. Applying the GDPR in any organisation, and understanding all the intricate interdependencies within a business model, is a complex intellectual exercise. It initially requires competences such as legal analysis and interpretation, and business situation analysis (BUSA) to assess business impacts as a first cut.\nThen comes the difficult part. How do you apply the GDPR consistently across your company\u0026rsquo;s business operations, which may be spread across multiple geographies, each with its own, potentially conflicting, laws? Harmonising consent mechanisms, cookie banners, or standardising data retention policies across your global operations requires more than just legal knowledge. You\u0026rsquo;ll need to gather a range of competences including requirements definition and management (REQM), feasibility assessment (FEAS), solution architecture (ARCH), systems design (DESN) and privacy engineering (if you have people with this currently rare competence), and don\u0026rsquo;t forget risk management (BURM) to ensure you can begin to categorise the different types of risk you\u0026rsquo;re dealing with and how you should interlock with your company\u0026rsquo;s ERM.\nIf you are tempted to procure a privacy management tool, then I suggest you also include REQM and FEAS, as well as enterprise and business architecture (STPL) and sourcing (SORC). Too many legal departments choose these tools without involving the right competences upfront and then wonder why they quickly run into problems.\n2.
Strategic vision \u0026amp; foresight (complex) versus tactical compliance (difficult)\nDeveloping a relevant and adaptive data protection strategy that is aligned with business goals, anticipates emerging threats, and prepares for future regulations is a complex challenge. It requires the competences of strategic planning (ITSP), business situation analysis (BUSA), strong stakeholder relationship management (RLMT) to engage with your business colleagues, and often enterprise and business architecture (STPL).\nNow the difficult part: maintaining an accurate RoPA, conducting thorough DPIAs, or ensuring your privacy notices are \u0026ldquo;\u0026hellip;concise, transparent, intelligible and easily accessible, using clear and plain language\u0026rdquo; from a GDPR perspective, knowing the same wording may create friction in other parts of the world. Figuring out how to comply is extremely difficult. I often suggest taking a product design or service design approach. That way you\u0026rsquo;re linking a process, template or repository to a set of requirements which can be traced throughout the lifecycle. This requires a range of competences, but let\u0026rsquo;s call out quality management (QUMG) and quality assurance (QUAS) because I rarely hear or read the word quality in data protection circles! That really is a problem and often results in clunky, awkward solutions like complex cookie consent banners or long-winded privacy notices.\n3. Risk modelling \u0026amp; prioritisation (complex) versus mitigating identified risks (difficult)\nUnderstanding the different categories of risk associated with data protection may, on the surface, appear to be straightforward, right? Risks to the rights and freedoms of individuals, or the risk of non-compliance. We really are in complex territory here, especially when you need to develop risk models that account for evolving threats in your business context and understand the interconnectedness of risks - the so-called ripple effects.
These could be the trigger for other organisational risks such as legal risks, regulatory risks, operational risks, financial risks, reputational risks, etc. Another area of complexity is risk measurement or risk quantification, so you are able to prioritise, request funding and keep senior leadership informed in business language they understand. Here, using colourful 3x3 or 6x6 heat maps is often a waste of time. To overcome this you\u0026rsquo;ll need to engage with those with competences in risk management (BURM) at both strategic and operational levels, privacy risk experts and information security (SCTY), to name a few. More about risk quantification will come in a future blog post.\nImplementing specific controls to treat an identified risk (e.g. implementing time-limited location sharing, redacting sensitive information, or just-in-time transparency across a customer journey) is difficult. It requires competences including contextual business knowledge, solution architecture (ARCH), project management (PRMG) and potentially security operations (SCAD). If you have your own portfolio of projects, or your changes/projects have been concentrated in the form of a programme, then portfolio management (POMG) and programme management (PGMG) competences will also be relevant. The companies I\u0026rsquo;ve seen that I consider mature have built strong capabilities for data protection/privacy risk management, systematically identifying, measuring, prioritising, and mitigating risks across all business functions.\nWhy this distinction matters for effective data protection leadership\nI think blaming the difficult aspects of fragmented enforcement or the complex nature of the law misses the point. An effective data protection leader is like the conductor of an orchestra. The leader does not need to be a legal expert; they need to orchestrate a multi-disciplinary effort.
This involves cultivating the individual competences across their team and the wider organisation while simultaneously building strong capabilities that can respond to the inherent complexity of the company\u0026rsquo;s data landscape.\nA word about simplification Back in 2022, I developed a short course titled Simplicity targeting the data protection teams of a global financial services client. The material I developed was inspired by a great little book called The Laws of Simplicity written by John Maeda. Although it\u0026rsquo;s about 20 years old now, much of the content is still relevant. John Maeda covers 10 principles, but in my training I focused on only 4 of them: reduce, organise, learn and context. Mastering inherent complexity in data protection is not about making data protection simple in the traditional sense. Instead, we can apply John\u0026rsquo;s simplicity principles to manage it effectively.\nSo, applying the 4 principles I selected, a data protection leader can reduce unnecessary noise and distractions by focusing on the key data protection risks and processing activities, rather than getting lost in every minor detail, spending all day firefighting and getting nowhere. They can organise their documents, information, processes, and data flows in logical structures to make everything clear and understandable - here I\u0026rsquo;ve always been a fan of establishing a \u0026ldquo;data protection management system\u0026rdquo;, typically in SharePoint or something similar. Leaders can also establish a continuous culture of learning that ensures you explain things to the workforce (in their context), keep them informed, and equip them to adapt to new threats and evolving interpretations. Finally, all work must be anchored in context, ensuring solutions fulfil specific business needs, in line with your company\u0026rsquo;s risk appetite and jurisdictional variations.
Another client of mine, a global MedTech company, has over the years moved beyond generic compliance to embedded, effective data protection by following similar simplicity principles. Get your hands on John\u0026rsquo;s book if you want to be inspired!\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"22 August 2025","permalink":"/beyond-legal-3-why-data-protection-leadership-is-more-complex-than-just-difficult-laws-and-what-it-takes-to-succeed/","section":"Blog","summary":"","title":"Beyond legal #3: Why data protection leadership is more complex than just 'difficult' laws, and what it takes to succeed"},{"content":"Your data protection or AI governance work is not stuck because the law is too complicated or your systems are outdated. It’s stuck because you’ve not gathered the right competencies to anticipate, assess, plan, implement and eventually run.\nAs I argued in an earlier post, Simplify the GDPR? Upgrade your competences instead, successful implementation of data protection or AI laws and regulations isn’t about simplifying rules. It’s more about leadership and possessing competences that drive change. In Beyond legal #1: The data protection leader’s journey begins, I suggested these include business analysis, stakeholder engagement, programme management, and strategic governance. In this second post, I propose that what companies need, especially around data protection and AI governance, is not more checklists, but strong Strategic Impact \u0026amp; Readiness competences.\nStrategic Impact and Readiness Analysis (SIRA) You will always be on the back foot if you wait to react to the deluge of change that is coming.
New laws and regulations, changes to existing ones, new technologies, societal change, geopolitics. It\u0026rsquo;s not going away, and the longer you leave it to address what\u0026rsquo;s relevant, the harder it will be to wrestle back control.\nAs mentioned in earlier posts, leaders need to look at themselves in the mirror and ask, firstly: are we assessing strategic impact and readiness, and are we doing it well? If the truthful answer to either question is no, then ask: do we have the necessary competencies to perform the work?\nThe key is to recognise whether or not you have the competences, and then acknowledge any gap by taking action. Do not think you\u0026rsquo;ll get by and muddle through - this is often the root cause of failure, and then it\u0026rsquo;s easier to blame \u0026ldquo;that complex law.\u0026rdquo;\nSo what is strategic impact and readiness? It’s the set of capabilities that turns a legal requirement, or emerging tech, into a strategic transition: scanning the horizon for weak signals of change, identifying impact across various perspectives, identifying root causes of related issues, prioritising, scheduling the delivery of both work products and outcomes in an organised, visual roadmap, and then formulating a business case that you present to senior leadership for their buy-in and approval.\nI\u0026rsquo;ve now mentioned capabilities, and you may be wondering about their relationship with competences. They are related, but they represent different aspects of our human abilities. From my perspective, capabilities are the broad abilities that enable us to perform a specific work task or our job.
Competencies are the specific, measurable skills and knowledge that actually contribute to the capabilities.\nSo within the capabilities I\u0026rsquo;ve just mentioned, there are a number of competences that are needed (either your own, or those of professionals you bring in), and I\u0026rsquo;ll again reference SFIAplus from BCS.\nStrategic impact and readiness analysis will help you move from reactive compliance to proactive readiness, and it fosters collaboration across multidisciplinary teams (e.g. legal, risk, data, HR, digital marketing, product, etc.), so everyone sees and experiences their part in the change that needs to happen.\nI\u0026rsquo;ll illustrate how this works in practice with a brief case example: AI Regulation Readiness Imagine a mid‑sized financial services company preparing for EU AI Act obligations. Here’s how conducting SIRA works, in brief:\nHorizon scanning: Legal monitors recitals; risk reviews models; data ops maps systems.\nImpact mapping: They discover opaque model code, weak consent flows, ungoverned data sets, and a lack of explainability.\n**Root causes:** Legacy data platforms, siloed model developers, no central governance.\nPrioritisation: Explainability and transparency are top‑priority; consent compliance next; platform reforms third.\n**Roadmap:**\nOver Q1, review the AI inventory and update documentation.\nOver Q2, deploy explainability tools and train data scientists.\nOver Q3, integrate data governance workflows and audit output.\nIn reality, conducting this type of analysis often involves bringing colleagues together in one or more workshops.
It could be a half-day workshop, or several workshops spread over days or weeks - in-person, virtual and/or hybrid.\nKey questions for you Do you, and does your team, have strategic impact \u0026amp; readiness competences in-house?\nWhere are the gaps, and how might building SIRA avoid reactive chaos the next time a new regulation arrives or a geopolitical event impacts your company?\n**Purpose and Means** is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"20 August 2025","permalink":"/beyond-legal-2-why-every-data-protection-and-ai-governance-leader-needs-sira-competences-in-their-toolkit/","section":"Blog","summary":"","title":"Beyond legal #2: Why every data protection and AI governance leader needs SIRA competences in their toolkit"},{"content":"This first post introduces the critical non-legal skills, such as business analysis and stakeholder management, that a Data Protection Leader must master to embed data protection into a global company’s operations.\nThis is the first of a series of posts outlining the skills and competences a Data Protection Leader requires (beyond legal knowledge and competences) to embed data protection into the fabric of an organisation. I intend to be jurisdiction-neutral throughout the series but will assume the organisation is a private sector company and a global player operating across multiple jurisdictions.\nAs we are talking skills, I am aligning wherever possible with SFIAplus, which is a robust skills and competency framework developed by BCS in the UK. The framework is very much geared towards those working with information and technology.
As a Fellow of BCS, with Chartered status, and a member for nearly 25 years, I\u0026rsquo;ve used the framework in various companies over the years and highly recommend you take a look at it. In my posts I will include SFIA codes that you can directly reference if you wish, e.g. (BUSA).\nThis series of blog posts will not cover the legal competences that are required in establishing and running a data protection function in a company. Also, I am not advocating that a data protection leader must possess all the competences I\u0026rsquo;m going to cover; that is not feasible. To be any kind of leader, leadership skills are essential, no matter whether your background is legal, technology, risk, product, etc. What\u0026rsquo;s most important is that the leader can get access to the needed competences when they are required.\nA fresh start, or a revamp #Imagine you start afresh as the Data Protection Leader in the company, or you have been recruited by a company where you are tasked to establish their Data Protection Office, department or function, and once it is in place you are accountable for its business-as-usual (BaU) operation. Let\u0026rsquo;s assume you have no issues to deal with; but if you are presented with a list of issues on day one, such lists are always useful ways of eliciting information along the lines of what I\u0026rsquo;ll outline below. A core truth at this point is to recognise that your work will most probably involve business change, and before you make any changes, you need a thorough understanding of the as-is, or current state.\nThis realisation is often missed, especially when data protection is anchored in legal departments, where you may have a team of top-notch legal professionals who, when they studied at law school, were never given tuition in disciplines like business change, programme management, project management, strategy,
etc.\nThe root cause of troubled data protection work further down the line is failing to acknowledge this early on, and it is often the General Counsel who misses this by wrongly assuming that anyone can manage a project or programme. The selection of a data protection leader must take this into account, and in cases where driving change will be an integral part of the role, be mindful not to trivialise this point. If the person to be appointed is not experienced at driving change, then the General Counsel, or whoever is responsible, must ensure that the leader is supported by one or more experienced change professionals for the duration of the implementation.\nBusiness situation analysis (BUSA) #Your initial step is to grasp the as-is, starting with the business context - your company\u0026rsquo;s mission, objectives and the personal data that fuels the business. This is foundational, and the understanding and knowledge you gain here will dictate the relevance and priority of subsequent data protection efforts.\nA key activity is studying the company\u0026rsquo;s information resources such as the website, annual report, business strategy documentation, org charts, as well as meeting minutes of recent and relevant governance boards and the existing data protection team meetings.
You will need to identify the key senior stakeholders who are responsible for achieving the objectives outlined in the strategy, because you are going to have to get to know them quite quickly.\nBesides engaging with what I call the usual suspects (legal, risk, digital marketing, HR, IT, IS, etc.) I often find engaging with the Enterprise Architecture team can be very revealing, because they should be able to explain to you the information needs of the company based on its value chain, and how this is distilled down into technology strategies, architectural standards, development lifecycles and so on.\nAt the same time, you\u0026rsquo;ll need to get an initial overview of the existing data protection regime in the company. There\u0026rsquo;ll be many questions to ask, and preparing and documenting your initial findings is key. This is an art in itself, and to do it effectively, getting an experienced business analyst to assist will be beneficial but not essential at this stage. Later, when you have determined the scope of a fuller maturity assessment, the analyst can assist by completing the business situation analysis/investigation with requirements elicitation, prioritisation and sorting, etc.\nThis is important because you also need to be able to demonstrate that you are making a difference rather than saying you\u0026rsquo;re occupied with conducting an assessment in your first month or two.\nKey questions include: what is the current budget status, is there a framework, what\u0026rsquo;s the organisational structure, where in the company is data protection anchored, what are the reporting lines and policy frameworks, how does the RoPA look, what supporting tools are there, how is risk managed and what are seen as the key risks, how is quality managed, the history of major incidents and breaches, past maturity assessments, etc.
There\u0026rsquo;s a lot to cover, and in the coming weeks I\u0026rsquo;ll be making a template that can be used in this business situation analysis.\nI also recommend assessing the level of what I call \u0026ldquo;legal clunkiness\u0026rdquo; - this is where legal solutions have been implemented without really taking into account the needs of data subjects. A few examples include generic and long-winded privacy notices, clunky consent banners, generic broad-brush eLearning, data subject facing processes and mechanisms with deliberate friction, and generic policies written in legalese.\nStakeholder relationship management (RLMT) #You will also need to establish relationships with key stakeholders early on, at multiple levels of the company, and in particular you will undoubtedly need to exercise political skills at board level. The seeds you sow early on will help establish alignment, which eventually secures mandate, budget, and cross-functional cooperation, so you\u0026rsquo;ll need to get out of your office and put yourself about.\nIn many companies, the overall perception of the data protection leader is that you are a necessary evil, brought in to ensure your company is compliant. To your face your peers will say your role is important, but when they get back to their own lines of business, or functions, they\u0026rsquo;ll struggle to see the value you bring in the bigger scheme of things. You will therefore need to cultivate a narrative about data protection that will resonate with your peers and their teams. This is why gaining insight into the importance of their work, and their contribution to the achievement of the company\u0026rsquo;s business objectives, is a key task before you go round trying to explain why your role is vitally important and that you have the backing of the CxO.\nYou might want to tailor the narrative depending on whether you are meeting the CHRO, or the CMO, or the Chief Product Officer, CTO, CIO, etc.
If you are able to demonstrate to them that you are familiar with their contribution, and with how critical the processing of personal data is in their business area, then after the first couple of meetings you may already see their eyes open to the fact that you are not like the last data protection leader they came across, who was more comfortable talking in riddles of articles and recitals. Engage with your stakeholders *in their context, in their (business) language and jargon*.\nThat\u0026rsquo;s the first blog in the series, more to come.\nPurpose and Means is a niche data protection and GRC consultancy based in Copenhagen but operating globally. We work with global corporations providing services with flexibility and a slightly different approach to the larger consultancies. We have the agility to adjust and change as your plans change. Take a look at some of our client cases to get a sense of what we do.\n","date":"13 August 2025","permalink":"/beyond-legal-1-the-data-protection-leaders-journey-begins/","section":"Blog","summary":"","title":"Beyond legal #1: The data protection leader's journey begins"},{"content":"The perceived complexity of GDPR and other data protection laws is often a result of a lack of competent leadership and multidisciplinary teams, rather than the laws themselves, emphasising the need for a broader skill set beyond legal knowledge for successful implementation.\nWhilst getting my plans in place for the rest of the year, taking into account all the ongoing geopolitical impacts, a few good articles and interactions stood out for me during the past few weeks - especially all the discussions and articles about the need to simplify GDPR.
And in recent weeks there have been some great posts from people like Mark Thomas who highlight the frequent misuse of the word “governance.” And then there are some memorable chats I\u0026rsquo;ve had with various professionals I highly respect, people like David Longford and Nora Reháková.\nI think the current debates about simplifying GDPR highlight a different truth. Effective implementation is not just about identifying applicable laws and regulations, understanding them, interpreting legal texts, cases, etc., and then passing it all over to the rest of the company to figure out.\nIt’s about so many other things, like mobilising the right people, securing budgets, building cross-functional buy-in, and managing projects, to name a few examples. If a GDPR roll-out fails, you can’t blame the law itself or the tools. There\u0026rsquo;s also one word that\u0026rsquo;s missing from many implementation projects, and that is quality, which needs to be at the heart of any implementation alongside project management and risk management.\nAnd how about all the data protection education out there? Most of it tells you WHAT to do, but very little tells, or shows, you HOW to do it. Much of the data protection work I\u0026rsquo;ve seen, or heard about, is driven by compliance checklists and no end of templates and fancy tools, with very little embedding of meaningful and sustainable change in the fabric of the companies\u0026rsquo; inner workings.\nPerhaps some data protection leaders need to look at themselves in the mirror, because I believe it\u0026rsquo;s often a matter of missing competences rather than legal complexity. This manifests in, for example, a lack of organisational support, inability to secure funding, defaulting to clunky legal solutions, poor planning, and failing to understand the complexity of data and technology, to name a few.\nMany companies succeed with far more complex implementations, transformations and organisational change than something like GDPR.
That’s why I think there may be little need to simplify laws like the GDPR. The real challenge is getting together the right people and having competent leaders in place to head up the team, office or function.\nTwenty years ago I managed GRC projects related to SOX compliance, financial EU directives, ISO security standards, and employment legislation, and among the key success factors was forming multi-disciplinary teams with appropriate subject matter experts - in these projects, this typically meant colleagues from finance, HR, Infosec, etc. Yet these days we have an almost opposite situation with data protection. To me, the clue is in the word \u0026ldquo;data\u0026rdquo;, yet despite this, a legal background is seen as a pre-requisite to lead the project.\nLegal knowledge is only one piece of the jigsaw. To address these issues, I’m launching a series of blog posts that will highlight the essential knowledge and competences for success in data protection and AI governance, explaining why you don’t need a legal background to lead in this space.\nLegal colleagues remain vital, but they are just one part of a much broader team, and those clients of mine where I see the leaders succeeding tend to acknowledge that data protection is a team sport and that a multitude of competences is required.\nIs your data protection work troubled, or in need of improvement? Get in touch to discuss your challenges and establish a dialogue to bring about meaningful change.\n","date":"11 August 2025","permalink":"/simplify-the-gdpr-upgrade-your-competences-instead/","section":"Blog","summary":"","title":"Simplify the GDPR?
Upgrade your competences instead"},{"content":"The Coldplay kiss cam scandal exemplifies the dangers of viral digital exposure, as the rush to public judgment not only devastates those directly involved but also inflicts lasting, unintended harm on their families and wider social circles.\nGOVERNANCE\nMany of the LinkedIn posts about the recent Coldplay \u0026lsquo;kiss-cam exposure\u0026rsquo; story have covered the legal aspect, often comparing the implications from a GDPR, CCPA, PIPL, etc., perspective, which does make interesting reading for data protection professionals. I read much about the \u0026lsquo;right to privacy\u0026rsquo; or \u0026lsquo;expectation of privacy,\u0026rsquo; mainly focusing on the two individuals whose private moment at a public event was captured by a camera whose purpose was to entertain, yet they - a high-profile company CEO and the company\u0026rsquo;s HR director - became viral fodder overnight.\nThe footage of them was not just broadcast at the concert but also captured on smartphones by other concert-goers and then shared on social media. No matter your view as to whether their behaviour was inappropriate or acceptable, the incident sparked outrage, speculation and lots of online searches that ultimately resulted in the CEO resigning.\nThrough the use of well-meant technologies, often carried in our pockets, what we get up to in public is no longer a fleeting event - it is immortalised, replayed, scrutinised and shared. It could easily have been you or me doing something that we might not have wanted exposed, and then you\u0026rsquo;re suddenly thrust into the global spotlight without warning, preparation, or recourse. Involuntary surveillance conducted under the guise of fun.\nMost of the posts I\u0026rsquo;ve read focused on the two people involved yet, in my view, this is only the tip of the iceberg.
From a \u0026lsquo;rights and freedoms\u0026rsquo; perspective, these unintended consequences harm not just the data subjects involved, but also others who are directly or indirectly involved.\nWhat about the families and partners of the two main actors? Their privacy and emotional well-being are shattered by the public, viral shaming of someone close to them. They have been forced into the position of having to process not only private betrayal and pain, but to do so at the hands of a global audience hungry for drama and/or justice. Judgement by a mob baying for more juicy gossip, a feeding frenzy that punishes not only the implicated, but anyone linked to them by fate and relationship. The harm resulting from digital shaming is indiscriminate, viral, and total. Remember the Ashley Madison case from a few years back - the suicides resulting from online shaming?\nChildren, uninvolved and innocent, may find themselves targets of bullying and forced to carry the stigma of something far beyond their control for years to come. Harms impact social circles, extended families and professional networks, and the wounds don\u0026rsquo;t heal easily.\nI think the nature of these harms is quite cruel. Unlike a rumour or an indiscretion that might fade with time, this scandal will persist.
Search engines, archived shares, clips on YouTube, etc., help keep the legacy of viral shame campaigns alive.\nAll this originates from a culture that treats people in public as fair game, a stance I do not agree with, and you see this playing out almost every day in data protection circles on LinkedIn.\nAll the tech that powers kiss cams and social media enables us to surveil, record, archive, and amplify public and private moments like never before, and as we know, the ethical frameworks that govern their use are seen as obstacles and lag way behind.\nThere is a misconception that being in a public place eliminates our right to privacy, and we must question whether the consequences that follow such exposures are proportional or just. Should an evening’s indiscretion, exposed by the randomness of a camera, truly result in the loss of a career, the upending of families, and the potential lifelong damage to children?\nThe Coldplay kiss cam scandal is not unique, and it is far from being complex. It is the latest in a line of examples that show just how far we have slid from respecting boundaries and understanding consequence.\nThe lesson is simple: the power to expose must always be balanced with the responsibility to protect, especially those who never chose the spotlight.
Only then can we approach a digital society that values both truth and humanity.\n","date":"21 July 2025","permalink":"/how-tech-feeds-a-curious-public-or-a-baying-mob/","section":"Blog","summary":"","title":"How tech feeds a curious public (or a baying mob)"},{"content":"The terms \u0026ldquo;data protection\u0026rdquo; and \u0026ldquo;data privacy\u0026rdquo; are often used interchangeably across different regions and contexts, influenced by factors such as geographic location, professional certifications, and global standards, but it is important to use the correct terminology based on the specific legal framework or context to ensure clarity and effective education.\nI say data protection, you say data privacy.\nI say \u0026lsquo;risks to the rights and freedoms of individuals,\u0026rsquo; you say \u0026lsquo;privacy risk.\u0026rsquo;\nIs it as simple as being a European versus an American thing? A bit like \u0026lsquo;You say tomato, I say tomato.\u0026rsquo;\nThese days many professionals interchange these terms without thinking. A lot depends on where you are in the world, the company you are working for, its geographical scope, the nature of its business and so on.\nSo, context is key, but that\u0026rsquo;s not all.\nI think much also depends upon the certifications you may have studied for. Many learn the perceived \u0026lsquo;correct terms\u0026rsquo; to pass exams, and then the terms stick in their daily work, often incorrectly; they use the wrong terms in their policies, educate others, and the words and terms spread far and wide, eventually becoming gospel.\nWe must also look at the dominance, on a global level, of a couple of the major certification organisations.
They are US based, and despite what is written in the text of European laws and regulations, the US-oriented words and terms get mixed into their materials.\nUnfortunately, what happens is, despite the complexity of European data protection regulations and data privacy laws (in the US), professionals working in this field become nonchalant and use terms that they may have needed to remember to pass an exam, but that in reality do not reflect the scope or context of their work.\nYears ago, I embarrassingly fell into this trap, but now I like to think I have largely dragged myself out of it, and I\u0026rsquo;m always happy to be corrected.\nI appreciate that many companies have adopted their own terms in their own data protection and privacy frameworks, which is fine as long as their workforce is educated in a granular and contextual manner that provides true meaning, and not high-level fluff.\nIn Europe, many professionals state in their LinkedIn profiles that they are working in, or with, \u0026lsquo;privacy\u0026rsquo; - it\u0026rsquo;s privacy this, and privacy that.\nDoes this really matter? Many will say no, but I think it does, because it depends on many factors.\nTake the term \u0026lsquo;privacy risk\u0026rsquo; as an example.\nI hear and read this term so often, used casually in a GDPR context. The way I\u0026rsquo;ve seen it used is as an all-encompassing term including risk of harms to individuals, compliance risk, legal risk, and regulatory risk, to name a few.\nTo effectively quantify and manage risk you need to separate the different types of risk and understand the downstream consequences, the ripple effects.
You can\u0026rsquo;t do this if everything is lumped together.\nIt\u0026rsquo;s interesting to compare definitions of what \u0026lsquo;privacy risk\u0026rsquo; means according to some well-known organisations.\nExample 1: ISACA\n\u0026ldquo;Any risk of informational harm to data subjects and/or organization(s), including deception, financial injury, health and safety injuries, unwanted intrusion and reputational injuries, where the harm or damage goes beyond economic and tangible losses.\u0026rdquo;\nExample 2: NIST\n\u0026ldquo;The likelihood that individuals will experience problems resulting from data processing and the impact should they occur.\u0026rdquo;\nExample 3: IAPP\n\u0026ldquo;A formula to calculate the impact of a new project on the privacy of the consumer base that will use the new systems; to evaluate the risk, one must consider the likelihood of the threat occurring multiplied by the potential impact if the threat occurs.\u0026rdquo;\nMake up your own mind - do they really make sense, and are they useful?\nFrom an EU perspective, I think the EU sets its stall out admirably with one of the high-level objectives of GDPR in Art. 1(2):\n\u0026ldquo;This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.\u0026rdquo;\nI\u0026rsquo;m sure you know that GDPR only mentions the word \u0026lsquo;privacy\u0026rsquo; a couple of times, in a reference:\n\u0026ldquo;Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)\u0026rdquo; - i.e.
the ePrivacy Directive\nAnd if you read the ePrivacy Directive, there\u0026rsquo;s much specific mention of \u0026lsquo;risk to privacy.\u0026rsquo;\nI\u0026rsquo;ve always liked the EU perspective about risks to the rights and freedoms of individuals because it forces you to dig deep into the EU\u0026rsquo;s Charter of Fundamental Rights and read through the 50 rights listed and categorised under Dignity, Freedoms, Equality, Solidarity, Citizens\u0026rsquo; Rights and Justice. So when you are conducting a DPIA, you should be assessing the risks to all these rights, and not just \u0026lsquo;Respect for private and family life\u0026rsquo; and \u0026lsquo;Protection of personal data.\u0026rsquo;\nAlso, remember that if a personal data breach occurs you need to carry out a similar assessment based on the circumstances of the breach: what the data could reveal, the categories of data subjects, volumes, timing, context, etc.\nPrivacy risk - just one example, but there are many we need to be aware of, to avoid being caught in the Privacy Trap.\nI now make a concerted effort to use the correct terms depending upon the context.
So if your organisation falls under the recent new \u0026lsquo;data privacy laws\u0026rsquo; in Delaware, Iowa, Nebraska or New Hampshire, then feel free to use \u0026lsquo;data privacy\u0026rsquo;, but don\u0026rsquo;t use that term if your organisation is purely a European setup under GDPR, and whatever you do, don\u0026rsquo;t report \u0026lsquo;privacy risk\u0026rsquo; to a governance board unless you can truly articulate the term and the board members get it.\nTo conclude, as professionals we have a duty to educate people, and that can never be effective if you are using a mish-mash of incorrect terms, so why not make an effort to use the correct terms yourself?\n","date":"24 June 2025","permalink":"/avoid-the-privacy-trap-data-protection-or-data-privacy/","section":"Blog","summary":"","title":"Avoid the privacy trap: data protection or data privacy?"},{"content":"Interactive GRC infographics transform complex workplace information into engaging visual experiences that improve employee understanding, boost retention, and modernise internal communications by allowing staff to interact with information at their own pace, making them an invaluable tool for modern companies seeking to effectively communicate abstract concepts.\nInformation overload is a phrase familiar to most employees as they face a constant deluge of complex concepts from various parts of their company that they must make sense of. Traditional text-heavy documents often fail to capture attention and convey information efficiently.
Interactive infographics have emerged as a solution, offering a visually engaging approach that simplifies complexity, boosts engagement, and ensures information retention.\nI\u0026rsquo;ve just published a couple of examples as part of a forthcoming product launch - they can both be viewed here.\nSimplifying complex GRC information for better understanding #The main advantage of interactive infographics is their ability to transform complex GRC concepts into digestible visual formats. When information is presented graphically, employees can understand complicated ideas more quickly and clearly. This visual representation allows for immediate pattern recognition and relationship identification that might be lost in paragraphs of text.\nInteractive infographics particularly excel at breaking down multifaceted processes and relationships into manageable chunks of information. For example, a company implementing new compliance procedures might use an interactive infographic that allows employees to click through different scenarios, making abstract regulations concrete and applicable to daily work situations.\nThe layered approach of interactive infographics enables information to be structured in ways that prevent cognitive overload. Instead of presenting all data simultaneously, interactive elements reveal information progressively as users engage with different components. This controlled information flow helps employees absorb complex concepts at their own pace without feeling overwhelmed.\nIncreasing employee engagement through active participation #Traditional GRC communications often struggle to maintain employee attention. Interactive infographics address this challenge by transforming passive reading into active participation. 
Interactive elements such as clickable regions and animations create a more immersive experience that naturally captures and sustains interest.\nResearch published in communication studies indicates that high-quality multimedia content significantly enhances user engagement compared to static text. This engagement stems from the fundamental human attraction to visual stimuli and the satisfaction derived from interactive discovery. When employees can manipulate data visualisations or navigate through information pathways, they become active participants rather than passive recipients.\nThis participation extends beyond the initial interaction. Employees are more likely to discuss, share, and return to interactive content that provided a memorable experience. The resulting conversations can further reinforce understanding and create a culture of knowledge sharing within the organisation.\nEnhancing GRC information retention and application #The most compelling benefit of interactive infographics is their impact on information retention. The combination of visual communication and interactive engagement creates multiple pathways for memory formation. Visual elements activate image processing centres in the brain, while interaction creates experiential memories that tend to be more durable than those formed through passive reading.\nInteractive GRC infographics also appeal to different learning preferences within a diverse workforce. Visual learners benefit from the graphic elements, while kinesthetic learners respond to the interactive components. This multi-modal approach ensures that important information reaches a broader audience effectively, regardless of individual learning styles.\nThe practical application of information is also enhanced when presented through interactive infographics. 
By allowing employees to explore scenarios or manipulate variables relevant to their specific roles, these tools bridge the gap between abstract concepts and practical implementation. This connection between theory and application is especially valuable when introducing new procedures, technologies, or strategies.\nModernising internal communications for today\u0026rsquo;s workforce #The expectations of today\u0026rsquo;s employees have been shaped by their experiences as digital consumers. The social media era has accustomed people to information that is not only accessible but also visually appealing and interactive. GRC communications that mirror these qualities are more likely to resonate with the modern workforce.\nInteractive infographics represent this shift toward more visual and engaging forms of workplace communication. They signal that an organisation values clear communication and is willing to invest in creating content that respects employees\u0026rsquo; time and attention. This approach can contribute to a more positive perception of internal communications and, by extension, management.\nAdditionally, interactive infographics can help prevent communication issues before they arise. By presenting information clearly and memorably, companies can reduce the likelihood of misunderstandings or knowledge gaps that might otherwise lead to errors or inefficiencies. This preventative aspect makes interactive infographics a valuable risk management tool.\nImplementation considerations #While interactive GRC infographics offer significant benefits, their successful implementation requires thoughtful planning. 
Companies should:\nBegin with clear communication objectives rather than using interactivity for its own sake\nEnsure the technical accessibility of interactive elements across all devices used by employees\nMaintain a balance between visual appeal and informational clarity\nIncorporate feedback mechanisms to measure engagement and understanding\nConsider cultural and accessibility factors to ensure inclusivity\nInteractive infographics represent a powerful evolution in GRC communication. By simplifying complex GRC concepts, increasing engagement, enhancing retention, and modernising communication approaches, they address many of the challenges organisations face when sharing information internally. As workplaces continue to navigate information complexity and employees\u0026rsquo; expectations evolve, interactive infographics offer a communication strategy that is both effective and aligned with contemporary information consumption preferences. The investment in creating such content can yield significant returns through improved understanding, engagement, and application of important company information.\n","date":"12 May 2025","permalink":"/the-power-of-interactive-grc-infographics-in-workplace-communication/","section":"Blog","summary":"","title":"The power of interactive GRC infographics in workplace communication"},{"content":"This last part of our \u0026lsquo;beginners\u0026rsquo; blog series provides actionable strategies for effectively communicating the value of horizon scanning to executives to get their buy-in and approval to start.\nIn this last post of the \u0026lsquo;beginners\u0026rsquo; series I\u0026rsquo;ll cover the not-too-easy task of getting buy-in for establishing a formal approach to horizon scanning. It\u0026rsquo;s a common situation: GRC leaders are often stretched thin, making it difficult to allocate the time and resources needed to champion a new initiative like horizon scanning.
The task of building a compelling business case can feel like an uphill battle, but it\u0026rsquo;s an unavoidable one.\nThe underlying principles of securing executive support remain remarkably consistent across different disciplines. The core idea is to start with modest steps, demonstrate tangible value, and incrementally build momentum. It\u0026rsquo;s about planting ideas and nurturing them until they mature into a fully-fledged, supported programme.\nOne practical approach is to start by creating concise, insightful overviews of emerging trends and potential risks. These should not be exhaustive reports; instead, frame them as digestible snapshots of what\u0026rsquo;s coming. Document your findings on readily available templates (there are many available online) to visualise the information in a clear and accessible format. By integrating these insights into your existing internal reports, you can begin to raise awareness among your peers and stimulate their interest.\nPeer feedback is essential at this stage. Sharing your initial findings and requesting input from colleagues can help in refining your analysis, identifying potential oversights, and confirming your overall comprehension of the issues at hand. This collaborative methodology not only elevates the quality of your work but also cultivates a sense of shared responsibility, enhancing the likelihood that your peers will support your case for horizon scanning within the company.\nOnce you\u0026rsquo;ve acquired sufficient data and refined your message, the next step is to present your case to the leadership team. A well-thought-through but brief presentation is needed here. The key is to frame the matter in terms that resonate with their priorities and concerns. Emphasise the benefits of proactive risk management, such as enhanced decision-making capabilities, increased resilience, and of course all the future-proofing.
Also articulate in detail the potential impacts of doing nothing and ignoring these risks.\nHorizon scanning, while seemingly self-evident in its advantages, confronts certain inherent challenges. One of the most significant impediments is the inherent uncertainty involved. Predicting future events is, undeniably, an inexact discipline. Executives, frequently driven by quantifiable results and short-term objectives, may exhibit reluctance to invest in an activity that does not yield immediate returns.\nAnother challenge lies in the potential for false positives. Horizon scanning may identify numerous emerging risks and opportunities, but not all of them will necessarily materialise. This can lead to a perception that the process is generating unnecessary noise and diverting resources away from more pressing issues.\nHorizon scanning also requires a diverse range of skills and perspectives. It\u0026rsquo;s not sufficient to simply rely on internal expertise; you also need to tap into external sources of information and insights. This may involve engaging with industry experts, attending conferences, and monitoring relevant publications and online forums. Building a strong network of external contacts can provide valuable perspectives and help to validate your internal findings.\nFrom a risk perspective, be wary of confirmation bias: leaders may selectively endorse horizon scanning that aligns with their preconceived notions of the current reality. This can significantly diminish the efficacy of horizon scanning, as it could be manipulated to reinforce a singular viewpoint.\nNow, consider the advantages that horizon scanning can offer. Beyond the palpable benefits of proactive risk management, it can also foster a culture of innovation and strategic thinking within the organisation. By diligently scanning the horizon for nascent opportunities and threats, employees become more attuned to the evolving environment and more inclined to challenge established norms.
This can lead to the development of novel products, services, and business models that drive growth and cultivate a competitive advantage.\nConsider the following five actionable steps to plant the seeds needed to get buy-in:\nSpeak their language: the art of targeted framing\nGeneric presentations on \u0026ldquo;emerging risks\u0026rdquo; simply won\u0026rsquo;t cut it. You must tailor your message to resonate with each executive\u0026rsquo;s specific domain. For the CFO, translate foresight into quantifiable cost avoidance and enhanced ROI. For the CMO, illustrate how horizon scanning unlocks untapped market opportunities and strengthens brand resilience. Data is your ally here: use historical examples where proactive foresight averted crises or propelled strategic advantage. Frame horizon scanning not as a cost centre, but as a strategic investment yielding tangible returns.\nShowcase tangible value: illustrate \u0026ldquo;quick wins\u0026rdquo; with impact\nDon\u0026rsquo;t wait for a fully-fledged horizon scanning programme to prove its worth. Scour the organisation for instances where even informal foresight has demonstrably delivered positive outcomes. Did a proactive risk assessment avert a regulatory fine? Did early trend identification unlock a new revenue stream? Quantify these successes and use them as compelling proof points. Remember, data speak volumes, so let them tell the story of horizon scanning\u0026rsquo;s potential.\nEnlist an executive champion: the power of influence\nIdentify a senior leader who \u0026lsquo;gets it\u0026rsquo; and possesses the influence to drive change. This person will serve as your internal advocate and help navigate all the political complexities. Bring your champion up to speed with compelling data, persuasive narratives, and a well-thought-through mini-business case.\nStrategic alignment: linking foresight to core objectives\nHorizon scanning must not exist in a vacuum. 
Articulate a crystal-clear connection between your foresight initiatives and the company\u0026rsquo;s strategic objectives and goals. Demonstrate how horizon scanning empowers informed decision-making, strengthens competitive positioning, and accelerates the achievement of strategic KPIs. By aligning foresight with established strategic priorities, you transform it from a peripheral activity into an indispensable driver of business success.\nRecognise you need help to make the business case\nIf you have never made a business case before, get help from those who can provide relevant input. Resist the temptation to download a template and complete it yourself. Quality and accuracy are so important here, and you\u0026rsquo;ll need to be able to quantify tangible and intangible costs and benefits - this is not as easy as many people think, so don\u0026rsquo;t be tempted to make guesses without having a set of underpinning assumptions.\nThis post concludes this \u0026lsquo;beginners\u0026rsquo; blog series - are you ready to start scanning the horizon?\nWe can get you started by facilitating the end to end processes and then maintain and update your radars on an ongoing basis with regular interaction with you and your team\nWe can also help you get started by facilitating the end to end processes and then handing responsibilities to you, providing support afterwards where needed\n","date":"7 May 2025","permalink":"/horizon-scanning-for-beginners-part-8-the-executive-pitch-selling-foresight-to-the-c-suite/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 8 - the executive pitch: selling foresight to the C-Suite"},{"content":"Following horizon scanning workshops, the vital next step involves translating gathered insights into actionable strategies through business case development, roadmap creation, and detailed A3 planning for senior leadership approval.\nThe culmination of foresight and horizon scanning workshops marks not an end, but rather 
a critical transition into the practical application of the insights gleaned. These workshops, as previously discussed, serve as important and much-needed gatherings for ideas, where diverse perspectives converge to illuminate potential future scenarios. However, the true value of these exercises lies not in the generation of ideas alone, but in their translation into actionable strategies. The immediate aftermath of these workshops is a period of intense analysis and planning, where the raw output is refined into tangible deliverables: a draft business case, a supporting roadmap, and detailed A3 plans.\nThe business case serves as the foundation of this transition. It articulates the strategic rationale and justification for pursuing the opportunities identified during the foresight process. This involves quantifying the potential benefits, such as increased market share, reduced operational costs, or enhanced resilience to disruptive events. The business case must also address the potential risks associated with these opportunities, outlining mitigation strategies and contingency plans. It\u0026rsquo;s a balanced assessment that seeks to justify the investment of resources in exploring uncertain futures.\nThe roadmap, in contrast, provides a time dimension to the business case. It outlines the sequence of actions, milestones, and dependencies required to realise the strategic objectives. This involves identifying the critical capabilities that need to be developed, the resources that need to be allocated, and the timelines within which these activities must be completed. The roadmap serves as a navigational tool, guiding the organisation through the complex terrain of the future. Also, it\u0026rsquo;s a very useful communication tool that can be used when interacting with various stakeholders.\nA3 plans (my particular template supports Lean thinking) represent a more granular level of detail. 
These plans focus on specific projects or initiatives that are deemed essential for executing the roadmap. They typically include a problem statement, a proposed solution, a detailed action plan, and a set of key performance indicators (KPIs). A3 plans are designed to be concise, visual, and easily communicated, facilitating alignment and collaboration across different teams and departments. They are also a strong operational element that articulates the \u0026lsquo;doing\u0026rsquo; aspect.\nThe presentation of the business case and roadmap to senior leadership is a critical activity and needs to be thought through and planned well - it can be make or break. This is where the foresight process is subjected to scrutiny, where assumptions are challenged, and where strategic alignment is sought. Senior leaders must assess your justification for change, the strength of the analysis, and the feasibility of the proposed actions. Their approval is not just a formality, but an endorsement that signals the company\u0026rsquo;s commitment to embracing future-oriented thinking.\nThe advantages of this process are many. By proactively anticipating future trends, companies can gain a competitive advantage, identify new market opportunities, and mitigate potential risks. Foresight enables more informed decision-making, leading to more effective resource allocation and improved strategic outcomes. The process itself supports a culture of innovation, encouraging employees to think critically about the future and to contribute to the development of solutions.\nBut there are also hidden advantages that are often overlooked. Foresight can enhance organisational resilience, enabling companies to adapt more effectively to unforeseen events. It can improve stakeholder engagement, building trust and collaboration with customers, suppliers, and regulators. 
It can also strengthen organisational learning, creating a continuous cycle of reflection, adaptation, and improvement.\nDespite these advantages, there are also challenges and risks that must be acknowledged. Foresight is not an exact science. The future is of course uncertain, and even the most sophisticated models and analyses can be wrong. There is a risk of overconfidence, of believing that the company has a unique ability to predict the future. There is also a risk of paralysis, of becoming so focused on the future that the present is neglected.\nThe hidden challenges are even more insidious. Foresight can be perceived as a threat to the status quo, especially by those who are invested in the current way of doing things. There may be resistance from employees who are uncomfortable with ambiguity or who lack the skills to engage in future-oriented thinking. There is also a risk of groupthink, where the views of dominant individuals or groups prevail, stifling dissent and limiting the range of perspectives considered.\nTo conclude, the transition from foresight workshops to actionable strategies is an important phase that requires meticulous planning, strong analysis, and effective communication. While the advantages of foresight are clear, it is essential to acknowledge the challenges and risks involved. Senior leaders must play a big role in championing the process, supporting a culture of innovation, and ensuring that foresight informs decision-making at all levels of the company. The ultimate success of foresight depends not only on the quality of the analysis, but also on the willingness of the organisation to embrace uncertainty, to adapt to change, and to learn from its mistakes.\nIn my next and last post of this series, I\u0026rsquo;ll cover what you need to do to convince your leadership to approve the investment for you to even start this initial process. 
These days, you can\u0026rsquo;t just break off from your daily work to create a business case; that in itself needs justification and approval - a small pre-project.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end to end processes and then maintain and update your radars on an ongoing basis with regular interaction with you and your team\nWe can also help you get started by facilitating the end to end processes and then handing responsibilities to you, providing support afterwards where needed\n","date":"5 May 2025","permalink":"/horizon-scanning-for-beginners-part-7-senior-leadership-buy-in-presenting-your-foresight-business-case-and-plan/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 7 - senior leadership buy-in: presenting your foresight business case and plan"},{"content":"Horizon scanning transitions from identifying weak signals to implementing actionable strategies by engaging key stakeholders to assess impacts, prioritise initiatives, and develop roadmaps that inform a robust business case for future-proofing your company.\nHorizon scanning, as I\u0026rsquo;ve covered in this series, isn\u0026rsquo;t just about peering into the future, it\u0026rsquo;s about preparing for it. I\u0026rsquo;ve talked about identifying those faint signals, the symptoms of change on the horizon. But what happens when those signals increase, when the signals become trends, and the future starts to feel less like a distant possibility and more like an approaching reality? This is where the real work begins – the translation of foresight into action.\nThe transition from observation to implementation is critical. It\u0026rsquo;s not enough to simply acknowledge that change is coming; we need to understand its potential impact and develop concrete plans to navigate it successfully. This is where the power of collective intelligence comes into play. 
Bringing together key people from across the organisation – those with a vested interest in the changes ahead – is essential. It can be a virtual gathering, in-person or hybrid. These individuals possess a wealth of contextual knowledge, allowing us to assess the potential effects of change on various aspects of the business: processes, technology, people and organisation, and data and information.\nConsider a hypothetical scenario: advancements in AI are identified as a significant trend impacting customer service. A horizon scanning exercise flags this up early. As the trend strengthens, it becomes clear that AI-powered chatbots could potentially handle a large percentage of routine customer inquiries. Now, the real work starts. A workshop is convened, bringing together representatives from customer service, IT, data analytics, and HR. The customer service team provides insights into current pain points and customer expectations. IT assesses the feasibility of integrating AI-powered chatbots into existing systems. Data analytics explores the available data for training the AI. HR considers the implications for workforce training and potential role changes.\nThis collaborative assessment reveals both opportunities and challenges. On the one hand, AI-powered chatbots could improve customer satisfaction, reduce response times, and free up human agents to handle more complex issues. On the other hand, there are concerns about privacy and data protection, the potential for bias in AI algorithms, and the need for ongoing monitoring and maintenance.\nPrioritisation is a crucial element in this phase. We can\u0026rsquo;t tackle every potential impact simultaneously. Some changes will be more urgent, more impactful, or more feasible to implement than others. 
The workshop participants, drawing on their diverse perspectives, can help to prioritise the work, focusing on the initiatives that will deliver the greatest value and mitigate the most significant risks.\nThe workshop culminates in a set of key deliverables. A first-cut roadmap outlines the key steps involved in implementing the necessary changes. High-level descriptions of work packages break down the roadmap into manageable chunks. Registers for issues, risks, constraints, and assumptions provide a framework for managing potential roadblocks and uncertainties. All of this then feeds into the business case – the justification for the work. This business case answers the fundamental questions: Why is this change necessary? What needs to be done? How will it be implemented? When will it happen? Where will it take place? It\u0026rsquo;s the compelling narrative that secures buy-in from stakeholders and provides a clear direction for the organisation.\nThis process also offers some more subtle advantages. It establishes a culture of collaboration and shared understanding. By bringing together people from different departments and backgrounds, it breaks down silos and encourages a more holistic view of the business. It also promotes a sense of ownership and accountability. When people are involved in the planning process, they are more likely to be committed to the outcome.\nHowever, there are also potential challenges to be aware of. Workshops can be time-consuming and resource-intensive. It\u0026rsquo;s important to ensure that they are well-facilitated and focused on achieving clear objectives. There\u0026rsquo;s also the risk of groupthink, where dissenting opinions are suppressed in favour of consensus. It\u0026rsquo;s crucial to create a safe space where people feel comfortable expressing their views, even if they challenge the prevailing wisdom. 
Successfully navigating these challenges requires strong leadership, effective communication, and a willingness to embrace diverse perspectives.\nLet\u0026rsquo;s agree that horizon scanning isn\u0026rsquo;t just a theoretical exercise - it\u0026rsquo;s a practical discipline that requires planning, collaboration, and a commitment to action. It\u0026rsquo;s about translating foresight into tangible benefits for the organisation. The real value of horizon scanning lies not just in identifying future trends, but in the ability to proactively shape the future to your advantage.\nIn my penultimate post, I\u0026rsquo;ll cover what you do with the output from these workshops - presenting your compelling narrative to your leadership.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end to end processes and then maintain and update your radars on an ongoing basis with regular interaction with you and your team\nWe can also help you get started by facilitating the end to end processes and then handing responsibilities to you, providing support afterwards where needed\n","date":"1 May 2025","permalink":"/horizon-scanning-for-beginners-part-6-building-your-roadmap-a-practical-guide-to-planning-for-change/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 6 – building your roadmap: a practical guide to planning for change"},{"content":"This post looks into the necessity of structured tracking of emerging signals and trends in horizon scanning, highlighting the importance of collaboration and consistent monitoring to transform foresight into actionable insights for strategic decision-making.\nHorizon scanning, as we\u0026rsquo;ve explored in this series, is fundamentally about anticipating the future. But anticipation without action is merely speculation. The real power of horizon scanning lies in translating those possible scenarios for tomorrow into concrete strategies for today. 
This fifth part covers the critical step of plotting key signals and trends in a structured format, enabling continuous monitoring and informed decision-making.\nAt its heart, this stage is about organising the chaos of emerging information. We\u0026rsquo;re bombarded daily with news, reports, and trends, all competing for our attention. The trick is to sift through this noise, identify the signals that truly matter, and then track their evolution over time. A structured approach is essential for making sense of it all.\nWhile the attraction of sophisticated software solutions is undeniable, it\u0026rsquo;s crucial to remember that the tools are secondary to the process. A dedicated tool like FIBRES can undoubtedly streamline the workflow, providing a centralised repository and automated tracking capabilities - this is the tool I use and I highly recommend it. However, don\u0026rsquo;t let the lack of a fancy platform become a barrier to entry. Starting with something as simple as Excel sheets or a digital canvas like Miro can be surprisingly effective. The key is to establish a consistent framework for capturing and organising your observations.\nConsider the categories you want to use. What are the key themes or domains that are relevant to your organisation? These might include technological advancements, regulatory changes, shifts in consumer behaviour, or emerging geopolitical risks. Within each category, identify the specific signals and trends that warrant closer attention. These could be anything from a breakthrough in artificial intelligence to a new piece of legislation or a growing social movement. The FIBRES tool provides example radars that illustrate this kind of format.\nOnce you\u0026rsquo;ve identified these signals, the next step is to track their evolution. This involves regularly monitoring news sources, industry reports, academic research, and social media conversations for any developments. 
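For teams starting out in a spreadsheet, that consistent capture framework can be expressed as a simple structured record. The following Python sketch is purely illustrative – the field names, categories, and promotion thresholds are my own assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative categories only - use whichever domains matter to your organisation
CATEGORIES = {"Technology", "Regulation", "Consumer behaviour", "Geopolitics"}

@dataclass
class Signal:
    title: str
    category: str                # one of CATEGORIES
    first_observed: date
    sources: list = field(default_factory=list)       # sightings: news items, reports, posts
    implications: list = field(default_factory=list)  # potential opportunities and risks
    strength: str = "weak"       # weak -> emerging -> trend

    def observe(self, source: str) -> None:
        """Record a new sighting; repeated sightings promote the signal's strength."""
        self.sources.append(source)
        if len(self.sources) >= 5:
            self.strength = "trend"
        elif len(self.sources) >= 2:
            self.strength = "emerging"

s = Signal("AI chatbots in customer service", "Technology", date(2025, 4, 1))
s.observe("Industry report")
s.observe("Regulator consultation")
print(s.strength)  # emerging
```

Even this minimal structure makes it easy to filter by category, count sightings over time, and export the register to a dedicated radar tool later.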
It\u0026rsquo;s also crucial to capture the potential implications of each signal, both positive and negative. What opportunities might it create? What risks might it pose? How might it impact your organisation\u0026rsquo;s strategy, operations, or reputation?\nOne of the most valuable aspects of horizon scanning is its ability to benefit from collaboration and knowledge sharing. If you want to get inspired, take a look at this video from Thoughtworks.\nThoughtworks\u0026rsquo; Technology Radar categorises technologies by type and adoption readiness – Adopt, Trial, Assess, Hold – reflecting practical experience, not comprehensiveness. What sets their approach apart is its democratic, internal process. Diverse insights are gathered from across the organisation, promoting open discussion and debate. This not only ensures that a wide range of perspectives are considered but also helps to build a shared understanding of the emerging landscape.\nTheir Technology Radar serves as a powerful communication tool, enabling teams to align their efforts and make informed decisions about technology adoption. It\u0026rsquo;s a living document, constantly evolving as new technologies emerge and existing ones mature. By making this information accessible to everyone in the organisation, Thoughtworks empowers its employees to become active participants in shaping the future. You can also see their live radar that is publicly available on their website.\nBut be aware that there are challenges to overcome. Maintaining consistency in data collection and analysis can be difficult, especially when dealing with a large and diverse team. It\u0026rsquo;s important to establish clear guidelines and processes to ensure that everyone is on the same page. Another challenge is the potential for bias. We all have our own preconceived notions and blind spots, which can influence how we interpret information. 
To mitigate this risk, it\u0026rsquo;s essential to actively seek out diverse perspectives and challenge our own assumptions.\nTo conclude, as I have mentioned in earlier posts, horizon scanning is not a solitary activity. It requires a collaborative effort, bringing together diverse perspectives and expertise. Secondly, the process is more important than the tools. Don\u0026rsquo;t get bogged down in finding the perfect software solution. Start with what you have and iterate as you go. Thirdly, continuous monitoring is key. The world is constantly changing, so it\u0026rsquo;s essential to regularly review and update your findings.\nIn the next post, I\u0026rsquo;ll cover what you need to do as the signals and trends become stronger, and the action you need to take to assess business impact and develop roadmaps and plans.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end to end processes and then maintain and update your radars on an ongoing basis with regular interaction with you and your team\nWe can also help you get started by facilitating the end to end processes and then handing responsibilities to you, providing support afterwards where needed\n","date":"30 April 2025","permalink":"/horizon-scanning-for-beginners-part-5-signal-intelligence-turning-data-into-actionable-foresight/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 5 - signal intelligence: turning data into actionable foresight"},{"content":"This fourth part about horizon scanning explains how collaborative foresight teams use a blend of manual and automated analysis techniques to methodically sort, prioritise, and assess the strategic impact of weak signals identified during horizon scanning.\nThis next part of the beginners\u0026rsquo; guide to horizon scanning looks into the techniques used to analyse the subtle indicators of change – weak signals – bringing clarity to their relevance and context. 
This isn\u0026rsquo;t a solitary activity; the work benefits from collective intelligence and structured collaboration. At its heart, this phase involves grouping related signals and discarding irrelevant noise – work best performed by bringing the team together to work methodically. Whether meeting physically or virtually, facilitation is key to guiding this process, ensuring diverse perspectives are heard and a structured approach maintained as the team refines signal noise into discernible patterns of potential impact.\nManual analysis techniques #Weak signal identification, especially in the initial sorting phase, leans heavily on shared pattern recognition amongst the team. Manual techniques provide a good starting point, enabling deep understanding and collaborative interpretation. Trend spotting, for instance, allows the group to collectively observe recurring phenomena across disparate data points, drawing parallels that might signal underlying currents missed by an individual analyst. Similarly, thematic analysis offers a structured, yet manual, approach for the team to categorise signals collaboratively. Together, they can build a shared taxonomy, grouping related signals under coherent themes. This collective sense-making not only brings order to the inherent chaos of raw signals but also makes use of the team\u0026rsquo;s diverse knowledge to identify overarching narratives that could shape future organisational strategies far more effectively than any single viewpoint could achieve. These manual methods are particularly useful when conducted in facilitated workshop settings, encouraging debate, discussion, and consensus-building.\nAutomation in foresight #Alongside the manual methods, automated techniques are increasingly becoming part of the foresight toolkit. Clustering algorithms, powered by machine learning, offer a way to organise large volumes of similar signals rapidly. 
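To illustrate what such clustering does, here is a deliberately naive, dependency-free sketch that groups signals by keyword overlap (Jaccard similarity). Real platforms use machine-learning models and text embeddings; the tokeniser and threshold here are illustrative assumptions only:

```python
def keywords(text: str) -> set:
    """Naive tokeniser: lowercase words longer than three characters."""
    return {w for w in text.lower().split() if len(w) > 3}

def jaccard(a: set, b: set) -> float:
    """Similarity as shared keywords over total keywords (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(texts: list, threshold: float = 0.2) -> list:
    """Greedy single-pass grouping: attach each signal to the first cluster
    whose seed signal is similar enough, otherwise start a new cluster."""
    clusters = []
    for t in texts:
        ks = keywords(t)
        for c in clusters:
            if jaccard(ks, keywords(c[0])) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

signals = [
    "draft EU regulation on artificial intelligence systems",
    "regulator consults on artificial intelligence oversight",
    "consumer demand shifts toward sustainable packaging",
]
print(len(cluster(signals)))  # 2
```

A greedy pass like this scales poorly beyond a few hundred signals, which is exactly where the machine-learning approaches described above earn their keep.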
Automation can help mitigate certain inherent human biases and significantly accelerate the initial sorting process, particularly when dealing with vast datasets from multiple sources. These algorithms can differentiate patterns and similarities that might evade even a vigilant human team, enabling a more comprehensive initial scan of the emerging landscape. The objective remains consistent: to ensure no potentially significant signal is dismissed prematurely and to refine the selection process towards identifying a realistic spectrum of possible futures.\nBe aware that the adoption of automation should be carefully considered in line with your company\u0026rsquo;s foresight maturity and technology strategy. While the speed and scale offered by these tools are tempting, the nuanced interpretation, contextual understanding, and critical judgement required, especially when first evaluating signals, often benefit immensely from human-centric, manual methods. For teams new to horizon scanning, or within companies where foresight is still an emerging capability, starting simple is usually the best approach. Beginning with the manual techniques mentioned above allows the team to build foundational skills, develop a shared language, and cultivate critical thinking about potential impacts. As the foresight function matures, and the volume and velocity of incoming data increase, introducing automated tools and platforms can then powerfully augment and streamline the process, building upon, rather than prematurely replacing, that essential human foundation.\nConnecting signals to strategy: prioritisation and context #Simply identifying and grouping signals isn\u0026rsquo;t enough; the important next step is prioritisation based on potential impact on core business objectives. This transition demands shared focus and understanding of what matters for your company\u0026rsquo;s growth, resilience, and sustainability. 
Impact assessments, conducted collaboratively by the team (often involving stakeholders from relevant business units), are vital here. By collectively evaluating the potential positive or negative repercussions of prioritised signals or clusters, the business can allocate attention and resources more effectively, ensuring strategic focus aligns with identified opportunities and threats.\nScenario analysis also gains significant power through a team-based approach. Constructing a range of plausible future scenarios, grounded in the prioritised weak signals, allows the organisation, guided by the foresight team, to collectively explore and prepare for different possibilities. This collaborative exploration encourages forward-thinking across silos, promoting strategic agility and shared understanding. It allows for a proactive stance, enabling the business to navigate uncertain areas with greater clarity and decisiveness. Techniques like **issue framing** help the team collectively understand the broader context – the social, technological, economic, environmental, and political factors – influencing the signals. Framing issues within these specific contexts enhances the collective ability to anticipate market shifts, regulatory changes, or evolving customer needs, steering decision-making towards proactive adaptation.\nEstablishing a foresight culture #The effectiveness of any technique, manual or automated, builds on the underlying organisational culture. There\u0026rsquo;s a real need to cultivate an environment that genuinely values foresight not just as a planning input, but as a strategic, collaborative asset. This involves leadership championing the process, embracing a tolerance for ambiguity and uncertainty, and encouraging inquisitive exploration across all levels. It means creating safe spaces where weak signals, emerging ideas, and uncomfortable potential futures can be surfaced, discussed openly, and debated constructively without fear. 
Building this atmosphere of perpetual, shared discovery is essential for making horizon scanning a dynamic and impactful capability.\nWhat does this all mean? #The real impact occurs when these diverse techniques are blended and embedded within the cadence of strategic planning, driven by dedicated, well-facilitated teams. Whether starting with manual methods to build understanding or integrating automation to handle scale, the goal is collaborative sense-making. When foresight matures from a niche activity into a core competency synonymous with opportunity recognition and risk mitigation, companies gain a distinct advantage. They can pivot quicker, capitalise on emerging trends ahead of the curve, and align their long-term strategies with the dynamics of an ever-evolving landscape.\nIn the next post I will cover the need to plot key signals and trends in a structured format to enable continuous monitoring.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end to end processes and then maintain and update your radars on an ongoing basis with regular interaction with you and your team\nWe can also help you get started by facilitating the end to end processes and then handing responsibilities to you, providing support afterwards where needed\n","date":"29 April 2025","permalink":"/horizon-scanning-for-beginners-part-4-linking-signals-to-strategy-prioritising-for-impact/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 4 - linking signals to strategy: prioritising for impact"},{"content":"This post emphasises the critical role of actively seeking out weak signals and systematically analysing their implications to enable companies to anticipate future challenges and opportunities, shifting from a reactive to a proactive mindset.\nAnticipating future challenges and opportunities is a continuous process, particularly within the field of governance, risk, compliance, data protection, technology, and AI. 
In the first two parts of this blog series, I looked at setting scope and objectives and getting the team in place and ready to start work. This third part is all about spotting weak signals – those subtle indicators of impending shifts – and covers a proactive approach that allows organisations to adapt, innovate, and maintain a competitive edge in a constantly evolving landscape.\nWe begin by looking at what the weak signals may indicate, starting with new laws and regulations, or changes to existing ones. These represent the formalisation of societal values and often reflect responses to emerging challenges or opportunities. Scrutinising draft legislation, regulatory consultations, and policy discussions can provide crucial insights into future compliance requirements and potential disruptions. For example, think back to the years preceding GDPR, which served as a powerful signal for companies worldwide to re-evaluate their data protection practices. It wasn\u0026rsquo;t simply about adhering to new rules; it was about cultivating a culture of data protection and compliance.\nRegulator activity also speaks volumes. Regulators are the gatekeepers of industry standards, and their actions often foreshadow broader trends. Monitoring their pronouncements, enforcement actions, and guidance notes offers a glimpse into their priorities and the areas they deem ripe for scrutiny. A surge in regulatory investigations into anti-money laundering practices, for instance, might signal an impending crackdown and the need for enhanced compliance measures.\nEmerging technologies are, by their very nature, precursors of change. They possess the potential to disrupt established business models, create new opportunities, and reshape entire industries. Keeping a close watch on technological advancements enables companies to anticipate their impact and strategically incorporate them into their operations. 
It\u0026rsquo;s not about blindly embracing every new piece of tech, but about understanding the underlying principles and assessing its relevance to the organisation\u0026rsquo;s goals.\nEnforcements, while often perceived as punitive measures, can serve as valuable learning experiences. Analysing enforcement actions reveals the specific areas where companies are falling short and the consequences of non-compliance. This information can be used to identify potential vulnerabilities within the company\u0026rsquo;s own practices and implement corrective measures. For instance, a hefty fine levied against a company for personal data breaches might prompt others to bolster their cybersecurity defences and data protection measures.\nSocietal trends wield significant influence. Evolving demographics, shifting consumer preferences, and changing social values can reshape markets and create new demands. Paying attention to these trends enables companies to adapt their products, services, and marketing strategies to resonate with their target audiences. The growing awareness of environmental sustainability, for instance, has spurred a surge in demand for eco-friendly products and responsible business practices.\nCompetitor activity provides invaluable insights into market dynamics and strategic directions. Monitoring competitors\u0026rsquo; product launches, marketing campaigns, and strategic alliances can reveal emerging trends and potential threats. This information can be used to refine the organisation\u0026rsquo;s own strategies, identify competitive advantages, and mitigate potential risks. It\u0026rsquo;s not about simply copying what competitors are doing, but about understanding their rationale and adapting their approaches to the organisation\u0026rsquo;s unique circumstances.
If you discovered that one of your competitors was gaining market share as a result of a highly publicised campaign around the ethical processing of personal data and consumer trust, would you just ignore this?\nGeopolitical activity can send ripples across the global landscape, affecting trade flows, supply chains, and regulatory environments. Monitoring geopolitical events, such as trade agreements, political instability, and international conflicts, enables companies to anticipate potential disruptions and develop contingency plans. A trade war between two major economies, for example, might necessitate adjustments to supply chain strategies and pricing policies. To illustrate this point, just look at the global impact of the first 100 days of the new administration in the US. Interestingly, the vendor I use for my horizon scanning tool, FIBRES, informed me recently that one of their clients had predicted this very scenario a few years back and was able to respond rapidly when the situation materialised.\nMany organisations employ the PESTEL model – Political, Economic, Social, Technological, Environmental, and Legal – as a structured framework for identifying and analysing weak signals. This model provides a comprehensive lens through which to examine the external environment and identify potential drivers of change.\nThe responsibility for spotting these signals should not be confined to dedicated foresight colleagues. Ideally, it should be integrated into the daily work of employees across all departments. By establishing a culture of awareness and encouraging employees to report potential signals, companies can utilise the collective intelligence of their workforce.\nThese signals, once identified, should be captured in a central repository or dedicated horizon scanning tool. This repository serves as a collective memory, allowing the company to track the evolution of signals over time and identify emerging patterns.
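Purely as an illustration (my own sketch, not tied to FIBRES or any particular horizon scanning product), a captured signal tagged with PESTEL categories might be modelled like this:

```python
from dataclasses import dataclass, field
from datetime import date

# The six PESTEL categories used to classify weak signals
PESTEL = {"Political", "Economic", "Social", "Technological", "Environmental", "Legal"}

@dataclass
class Signal:
    title: str
    source: str
    spotted_on: date
    categories: set[str] = field(default_factory=set)

    def __post_init__(self):
        # Reject any category outside the PESTEL model
        unknown = self.categories - PESTEL
        if unknown:
            raise ValueError(f"Unknown PESTEL categories: {unknown}")

# A tiny in-memory repository; in practice this is a shared tool or database
repository: list[Signal] = []
repository.append(Signal(
    title="Draft AI liability rules under consultation",  # hypothetical example signal
    source="Regulator newsletter",
    spotted_on=date(2025, 4, 1),
    categories={"Legal", "Technological"},
))

# Filtering by category is how emerging patterns start to surface
legal_signals = [s for s in repository if "Legal" in s.categories]
```

The point of the structure is simply that every signal carries its source, date, and categories, so the repository can be filtered and trended over time rather than living in individual inboxes.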
It facilitates collaboration and knowledge sharing among employees, ensuring that valuable insights are not lost in silos.\nTo conclude, foresight and horizon scanning are not mere buzzwords. They are essential practices for navigating the complexities of today\u0026rsquo;s business reality. By actively seeking out weak signals and systematically analysing their implications, companies can gain a significant advantage in anticipating future challenges and opportunities. It is about shifting from a reactive to a proactive mindset, empowering companies to shape their own destinies rather than being buffeted by the winds of change. It\u0026rsquo;s a challenging but ultimately rewarding journey, one that demands vigilance, curiosity, and a willingness to embrace the unknown.\nIn the next part of this blog series, I\u0026rsquo;ll cover the techniques that can be used to sort and categorise the weak signals for relevance and context.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end-to-end process and then maintain and update your radars on an ongoing basis, with regular interaction with you and your team.\nWe can also help you get started by facilitating the end-to-end process and then handing responsibilities to you, providing support afterwards where needed.\n","date":"28 April 2025","permalink":"/horizon-scanning-for-beginners-part-3-deciphering-weak-signals/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 3 – deciphering weak signals"},{"content":"To effectively anticipate and navigate emerging challenges and opportunities, companies must cultivate a multi-disciplinary team equipped with the knowledge and skills to seamlessly integrate horizon scanning into their existing roles.\nThe current geopolitical climate, together with the ongoing rapid evolution of technology, demands a proactive approach from companies to address uncertainties and impacts.
We\u0026rsquo;re not just reacting to change, we\u0026rsquo;re anticipating it, preparing for it, and shaping it to our advantage. In the first part of this blog series, we laid the groundwork by defining the scope and objectives of our foresight initiatives. Now, we turn our attention to the engine that will drive this process: the multi-disciplinary team.\nAssembling this team is not about creating a new department or adding layers of bureaucracy. It’s about strategically leveraging the expertise and insights already present within the company. The key is representation. We need voices from every corner of the business that will be touched by emerging regulations, disruptive technologies, and evolving trends. This includes, but is not limited to, legal, compliance, IT, marketing, and operations. Each department brings a unique perspective, a different lens through which to view the horizon.\nThink of it as building a diverse portfolio of perspectives. Legal sees potential regulatory pitfalls, the tech specialist identifies technological opportunities, the marketer understands shifting consumer preferences, and the operations manager anticipates logistical challenges. The power lies in the synthesis of these perspectives, the ability to connect the dots and identify emerging patterns that would otherwise remain hidden.\nImportantly, these team members are not full-time horizon scanners and we\u0026rsquo;re not creating another silo. Instead, we are embedding foresight into the fabric of their existing roles. This is a crucial point. Overloading individuals with additional responsibilities is a recipe for burnout and ineffectiveness. The goal is to integrate horizon scanning tasks seamlessly into their daily routines, making it a natural extension of their existing duties. This requires careful consideration of workload and a commitment to providing the necessary support and resources.\nEducation is critical. 
These individuals need a solid grounding in the principles and practices of foresight. They need to understand the \u0026ldquo;why\u0026rdquo; – the strategic imperative for anticipating change. They need to grasp the \u0026ldquo;what\u0026rdquo; – the specific regulations, technologies, and trends that are relevant to the organisation. They need to learn the \u0026ldquo;how\u0026rdquo; – the tools and techniques for gathering, analysing, and interpreting information. They need to know the \u0026ldquo;when\u0026rdquo; – the appropriate timeframe for considering future scenarios. And they need to understand the \u0026ldquo;who\u0026rdquo; – the stakeholders who will be impacted by these changes.\nThis education should be comprehensive, covering a range of topics from trend analysis and scenario planning to risk assessment and decision-making. It should also be practical, providing team members with hands-on experience in using various foresight tools and techniques.\nIn many companies, these teams are often geographically dispersed. Online education platforms offer a cost-effective and efficient way to reach a wide audience. Virtual workshops, webinars, and online courses can provide team members with the knowledge and skills they need to succeed.\nHowever, there is a strong argument for an in-person kickoff meeting, if budget allows. Bringing the team together physically can build a sense of camaraderie and collaboration. It provides an opportunity for team members to build relationships, share insights, and develop a shared understanding of the company\u0026rsquo;s foresight objectives. The benefits of face-to-face interaction should not be underestimated. The informal conversations, the shared meals, and the spontaneous brainstorming sessions can often lead to breakthroughs that would not occur in a virtual environment.\nOf course, the logistics and costs of a global in-person meeting can be significant. 
Travel expenses, accommodation, and venue hire can quickly add up. However, the potential return on investment in terms of improved team cohesion and enhanced foresight capabilities may well justify the expense.\nThe ideal approach may be a hybrid model, combining online education with occasional in-person meetings. This allows for continuous learning and knowledge sharing while also providing opportunities for face-to-face interaction and relationship building.\nOnce the team is established and educated, it\u0026rsquo;s time to begin the scanning process - this will be covered in part 3 of this blog series.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end-to-end process and then maintain and update your radars on an ongoing basis, with regular interaction with you and your team.\nWe can also help you get started by facilitating the end-to-end process and then handing responsibilities to you, providing support afterwards where needed.\n","date":"25 April 2025","permalink":"/horizon-scanning-for-beginners-part-2-from-silos-to-synergy-creating-a-cross-functional-foresight-team/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 2 – from silos to synergy: creating a cross-functional foresight team"},{"content":"This initial instalment of our foresight series illuminates the critical factors, including business sector, nature of business, geographies, and applicable laws, that must guide the definition of scope and objectives for effective horizon scanning, thereby maximising resource allocation and strategic impact.\nThis is the first part of a series of posts explaining why foresight and horizon scanning should be prioritised by every GRC leader, especially given the current turbulent global landscape, characterised by geopolitical instability and relentless technological advancement. The ability to anticipate future challenges and opportunities is not merely advantageous; it\u0026rsquo;s essential.
Foresight and horizon scanning have emerged as critical tools for companies seeking to navigate uncertainty and secure long-term success. But before looking into the intricacies of these processes, let\u0026rsquo;s just define what horizon scanning is from a Purpose and Means perspective: it is a structured process for identifying and analysing the emerging trends, risks, and opportunities that could affect your organisation. It’s about looking beyond the immediate and asking:\nWhat’s coming?\nHow will it impact us?\nWhat do we need to do about it?\nThis first part of the series covers setting objectives and scope.\nAs with many tasks, start with the end in mind. It\u0026rsquo;s crucial to define the objectives and scope, ensuring that company resources are deployed effectively and strategically. Let\u0026rsquo;s explore the key factors that should shape the boundaries of foresight and horizon scanning initiatives.\nFirst and foremost, the business sector in which your company operates exerts a huge influence on the nature and extent of your foresight activities. A technology company, for example, must maintain awareness of emerging technologies, evolving customer preferences, and potential disruptions from competitors. Their foresight efforts might focus on identifying breakthrough innovations, assessing the feasibility of new business models, and anticipating the impact of regulatory changes on the digital landscape. On the other hand, a company in the heavily regulated financial services sector will need to focus on keeping pace with evolving regulations and regulatory expectations, as well as the use of new technologies to counter increasingly sophisticated cyber threats.\nThe nature of the business itself also plays a pivotal role in shaping the scope of foresight and horizon scanning. 
A company with a long product development lifecycle, such as in the pharma sector, must adopt a long-term perspective, anticipating trends that may not manifest for several years. Their foresight efforts might involve monitoring scientific breakthroughs, assessing the potential impact of demographic shifts on disease prevalence, and anticipating changes in healthcare policy. Conversely, a company in a fast-moving consumer goods industry may need to focus on more immediate trends, such as changing consumer tastes and preferences, the rise of new distribution channels, and the impact of social media on brand reputation.\nGeographic considerations are equally important. A multinational corporation operating in multiple countries must account for the unique political, economic, social, and technological factors in each region. Their foresight efforts might involve monitoring political stability, assessing the impact of trade agreements, and anticipating the emergence of new markets. Also, they must consider the cultural nuances and ethical considerations that may vary across different geographies.\nFinally, applicable laws and regulations form a crucial boundary for foresight and horizon scanning. Organisations must ensure that their activities comply with all relevant legal and ethical standards. Their foresight efforts might involve monitoring changes in data protection, AI governance, cybersecurity laws (to mention a few), assessing the impact of environmental regulations, and anticipating the emergence of new legal challenges. Also, they must be mindful of the potential for unforeseen legal risks and liabilities.\nWho should be involved in setting the objectives and scope of foresight and horizon scanning? This is not a task to be delegated to a junior team or a single department. 
It requires a collaborative effort involving key stakeholders from across the company.\nSenior leadership: Their strategic vision and understanding of the organisation\u0026rsquo;s goals are essential for aligning foresight activities with overall business objectives. Without their buy-in, the entire process is likely to be ineffective.\nSubject matter experts: Individuals with deep knowledge of specific areas, such as technology, regulation, or market trends, can provide valuable insights and identify potential areas of concern. Their expertise is crucial for ensuring that the foresight activities are grounded in reality.\nRisk management professionals: Their understanding of the company\u0026rsquo;s risk appetite and risk tolerance is essential for identifying potential threats and vulnerabilities. Ignoring their input is a significant oversight.\nStrategic planning team: They are responsible for developing and implementing the company\u0026rsquo;s strategic plans, and their involvement ensures that foresight activities are integrated into the planning process. Without this integration, foresight is merely an academic exercise.\nLegal and compliance team: Their expertise is important for ensuring that foresight activities comply with all relevant laws and regulations. Failure to involve them can lead to costly legal challenges.\nThe composition of this group must be carefully considered to ensure a diverse range of perspectives and expertise. Without this diversity, the objectives and scope of the foresight activities are likely to be narrow and incomplete.\nYou must also reference various sources of information to determine the scope and objectives of foresight and horizon scanning. This is not a process of guesswork or intuition; it requires a systematic and thorough approach to gathering and analysing data - in a way, it\u0026rsquo;s a pre-scan but you\u0026rsquo;ll need to balance it to ensure this first step does not become too overwhelming. 
Sources to consider include:\nInternal documentation e.g. strategies and expertise\nIndustry reports and analyses\nAcademic research\nGovernment publications and regulatory updates\nNews media and social media\nIn my view, this first phase is critical. Failing to carefully define the scope of foresight and horizon scanning work can lead to wasted resources, missed opportunities, and potentially even existential threats. Imagine a retailer focusing solely on domestic markets, while ignoring the rise of e-commerce giants in other regions. Or consider a manufacturer failing to anticipate the impact of climate change on its supply chains. These are just a few examples of the perils of neglecting foresight. I think that the businesses that will thrive in the coming years will be those that make a strategic, well-informed effort to look ahead.\nYou might want to consider that these factors are not mutually exclusive; they often interact and overlap. The business sector influences the nature of the business, which in turn shapes the geographic considerations and applicable laws. Therefore, a holistic approach is essential, one that considers all of these factors collectively.\nThe most effective foresight initiatives are those that are integrated into an organisation\u0026rsquo;s strategic planning process. Foresight should not be viewed as a standalone activity, but rather as an integral part of decision-making. By incorporating foresight into their strategic thinking, organisations can make more informed choices, anticipate potential risks and opportunities, and ultimately achieve sustainable success.\nLastly, the scope of foresight and horizon scanning work is not a fixed entity; it\u0026rsquo;s a dynamic boundary that must be continually reassessed and refined in light of changing circumstances. 
By carefully considering the factors mentioned above, your company can ensure that its foresight efforts are targeted, effective, and aligned with its strategic goals.\nOnce the objectives and scope are agreed and documented, it\u0026rsquo;s time to assemble a team around you - this will be covered in part 2.\nReady to start scanning the horizon?\nWe can get you started by facilitating the end-to-end process and then maintain and update your radars on an ongoing basis, with regular interaction with you and your team.\nWe can also help you get started by facilitating the end-to-end process and then handing responsibilities to you, providing support afterwards where needed.\n","date":"24 April 2025","permalink":"/horizon-scanning-for-beginners-part-1-laying-the-foundation-with-objectives-and-scope/","section":"Blog","summary":"","title":"Horizon scanning for beginners: Part 1 - laying the foundation with objectives and scope"},{"content":"By integrating data protection by design and by default principles into Make.com scenarios, you can responsibly build GenAI agents that automate tasks, streamline processes, and ensure compliance with regulations like the EU AI Act and data protection laws.\nHaving recently completed an excellent bootcamp from AI Academy introducing me to AI agents and automation tools like Make.com and Zapier, I\u0026rsquo;ve gained a deeper appreciation for the power – and the responsibility – that comes with this technology. The bootcamp was very hands-on and allowed me to scope and develop my own projects that, by the end of the 6 weeks, I was quite proud of. What I particularly liked about the course was the heavy focus on testing, which I feel is not mentioned enough when using these tools.\nAs we\u0026rsquo;re well aware, the continued evolution of GenAI provides new ways of automating tasks and enhancing operations, and learning how to use these tools brings a greater understanding of where they need to be constrained.
Tools like Make.com now empower non-technical professionals to build sophisticated GenAI agents, connecting applications to create intelligent workflows - this is just what I did, and I hope to be able to launch my product on this website in the coming months. However, as we know, innovation must be balanced with a clear understanding of legal obligations. Regulations like the EU AI Act and the GDPR impose constraints that must be addressed proactively, not as an afterthought.\nLeveraging GenAI agents in Make.com for tangible results\nMake.com’s visual platform allows you to design scenarios that automate processes. Integrating GenAI models with this automation engine unlocks significant potential. A couple of applications I considered but down-prioritised:\nEnhanced customer support: streamline support by analysing incoming tickets, summarising issues, and drafting personalised responses, escalating complex cases when human intervention is required.\nData enrichment and analysis: efficiently extract insights from unstructured data sources like news articles and customer reviews, and integrate these findings into business intelligence dashboards.\nThe ability to prototype and deploy these solutions rapidly with Make.com makes them valuable for companies seeking practical results.\nThe regulatory landscape\nThe opportunities presented by GenAI must be addressed with a clear understanding of the regulatory environment. While the EU AI Act continues to evolve, its principles are well-defined:\nRisk-based approach: AI systems are categorised by risk level, with higher-risk systems subject to more stringent requirements. Even \u0026ldquo;low-risk\u0026rdquo; GenAI agents are subject to transparency and due diligence requirements.\nTransparency and explainability: users need to understand how AI systems function and make decisions.
This is particularly important for GenAI, where outputs can be difficult to predict.\nData governance: high-quality, unbiased data is essential for training and operating AI systems. Data security, privacy, and accuracy must be ensured.\nHuman oversight: this is essential to prevent AI systems from making harmful or discriminatory decisions.\nBeyond the EU AI Act, data protection laws like GDPR must be understood and translated into requirements.\nIntegrating compliance as a fundamental requirement: Building \u0026ldquo;Data Protection by Design and by Default\u0026rdquo; into Make.com\nResponsible GenAI agent development requires treating legal and ethical obligations as fundamental requirements, especially building data protection by design and by default into your Make.com scenarios. Considerations include, to name a few:\nDataflow mapping and purpose limitation:\nMap dataflows: document the complete dataflow of your GenAI agent, identifying data sources, processing steps, and destinations. What type of data is being processed, where is it stored, and for how long?\nDefine purpose: if the solution involves processing personal data, clearly define the various purposes for that processing. The purposes should be specific, limited, and communicated to users.\nData minimisation:\nMinimise collection: collect only the personal data necessary for the defined purpose.
Avoid collecting excessive or irrelevant data by using Make.com’s modules to minimise the data shared with GenAI models – provide only essential information.\nTransparency and notice:\nProvide clear notices: inform users how their personal data is being processed by the GenAI agent, including the purpose of processing, types of data, and their rights under GDPR.\nExplainable AI: strive to make the GenAI agent\u0026rsquo;s decisions as transparent as possible, building trust and allowing users to understand them.\nSecurity and access control:\nImplement security measures: protect personal data from unauthorised access by implementing strong encryption, access controls, and security monitoring.\nControl model access: restrict access to GenAI models to authorised personnel using role-based access control.\nHuman oversight and intervention:\nImplement human-in-the-loop: design your GenAI agent to include human oversight, allowing operators to review decisions, intervene when necessary, and prevent harmful outcomes.\nEstablish escalation procedures: define clear procedures for escalating cases requiring human intervention to ensure complex issues are handled by qualified personnel. This is where Make.com’s routing modules can direct cases to human operators based on predefined criteria.\nData retention and deletion:\nEstablish retention policies: define how long personal data will be retained, complying with GDPR and other applicable laws.\nImplement deletion procedures: establish procedures for securely deleting personal data when it is no longer needed, using Make.com’s scheduling modules to automate data deletion tasks.\nExample: creating a centralised Event Hub with Make.com\nMy own bootcamp project involved building a system with Make.com that consolidated information about events (conferences, webinars, meetings, etc.) covering data protection, AI governance, and infosec.
It extracts data from emails, RSS feeds, and websites, with the following aspects:\nData sources: the agent collects event information from various sources.\nEmail parsing: when parsing event emails, the focus is on extracting event details: title, date, description, URL.\nRSS feeds/websites: ensure compliance with website terms of service. Extract similar event details.\nPurpose limitation: limit extracted data to essential event information to avoid processing personal data.\nGenAI enhancement: the GenAI model summarises event descriptions and categorises events by topic.\nData storage: store extracted event information in Airtable.\nUser interface: display the consolidated event information on the Purpose and Means website.\nTransparency: provide a clear data processing notice specifically for this solution explaining the sources of event information and how it’s being used, and also update the overall website notice.\nAutomation: schedule Make.com to regularly check data sources for new events.\nData deletion: schedule Make.com to automatically delete event information after a defined period.\nConclusion: promoting responsible AI implementation\nBuilding GenAI agents with Make.com offers opportunities to automate processes and improve efficiency, but this requires a commitment to responsible implementation. By treating legal and ethical obligations as core requirements and integrating data protection by design and by default into your Make.com scenarios, you can develop effective and compliant GenAI solutions.
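Make.com implements the steps above as visual modules rather than code, but purely to illustrate the underlying logic, here is a hedged Python sketch of the extraction and retention steps. The field names and text patterns are my own assumptions for illustration, not Make.com's actual parsing behaviour:

```python
import re
from datetime import date, timedelta

def extract_event(email_text: str) -> dict:
    """Pull minimal event details (title, date, URL) from an email body.

    The regex patterns are illustrative assumptions about the email layout.
    """
    title = re.search(r"Event:\s*(.+)", email_text)
    when = re.search(r"Date:\s*(\d{4})-(\d{2})-(\d{2})", email_text)
    url = re.search(r"https?://\S+", email_text)
    return {
        "title": title.group(1).strip() if title else None,
        "date": date(*map(int, when.groups())) if when else None,
        "url": url.group(0) if url else None,
        # Deliberately no sender name or address: purpose limitation in action
    }

def purge_old_events(events: list[dict], today: date, keep_days: int = 30) -> list[dict]:
    """Retention step: drop events older than the defined retention period."""
    cutoff = today - timedelta(days=keep_days)
    return [e for e in events if e["date"] is None or e["date"] >= cutoff]

# Hypothetical event email and a scheduled clean-up run
email = "Event: GDPR in Practice\nDate: 2025-05-10\nMore info: https://example.com/gdpr"
event = extract_event(email)
events = purge_old_events([event], today=date(2025, 7, 1), keep_days=30)
```

The design point is that minimisation and deletion are part of the dataflow itself (only event fields are ever extracted, and stale records are filtered out on a schedule), rather than a clean-up bolted on afterwards.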
With Make.com\u0026rsquo;s visual overview, it\u0026rsquo;s also easy to pinpoint where controls are required.\nAs mentioned above, the bootcamp project may see the light of day on this website sometime in the coming months.\n","date":"22 April 2025","permalink":"/responsible-automation-using-make-from-a-data-protection-perspective/","section":"Blog","summary":"","title":"Responsible automation using Make from a data protection perspective"},{"content":"Companies must proactively balance the innovative potential of GenAI with effective data security measures and employee empowerment to mitigate risks arising from the blurred boundaries between workplace and personal use.\nThe rise of GenAI is transforming the operational landscape across many sectors. From streamlining workflows and enhancing productivity to unlocking people\u0026rsquo;s creativity, GenAI tools are rapidly becoming indispensable assets for companies striving to maintain a competitive edge in the modern marketplace. But this \u0026ldquo;tech revolution\u0026rdquo; has also introduced a complex set of challenges, particularly concerning the increasingly porous demarcation between workplace policies and individual autonomy, specifically as it relates to data security and acceptable use.\nThe division between the regulated environment of corporate IT infrastructure and the almost wild-west access to a range of GenAI applications in the personal space is widening, posing a significant threat to sensitive and proprietary organisational data and raising fundamental questions about the extent of control employers can, and should, exert over their employees\u0026rsquo; digital activities beyond the confines of the workplace.\nA precedent for concern\nThe concerns surrounding GenAI usage are not entirely new. For years, companies have had to overcome the challenges of employees using company-issued devices for personal use, and vice versa.
The accessibility of software like Microsoft Office 365, both within the corporate network and on personal devices, has long presented a risk of inadvertent data leakage or policy breaches. But the transformative potential and inherent data-intensive nature of GenAI tools amplify these existing concerns exponentially.\nUnlike traditional software applications that primarily function as conduits for data creation and storage, GenAI tools possess the capacity to actively process, analyse, and synthesise information, often requiring access to vast datasets to effectively execute their intended functions. This inherent reliance on data inputs significantly increases the potential for sensitive organisational information, including intellectual property, confidential client data, and proprietary algorithms, and not least personal data, to be inadvertently or deliberately exposed to external, unregulated environments.\nConsider the hypothetical scenario of an employee using a company-provided GenAI tool, such as Microsoft Co-Pilot, to draft a marketing proposal containing confidential market research data. While the use of Co-Pilot may be sanctioned under the company\u0026rsquo;s AI Acceptable Use Policy and subject to strict security protocols, the same employee might subsequently utilise a publicly available GenAI tool like ChatGPT or Google Gemini from their home computer to refine the proposal\u0026rsquo;s messaging or generate accompanying visuals. 
In doing so, the employee unknowingly uploads the sensitive market research data to an external server, potentially compromising its confidentiality and exposing the company to significant competitive disadvantage.\nA hypothetical scenario, but closer to reality than you might like to think, and it highlights a crucial vulnerability inherent in the current GenAI landscape: the assumption that corporate policies and technological controls are sufficient to prevent data leakage across the increasingly permeable boundary between work and personal life. While companies may implement stringent policies governing the use of GenAI tools within the workplace, they often lack the visibility and authority to effectively monitor or control their employees\u0026rsquo; digital activities outside of working hours, particularly when personal devices and accounts are involved.\nPolicy vs. practice: the limits of enforcement\nThe implementation of a water-tight AI Acceptable Use Policy is undoubtedly a crucial first step in mitigating the risks associated with GenAI adoption. But a policy alone is insufficient to guarantee data security and compliance. The effectiveness of any policy hinges on its practical enforceability, and the reality is that companies often face significant challenges in monitoring and controlling their employees\u0026rsquo; adherence to these guidelines, particularly outside of the traditional workplace setting.\nEven with the implementation of data loss prevention (DLP) systems and endpoint detection and response (EDR) tools, companies can struggle to detect and prevent employees from deliberately or inadvertently uploading sensitive data to external GenAI platforms. Attempts to excessively monitor employee behaviour outside of the workplace and outside of working hours can trigger privacy concerns and potentially violate local employment laws.\nA recent education session I delivered for a client epitomises this point. 
The company had invested considerable resources in developing a comprehensive AI Acceptable Use Policy and deploying Microsoft Co-Pilot as its primary GenAI tool. But during the session, employees openly expressed concerns about the policy\u0026rsquo;s enforceability, particularly in light of their existing reliance on a variety of publicly available GenAI tools in their personal lives. The employees\u0026rsquo; questions highlighted a fundamental tension between the company\u0026rsquo;s desire to maintain control over its data assets and its employees\u0026rsquo; need for flexibility and autonomy in their digital interactions.\nSenior leaders representing infosec and data protection expertly addressed these points, but this issue will not go away. It will morph and change as AI continues to evolve.\nMy client\u0026rsquo;s policy is very close to what I might call \u0026ldquo;best-in-class\u0026rdquo;: it contained various use cases and referenced the internal approval processes required for higher-risk cases. A policy update was already pending at the time I delivered the session, so I got a good sense that the document was living and breathing.\nKey considerations for navigating the Generative AI tightrope:\nTo effectively navigate the complex and evolving landscape of GenAI adoption, companies must adopt a multi-faceted approach that balances workplace innovation with data security and individual autonomy. This requires:\nScenario-based education and training: generic policy statements are rarely sufficient to change employee behaviour. Training programmes should focus on providing relevant, concrete examples of potential GenAI use cases, highlighting the associated risks and outlining practical steps employees can take to mitigate those risks. Training should be contextualised to specific business functions and data types, making it easier for employees to understand the policy\u0026rsquo;s relevance to their day-to-day work. 
This should also involve regular refresher sessions to reinforce best practices and lessons learned, and to address evolving threats.\nDynamic policy refinement: AI Acceptable Use Policies should not be treated as static documents. Companies must establish a process for regularly reviewing and updating their policies to reflect the rapidly evolving capabilities and potential risks associated with GenAI tools. This includes proactively monitoring emerging trends in the GenAI landscape, identifying potential vulnerabilities, and adapting policies accordingly.\nTransparent communication and collaboration: open and honest communication between company leadership, IT departments, and employees is essential for building trust and for establishing and maintaining a mindset of responsible GenAI usage. Companies should encourage employees to voice their concerns and provide feedback on the AI Acceptable Use Policy, demonstrating a commitment to continuous improvement and collaborative problem-solving.\nPortfolio management integration: embed business cases for AI within wider portfolio management processes to ensure effective risk-based processes can be adopted.\nRisk-based security measures: implement security controls commensurate with the sensitivity of the data being processed by GenAI tools. This may involve restricting access to certain GenAI platforms, implementing data encryption protocols, and utilising advanced threat detection systems to monitor for anomalous activity.\nEmployee empowerment and autonomy: recognise that employees are often the first line of defence against data breaches and policy violations. Empower employees to make informed decisions about their GenAI usage by providing them with the knowledge and resources they need to identify and mitigate potential risks. 
This includes encouraging employees to report suspicious activity or potential policy violations without fear of reprisal.\nA shared responsibility\nThe successful integration of GenAI into the modern workplace requires a shared commitment from both companies and employees. Companies must invest in contextual policies, security measures, and training programmes to address the risks associated with GenAI adoption, while employees must adopt a mindset of responsible usage and actively participate in safeguarding sensitive data.\nBy establishing open communication, embracing continuous improvement, and recognising the importance of individual autonomy, companies can navigate the GenAI tightrope with confidence, harnessing the transformative power of this tech while safeguarding their data assets and maintaining the trust of their stakeholders.\nContact Purpose and Means for engaging education, training and awareness to support your policy roll-outs. Content can be hosted on your LMS or on our own LMS platform.\n","date":"14 April 2025","permalink":"/navigating-the-blurry-lines-between-corporate-and-personal-use-of-genai/","section":"Blog","summary":"","title":"Navigating the blurry lines between corporate and personal use of GenAI"},{"content":"We look at how the significant and potentially unsustainable water consumption of BigTech data centres, as highlighted in a recent article in The Guardian, poses a direct threat to their publicly stated ESG goals, particularly their environmental and social commitments, and risks accusations of greenwashing, social washing, and overall ESG washing.\nWhether you like it or not, data centres are the backbone of our online lives. They power everything from streaming services and social media to cloud computing and AI. 
They\u0026rsquo;re mostly \u0026ldquo;always on\u0026rdquo;, and a critical and often overlooked resource underpins their continuous operation: water.\nA recent investigation by The Guardian has cast a stark light on the huge water consumption of BigTech’s data centres, raising serious questions about their environmental responsibility and the true cost of our digital convenience. The article alleges a significant and often undisclosed reliance on water for cooling these energy-intensive facilities, particularly in regions already facing water scarcity. This revelation has the potential to deeply impact the environmental and social goals championed by these very tech giants, forcing a critical re-evaluation of their sustainability claims and potentially exposing them to accusations of ESG washing.\nAs defined by ESG Matrix, \u0026ldquo;ESG washing\u0026rdquo; refers to practices where companies misrepresent their ESG performance or commitments, making them appear more environmentally and socially responsible than they really are. The allegations surrounding BigTech’s data centre water usage highlight several potential forms of this practice.\nFor years, BigTech companies have publicly committed to ambitious Environmental, Social, and Governance (ESG) targets. Their websites are full of promises of resource efficiency, carbon neutrality, and positive community impact. 
However, the allegations surrounding their data centres\u0026rsquo; water usage present a direct challenge to these commitments, potentially exposing a significant blind spot in their sustainability strategies and raising concerns about environmental washing (or greenwashing), where environmental benefits are overstated or negative impacts are downplayed.\nTo understand the seriousness of these allegations, I took a look into the ESG sections of each company’s website, scrutinising their stated environmental and social goals in light of the potential impact of high, and potentially unsustainable, water consumption. I used Gemini 2.0 Flash to perform the analysis.\nDraining resources and clouding sustainability claims (and the risk of Greenwashing)\nThe core environmental concern revolves around the sheer volume of water required to cool the heat generated by thousands of servers operating around the clock. Traditional cooling methods often rely on evaporative cooling towers, which can consume vast quantities of water, particularly in hot and arid climates. The Guardian’s report suggests that this consumption is often not transparently disclosed, making it difficult to assess the true environmental footprint of these companies and raising concerns about a lack of transparency, a key tactic in ESG washing.\nContradiction of resource efficiency goals (and potential cherry-picking): all four companies mentioned in The Guardian\u0026rsquo;s article explicitly state commitments to resource efficiency and minimising their environmental impact. High water consumption, especially in water-stressed areas, directly contradicts these goals. 
Some of the companies might be engaging in cherry-picking, heavily promoting smaller positive environmental initiatives while downplaying the significant water footprint of their data centres.\nImpact on water stewardship targets (and Sustainability Washing): some of the companies have even set ambitious “water positive” targets, aiming to replenish more water than they consume. Significant, unaccounted-for water usage in their data centres could make these goals unattainable and undermine their credibility as environmental stewards, contributing to sustainability washing by creating a misleading impression of overall positive impact.\nExacerbating water scarcity (and Impact Washing): the withdrawal of large quantities of water for industrial purposes can strain local water resources, impacting agriculture, ecosystems, and the availability of clean water for communities. By not fully disclosing or addressing this potential negative impact while highlighting positive community contributions, some of the companies risk impact washing.\nThe energy-water nexus (and vagueness in reporting): the article also touches upon the energy required to pump and treat the vast amounts of water used in data centres. This creates a negative feedback loop, where high water consumption indirectly increases energy demand, potentially hindering their progress towards renewable energy targets and overall carbon footprint reduction goals. 
General statements about \u0026ldquo;reducing energy consumption\u0026rdquo; without addressing the water-energy nexus in data centres could be seen as vagueness in reporting.\nRipples through communities and eroding trust (and Social Washing concerns)\nThe social implications of excessive data centre water consumption are equally significant, potentially undermining the companies\u0026rsquo; commitments to responsible community engagement and social equity and raising concerns about social washing, where social responsibility is overstated.\nImpact on local communities (and lack of transparency): If data centres are drawing heavily on local water resources, it can lead to competition and potential conflict with other users, including farmers, residents, and local industries. The alleged lack of transparency surrounding water usage prevents these communities from fully understanding and addressing the potential impacts.\nErosion of stakeholder trust (and misleading narratives): the alleged lack of transparency surrounding water usage erodes trust between the tech companies and their stakeholders, including local communities, environmental organisations, and socially conscious investors. Presenting narratives of positive community engagement while potentially contributing to water stress can be seen as misleading.\nEnvironmental justice concerns (and downplaying negative impacts): The placement of data centres in or near vulnerable communities, coupled with high water consumption, raises environmental justice concerns. 
Downplaying the potential strain on local water resources in these communities while emphasising broader social responsibility initiatives could be a form of impact washing.\nSocial license to operate (and overall Sustainability Washing): For BigTech companies to maintain their social license to operate – the informal permission granted by local communities and stakeholders – they need to demonstrate that their operations are not detrimental to the well-being of those communities. Significant and undisclosed water usage that impacts local resources can jeopardise this crucial aspect of their long-term sustainability, contributing to a broader sense of sustainability washing if their overall commitments are perceived as disingenuous.\nAn overview of the potential ESG conflicts and washing risks\nBased on my review of their publicly stated ESG goals and the allegations presented in The Guardian, here\u0026rsquo;s a summary of the potential impact on each company, including the types of washing risks (I avoid naming each company specifically):\nTransparency, innovation, and accountability (avoiding the trap of washing)\nThe allegations surrounding data centre water consumption are a reminder that our digital lives have tangible environmental and social costs, and that superficial commitments are insufficient to address them. For BigTech companies to genuinely embrace sustainability and avoid accusations of ESG washing, several key actions are necessary:\nEnhanced transparency (to prevent lack of disclosure): companies must provide detailed and geographically specific data on their water consumption, including the sources of their water and the cooling technologies they employ. 
This transparency is important for stakeholders to accurately assess their environmental footprint and hold them accountable, preventing a key tactic of washing.\nInvestment in water-efficient technologies (beyond cherry-picking): increased investment in and deployment of advanced cooling technologies that minimise or eliminate water usage is essential, rather than solely relying on highlighting minor efficiency gains while neglecting larger issues.\nResponsible site selection (addressing potential negative impacts): future data centre deployments must carefully consider local water availability and potential impacts on water-stressed regions, moving beyond simply choosing locations with cheap land or energy.\nWater stewardship initiatives (meaningful action over just offsetting): beyond their own operations, companies should actively engage in meaningful water stewardship initiatives in the regions where they operate, supporting conservation efforts and collaborating with local communities to ensure sustainable water management, rather than solely relying on distant offsetting projects.\nMeaningful stakeholder engagement (beyond vague promises): open and ongoing dialogue with local communities, environmental organisations, and investors is vital for understanding concerns, building trust, and co-creating solutions that address the water impact of data centres, moving beyond vague promises of community support.\nIndependent auditing and verification (ensuring credibility): to ensure the credibility of their water usage reporting and sustainability claims, companies should consider independent third-party auditing and verification of their data centre water footprint.\n","date":"10 April 2025","permalink":"/washing-away-responsibility-bigtechs-data-centres-and-the-growing-water-crisis/","section":"Blog","summary":"","title":"Washing away responsibility? 
BigTech's data centres and the growing water crisis"},{"content":"Establishing collaborative horizon scanning now, concurrently with addressing immediate geopolitical pressures, is not a luxury but a vital leadership imperative for cultivating the foresight essential to long-term organisational resilience and avoiding the strategic disadvantage of perpetually playing catch-up.\nIn my previous post on this blog, I emphasised the need for proactive planning to navigate the current geopolitical uncertainty. I mentioned, and firmly maintain, that a \u0026lsquo;wait and see\u0026rsquo; approach is not wise, leaving companies vulnerable and data protection and GRC leaders looking unprepared. Building a sound plan to address immediate pressures – supply chain vulnerabilities, shifting market dynamics, internal reprioritisation – is absolutely essential. However, in my opinion, focusing solely on the immediate fires, however intense they may be, is dangerously short-sighted. True strategic resilience demands looking further ahead. It requires establishing horizon scanning capabilities now, not later.\nIt might seem counterintuitive. When immediate crises demand attention and resources are stretched thin, dedicating effort to scanning the distant future can feel like a luxury. Data protection and GRC leaders are understandably consumed by stabilising the ship in the current storm. Yet, I believe this perspective misses a crucial point: failing to look ahead while managing the present is like trying to navigate a complex waterway by only looking at the water directly over the bow. You might avoid the closest rocks, but you risk running aground on a sandbar just beyond your immediate focus, or missing a crucial channel that leads to safer waters.\nEstablishing a horizon scanning function today serves a dual purpose, vital given our current challenges. Firstly, it provides a structured way to gain a clearer, more objective sense of the current complex environment. 
It moves beyond reactive issue resolution to systematically identify the signals, trends, and potential disruptors shaping today\u0026rsquo;s landscape – including those geopolitical factors causing immediate concern. Secondly, it lays the foundation for anticipating what\u0026rsquo;s coming next. By cultivating this foresight capability now, you begin building the organisational \u0026lsquo;muscle\u0026rsquo; needed to identify emerging threats and opportunities before they fully materialise.\nDelaying this implementation means constantly playing catch-up. By the time a future trend or disruption becomes obvious enough to demand immediate attention, the window for proactive strategic response has often narrowed considerably, or even closed entirely. You\u0026rsquo;re forced into reactive mode. Opportunities are missed, threats are harder to mitigate, and your company perpetually feels one step behind. In my view, this reactive stance is unsustainable and significantly increases long-term risk. Starting horizon scanning now, even modestly, allows you to build a baseline understanding and develop the internal processes and skills needed before the next major wave of change hits. And further change is always on the way.\nHorizon scanning: a collaborative endeavour for deeper insight #So, what does effective horizon scanning look like in practice? Crucially, it is not a solitary activity confined to a strategy department or an analyst\u0026rsquo;s desk. At Purpose and Means, our approach is fundamentally rooted in collaboration. We believe that getting the right people around the table is the key to success.\nWhy? Because the future rarely announces itself neatly in one discipline or business function. 
Signals of change are often weak, ambiguous, and scattered across different domains.\nDiverse perspectives: bringing together individuals from various parts of the company – operations, R\u0026amp;D, marketing, finance, HR, technology – provides a richer set of perspectives. Obviously much depends on the scope and objectives - is this purely from a data protection, AI or GRC context, or are broader elements applicable? Someone in legal might spot a subtle shift in AI regulation driven by a distant political tension, while a technologist might identify an emerging niche technology with disruptive potential, and marketing might sense a subtle change in consumer sentiment, e.g. attitudes towards privacy expectations.\nBreaking silos: horizon scanning inherently cuts across traditional organisational silos. A collaborative approach forces communication and knowledge sharing, connecting dots that might otherwise remain isolated within specific departments.\nCollective sense-making: interpreting weak signals and potential futures is complex. Group discussion, debate, and synthesis are vital for building a shared understanding and evaluating the potential significance and implications of identified trends or events. This collective intelligence is far more powerful than individual analysis alone.\nIdentifying who constitutes the \u0026ldquo;right people\u0026rdquo; is a critical first step, mirroring the process we advocate for in crisis planning workshops. It requires a thoughtful mix of expertise, seniority levels, and even cognitive styles – blending deep subject matter knowledge with broader strategic thinking, and including individuals known for challenging conventional wisdom.\nTools to support the scan: from a simple start to a dedicated platform #While the human element – collaboration and critical thinking – is central, the right tools can significantly enhance the effectiveness and efficiency of horizon scanning. 
The process involves gathering, filtering, analysing, and interpreting vast amounts of information from diverse sources (news, reports, academic papers, social media, expert networks, etc.).\nDedicated software platforms exist specifically for this purpose. One tool I highly recommend exploring is Fibres. Platforms like Fibres offer features designed to streamline the workflow:\nStructured data collection: providing frameworks to capture signals and trends systematically.\nCollaborative features: enabling team members to share findings, comment, rate significance, and build upon each other\u0026rsquo;s insights.\nVisualisation: offering ways to map trends, identify connections, and visualise potential future scenarios.\nAI augmentation: some tools leverage AI to help filter noise, identify patterns, and suggest relevant connections.\nUsing such dedicated tools can bring scalability and sustainability to your horizon scanning efforts. However, I strongly caution against letting the desire for the perfect tool become a barrier to starting. To begin with, any collaboration tool can be used.\nThe key, in my opinion, is to start documenting the process and the findings somewhere accessible and collaborative. The features and functionality of the tool can evolve as the practice matures within your company. The most critical step is to simply begin the systematic process of looking outwards and forwards, together.\nHow Purpose and Means can help #Recognising the importance of horizon scanning is one thing; implementing it effectively, especially when facing immediate pressures, is another. Many companies struggle with where to begin, how to structure the process, or how to engage the right people effectively.\nThis is where Purpose and Means can provide crucial support. We don\u0026rsquo;t just advocate for planning and foresight; we help companies embed these capabilities. 
We can assist your company in getting started by:\nEducating your team: we provide foundational training on horizon scanning principles – what it is, why it matters, and the key methodologies involved. We demystify the process and build internal understanding and buy-in.\nTailoring the approach: we work with you to design a horizon scanning process that fits your specific business context, industry, strategic priorities, and available resources.\nFacilitating initial efforts: we can facilitate initial workshops to kickstart the scanning process, help identify relevant domains and sources, guide collaborative sense-making, and establish a sustainable rhythm for the activity.\nDefining roles and responsibilities: we help clarify who needs to be involved and what their contributions should be, ensuring the collaborative effort is structured and productive.\nOur role is to provide the structure, know-how, and facilitation needed to move from recognising the importance of foresight to actively cultivating it within your company.\n","date":"9 April 2025","permalink":"/stop-playing-catch-up-why-your-team-must-scan-the-horizon-starting-immediately/","section":"Blog","summary":"","title":"Stop playing catch-up: why your team must scan the horizon, starting immediately!"},{"content":"Embracing data minimisation is not simply about ticking a compliance box, it represents a fundamental shift in mindset towards a responsible and sustainable approach to data, ensuring that we not only protect individual rights and freedoms but also safeguard the future of our planet.\nWith digital transformation pretty much BaU these days, the convergence of emerging technologies and environmental accountability demands a re-evaluation of traditional business practices for many companies. Environmental, Social, and Governance (ESG) considerations are no longer confined to the realms of resource management and ethical labour practices. 
They must now encompass the very foundation of our digital infrastructure. At the heart of this lies data minimisation, a principle enshrined in most data protection laws and regulations around the world. This principle mandates the collection, processing, and storage of only the data that is strictly necessary for specified, legitimate purposes. To disregard or trivialise data minimisation is not merely a legal oversight, it constitutes a reckless disregard for the environmental well-being of our planet and a betrayal of commitments to a sustainable future made by institutions and bodies like the UN and EU.\nThe environmental footprint of the data behemoth\nThe relentless pursuit of data, often driven by the demands of enhanced insights and competitive advantage, has created a digital behemoth with a huge appetite for resources and a massive environmental footprint. The \u0026ldquo;data hoarding\u0026rdquo; mentality, where companies indiscriminately collect and retain vast quantities of personal data, has created a hidden environmental crisis that demands urgent attention.\nThink about the energy consumption of data centres. These ever-increasing-in-size facilities require tremendous amounts of electricity to power servers, storage devices, and cooling systems. As the volume of data continues to explode, the energy demands of data centres are escalating at an alarming rate, contributing significantly to greenhouse gas emissions and exacerbating the climate crisis. The European Green Deal\u0026rsquo;s ambitious climate neutrality goals cannot be realised without addressing the unsustainable energy consumption of our digital infrastructure.\nThe environmental cost extends beyond energy consumption. The production of servers, storage devices, and networking equipment necessitates the extraction of increasingly scarce rare earth minerals and other valuable resources. 
Shorter hardware lifecycles, driven by the continual demand for greater storage capacities and faster processing speeds, intensify the pressure on these resources and contribute to environmental degradation. The environmental burden of manufacturing IT equipment is often overlooked, but it represents a significant challenge to achieving a circular economy and reducing our reliance on finite resources.\nFurthermore, the rapid pace of technological innovation leads to a growing volume of obsolete servers, storage devices, and other IT equipment, contributing to the global e-waste crisis. Improperly managed e-waste poses a significant threat to human health and the environment, contaminating soil, water, and air with hazardous substances. The long-term consequences of digital waste are only beginning to be understood, but it is clear that we must adopt more sustainable practices for managing electronic waste and reducing its environmental impact.\nData minimisation: data protection practices meets environmental responsibility\nData minimisation, as articulated in Article 5(1)(c) of the GDPR, offers a framework for mitigating the environmental impact of the data-driven economy and aligning our processing practices with the principles of sustainability. By embracing data minimisation, companies can unlock a range of environmental benefits and contribute to a more sustainable future.\nFirstly, data minimisation can reduce the energy consumption of data centres. Storing only essential personal data minimises the overall energy demand of these facilities, directly lowering carbon emissions. By optimising data storage and processing practices, companies can reduce their carbon footprint and contribute to a cleaner, more sustainable energy future.\nSecondly, data minimisation can extend the lifespan of IT infrastructure. 
By reducing the need for constant hardware upgrades and expansions, organisations can prolong the useful life of their existing IT equipment, reducing the demand for new resources and mitigating the environmental impact of manufacturing. Extending hardware lifecycles not only reduces resource consumption but also minimises the generation of e-waste.\nThirdly, data minimisation can help to mitigate the e-waste crisis. Implementing data retention and deletion policies in a proper, thought-out manner, as required by most data protection laws, helps to minimise the volume of obsolete hardware that ends up as e-waste. And at the same time, by ensuring that personal data is securely deleted when it is no longer needed, companies can reduce risks to the rights and freedoms of individuals.\nAlso, embracing data minimisation enhances data security and resilience. A smaller data footprint reduces the attack surface for cybercriminals, making companies less vulnerable to data breaches. By minimising the amount of personal data stored, companies can reduce the potential impact of a data breach on both individuals and the environment.\nYour call to action\nTo fully realise the environmental benefits of data minimisation, companies must embrace a comprehensive and strategic approach that encompasses the following key elements:\nConduct a thorough data audit to identify all personal data collected, processed, and stored, mapping data flows to understand how data is used and shared.\nDefine clear, specific, and legitimate purposes for each processing activity, ensuring that data collection is strictly limited to those purposes.\nImplement data retention policies, specifying how long personal data will be stored and when it will be securely deleted, in accordance with applicable laws and regulations.\nPrioritise data security by implementing measures to protect personal data from unauthorised access, use, or disclosure.\nProvide relevant role-based employee education 
and training to ensure that all employees understand data protection principles and requirements, especially in this context, data minimisation practices.\nEmbed data protection by design and by default, integrating data protection principles into the design of all new products, services, and business processes.\nMake use of privacy-enhancing technologies (PETs) such as anonymisation, pseudonymisation, and differential privacy to minimise the amount of personal data processed while still enabling valuable insights. Be aware that you also need the right competences in your company to work with these technologies. The UK\u0026rsquo;s ICO highlights this in its very useful guidance.\nFinally, promote algorithmic efficiency by using techniques such as feature selection, dimensionality reduction, and model compression to create leaner and more efficient algorithms and models, reducing energy consumption.\nData minimisation is far more than a mere legal compliance exercise, it is a fundamental ethical and environmental imperative. It is my unwavering conviction that companies that fail to embrace data minimisation are not only exposing themselves to significant legal and reputational risks but are also actively undermining global efforts to build a sustainable digital future.\nThe time for complacency is over. Companies and public sector bodies must recognise the environmental consequences of unchecked data growth and take decisive action to minimise their data footprint. 
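To make the pseudonymisation PET mentioned above concrete, here is a minimal sketch of one commonly used technique: replacing a direct identifier with a keyed hash before analysis. The record fields, key handling, and function name are illustrative assumptions, not a prescribed implementation; in practice the key belongs in a secrets manager and the approach must fit your own risk assessment.

```python
import hmac
import hashlib

def pseudonymise(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the secret key makes dictionary attacks on
    common values (e.g. email addresses) impractical for anyone holding
    the data but not the key, which must be stored separately.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: keep only the fields the stated purpose requires,
# and pseudonymise the identifier before analysis or sharing.
record = {"email": "jane@example.com", "country": "DK", "purchases": 3}
key = b"replace-me: load from a secrets manager"  # illustrative placeholder

minimised = {
    "user_ref": pseudonymise(record["email"], key),  # stable reference, no email
    "country": record["country"],
    "purchases": record["purchases"],
}
```

Note that pseudonymised data is still personal data under the GDPR as long as the key allows re-identification, so retention and deletion obligations continue to apply to both the data and the key.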
By embracing data minimisation, they can create a more sustainable, resilient, and equitable digital ecosystem that protects both the rights and freedoms of individuals and the health of our planet.\n","date":"4 April 2025","permalink":"/the-carbon-cost-of-data-why-data-minimisation-matters-now/","section":"Blog","summary":"","title":"The carbon cost of data: why data minimisation matters now"},{"content":"The current scrutiny of ESG initiatives, particularly regarding DEI, forces companies to navigate a precarious balance between upholding social values, respecting individual rights and freedoms, and complying with evolving political demands.\nEnvironmental, Social, and Governance (ESG) principles have become increasingly central to corporate strategy, guiding investment decisions and shaping public perception. However, a recent development highlights a growing tension between ESG initiatives, particularly those related to Diversity, Equity, and Inclusion (DEI), and rapidly changing political landscapes. The demand by the US for Danish companies to sign declarations regarding their diversity efforts has cast a huge spotlight on the potential impact of such scrutiny on individual rights and freedoms.\nThe situation unfolded with the circulation of a letter from the US embassy to several Danish companies. This letter stipulated that suppliers to the US Department of State must certify their compliance with US anti-discrimination laws and confirm they do not have diversity programmes that violate these laws. According to the Danish Chamber of Commerce, this request has triggered \u0026ldquo;irritation, uncertainty and confusion\u0026rdquo; among Danish companies.\nThis demand is rooted in the Trump administration\u0026rsquo;s stance on corporate diversity work, which it views as potentially illegal and a threat to American values. 
Through executive orders, the administration has sought to stop DEI programmes, raising concerns about the scope and impact of such actions.\nThe core of the issue lies in the ambiguity surrounding what exactly companies are being asked to sign. The Danish Chamber of Commerce has pointed out the vaguely formulated nature of the declaration, leading to significant doubt among businesses about the specific requirements for compliance.\nThe impact on companies - a view from Denmark #The Confederation of Danish Industry (DI) has also voiced its concerns, acknowledging the \u0026ldquo;difficult times\u0026rdquo; that Danish companies in the US are facing, and offered caution on navigating what they called \u0026ldquo;Trump\u0026rsquo;s new world.\u0026rdquo; DI had previously advised its members to exercise caution in promoting their diversity programmes, even suggesting a reduction in the use of the word \u0026ldquo;diversity\u0026rdquo; itself.\nNews like this demands that companies urgently assess the impacts on their ESG programmes, particularly regarding the \u0026ldquo;Social\u0026rdquo; pillar. This requires structured analysis of how these external pressures affect their internal commitments to diversity, equity, and inclusion. It’s a time for strategic decision-making, not knee-jerk reactions that could further compromise their values or damage their reputation.\nFrom my perspective, the question now is how these developments impact the rights and freedoms of individuals, both within these companies and in broader society.\nRights and freedoms of individuals at stake: # Freedom of association and expression: Diversity programmes often result from companies\u0026rsquo; genuine commitment to creating inclusive workplaces where employees from all backgrounds feel welcome and valued. 
By demanding declarations that effectively discourage such initiatives, the US government is potentially infringing on the freedom of association and expression of these companies and their employees. Companies may feel compelled to scale back or eliminate programmes designed to promote diversity, limiting their ability to express their values and build inclusive cultures.\nEqual opportunity and non-discrimination: While the US government\u0026rsquo;s stated goal is to prevent discrimination, the measures being implemented could inadvertently undermine equal opportunity and non-discrimination efforts. DEI programmes are often designed to address historical inequities and ensure that individuals from underrepresented groups have fair access to opportunities. By discouraging these programmes, the US government risks perpetuating existing inequalities and limiting the potential for diverse talent to thrive.\nPrivacy and data protection: The requirement for companies to disclose details of their diversity programmes raises concerns about privacy and data protection. Companies may be forced to collect and share sensitive information about their employees\u0026rsquo; backgrounds, potentially violating laws and regulations such as the GDPR. The lack of clarity surrounding the use and storage of this data further exacerbates these concerns.\nAutonomy and self-determination: Companies should have the autonomy to determine their own values and priorities, including their approach to diversity and inclusion. By exerting pressure on Danish companies to conform to its own views on DEI, the US government is infringing on their right to self-determination. This intrusion into corporate decision-making sets a worrying precedent and could stifle innovation and creativity.\nA balancing act #Despite the pressures from the US, both the DI and the Danish Chamber of Commerce are urging Danish companies to uphold their values and maintain their commitment to diversity and inclusion. 
They recommend a balancing act: remaining steadfast in their values while being mindful of the language used and demonstrating flexibility in their approach. Companies must adopt a structured decision-making process during this period to carefully navigate these complex issues, ensuring that any adjustments to their ESG programmes align with their core values and long-term goals.\nLEGO has already been accused of playing down diversity in its annual report. This action may be a result of the shifting landscape and is an example of the pressure companies are feeling. This highlights the difficult decisions companies must make to navigate the changing political climate. Companies should resist knee-jerk reactions that prioritise the short-term over substantive progress toward their DEI goals.\nA broader trend #It\u0026rsquo;s important to note that this situation is not unique to Denmark. Similar requests have been sent to companies in France and other countries, indicating a broader trend of increased scrutiny on corporate diversity efforts by the US government. This underscores the need for companies globally to proactively assess the potential impacts on their ESG initiatives and develop strategies for navigating these challenges.\n","date":"2 April 2025","permalink":"/the-erosion-of-inclusion-political-interference-threatens-socially-responsible-business-practices/","section":"Blog","summary":"","title":"The erosion of inclusion: political interference threatens socially responsible business practices"},{"content":"Infographics, explainers, and one-pagers are indispensable visual communication tools for data protection and GRC leaders to effectively convey complex information to diverse stakeholder groups.\nAs a data protection or GRC leader, you are tasked with navigating intricate frameworks, deciphering complex regulations, and building a culture of awareness across your company, but how do you effectively translate this often-dense (dare I say boring?) 
information into digestible insights for diverse audiences, ranging from your execs to every employee?\nThe answer, in my opinion, lies in the power of visual communication. Infographics, explainers, and one-pagers have emerged as invaluable tools offering a dynamic and engaging way to share complex topics and enable understanding for people who need to get the gist of something in the shortest possible time.\nBridging the communication gap: Tailoring the message for every audience\nOne of the biggest challenges for data protection and GRC professionals is the diverse groups of stakeholders they need to engage with. Senior leadership requires high-level overviews, focusing on strategic implications and potential business impact. Broader employee awareness initiatives need clear, concise explanations that resonate with their daily roles and responsibilities. Legal teams need precise representations of processes and compliance requirements, while technical teams might benefit from more detailed visual breakdowns.\nThis is where the versatility of infographics, explainers, and one-pagers comes into play. These visual formats allow you to customise your message to the needs and understanding of your target audience. By strategically employing visuals, you can:\nSimplify complexity for senior leadership: Instead of lengthy reports filled with technical jargon, present key risk indicators, compliance metrics, and the ROI of data protection initiatives in visually appealing and easily digestible formats. A well-designed infographic can quickly convey the essence of a complex situation, enabling faster and more informed decision-making at the highest levels.\nBoost employee awareness and engagement: Supplementing your dense policy documents (which often go unread) with explainers and visually rich one-pagers can transform abstract concepts like data protection principles or phishing awareness into relatable and memorable information. 
Engaging visuals can capture attention and break down complex procedures into actionable steps.\nEnhance clarity for legal and technical teams: Detailed process flows, system architectures, and compliance frameworks can be effectively communicated through visual representations. Diagrams, timelines, and flowcharts can provide a clear and unambiguous understanding of complex legal requirements or technical implementations, reducing ambiguity and fostering better collaboration.\nContext is everything: Defining your objectives and audience\nBefore even considering design elements, the foundation of an effective visual communication piece lies in clearly defining its context and objectives. Ask yourself:\nWho is the target audience? What is their level of understanding of the topic? What are their key concerns and interests? What are their requirements? How can my communication help them succeed?\nWhat is the primary message I want to convey? What are the key takeaways I want the audience to remember?\nWhat action do I want the audience to take (if any)? Do I want them to adopt a new procedure, understand a risk, or approve a new initiative?\nUnderstanding these key questions will guide every aspect of your visual creation, from the type of visual to the level of detail included. A one-pager aimed at raising employee awareness about password security will look vastly different from an infographic designed to brief the board on the company\u0026rsquo;s ESG posture.\nThe power of diagrams: Visualising processes and relationships\nDifferent types of diagrams can play a significant role in conveying specific types of information effectively. Consider the following:\nFlowcharts: Ideal for illustrating processes, procedures, and decision pathways. 
They can clearly depict the steps involved in data breach response, access control workflows, or the lifecycle of personal data.\nHierarchy charts: Useful for showcasing organisational structures related to data protection responsibilities or the different levels of data sensitivity within the company.\nComparison charts: Excellent for highlighting the differences between various security tools, compliance frameworks, or risk levels.\nTimelines: Effective for illustrating the sequence of events, such as the stages of a project implementation or the steps involved in an audit.\nPersonally, I\u0026rsquo;m a big fan of sequence diagrams and find them to be particularly powerful, especially when explaining interactions between different systems, individuals, or processes over time. They add a dynamic dimension to infographics, allowing you to visualise the flow of information or actions in a sequential manner. For instance, a sequence diagram could effectively illustrate the steps involved in a user authentication process, the flow of data during a transaction, or the interactions between different security controls during a potential threat. I often use them to break down the sequence of events in a high-profile personal data breach and then share them with my clients, allowing them to ask themselves that important question: \u0026ldquo;Could this have been us?\u0026rdquo;\nInfographics, explainers, and one-pagers: the same thing?\nWhile the terms are often used interchangeably, there are subtle nuances that can help you choose the most appropriate format:\nInfographics: Typically visually driven, presenting data, statistics, and information in a compelling and easily digestible format. They often incorporate a variety of imagery and minimal text to convey key insights. Infographics are excellent for grabbing attention and providing a high-level overview of a topic.\nExplainers: Focus on simplifying complex concepts or processes. 
They often use a combination of visuals and concise text to break down information into understandable chunks. Explainers might dig into more detail than a purely data-focused infographic and are excellent for educating an audience on a specific topic.\nOne-Pagers: As the name suggests, these are concise summaries of key information presented on a single page. They can incorporate elements of both infographics and explainers, providing a quick and easily digestible overview of a project, initiative, or key findings. One-pagers are often used for executive summaries or for providing a brief overview before a more detailed discussion.\nUltimately, the lines can blur, and the most important aspect is choosing a format that effectively communicates your message to your target audience.\nThe investment in visual clarity: It pays dividends, and is fun\nCreating high-quality infographics, explainers, and one-pagers can sometimes require an investment of time and resources. Researching the information, structuring the narrative, designing the visuals, and ensuring accuracy all contribute to the production process. However, the return on this investment is significant.\nPersonally, I love the process of transforming complex information into compelling visuals - for me it\u0026rsquo;s great fun and I can often lose all track of time in the creative process! There is a certain satisfaction in taking a dense topic and crafting a visual narrative that makes it clear, engaging, and impactful. 
It’s a creative outlet that I embrace.\n","date":"27 March 2025","permalink":"/cutting-through-complexity-how-visual-communication-helps-data-protection-and-grc-leaders-convey-their-work-to-busy-colleagues/","section":"Blog","summary":"","title":"Cutting through complexity: How visual communication helps data protection and GRC leaders convey their work to busy colleagues"},{"content":"ESG leaders must adapt to shifting political landscapes by conducting regular analysis and aligning data protection with evolving ESG requirements to maintain integrity, compliance, and stakeholder trust, or risk their programmes becoming dangerously misaligned with regulatory realities.\nA plan is only as solid as the last time you tested or validated it, and right now, what\u0026rsquo;s happening in the political landscape should trigger ESG leaders into urgent strategy and plan reviews.\nLast week, the Trump administration announced what EPA Administrator Lee Zeldin called the \u0026ldquo;largest deregulatory action in U.S. history,\u0026rdquo; targeting 31 environmental regulations. These huge rollbacks aim to \u0026ldquo;unleash American energy\u0026rdquo; but could fundamentally alter how companies approach their environmental commitments.\nLet me be crystal clear: this isn\u0026rsquo;t just a US issue. It\u0026rsquo;s a global wake-up call.\nI\u0026rsquo;ve spent decades helping companies manage change, and one aspect I\u0026rsquo;ve particularly enjoyed is facilitating the analysis of the impact of change on the business.\nThe transatlantic divide #Some might dismiss these regulatory shifts as \u0026ldquo;US politics,\u0026rdquo; but that would be a dangerous oversimplification. When the world\u0026rsquo;s largest economy changes direction, the ripples are felt everywhere.\nI\u0026rsquo;ve recently had interesting conversations with a couple of European clients who insist their ESG work remains unaffected. 
\u0026ldquo;We see things differently in Europe,\u0026rdquo; they tell me. And they\u0026rsquo;re right – to a point. The EU continues strengthening its sustainability framework while the US appears to be reversing course.\nJust like we see in data protection and AI governance, divergent approaches across different jurisdictions create complexity, confusion, and compliance headaches for multinational corporations. This isn\u0026rsquo;t theoretical, it\u0026rsquo;s happening right now.\nPESTEL: your early warning system #Smart ESG leaders don\u0026rsquo;t just react to change, they anticipate it. That\u0026rsquo;s where PESTEL analysis becomes invaluable. This framework examines Political, Economic, Social, Technological, Environmental, and Legal factors affecting your business environment.\nWhen was the last time your company conducted a thorough PESTEL analysis of your ESG strategy? If you can\u0026rsquo;t remember, you\u0026rsquo;re likely flying blind in increasingly turbulent weather.\nThe political dimension of PESTEL has become especially volatile. Yet many companies develop ESG programmes based on current regulations without considering how political shifts might reshape those very regulations. That\u0026rsquo;s like building a house on sand and hoping the tide won\u0026rsquo;t come in.\nData protection: the hidden ESG vulnerability #Here\u0026rsquo;s something many companies miss entirely: the critical intersection between data protection and ESG goals.\nAt Purpose and Means, we\u0026rsquo;ve developed a model that aligns data protection with broader ESG objectives. Why? Because we recognise that responsible data processing isn\u0026rsquo;t just about compliance, it\u0026rsquo;s about governance, transparency, and social responsibility.\nWhen political winds shift ESG requirements, your data protection strategy must adapt accordingly. 
Otherwise, you risk misalignment between your stated values and operational reality.\nAdapting without abandoning core values #The key question isn\u0026rsquo;t whether to respond to political changes affecting ESG, it\u0026rsquo;s how to respond without compromising your company\u0026rsquo;s fundamental values.\nWe help companies navigate this delicate balance. We establish clear metrics linking data protection practices to ESG outcomes, ensuring accountability even as regulatory requirements evolve. We build adaptable frameworks that maintain core ESG commitments while flexing to accommodate shifting compliance landscapes.\nThis isn\u0026rsquo;t just theoretical, it\u0026rsquo;s practical, measurable, and impactful, and importantly provides answers to your teams involved in processing personal data, especially: \u0026ldquo;What part do we play in our company achieving its ESG goals?\u0026rdquo;\nWhat should you be doing right now? #The greatest risk in times like these is inaction. As political forces reshape ESG expectations, companies must:\nConduct regular PESTEL analysis focusing specifically on ESG factors\nStrengthen the alignment between data protection and ESG strategies\nDevelop scenario plans for potential regulatory changes\nMaintain transparent communication with stakeholders about adaptations\nPreserve core values while adjusting tactical approaches\nYour stakeholders don\u0026rsquo;t just care about what you achieve, they care about how you achieve it. By integrating robust data protection practices with your ESG strategy, you demonstrate a holistic commitment to responsible business conduct.\nPolitical changes are coming for your ESG work whether you\u0026rsquo;re ready or not. The companies that thrive won\u0026rsquo;t be those with the most ambitious targets, but those with the most adaptable strategies.\nAt Purpose and Means, we specialise in helping companies align data protection with ESG goals. 
As political realities shift these goals, we ensure your alignment strategy shifts too, maintaining integrity, compliance, and stakeholder trust throughout the process.\nBecause in the end, words that work must be backed by actions that work. And in light of our current complex geopolitical landscape, that means building ESG programmes resilient enough to withstand whatever political winds blow their way.\n","date":"17 March 2025","permalink":"/are-esg-leaders-using-pestel-to-navigate-political-crosswinds-or-are-they-flying-blind/","section":"Blog","summary":"","title":"Are ESG leaders using PESTEL to navigate political crosswinds, or are they flying blind?"},{"content":"By tapping into the expertise of younger employees, companies can build a more AI-literate leadership team that helps future-proof their companies.\nFor too long, AI governance has been treated as a legal and technical issue when, in reality, it demands strategic leadership. The EU AI Act has made AI literacy a business imperative, and companies that fail to prioritise this will struggle to navigate the evolving regulatory landscape.\nThe solution? Reverse mentorship.\nReverse mentorship is not new, it\u0026rsquo;s been around since the 1990s. I believe it\u0026rsquo;s worth re-visiting given all the complexities of data protection, AI governance, and compliance with emerging laws and regulations. With the EU AI Act in particular, one challenge stands out: AI literacy at the leadership level. If your company is a provider and deployer of AI systems then you need to be aware of your obligations under article 4 of the EU AI Act.\nThis requirement extends to decision-makers who must navigate compliance, risk assessment, and ethical considerations surrounding AI-driven solutions. 
Yet, many senior leaders lack the digital fluency necessary to make informed decisions in these areas.\nTraditional mentorship models, where senior executives guide younger employees, are no longer fit for purpose in a world where digital fluency and emerging technologies are second nature to junior employees. Instead, companies can embrace reverse mentorship, a leadership tool where junior employees educate and guide senior executives on AI trends, data-related risks, and responsible governance.\nBuilding a structured reverse mentorship programme #To ensure meaningful and effective impact, reverse mentorship must be structured, goal-oriented, and aligned with business priorities and context. Here’s how companies can implement it effectively:\n1. Identify AI-savvy mentors from within #Junior employees, particularly those in data science, cybersecurity, or digital strategy roles, often have a better grasp of AI technologies than senior executives. These employees should be strategically selected to mentor senior leaders based on their expertise in AI ethics, data protection laws, and digital transformation.\n2. Match mentors and mentees based on AI literacy needs #Pairing should be based on knowledge gaps. For example, an AI governance specialist could mentor a CISO on bias mitigation in machine learning models, while a privacy analyst could educate a CMO on AI-driven marketing risks and GDPR compliance.\n3. Set objectives aligned with the EU AI Act #Mentorship goals should be explicitly tied to the AI literacy requirements outlined in the EU AI Act and could include objectives aligned with:\nSafeguarding rights and well-being – understanding AI’s impact on fundamental rights and safety. 
Learning ethical AI principles and real-world risks like bias and misinformation.\nDemocratic oversight and accountability – gaining relevant knowledge of the EU AI Act, compliance requirements, governance structures, and best practices for aligning AI with democratic values.\nEmpowering informed decision-making – understanding AI capabilities, benefits, and risks. Understanding where to embed AI considerations into business strategies and spread relevant AI literacy across teams.\nEnsuring technical and ethical integrity – understanding key AI components, ethical considerations, and the importance of human oversight to prevent failures and biases.\nTransparency and explainability – learning how AI makes decisions, how to communicate AI-driven outcomes clearly, and ensuring compliance with transparency laws.\nBuilding trustworthy AI – understanding AI’s impact on jobs, building stakeholder trust, ensuring compliance, and using AI for responsible innovation.\n4. Encourage open, two-way learning #While junior employees serve as mentors, learning must be reciprocal. Senior leaders should share strategic insights, helping younger employees understand the broader business context. This helps create collaborative decision-making and aligns AI governance with company goals.\n5. Use real-world AI case studies #Practical examples enhance engagement. Reviewing case studies about AI incidents, such as biased hiring algorithms or data breaches due to weak governance, will help senior leaders visualise potential risks and apply learnings in their business context.\n6. Measure impact and adjust accordingly #Tracking progress is essential. 
Companies should assess:\nAI literacy improvements among senior leaders through pre- and post-programme assessments.\nImplementation of AI governance frameworks post-mentorship.\nLeadership engagement in AI-related decision-making and compliance strategies.\nReverse mentorship is not just about upskilling executives, it’s about creating a culture where responsible AI and data protection are embedded in leadership thinking. It\u0026rsquo;s also about setting the right \u0026rsquo;tone from the top\u0026rsquo; that has a huge impact on the rest of the company.\nLast thought: avoid generic solutions that only tick a box #Like so many aspects of TechReg, complying is a team effort involving multiple competences, from interpreting legal requirements through to analysing the impact of the requirements in the context of the business, to identifying appropriate solutions, to designing, building, testing and deploying the solution - and the needed project management to estimate and plan the work.\nAI literacy is no exception: for your solution to be effective, your strategy needs to be aligned to business context - not generic or copy/paste - and remember your executives are just one target group of stakeholders where reverse mentorship may help. 
Your AI literacy strategy needs to also take into account the unique needs of other groups.\nIt\u0026rsquo;s a team effort and Purpose and Means typically supports legal teams with change management work - get in touch if you need help scoping and developing your AI literacy programme.\n","date":"10 March 2025","permalink":"/ai-risks-compliance-and-leadership-why-younger-mentors-are-key/","section":"Blog","summary":"","title":"AI risks, compliance \u0026 leadership: why younger mentors are key"},{"content":"Data protection leaders are increasingly overwhelmed by regulatory complexity and operational burdens, necessitating a shift from compliance-focused roles to strategic business enablers, supported by advanced tools and resources to unlock their full potential.\n\u0026lsquo;Overloaded\u0026rsquo; has been a common remark I\u0026rsquo;ve heard from many data protection leaders for quite some years. However, it seems that the role has grown increasingly complex, leaving many leaders overwhelmed by mounting responsibilities and struggling to cope.\nWhy data protection leaders are drowning in regulatory complexity and operational burdens #The proliferation of data protection and privacy laws worldwide has created a labyrinth of regulations that leaders must navigate. The GDPR is a comprehensive framework in itself that demands meticulous oversight, especially in its interplay with other applicable laws and regulations like the ePrivacy Directive, local employment laws, marketing laws and relevant sector-specific laws. And then there\u0026rsquo;s the flood of laws resulting from the EC data, AI and cyber strategy that further complicates compliance efforts. 
Each law introduces unique requirements, timelines, and interpretations, forcing data protection leaders to juggle multiple frameworks simultaneously.\nOperationally, leaders are tasked with conducting DPIAs, maintaining ROPAs, educating and training staff on contextual best practices, responding to data subject requests, and monitoring and reporting to boards, to name a few tasks. The sheer volume of these responsibilities often exceeds the capacity of small data protection teams - and in many cases, the team is just the leader themselves. The \u0026lsquo;one-person data protection army.\u0026rsquo;\nAdding to the burden is the challenge of enforcing requirements with powerful third parties like some of the big tech players. Leaders often struggle to hold the large players to account due to imbalances in power dynamics. Also, ambiguity in new laws and insufficient guidance from the supervisory authorities exacerbate the problem, leaving leaders to figure out unclear requirements on their own. Without adequate support and tools, this complexity risks burnout among data protection leaders and jeopardises compliance efforts.\nThe shift from compliance-based roles to strategic \u0026lsquo;business enablers\u0026rsquo; #Traditionally seen as \u0026rsquo;necessary evils\u0026rsquo;, data protection leaders are increasingly perceived as strategic business enablers, which is a much needed shift. Companies are now recognising that robust data protection is not just a legal necessity but can be a competitive advantage, if framed properly. By embedding Data Protection by Design and by Default considerations into product development and operations, leaders can help mitigate long-term strategic risks while building customer trust.\nThis shift requires data protection leaders to play a more proactive role in participating in business strategy processes. 
For example, they should influence product design decisions to ensure data protection safeguards are integrated from the outset. In regulated sectors like financial services and healthcare, where overlapping regulations such as DORA and NIS2 demand heightened cybersecurity measures, they must collaborate across departments to harmonise compliance efforts.\nTo succeed in this expanded role, data protection leaders need greater clout at the executive level and access to resources that will help execute their business-aligned strategies. As I mentioned earlier, this shift is long overdue but places more strain on leaders who lack a business or strategic skillset and mindset.\nHow companies can support leaders with better resources and automation #Given the complexity of their role, companies must rethink how they support their data protection leaders. Emerging tools and automation technology offer solutions to alleviate operational burdens while enhancing efficiency.\nThe \u0026lsquo;privtech\u0026rsquo; market is maturing, with some long-established players seeing clients desert their platforms for newer, more nimble players that streamline routine tasks like generating data flow maps and managing third-party vendor compliance. Also, robotic process automation (RPA) can handle repetitive data entry tasks with precision, freeing up leaders to focus on more strategic priorities. AI-powered analytics tools further assist by identifying risks and generating actionable insights from vast datasets.\nEducation and expertise #Continuous education and training is essential for keeping leaders updated on emerging laws and regulations or ongoing interpretations of existing laws. 
Contextual training programmes can help data protection teams stay ahead of trends while improving their ability to implement effective safeguards.\nInterdepartmental collaboration #Strong communication channels between data protection leaders and other departments, e.g., IT, legal, risk management, lines of business, etc., are essential. This collaboration ensures a joined-up approach to compliance while reducing inefficiencies caused by siloed operations.\nSupport from the top #Perhaps most importantly, companies must empower data protection leaders with executive backing. This includes granting them decision-making authority in high-level discussions and ensuring sufficient staffing for their teams. When supported adequately, leaders can shift from reactive firefighting to proactive strategy-building.\nThe future of data protection: a strategic imperative #As the processing of personal data becomes even more pervasive - and acknowledged as fueling many businesses - the role of the data protection leader will only become more critical. Companies that invest in empowering their data protection teams stand to gain not only the ability to demonstrate compliance, but also enhanced trust among multiple stakeholder groups. Utilising automation tools, encouraging interdepartmental collaboration, and embedding data protection considerations upfront into operations will be key strategies.\nBy transforming data protection into a strategic advantage rather than a necessary evil, businesses can position themselves as leaders in an increasingly regulated digital environment, and ensure their data protection leader thrives rather than drowns under the weight of responsibility.\n","date":"7 March 2025","permalink":"/data-protection-leaders-are-overloaded-what-needs-to-change/","section":"Blog","summary":"","title":"Data protection leaders are overloaded. 
What needs to change?"},{"content":"Most companies unknowingly create an illusion of data protection by relying on compliance checklists and heatmaps, but without quantifiable risk measurement, they are merely performing theatrics rather than effectively managing real threats.\nImagine you\u0026rsquo;re an illusionist, standing on stage, ready to perform your greatest act yet. You\u0026rsquo;ve practised for months, perfected every movement, and invested in the most elaborate props. The audience is captivated as you dramatically wave your hands and exclaim, \u0026ldquo;Behold, data protection is in place!\u0026rdquo;\nThe reality is you haven\u0026rsquo;t actually protected anything. You\u0026rsquo;ve simply created an illusion of compliance.\nThese days, this act is performed daily, not on a stage but in the offices of companies all around the world, where many data protection leaders are unwittingly playing the role of this misguided illusionist, and their audience is senior leadership.\nThe companies invest in expensive tools, implement complex policies, and proudly display compliance certificates. Yet, when it comes to truly understanding and measuring their data and technology risks, they\u0026rsquo;re basing their investment on smoke and mirrors.\nThe checklist trap #It\u0026rsquo;s easy to fall into the trap of equating compliance checklists with effective risk measurement. After all, ticking boxes feels productive, and it provides a sense of accomplishment. But checklists are to risk measurement what magic tricks are to actual magic – a convincing performance that often lacks substance.\nDon\u0026rsquo;t get me wrong, checklists have their place. They\u0026rsquo;re excellent for ensuring baseline compliance and creating standardised processes.
But when it comes to truly understanding your risks, they\u0026rsquo;re about as effective as painting a frying pan red, removing the handle and hanging it high on a wall, hoping it will deter burglars (I did see this once, many years ago)!\nNo more guessing #If checklists aren\u0026rsquo;t the answer, what is? Let\u0026rsquo;s talk about quantifiable risk measurement. It\u0026rsquo;s a place where numbers reign supreme, and gut feelings are politely shown the door.\nIt\u0026rsquo;s a world of data points, probability distributions, and impact assessments. It might seem daunting at first, but I promise it\u0026rsquo;s far less confusing than you might think and will take much of the guesswork out of measuring risk.\nThe power of metrics #You begin by reviewing your existing risk registers as these will provide valuable insight, especially in identifying your critical data assets – you\u0026rsquo;ve heard this expression before I\u0026rsquo;m sure, the \u0026lsquo;digital crown jewels\u0026rsquo; of your company. Then, you select some scenarios that are more probable than possible. This is important and will separate the wheat from the chaff. Just like you\u0026rsquo;ve probably done before, you assess the likelihood of the various probable risk scenarios and quantify their potential impact. But here\u0026rsquo;s where it gets interesting. Instead of simply saying, \u0026ldquo;The risk of a personal data breach is high,\u0026rdquo; you can now say, \u0026ldquo;There\u0026rsquo;s a 15% chance of a personal data breach in the next 12 months, with a potential financial loss of €2.5 million.\u0026rdquo; Suddenly, we\u0026rsquo;ve transformed vague concerns into concrete, actionable insights.\nAssumptions management #One of the most powerful aspects of quantifiable risk measurement is its ability to make assumptions tangible.
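Statements like the one above can be produced with a small Monte Carlo sketch. This is a minimal illustration, not a full quantification method: the 15% probability and €2.5 million impact are the example figures from the text, while the spread of the impact distribution is an assumption I've added for the sake of the sketch.

```python
import random

def expected_annual_loss(probability, impact_mean, impact_sd, trials=100_000):
    """Monte Carlo sketch: average yearly loss for a single risk scenario."""
    total = 0.0
    for _ in range(trials):
        if random.random() < probability:  # does the event occur this year?
            # Sample a loss for the event; clamp at zero (no negative losses).
            total += max(0.0, random.gauss(impact_mean, impact_sd))
    return total / trials

random.seed(42)  # fixed seed so the illustration is reproducible
# Scenario from the text: 15% chance of a breach, impact around EUR 2.5m
# (the EUR 500k standard deviation is an illustrative assumption).
eal = expected_annual_loss(0.15, 2_500_000, 500_000)
print(f"Expected annual loss: EUR {eal:,.0f}")  # roughly 0.15 x 2.5m = EUR 375,000
```

Refining those inputs is precisely the assumptions work: every probability and impact figure has to be justified, documented, and revisited as new data arrives.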
In the world of checklists and qualitative assessments, loose assumptions often lurk in the shadows, influencing decisions without being properly documented and scrutinised.\nBut when we start putting numbers to our risks, these assumptions are forced into the spotlight. We have to justify our estimates, gather data to support our assessments, and constantly refine our models based on new information.\nIt\u0026rsquo;s like looking under your bed, or cleaning out your drawers – suddenly, you can see all the dust, cobwebs and grime that were hidden from view. It might not be pretty at first, but it\u0026rsquo;s the only way to truly clean your house.\nFrom measurement to management #The true power of quantifiable risk measurement isn\u0026rsquo;t just in the numbers themselves – it\u0026rsquo;s in how those numbers transform your entire approach to data management and data protection.\nWith a clear, quantified understanding of your risks, you can:\nPrioritise investments: instead of spreading your resources thin trying to address every possible risk, you can focus on the areas that pose the greatest threat to your company.\nCommunicate effectively: when you can express risks in terms of Euros, probabilities and business impacts, suddenly everyone from the boardroom to the teams involved in the processing of personal data can understand the importance of proper risk management.\nMake data-driven decisions: no more relying on gut feelings or generic risk registers.
Every decision can be backed by solid, quantifiable evidence.\nContinuously improve: by regularly reassessing your risks and comparing them to your baseline measurements, you can track your progress and adjust your strategies in real-time.\nBuild true resilience: understanding your risks at a granular level allows you to build targeted, effective defences that go beyond mere compliance.\nBuild confidence in your work #The next time a governance board or a senior leader asks you about your level of control, resist the urge to pull out your checklist or wave your compliance certificates. Instead, explain your risk metrics, educate them with your probability assessments, and provide them with reassurance of your data-driven approach to data protection.\nThese days, the real magic isn\u0026rsquo;t in creating illusions – it\u0026rsquo;s in controlling risks and prioritising investment through effective measurement and alignment.\nIn this post, I\u0026rsquo;ve been primarily discussing \u0026ldquo;data protection risk\u0026rdquo; from a compliance perspective. We also need to be able to measure risk from a \u0026ldquo;risks to the rights and freedoms of individuals\u0026rdquo; perspective. A similar methodology can be used as briefly outlined here, but these risk scenarios will be covered in a future post.\n","date":"6 March 2025","permalink":"/the-illusion-of-data-protection-are-you-measuring-risk-effectively/","section":"Blog","summary":"","title":"The illusion of data protection: are you measuring risk effectively?"},{"content":"Here\u0026rsquo;s an approach that transforms the overwhelming task of responding to an audit into a clear, structured process - breaking down chaos into an actionable plan, just like turning a pile of disorganised bricks into a strong, fortified wall.\nAn audit team has recently visited your department or programme to conduct an audit.
It could be data protection, AI governance, information security, or a specific process.\nYou have a copy of the audit report in front of you, and you need to respond with how you intend to address all the findings. It can be an overwhelming task, especially if the report is lengthy.\nYou feel like you\u0026rsquo;ve been hit by a ton of bricks. Fortunately, there is a way forward.\nDealing with a multitude of issues requires a structured, collaborative approach that will also generate documentation (evidence) to demonstrate accountability.\nI\u0026rsquo;ve used the following method to address problems at various levels of a company, e.g., audit findings, performance issues at management level, broken processes, etc. At the end of it, you will be able to present a remediation plan to management and the Audit Committee (if you have one).\nIt\u0026rsquo;s a straightforward five-step workshop-oriented process:\nWHO? The stakeholders you need to involve\nWHAT? Making sense of and understanding the findings themselves\nWHY? The underlying problems\nHOW? The work required to address the issues\nWHEN? Roadmap and timeline\nEssentially you are transforming something that may at first appear to be chaotic (and painful) into a solid structure - your remediation plan.\nStep 1: WHO? Identifying the right people #Identify the people who need to help you. These are the people who must take responsibility now, and from this point forward. They may have been involved in the audit itself. Typically they will be:\nColleagues from the business, e.g., process owners, business marketing, application owners, customer service, etc.,\nColleagues from IT, e.g., IT asset owners, IT process owners, etc.,\nColleagues from functional departments, e.g., legal, HR, finance, procurement, etc.\nVendors - depending upon the nature and severity of the audit as well as the nature of your operating model, you may need to involve 3rd parties\nMake them aware of the audit report and that you need their help.
Share a copy of the audit report with them (if internal policies allow you to do this).\nYou may be accountable for data protection in your company but delegating responsibility appropriately is a must (this may sound obvious - unfortunately many data protection leaders are one-person armies - you need all the help you can get).\nSet a date and book a rapid analysis workshop - it could be a half-day, a whole day or a series of days depending upon the number of findings. Book the rooms, book the people, allow time for travel planning, book refreshments, etc. Also start looking for an experienced facilitator - you will need to be a participant, so it should not necessarily be you.\nStep 2: WHAT? Understanding the findings #The workshop itself - you will have prepared all workshop materials in advance by copying each finding onto a piece of card or large post-it with some additional info:\nProposed owner - the individual you consider responsible\nRisk rating - some audit reports will provide this\nAudit finding - copy/paste this to a label/merge function in MS Word or similar and print directly to card or labels\nLoB/market/department - indicate the specific part of your company that the finding may relate to\nLay out all cards/notes on a large bench or across a long wall.\nYou will also have prepared an A3 template to use throughout the workshop - as you\u0026rsquo;ll see, you\u0026rsquo;ll need many copies of the template. Your company may have its own version - here\u0026rsquo;s a rough example:\nAfter the welcome and introductions, the first step is for all participants to review the findings (that you\u0026rsquo;ve copied onto card or post-its) and then together identify patterns or similarities (aka Affinity Mapping) across the findings, and group them.\nMove the similar or related cards/post-its into clusters or groups.\nTypically, there will be several large clusters.
Split the participants into workgroups (2-5 people) by either delegating or voting.\nEach workgroup is then assigned a number of clusters. For each cluster, complete the \u0026ldquo;problem statement\u0026rdquo; section of the A3 template.\nA \u0026ldquo;problem statement\u0026rdquo; is a clear, concise statement which explains, in simple terms that anybody can understand, the issue, the consequences, where the problem manifests itself and how it may be tackled.\nIt should be:\nBrief - a few sentences which explain everything needed. It can be revised over time\nFree of technical talk or jargon – no three-letter acronyms\nClear about the size of the problem or impact\nRemember, there will be non-technical, non-legal, non-IT people who will read the audit report and responses.\nNext, the facilitator presents back the problem statements to all participants and facilitates the prioritisation of the clusters. Use a standard method such as MoSCoW* (*rough definitions supplied - define your own appropriate to your company):\nMUST do - Unlawful processing, high risk to data subjects\nSHOULD do - Critical internal impact: financial, reputation\nCOULD do - Non-critical impact, could be fixed at the same time as other similar/related issues\nWON’T do now - Non-critical, nice-to-have, little consequence if not dealt with immediately\nDepending upon the time available, each workgroup will then take several clusters/problem statements to analyse.\nStep 3: WHY? Identifying root causes #Findings are the tip of the iceberg. The real challenge is what’s beneath the surface.\nFor each problem/cluster conduct root cause analysis (why the issue occurred) in the workgroups using a technique such as Ishikawa diagramming. An example here:\nSome root causes may be more obvious than others, but the key is to get to the bottom of the problem rather than address the symptoms. Whatever you do, don\u0026rsquo;t try to \u0026ldquo;paper over the cracks\u0026rdquo; by just addressing symptoms!
Dig deep!\nStep 4: HOW? Developing solutions #The root causes identified in the previous step will provide input to the solution(s) and work needed to address the problem. The type of work will fall under one or more of the following categories:\nPolicy/procedure\nOrganisation/people\nTechnology\nInformation/document\nIdentifying the category of work now will help you understand the nature of the initiatives or projects that need to be launched to address the audit findings, e.g., technical initiatives, OCM, legal documentation, etc.\nFor each problem, populate the A3 document further, stating actions, owners, resources needed, ballpark time and cost estimates, etc.\nMake a second level of prioritisation by plotting the A3s/initiatives on an Ease/Benefit matrix - this is also a collaborative effort:\nStep 5: WHEN? Building the roadmap #The facilitator will finish up by driving the production of a first-cut roadmap once all the A3 documents are complete. The roadmap takes into account prioritisation, dependencies between the initiatives as well as dependencies with other relevant projects/initiatives in your company, not least available resources. All participants work together to make the roadmap with the facilitator moving things along, settling disagreements, etc.\nA detailed schedule covering the coming three months can be developed later by engaging the teams that will be involved. I see trying to plan at a detailed level beyond three months as a pointless exercise due to the dynamic nature of companies.\nAfter the workshop, write everything up in the form of an Audit Response Report, including the completed A3s and the roadmap. The report will then feed into your company\u0026rsquo;s business case process or portfolio management system for consideration and approval.
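The workshop output can also be captured as data so the trail from finding to roadmap item stays traceable. A rough sketch follows, where the cluster names and finding IDs are entirely hypothetical and the MoSCoW labels follow the rough definitions given earlier:

```python
# Order in which MoSCoW priorities should surface in the remediation plan.
MOSCOW_ORDER = {"MUST": 0, "SHOULD": 1, "COULD": 2, "WON'T": 3}

# Hypothetical clusters produced by the affinity-mapping step.
clusters = [
    {"name": "Records of processing incomplete", "priority": "SHOULD", "findings": ["F-03", "F-07"]},
    {"name": "Unlawful marketing consent flow", "priority": "MUST", "findings": ["F-01", "F-05", "F-09"]},
    {"name": "Outdated intranet privacy notice", "priority": "COULD", "findings": ["F-12"]},
]

# Sort clusters so MUST-do items head the first-cut roadmap.
roadmap = sorted(clusters, key=lambda c: MOSCOW_ORDER[c["priority"]])
for position, cluster in enumerate(roadmap, start=1):
    print(position, cluster["priority"], cluster["name"])
```

In practice the final ordering also reflects the Ease/Benefit plotting and cross-project dependencies, but even a simple structure like this keeps every audit finding linked to a roadmap item.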
Produce a summary slide-deck of the report for presentation to the Audit Committee, alongside distribution of the full report.\nI\u0026rsquo;ve used this approach umpteen times over the past 20 years in various companies with good, consistent results. I\u0026rsquo;m always keen to hear of alternative approaches so feel free to get in touch for a knowledge exchange.\n","date":"5 March 2025","permalink":"/turning-audit-chaos-into-a-clear-actionable-plan-a-five-step-approach/","section":"Blog","summary":"","title":"Turning audit chaos into a clear, actionable plan: a five-step approach"},{"content":"If your company didn’t see the Trump administration’s rapid rollback of AI regulations and ESG mandates coming, it’s because you weren’t tracking weak signals. Horizon scanning isn’t a luxury, it’s the difference between leading and scrambling to keep up.\nSome companies saw it coming. Most didn’t. The signs were there, subtle at first, then flashing like neon lights. Yet now, we’re watching boardrooms in disarray, compliance teams panic, and leadership teams ask, “Why weren’t we ready for this?”\nDonald Trump is back. And whether you like it or not, his administration is making big moves across the board that significantly impact AI regulation, ESG commitments, data protection and many corporate regulations. Some companies are adjusting, pivoting, and turning change into opportunity. Others? They’re on the back foot, trying to figure out what just hit them.\nThis was predictable. The warning signs weren’t hidden. The question isn’t whether change is happening. The question is: did your company see it coming?\nWashington moves fast, you need to move faster #Just weeks into the new administration, the landscape is shifting dramatically.\nAI regulation pulled back – The previous administration pushed for oversight and ethical AI safeguards. Trump’s team? They’re rolling regulations back, promoting innovation over restriction.
If your company built its AI strategy on compliance-heavy guardrails, you probably need to review and revise.\nESG on the chopping block – Trump has never hidden his stance on ESG (Environmental, Social, and Governance) regulations: he sees them as bureaucratic obstacles. The administration is aggressively cutting back ESG reporting mandates. But interestingly, not all companies are abandoning ESG. Why? Because investor expectations, consumer demand, and global market forces are bigger than any administration. And there was big news this week from BP abandoning its green ambitions.\nTech shakeups begin – AI regulation is shifting, and tech leaders are making their moves. Case in point: Clearview AI’s CEO just resigned. That’s not random. When leadership at a high-profile AI company suddenly steps down, it signals broader industry shifts.\nThe future of Diversity, Equity, and Inclusion (DEI)? - The Trump administration’s DEI crackdown is moving fast, eliminating federal programmes, cutting funding, and pressuring schools and companies to fall in line. The \u0026ldquo;EndDEI\u0026rdquo; portal invites complaints, and companies are already scaling back DEI efforts to avoid scrutiny. Civil rights groups are fighting back, but the landscape is shifting.\nThese aren’t isolated events. They’re signals. And the companies that understand how to read the signals are already positioning themselves for what’s next.\nSo, what about your company?\nHorizon scanning: the difference between leading and being on the back foot #Too many companies operate in reactive mode. They wait for regulations to pass before they act. They wait for compliance deadlines before they prepare. They wait until their competitors have already pivoted before they start asking questions.\nBy the time you’re reacting, you’ve already lost control.\nHorizon scanning changes that.
It’s not about predicting the future, it’s about tracking relevant weak signals before they turn into regulatory, market, or operational shocks. It’s about seeing what’s coming, not waiting to be blindsided by it.\nAt Purpose and Means, we help companies:\nBuild horizon scanning capabilities from the ground up\nSpot regulatory, tech, and societal shifts before they hit\nDevelop risk-aware strategies that keep them ahead, not playing catch-up\nWe can work with you to develop your radars - our preferred tool is FIBRES, an excellent Finnish solution that keeps getting better and better. Horizon scanning isn’t just for executive teams. It should be embedded in your TechReg, legal, compliance, and GRC departments, because that’s where the impact is felt first. Once you have established your radars, you\u0026rsquo;ll be able to analyse impacts, wargame scenarios and then prioritise business case work. To see a couple of interactive radars made with FIBRES, take a look on our Horizon Scanning and Analyse service page.\nLook back over the past couple of months: did you predict it? #If your company didn’t have a horizon scanning process in place last year, here’s the test:\nDid you anticipate some AI regulations would be rolled back so quickly?\nDid you expect ESG mandates to be gutted, or did it catch you off guard?\nDid you plan for tech industry shakeups, or are you reacting now?\nIf you didn’t see these coming, what else are you missing?\nThe companies that succeed over the next few years won’t be the ones with the best lawyers or the fastest crisis response teams. They’ll be the ones that saw the changes coming first, the ones that anticipated, adjusted, and got ahead of the curve.\nAct now, or get left behind #Trump\u0026rsquo;s impact on AI, ESG, and regulation is just a small piece of the future of your company. More changes are coming. More policies will shift. More tech disruption will happen. 
And the next wave of companies will be caught off guard.\nYours doesn\u0026rsquo;t have to be one of them.\nAt Purpose and Means, we help companies build horizon scanning into their business strategy so they can operate from a position of strength, not reaction. Because in today’s world, being proactive isn’t optional, it’s the difference between leading and lagging.\n","date":"28 February 2025","permalink":"/did-you-see-it-coming-the-companies-that-track-weak-signals-always-do/","section":"Blog","summary":"","title":"Did you see \"it\" coming? The companies that track weak signals always do"},{"content":"GRC projects need skilled and experienced project and programme managers to ensure compliance, mitigate risks, and drive success. Without proper expertise and support, failure is inevitable.\nGRC projects are becoming more common and critical in most companies. They define how businesses comply with laws that shape entire industries, AI governance for the EU AI Act, data protection for GDPR, protecting critical infrastructure for NIS2 are just a few examples. But too often, these projects are managed by people who lack the necessary expertise in project and programme management, leading to delays, failures, and even regulatory non-compliance.\nIn this post I am discussing project and programme management, not project and programme leadership - related but different disciplines that often require different profiles.\nAbout 10 years ago, an experienced Danish lawyer emphasised to me that legal advice should only be given by a lawyer, and I totally agree. She had seen too many disasters caused by unqualified people stepping into roles they weren’t trained for. And yet, in many GRC projects, the same principle often doesn’t apply. Project and programme management is a profession in its own right, with its own frameworks, methodologies, and best practices. Assigning a GRC project, e.g. 
covering AI governance, to someone without the right skills is like asking a junior paralegal to lead a mission-critical litigation case - it’s a recipe for disaster.\nProject and programme management is a team sport #Project and programme management is not a solo endeavour, it’s a team sport. Successful execution requires the right mix of skills and subject matter expertise. An AI governance project, for example, needs deep AI governance knowledge, but that doesn’t mean a technical AI expert should be managing the project, unless they also have strong project management experience. Management is its own discipline, and experience in it is essential.\nIn my early days as a project manager, I was out of my depth. It’s not a nice feeling, and more importantly, it’s dangerous for the company. Throwing someone in at the deep end without proper support can lead to costly mistakes, inefficiencies, and even compliance failures. If you’ve already appointed an inexperienced person in a project or programme management role, don’t leave them to sink or swim, give them the support they need. Pair them with an experienced mentor or provide structured guidance.\nMisconceptions about project management #Despite its complexity, project management is often underestimated. A lawyer once told me that managing a project is just about staying on top of people, making sure they do as they’re told, and following up every week. If only it were that simple. Good project and programme management is about much more than tracking tasks, it’s about strategic planning, risk mitigation, stakeholder alignment, and ensuring that the right outcomes are achieved within constraints of time, cost, and quality. I could break down each of those terms to show just how broad they are, but that\u0026rsquo;s for a future post.\nA head of information security in a global corporation once dismissed assumptions management as too abstract.
He couldn\u0026rsquo;t accept that implementation planning could begin before he had the IS policies written and approved. Ignoring assumptions management is akin to building a house on shifting sand. I saw similar comments on LinkedIn from a prominent lawyer who ridiculed others for starting their planning before the EU AI Act was approved. That was a big mistake in his eyes.\nA head of HR once told me she didn’t have the time to document her plan, and it didn\u0026rsquo;t matter anyway, because everything was in her head. That might work for a small initiative, but not for a multi-faceted GRC project where documentation, traceability, and structured communication are essential. A lack of planning leads to confusion, inefficiency, and missed deadlines.\nThen there’s the tech leader who said he would let his organisation figure out ‘in the line’ how to address a damning audit report that had the CFO’s attention. Imagine trying to wing a response to regulators or auditors without a structured plan. Hoping things will somehow resolve themselves is not a strategy, it’s a liability.\nDo people understand what a project or programme actually is? #One fundamental challenge is that many people who are given the responsibility to \u0026lsquo;comply with the EU AI Act\u0026rsquo; don’t fully grasp the difference between a project and a programme. A project is a temporary endeavour to create a unique product, service, or result, while a programme is a collection of related projects managed in a coordinated manner to achieve benefits that wouldn’t be possible if managed individually.
Regulatory compliance isn’t just about completing a single task or producing a report, it requires strategic alignment across multiple projects, functions, and stakeholders.\nThe hard questions every company should ask #Before handing a GRC project to an internal colleague who ‘will figure it out,’ companies should provide sufficient education and/or support so the person who\u0026rsquo;ll be managing the project knows:\nWhat the difference is between a project risk and a project issue\nWhat estimating technique is being used\nWhat project methodology is being followed\nHow to estimate and define the scope, effort, and cost of their project\nHow to build a realistic schedule that accounts for dependencies and risks\nHow to monitor progress and adjust based on actual performance\nHow to manage organisational change and ensure lasting compliance, not just short-term fixes\nHow to ensure effective communication across multiple stakeholder groups\nThe difference between leadership and management in a project context\nThese are just a few examples, there are so many other important disciplines, and without this knowledge and expertise, your projects are at risk of failure before they even begin.\nThe value of experienced project and programme managers #Bringing in qualified or experienced project and programme managers doesn’t just ensure smooth execution; it dramatically increases the likelihood of success. They provide structure, accountability, and a disciplined approach to execution. They align stakeholders, manage risks, and drive projects to completion in a way that ensures compliance, mitigates exposure, and optimises business outcomes.\nWould you let someone without legal training draft a complex contract? Would you trust an unqualified person to run financial audits? Then why leave GRC projects in the hands of people without project and programme management expertise?\nIf you’ve already appointed an inexperienced person to a critical role, support them.
Give them access to mentorship and expert guidance.\nAt Purpose and Means, we provide exactly this kind of support, helping companies bridge the gap between intent and execution. Our services ensure that change initiatives succeed, not just survive. Purpose and Means also performs project and programme reviews, helping organisations identify gaps and optimise their approach for better outcomes, and in extreme cases we can help turn around troubled projects, using approaches and strategies that cut through.\nThese days, failure is not an option. Success demands leadership, structure, and the right skills in the right roles. If companies want to meet their regulatory obligations without unnecessary pain, they must stop treating project management as an afterthought, and start seeing it as a strategic advantage.\n","date":"25 February 2025","permalink":"/when-lawyers-manage-grc-projects-success-or-disaster/","section":"Blog","summary":"","title":"When lawyers manage GRC projects: success or disaster?"},{"content":"Companies must move beyond vague risk labels like high, medium, and low, or even red, amber, and green, and instead quantify risk in financial terms, ensuring clear definitions aligned with business context, risk policy, and appetite to avoid costly mistakes and misleading assessments.\nRisk. It’s the one word that can make or break a company. Yet, too many TechReg leaders still treat it like an abstract concept, something that can be categorised with vague labels like high, medium, or low. That’s not risk management, that’s deception. And deception, whether intentional or unintentional, is dangerous and costly.\nThe Illusion of high, medium, and low\nSaying a risk is \u0026lsquo;high\u0026rsquo; or \u0026rsquo;low\u0026rsquo; is like saying an investment is \u0026lsquo;big\u0026rsquo; or \u0026lsquo;small.\u0026rsquo; It tells you nothing. 
If someone told you they had a \u0026lsquo;high-risk\u0026rsquo; stock option, your first question would be: \u0026lsquo;How high?\u0026rsquo; Is it a 20% chance of failure? 50%? 80%? The same logic applies to AI, data protection, cybersecurity, regulatory compliance, and operational risk. Executives don’t make financial decisions based on gut feelings, they make them based on Euros, pounds, dollars (whatever the currency) and probabilities. Risk needs the same treatment.\nFar too often, companies waste valuable time and resources on risk assessments that tell them nothing useful. Risk that isn’t quantified properly is risk that isn’t managed properly. Worse yet, businesses deceive themselves into thinking they are making informed decisions when in reality, they’re navigating in the dark.\nI found it interesting, many years ago, to work in one of the UK’s largest financial services companies, a business with risk at its heart, where in the IT/tech division risk was merely paid lip service. Similarly, I worked in a global FMCG company where risk and quality were at the core of its business, yet in the IT/tech division, risk was downplayed, often with statements like \u0026lsquo;no, we can\u0026rsquo;t show the business this.\u0026rsquo; I’ve also worked on client bids with one of the largest tech outsourcing companies globally and seen the sales team actively remove risks because \u0026lsquo;the client shouldn\u0026rsquo;t see them.\u0026rsquo; This kind of risk denial is not just negligent, it’s dangerous.\nUsing terms like high, medium, and low, or red, amber, and green is meaningless unless you have carefully defined and explained exactly what the terms represent in your business context, in relation to your risk policy and appetite, as well as the type of risk. Otherwise, you might as well talk in terms of limes, oranges, and apples!
These categorisations only hold value when they are tied to clear, measurable criteria that allow stakeholders to understand their real impact and significance.\nRisk management is not a tick-box exercise\nToo many companies treat risk management as a compliance checklist, something to satisfy auditors or regulators. But compliance does not equal security. A company that merely checks the boxes isn’t mitigating risk, it’s documenting it. And documentation in pretty colours doesn’t stop breaches, fines, or reputational damage.\nThe time wasted on compliance-driven risk assessments that don’t reflect real-world threats is staggering. Companies spend countless hours filling out templates and forms that are filed away and forgotten. These exercises provide a false sense of security, when in reality, they achieve nothing.\nA risk template is just a template\nRisk templates are useful, but they aren’t solutions. They provide structure, but they don’t replace analysis. If risk management was as simple as filling out a form, every company would be secure, every compliance requirement met, and every breach prevented. But we know that’s not the case.\nEvery business is different, and so is its risk landscape. A generic template doesn’t capture the nuances of industry-specific threats, evolving challenges, or shifting regulatory requirements. Risk management requires continuous analysis, adaptation, and investment, not just a well-documented spreadsheet.\nA major problem with relying on templates is that it encourages a \u0026lsquo;set and forget\u0026rsquo; mindset. Companies complete them once and assume they are covered. This approach is a trap, leading to outdated risk models that fail to account for new and emerging threats. 
Worse still, companies are lulled into a false sense of security, believing they are managing risk when they are merely paying lip service.\nThe heartbeat of regulatory compliance\nRisk management isn’t a side project, it’s the foundation of regulatory compliance. Companies that embed risk assessments into their operational processes and procedures don’t just meet compliance standards, they go beyond. They identify weaknesses before auditors do. They mitigate financial exposure before investors demand answers. They build resilience before competitors catch on.\nWhen risk management drives compliance, companies transform from reactive to proactive. Instead of scrambling to meet regulatory deadlines, they create a culture where compliance is a natural byproduct of robust risk governance.\nRisk management is a living and breathing function\nRisk isn’t something you evaluate once a year and forget about. It’s not an annual report or a quarterly meeting topic. Risk evolves. Every new piece of tech you deploy, every regulatory change, every new vendor introduces risk into the equation.\nCyber threats don’t wait for your next board meeting. Market disruptions don’t schedule themselves around your fiscal year. Risk management must be dynamic, integrated into daily operations, constantly measured, and continuously refined.\nIgnoring this reality is costly. Companies that treat risk as a static concept often find themselves reacting to crises instead of preventing them. The time and money spent cleaning up a disaster far outweigh the effort required to manage risks correctly from the start.\nAre companies overplaying or undervaluing risk?\nThe answer: both. Some companies inflate certain risks to justify larger budgets, while others downplay threats to avoid costly investments, or in some cases, leaders don\u0026rsquo;t want to be seen washing their dirty laundry in front of their peers. 
Neither approach is sustainable.\nWithout quantification, risk management is a guessing game. And in business, guessing is expensive. Companies must assess loss exposure in financial terms, identifying which scenarios truly threaten the bottom line and which are overblown distractions.\nDo companies articulate the different types of risk?\nIn simple terms, no. In most cases, it\u0026rsquo;s 2 out of 10 for effort and lots of room for improvement. Not all risks are created equal, yet many companies lump them into a single category. You need to separate and quantify:\nCyber risk: breaches, ransomware, insider threats, what’s the financial impact of a worst-case scenario?\nCompliance risk: GDPR fines, personal data breaches, sectoral regulations, what are the legal and reputational costs?\nOperational risk: supply chain disruptions, technology failures, how do they affect revenue and productivity?\nFinancial risk: financial losses, what’s the exposure in tangible currency?\nReputational risk: negative media coverage, social backlash, how do they translate into consumer loss and stock price drops?\nAI risk: bias, liability, regulatory concerns, how do emerging technologies introduce new vulnerabilities?\nCompanies that can’t articulate their risks can’t manage them, let alone link them from a ripple effect perspective. And if they can’t do that, they can’t mitigate financial losses.\nIt’s cheaper to get it right up front\nRisk management should never be an abstract conversation. Executives care about financial performance, shareholder value, and business growth. If TechReg leaders want their concerns taken seriously, they need to speak the language of the boardroom: Euros, losses, and business impact.\nThe biggest mistake companies make is waiting until something goes wrong before addressing risk. This reactive approach is always more expensive than proactively managing threats from the start. 
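To make the \u0026lsquo;financial terms and probabilities\u0026rsquo; argument concrete, here is a minimal Monte Carlo sketch of one risk scenario. This is my own illustration, not a complete risk model: it assumes an expected incident frequency per year and a simple uniform per-incident loss range, and produces an annual loss distribution in euros.

```python
import math
import random

def poisson_draw(rng, lam):
    """Draw an incident count from a Poisson distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(frequency_per_year, loss_low, loss_high,
                         trials=10_000, seed=42):
    """Simulate annual loss (in EUR) for one risk scenario.

    frequency_per_year: expected incidents per year
    loss_low, loss_high: per-incident loss range (uniform, for simplicity)
    Returns (mean annual loss, 95th-percentile annual loss).
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        incidents = poisson_draw(rng, frequency_per_year)
        losses.append(sum(rng.uniform(loss_low, loss_high)
                          for _ in range(incidents)))
    losses.sort()
    mean = sum(losses) / trials
    p95 = losses[int(0.95 * trials)]
    return mean, p95

# Illustrative scenario: a breach expected roughly every 15 months,
# costing between EUR 50k and EUR 400k per incident.
mean, p95 = simulate_annual_loss(0.8, 50_000, 400_000)
```

Instead of \u0026lsquo;high,\u0026rsquo; the board now hears an expected annual loss and a 1-in-20-year figure in euros, numbers it can weigh directly against the cost of mitigation.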
The cost of a breach, a compliance failure, or a reputational hit far exceeds the investment required to do risk management correctly upfront.\nBusinesses must stop deceiving themselves with vague assessments and wasted time on meaningless exercises. Real risk management is about clear financial impact, actionable data, and strategic decision-making.\nDid this resonate? Purpose and Means work with TechReg leaders to align risk, and to build a framework that is customised to your business context in order to quantify risk in terms that will align with the needs of your leadership.\n","date":"24 February 2025","permalink":"/risk-theatre-stop-the-compliance-charade-before-it-costs-you/","section":"Blog","summary":"","title":"Risk theatre: stop the compliance charade before it costs you"},{"content":"We look into how data protection and AI governance leaders can become more influential by leveraging powerful language and a compelling vision to go beyond compliance, inspiring trust and driving innovation while reshaping the narrative around ethical data practices.\nYou don\u0026rsquo;t lead with rules. You lead with purpose. #I sense a quiet revolution is underway in some global companies. They are not just processing personal data or deploying AI tools with blind abandon, they see complying with Tech regulations as part of \u0026lsquo;doing business,\u0026rsquo; and realise there are benefits and opportunities in going beyond ticking compliance boxes or buying into the fear-mongering of fines and penalties peddled by some law firms or big consultancies.\nFor data protection and AI governance leaders, influence is the key to success. Their teams, stakeholders, and customers don\u0026rsquo;t respond to legal jargon or abstract policies. They respond to stories, values, and clarity.\nSo what works for them? 
Here are a few tips based on conversations and stuff I\u0026rsquo;ve read along the way, and be aware it\u0026rsquo;s an ever-changing area that has shifted much in recent years. The good leaders are very mindful of their audience and the society we live in, and adapt their communication accordingly. And I am not advocating you never utter technical terms associated with data protection or AI governance again, especially to your own teams who work with these topics daily, that wouldn\u0026rsquo;t make sense. I\u0026rsquo;m talking about the huge number of people in your company who clam up or drop off every time you start reciting articles from the GDPR or EU AI Act.\n1. Speak human, not compliance #Compliance is the floor, not the ceiling.\nThe best leaders frame data protection and AI governance not as burdens but as opportunities. Their influence stems from their ability to reframe the conversation. Think \u0026lsquo;trust-building\u0026rsquo; instead of \u0026lsquo;risk mitigation.\u0026rsquo; Say \u0026lsquo;empowering innovation\u0026rsquo; instead of \u0026lsquo;restricting access.\u0026rsquo; This shift in language aligns with what people feel, significantly increasing their influence:\nWords and phrases you use: \u0026lsquo;Guardrails for growth\u0026rsquo; (not \u0026lsquo;restrictions\u0026rsquo;)\n\u0026lsquo;Ethical innovation\u0026rsquo; (not \u0026lsquo;AI governance\u0026rsquo;)\n\u0026lsquo;Customer trust\u0026rsquo; (not \u0026lsquo;data security\u0026rsquo;)\nA LinkedIn connection working for a media streaming company shared an interesting nugget with me during our last catch-up call. Apparently their data protection team rebranded the concept of encryption as \u0026lsquo;unbreakable storytelling,\u0026rsquo; which resonated with many in the business. 
Context is everything and perhaps that phrase wouldn\u0026rsquo;t work in your company but they tapped into the company\u0026rsquo;s core mission - entertainment - and made data protection part of the narrative. This influential approach connected technical needs with emotional resonance. This leads nicely into my next tip.\n2. Context is king #Know the business, or fail the mission.\nLeaders who exert the most influence understand their company\u0026rsquo;s DNA. What drives revenue? What keeps customers loyal? How does AI underpin, or undermine, these goals? Your ability to influence others hinges on your grasp of the business context.\nAsk better questions to increase your influence: \u0026lsquo;How does this data policy help us innovate?\u0026rsquo;\n\u0026lsquo;Where does AI align with our brand promise?\u0026rsquo;\n\u0026lsquo;What would our customers feel if we compromised their data?\u0026rsquo;\nAnother example I heard that I found memorable was linking data protection with \u0026lsquo;personalised service, safely delivered.\u0026rsquo; I think it was an online retailer that had elevated the importance of customers being aware of their concise and visual privacy notice. They had added a competitive edge to an accountability mechanism and aligned with trends that show customers care about their privacy. This influential approach aligned legal and technical needs with business goals.\n3. Lead with \u0026lsquo;why,\u0026rsquo; not \u0026lsquo;how\u0026rsquo; - inspiring through language #When it comes to ensuring a more ethical approach to data protection and AI governance, the words leaders choose can significantly impact their ability to inspire and influence others. 
Here are some examples of words and phrases to use or avoid:\nWords and phrases to use: # \u0026lsquo;Our vision is\u0026hellip;\u0026rsquo; - this phrase helps connect data protection and AI practices to the larger organisational purpose.\n\u0026lsquo;We\u0026rsquo;re committed to\u0026hellip;\u0026rsquo; - demonstrates dedication and aligns actions with values.\n\u0026lsquo;This aligns with our core values of\u0026hellip;\u0026rsquo; - links data ethics to established organisational principles.\n\u0026lsquo;Let\u0026rsquo;s explore how we can\u0026hellip;\u0026rsquo; - encourages collaborative problem-solving and innovation.\n\u0026lsquo;Our customers/stakeholders trust us to\u0026hellip;\u0026rsquo; - emphasises the importance of maintaining trust.\n\u0026lsquo;We have an opportunity to lead by\u0026hellip;\u0026rsquo; - frames ethical practices as a chance for positive impact.\n\u0026lsquo;This approach ensures\u0026hellip;\u0026rsquo; - highlights the benefits and security of ethical data management.\n\u0026lsquo;We\u0026rsquo;re building a foundation for\u0026hellip;\u0026rsquo; - connects current actions to future success.\nWords and phrases to avoid: # \u0026lsquo;We have to comply with\u0026hellip;\u0026rsquo; - focuses on bare minimum requirements rather than ethical leadership.\n\u0026lsquo;It\u0026rsquo;s not our problem if\u0026hellip;\u0026rsquo; - demonstrates a lack of accountability and foresight.\n\u0026lsquo;The regulations say we must\u0026hellip;\u0026rsquo; - suggests action driven by external forces rather than internal values.\n\u0026lsquo;We\u0026rsquo;ll deal with that later\u0026hellip;\u0026rsquo; - indicates a lack of proactive planning in ethical considerations.\n\u0026lsquo;That\u0026rsquo;s just how it\u0026rsquo;s always been done\u0026hellip;\u0026rsquo; - resists necessary changes for ethical improvements.\n\u0026lsquo;It\u0026rsquo;s too complicated to explain\u0026hellip;\u0026rsquo; - avoids transparency and can erode 
trust.\n\u0026lsquo;We don\u0026rsquo;t have time for\u0026hellip;\u0026rsquo; - suggests ethical considerations are not a priority.\n\u0026lsquo;The tech team will handle it\u0026hellip;\u0026rsquo; - silos responsibility instead of promoting company-wide ethical culture.\nBy carefully selecting words that inspire and avoiding those that demotivate, leaders can create a narrative that elevates ethical data protection and AI governance from a compliance exercise to a core company value. This approach not only builds a culture of integrity but also positions the company as a trusted leader in responsible AI and data protection practices.\n4. Build bridges, not silos #The C-suite isn\u0026rsquo;t your audience, it\u0026rsquo;s your alliance.\nFrom posts I read on LinkedIn, I reckon AI governance leadership is lagging behind tech advancements. Why? Because AI governance is stuck in legal or tech silos. If this is happening in your company, break the cycle and expand your sphere of influence:\nCreate an \u0026lsquo;AI Trust Council\u0026rsquo; with C-suite leaders, ethicists, and frontline employees.\nHost \u0026lsquo;AI storytelling\u0026rsquo; workshops where teams reimagine AI governance as a driver of customer delight.\nMeasure what matters: track trust metrics alongside compliance KPIs.\n5. Back to \u0026lsquo;beyond compliance\u0026rsquo; #Patagonia didn\u0026rsquo;t build a USD 3 billion empire by following ESG guidelines. They did it by making sustainability their soul. For data and AI leaders, the lesson is clear. Your ultimate goal shouldn\u0026rsquo;t be \u0026lsquo;to avoid fines\u0026rsquo;; it should be to build a legacy of trust. 
This vision-driven approach will extend your influence far beyond your immediate team.\nThree questions to expand your influence at your next leadership meeting: \u0026lsquo;How can we make our communication memorable and unmissable?\u0026rsquo;\n\u0026lsquo;What AI decision would make our families proud?\u0026rsquo;\n\u0026lsquo;Are we the company others aspire to follow?\u0026rsquo;\nTo conclude: as a leader, your words, and your vision, don\u0026rsquo;t just shape policies. They shape the future. Your ability to influence others is directly tied to the strength and clarity of your vision.\n","date":"20 February 2025","permalink":"/leadership-in-data-protection-and-ai-governance-influence-through-the-words-and-phrases-you-use/","section":"Blog","summary":"","title":"Leadership in data protection and AI governance: influence through the words and phrases you use"},{"content":"There are striking similarities between AI-driven workforce transformation in 2025 and the governance of enterprise IT shifts I helped navigate in 2011, revealing why structured workforce planning, upskilling, and role evolution are more critical than ever.\nBack in 2011, I designed and facilitated workshops for the global IT division of a top 3 global FMCG company headquartered in Denmark. They knew that the future business strategy would require a shift in how IT goals aligned with business goals - one of the key mechanisms required for governance of enterprise IT - and from an organisational perspective much work was needed. Technology and operating models were shifting dramatically. New roles were emerging. Others were transforming. 
Some were diminishing, primed for outsourcing.\nThe exercise was blunt but necessary: resource managers had to categorise their teams into four groups:\nNew-to-world roles – brand new functions that didn’t exist before\nRoles to grow – critical skills that needed investment\nRoles to transform – requiring upskilling or lateral movement\nRoles to externalise – those likely to be outsourced or phased out\nA tough and uncomfortable process, but an inevitable one, and these days I see the company still dominating various markets globally.\nNow, in 2025, I see the exact same conversation unfolding, but this time, AI is the catalyst.\nAI: the new workforce disruptor #A recent CSET report on AI and workforce training reveals some interesting numbers:\n80% of US workers will have at least 10% of their tasks impacted by AI\n19% of jobs could see over half their tasks affected\nThat’s not \u0026lsquo;some people, some of the time.\u0026rsquo; That’s nearly everyone, in every role, across every industry.\nBut what I found really surprising is that the most in-demand skills in the AI age aren’t even technical.\nAccording to the CSET Report, 58% of critical future skills are social, cognitive, and foundational, things like:\nComplex problem-solving\nActive learning\nAdaptability\nEthical judgment\nThis isn’t just about coding, data science, or AI engineering, it’s about learning how to work alongside AI, not be replaced by it.\nAI literacy: the new corporate imperative #The EU’s AI Act explicitly calls for AI literacy training in organisations providing or deploying AI, and the European AI Office\u0026rsquo;s \u0026lsquo;Living Repository of AI Literacy Practices\u0026rsquo; published recently is a good read if you\u0026rsquo;re seeking inspiration.\nCompanies like Booking.com, Assicurazioni Generali, and Telefónica have already launched AI upskilling programmes to prepare their teams.\nFor example, Generali’s \u0026lsquo;New Roles Schools\u0026rsquo; focus on:\nCreating new 
AI-related roles (e.g., AI Business Translators, Smart Automation Experts)\nReskilling actuaries, accountants, and CRM professionals with AI capabilities\nProviding tiered AI training, from basic literacy to advanced engineering\nIn a similar vein, OpenSky Data Systems is training both technical and non-technical staff, teaching developers how to integrate AI into software, while helping non-technical teams leverage AI tools in decision-making.\nThis mirrors the very structure of my 2011 workshops: identifying which roles need to evolve, transform, or disappear, only this time, AI is driving the change.\nHow companies should approach AI workforce planning #Companies need a proven, structured approach to AI-driven workforce transformation.\nThat’s exactly why my AI Workforce Adaptation Workshop, built on my 2011 framework, is now available.\nIf your company is facing AI-driven workforce shifts, the questions you need to be asking are:\nWhich roles will AI create?\nWhich roles need to evolve?\nWhich roles will disappear?\nHow do we prepare people for the transition?\nThese are not questions for the legal team. In these workshops, legal are a participant, along with a wide range of representatives from across the company. Like most GRC endeavours, it\u0026rsquo;s a collaborative effort.\nThe companies getting this right are those that see AI as an augmentation tool, not just a replacement tool.\nHow is your company preparing for the AI workforce transition?\nDoes this resonate? 
If so, get in touch to arrange a no obligation discussion.\n","date":"17 February 2025","permalink":"/ai-workforce-transformation-lessons-from-2011-and-the-challenges-of-2025/","section":"Blog","summary":"","title":"AI workforce transformation: lessons from 2011 and the challenges of 2025"},{"content":"Before any software release goes live, companies must consider data protection risks to prevent unintended harm, ensure compliance, and avoid costly failures, because once personal data is exposed or misused, there’s no taking it back.\nIn 2020, Microsoft launched a feature in its O365 Workplace Analytics tool. It was designed to help managers understand productivity trends across their teams. But it also gave managers the ability to see data about individual employees, such as the number of emails sent, the number of meetings attended, and so on.\nSuddenly, what was meant to be a workforce insights tool turned into a workplace surveillance system. Employees had no idea their personal productivity was being tracked at that level. No legal ground and no transparency.\nA data protection failure, and also a failure of Privacy by Design principle #1, \u0026lsquo;proactive, not reactive, preventative not remedial,\u0026rsquo; i.e. fixing the problem before it ever becomes one.\nAfter a public backlash, Microsoft had to roll back the feature and adjust how data was aggregated, but by then, the damage was done. In his excellent paper published in November 2024, \u0026lsquo;Tracking Indoor Location, Movement and Desk Occupancy in the Workplace,\u0026rsquo; Wolfie Christl details this Microsoft case alongside many others that are real eye openers.\nAlthough fingers pointed towards Microsoft, the companies deploying the release were also part of the failure. 
These days, many software companies release security updates, fixes, and new functionality using a relatively short release cycle, often monthly, and then companies deploy the releases in their own environments.\nMicrosoft’s tool processed employee data, analysed their behaviour, and exposed it to their managers, all without informing employees or allowing them any control. This isn’t just bad design, it’s a failure to recognise data protection as a fundamental consideration in software development.\nData protection risks in software releases #Many companies treat software releases as purely a technical process:\nCode is written, or distributed from the vendor\nFeatures are tested\nDeployment is approved\nThe release goes live\nWhat’s missing? An assessment from a data protection perspective.\nBefore every release into the live environment, someone should be asking:\nDoes the release include any functionality involving the processing of personal data?\nDoes it create risks for individuals’ rights and freedoms?\nAre we ensuring transparency, fairness, and compliance with data protection laws?\nOften, by the time these questions are raised - if they are raised at all - the software is already deployed and live in the production environment, and the company may (worst case) be dealing with complaints, regulatory investigations, and reputational damage.\nWhere to embed data protection checks #If your company uses ITIL 4, it\u0026rsquo;ll have processes like \u0026lsquo;Change Management\u0026rsquo; and \u0026lsquo;Release Management\u0026rsquo;. If it follows COBIT 2019, it\u0026rsquo;ll have a \u0026lsquo;Promote to Production and Manage Releases\u0026rsquo; process. There are other frameworks out there that may have similar control processes, and this is exactly where a data protection risk trigger needs to be embedded. 
Before a release is approved, there should be a mandatory checkpoint:\n\u0026lsquo;Does this feature or update require a data protection risk assessment?\u0026rsquo;\nIf the answer is yes, then the right risk assessment should be conducted.\nUnder GDPR, this could be a Data Protection Impact Assessment (DPIA).\nIf AI is involved, new laws like the EU AI Act may require an algorithmic impact assessment.\nBut none of this works unless the right people are trained to spot risks.\nWhy typical data protection training won’t cut it #Many companies assume they’ve covered their bases because they send employees to general data protection training, often purchased from a law firm or one of the big consulting companies. It covers GDPR principles, data minimisation, data subject rights, and so on.\nUnfortunately, that kind of training is too broad to help the teams actually involved in software release management. People responsible for deploying and managing software releases need specific, contextual training, not generic legal-like briefings. They need to understand:\nHow code changes can introduce risks to individuals\nHow AI models can process data in unexpected ways\nHow small design choices can create big compliance failures\nThis training should be practical and scenario-based. Engineers, product managers, and IT teams should be studying real-world failures like Microsoft’s Workplace Analytics case mentioned above, not just abstract legal principles.\nWhy deep knowledge is essential #Not all data protection risks are legal risks. And not all legal risks are technical risks. This is why companies need cross-functional knowledge in data protection:\nLegal teams understand and advise on the laws.\nIT teams understand the tech.\nProduct teams understand use cases.\nNone of these teams can work in isolation. 
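The mandatory checkpoint described above can be sketched in a few lines of code. This is purely illustrative, all names are hypothetical, and a real implementation would live inside your change/release tooling, but it shows the shape of the rule: a change record is checked for data protection triggers, and promotion to production is blocked until every required assessment is done.

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    """Hypothetical record attached to a release or change request."""
    description: str
    processes_personal_data: bool = False
    uses_ai_model: bool = False

def required_assessments(change: ChangeRecord) -> list:
    """Which risk assessments must be completed before go-live?"""
    needed = []
    if change.processes_personal_data:
        needed.append("DPIA")  # e.g. a GDPR Data Protection Impact Assessment
    if change.uses_ai_model:
        needed.append("algorithmic impact assessment")  # e.g. under the EU AI Act
    return needed

def release_approved(change: ChangeRecord, completed_assessments: set) -> bool:
    """Block promotion to production until every required assessment is done."""
    return all(a in completed_assessments
               for a in required_assessments(change))

# Example: a feature like the Workplace Analytics case discussed above.
change = ChangeRecord(
    description="Manager dashboard showing per-employee email counts",
    processes_personal_data=True,
)
print(release_approved(change, completed_assessments=set()))     # False: DPIA outstanding
print(release_approved(change, completed_assessments={"DPIA"}))  # True
```

The point is not the code but the placement: the check runs at the same gate where ITIL 4 or COBIT 2019 already approves releases, so nothing reaches production without the question being asked.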
If data protection is not embedded into the release management and change processes, things will go badly wrong:\nFeatures that track users without any attention to data protection principles get deployed.\nAI models that amplify bias go live.\nData gets processed in ways that violate legal and ethical standards.\nAnd companies don’t find out until it’s too late.\nPrevention is easier and cheaper than damage control #A data protection failure in a software release is like a car recall, except much, much worse. If a car has faulty brakes, you can recall and repair it. But once personal data is collected, processed, or exposed improperly, it\u0026rsquo;s complicated, expensive and almost impossible to reverse the situation - you can’t recall the fault. That’s why companies need to build data protection and AI risk checks into their operational processes and procedures such as release and change management (in this case), before harm is done.\nSide note: Solove’s \u0026ldquo;Exclude\u0026rdquo; harm – a case study in data protection failure #The Microsoft case is a textbook example of what Daniel Solove calls \u0026lsquo;Exclude\u0026rsquo; in his taxonomy of privacy harms. \u0026lsquo;Exclude\u0026rsquo; happens when decisions are made about people, using their data, without their knowledge or input.\nSolove\u0026rsquo;s taxonomy is also well worth your attention if you\u0026rsquo;ve not come across it.\nDoes this post resonate? If so, get in touch to arrange a no obligation discussion about how your data protection work can be improved.\n","date":"17 February 2025","permalink":"/why-you-need-to-consider-data-protection-with-every-software-release/","section":"Blog","summary":"","title":"Why you need to consider data protection with every software release"},{"content":"AI and data protection are often seen as separate from corporate sustainability strategies, but in reality, they are fundamental to achieving ESG goals. 
Companies that fail to integrate clean energy into their AI and data infrastructure risk not only regulatory scrutiny but also reputational damage.\nCompanies are under growing pressure to align their business strategies with environmental, social, and governance (ESG) goals. While clean energy commitments are often seen through the lens of carbon reduction, the conversation frequently overlooks an important factor: the role of AI and data protection in driving sustainable business practices.\nAs AI adoption accelerates, companies need to rethink not just how they power their operations, but how they manage and secure the data that fuels AI. Responsible AI is increasingly tied to regulatory compliance and ethical expectations; ensuring AI and data governance align with ESG principles is no longer optional, it’s a necessity.\nThe energy demand of AI and data-driven businesses #AI models, especially large-scale ones, are energy-intensive. Training a single deep-learning model can emit as much carbon as five cars over their entire lifetimes. While many companies have embraced cloud computing for efficiency and scalability, cloud data centres remain major energy consumers. Without a strategic clean power approach, AI and data-driven companies risk increasing their carbon footprint, rather than reducing it.\nAt the same time, companies are collecting and storing huge amounts of data, much of which is never used but still demands storage, security, and processing power. This isn’t just an operational inefficiency, it’s a sustainability issue. Companies serious about ESG must prioritise optimising AI efficiency, minimising unnecessary data retention, and ensuring that the power driving their digital infrastructure comes from renewable sources.\nRegulatory and ethical considerations: the intersection of AI, data protection, and ESG #We\u0026rsquo;re seeing new AI and data protection laws emerge globally. 
The EU AI Act, for instance, sets strict compliance requirements for high-risk AI applications. Similarly, the Corporate Sustainability Reporting Directive (CSRD) mandates that large companies disclose how sustainability factors affect their business model. A key implication? Data security, AI governance, and energy efficiency are now deeply intertwined.\nFor example, AI models require vast datasets, often containing personal and sensitive information. If companies fail to implement privacy-preserving AI technologies, such as federated learning, synthetic data, or differential privacy, they risk not only regulatory non-compliance but also ethical mishaps that could undermine consumer trust. ESG-conscious companies must take a dual approach: implementing privacy-by-design principles while ensuring their AI operations run on clean energy sources.\nBalancing AI innovation with sustainability #Companies often justify AI’s high energy consumption by highlighting efficiency gains, better logistics, reduced waste, and improved resource management. While these benefits are real, they don’t automatically offset AI’s environmental impact. Sustainable AI innovation requires:\nEnergy-efficient AI models: optimising algorithms to reduce energy consumption without compromising performance. Techniques like pruning, quantisation, and model distillation can significantly lower energy demands.\nSustainable data storage practices: implementing lifecycle policies that prevent unnecessary data hoarding, reducing the need for massive storage infrastructure.\nTransparent AI governance: ensuring that AI decision-making aligns with ESG principles, including fairness, accountability, and ethical use of data.\nThe case for clean energy-powered AI #To fully align AI and data protection with ESG, companies must go beyond energy efficiency and embrace clean energy sources. 
This includes:\nPartnering with green cloud providers: procure solutions from vendors whose cloud services are powered by renewable energy. Companies should prioritise these options over traditional data centres reliant on fossil fuels.\nOn-site renewable energy integration: large corporations running AI models at scale should explore direct investments in on-site solar, wind, or geothermal energy sources.\nEnergy demand transparency: companies should report AI-related energy consumption as part of their sustainability disclosures, ensuring accountability and alignment with ESG commitments.\nMoving from compliance to competitive advantage #AI and data protection should not be viewed solely as regulatory obligations, as they often are. They can serve as a competitive differentiator. Companies that build sustainable AI models, implement privacy-first data strategies, and power their operations with clean energy will not only mitigate ESG risks but also enhance brand trust and long-term resilience.\nInvestors, customers, and regulators are all paying closer attention to whether companies are genuinely committed to sustainability, or simply engaging in greenwashing. Companies that transparently measure, report, and optimise the environmental impact of their AI and data practices will stand out as leaders.\nReady to integrate AI and data protection into your ESG strategy? 
Take a look at our ESG/data protection service and contact us to get started.\n","date":"13 February 2025","permalink":"/aligning-ai-and-data-protection-with-esg-why-clean-energy-must-be-part-of-the-strategy/","section":"Blog","summary":"","title":"Aligning AI and data protection with ESG: why clean energy must be part of the strategy"},{"content":"Rapid Analysis Workshops transform fragmented TechReg networks into action-driven communities by surfacing key issues, prioritising solutions, and creating a structured plan for continual improvement.\nTechReg leaders in global companies depend upon networks of Data Protection Coordinators, Privacy Stewards, AI Governance Coordinators, Local Security Coordinators, Local Risk Officers; there are many different roles that I\u0026rsquo;ve come across.\nWhatever the title, they’re often working in silos. Each person is wrestling with the same problems, but without a way to share insights, collaborate, or move things forward collectively.\nThis isn’t just frustrating. It’s inefficient. And in a world where regulation struggles to keep pace with technology, inefficiency is a threat that, left untreated, has serious consequences for the company.\nSo how do we address this?\nTechReg needs a better way to work together #Right now, most of these \u0026lsquo;networks of colleagues\u0026rsquo; operate like a patchwork of isolated experts. Some are deep in the trenches, tackling immediate compliance issues. Some are scanning the horizon, watching for what’s coming next. Some have best practices that should be shared, but no clear way to share them.\nAnd without a structured way to connect, this knowledge stays locked in separate silos. That’s where Rapid Analysis Workshops come in.\nWhat is a Rapid Analysis Workshop? #It’s exactly what it sounds like. 
A structured, high-energy session that brings your networks together, in person, virtually, or hybrid, to do what they do best:\nGet their key issues and pain points on the table, or on the whiteboard\nIdentify what’s outdated\nShare best practices\nSpot what’s coming next\nBut gathering information isn’t enough. The real value comes from sorting, prioritising, and translating it into action.\nThat’s where we come in.\nFrom talking to doing: turning insight and issues into action #Most workshops generate ideas. The problem is, ideas alone don’t change anything. At Purpose and Means, we take a different approach.\nWe gather input from across your network. Everyone has a voice: stewards, coordinators, ambassadors, team leads. We capture their insights, frustrations, and successes.\nWe sort and structure the information. What’s urgent? What’s systemic? What’s a quick fix? What needs deeper work? We create a clear map of priorities.\nWe identify the work needed over the coming months or year. Instead of vague recommendations, we define concrete, actionable steps: who needs to do what, by when, and with whom.\nWe create momentum. Attendees don’t leave with just a list of problems. They leave with a plan, and a team to help execute it.\nWe build an ongoing cycle of improvement. This isn’t a one-off. At regular gatherings, we help you follow up: what’s progressed, what’s stalled, and what new challenges have emerged?\nWhy this works: the power of structure and energy #A typical workshop can feel like a talking shop. Lots of discussion. Lots of notes. But little follow-through. That’s why structure and energy matter.\nWe use proven workshop formats to keep sessions focused, fast-paced, and engaging. No wasted time. No drifting conversations.\nWe create an environment where people want to contribute. No dull presentations, just active participation.\nWe transform passive networks into active communities.
Instead of isolated professionals, your teams become collaborators, problem-solvers, and co-creators.\nThis shift, from isolation to impact, is what makes the difference.\nNext steps: ready to build a stronger, more connected network? #If you’re tired of watching the same issues resurface, the same knowledge gaps persist, and the same silos slow down progress, let’s talk.\nA Rapid Analysis Workshop isn’t just a meeting.\nIt’s a catalyst for real change.\nGet in touch with Purpose and Means to find out how we can help your network move from isolated expertise to collective impact.\nBecause the future of TechReg depends not just on what you know, but on how you and your teams and networks work together.\n","date":"10 February 2025","permalink":"/from-isolation-to-impact-how-rapid-analysis-workshops-build-stronger-techreg-networks/","section":"Blog","summary":"","title":"From isolation to impact: how Rapid Analysis Workshops build stronger TechReg networks"},{"content":"Many data protection leaders assume their programme is in good shape, until a Data Protection Maturity Assessment proves otherwise. In just 1-2 months, this assessment can uncover hidden gaps, misaligned processes, and risks that could cost your business far more than the time it takes to fix them.\nMany data protection leaders assume they have things under control.\nThey’ve got policies. Are they up to date? Do employees understand them? Do employees follow them?\nThey’ve got training. Is it relevant to employees\u0026rsquo; responsibilities? Do employees value it? Do you continually update and improve it?\nThey’ve got compliance reports. *Are they scrutinised and acted upon by their bosses?*\nThen they perform a Data Protection Maturity Assessment, and suddenly, things don’t look so solid. It’s not that they’ve done nothing.
It’s that what they’ve done isn’t working as well as they thought.\nWhy assumptions are dangerous if you are not managing them #The biggest mistake in data protection is assuming you’re covered, because when you dig deeper, you often find:\nPolicies exist, but no one follows them.\nTraining is delivered, but people don’t understand it.\nProcesses are written down, but they don’t reflect how the business actually operates.\nIt’s like thinking you’re fit because you bought a treadmill. Owning it isn’t the same as using it.\nHow immaturity shows up in the real world #Let’s look at some real-world examples of how an immature data protection programme causes problems:\n1. No alignment with the business\nA global retailer had a well-documented data protection programme that, on paper, ticked every box.\nBut when we conducted the Data Protection Maturity Assessment, it revealed a massive issue: the programme was built around regulatory compliance, not business objectives.\nMarketing teams were launching personalised campaigns that clashed with data protection policies.\nCustomer experience teams were making promises about data usage that legal hadn’t approved.\nThe result? Confusion, friction, and a few close calls in terms of consumer trust.\nWhen a data protection programme doesn’t align with the business, it creates more problems than it solves.\n2. Competence without business knowledge\nA fintech company had a highly skilled data protection team.\nThey knew GDPR inside out. They could quote ISO standards from memory. They had impressive legal backgrounds.\nBut when asked how the company’s tech stack worked, they had no idea. They didn’t understand:\nHow data flowed between their systems\nWhich third-party vendors had access\nWhat security measures were actually in place\nThey were data protection experts, but lacked business knowledge and business experience.\nAnd that meant they couldn’t offer practical, business-friendly solutions.
Instead of enabling the company to innovate with guardrails, they just said \u0026ldquo;No, you can’t do that\u0026rdquo;, which frustrated leadership and slowed down growth.\nA mature data protection team doesn’t just know the laws and regulations. They know the business, the tech, and the risks that actually matter.\n3. No buy-in from the business\nA healthcare company had strong data protection policies. They’d trained their staff. They’d built risk registers. They’d set up governance committees.\nBut the assessment showed a critical gap - no senior leadership buy-in. C-suite executives saw data protection as a compliance issue, not a business priority. So when budgets were tight, data protection initiatives were the first to be cut.\nWhen deadlines were looming, security processes were bypassed to \u0026lsquo;move faster.\u0026rsquo; And when a data breach finally happened, leadership acted shocked, even though the warning signs had been there for years.\nA mature data protection programme isn’t just about having the right policies. It’s about getting the business to care. Because if leadership doesn’t take data protection seriously, no one else will.\nErr on the side of caution #If you’re not sure whether your programme is mature enough, assume it isn’t. Because the worst position to be in is thinking you’re fine, when you’re not.\nA Data Protection Maturity Assessment doesn’t just highlight compliance gaps.
It shows whether your programme is actually working for the business, or just existing on paper.\nIt’s a 1-2 month investment that gives you:\nA real understanding of where you are, and how to improve\nA clear, prioritised remediation plan\nA business outcome roadmap\nA draft business case that sets out what\u0026rsquo;s to be done, why it needs to be done, how much time, and how much budget\nCommunication material to help you present the assessment and proposed next steps to your leadership for approval to move forward\nBecause the companies that succeed in data protection aren’t the ones that assume they’re doing well.\nThey’re the ones that check, challenge, and improve, before someone else does it for them.\nSo the real question is: when’s the last time you checked?\nDoes this resonate? Feel free to book a no-obligation call to discuss your requirements.\n","date":"6 February 2025","permalink":"/data-protection-youre-not-as-mature-as-you-think/","section":"Blog","summary":"","title":"Data Protection: you’re not as mature as you think"},{"content":"Nine events, 2,000 attendees, interactive content, and a flu battle. Data Protection Week pushed preparation, technology, and resilience to the limit.\nThis year\u0026rsquo;s Data Protection Week has been a marathon.\nNot just one event. Not just a couple of PowerPoints.\nNine different topics, covering everything from KSA data protection law to AI deception, ESG, personal data breaches, and the nuances of controller/processor relationships within the same group of companies.\nNot just talks, either. 
Interactive books, videos, policy packs, dilemmas (branching scenarios), multiple formats, multiple languages.\nAnd, of course, just to keep things interesting, halfway through the week, I went down with flu :-(\nThis meant I had to integrate several extended naps in between sessions, so yes, I was definitely napping on the job.\nContent creation at scale (with a side of coughing) #This kind of work isn’t just about sitting in front of a mic and webcam and talking for an hour.\nIt’s about months of preparation. It’s about making complex topics engaging. It’s about knowing that no one wants to sit through another dull compliance session.\nSo, I invested in new tools. New platforms. New ways to make content stick.\nThe goal? To make data protection feel relevant. To make privacy something people actually want to understand, not something they just endure.\nAnd from the client feedback, it worked.\nTechnology, the unsung hero (or villain?) #Over the years, I’ve learned a painful truth: when you need tech to work, that’s usually when it fails.\nPowerPoint freezes.\nVideos don’t play.\nSlides don’t load.\nSo, I kept it simple. No fancy animations. Just a straightforward PDF when presenting the live sessions. (Thanks, Dylan Jones, for that tip many years ago. Still a lifesaver.)\nAnd the PDF version of the slides contains few words, but lots of great visuals that I craft myself, which I absolutely love doing.\nAnd it worked. No crashes, no panicked reboots. Just clear, uninterrupted delivery.\nReaching thousands (even while running on Paracetamol) #I don’t know the actual total number of attendees across all sessions, but it’s in the region of 3,000-4,000.\nOne client alone had a 1,000-person Teams limit per session, and they could not get more people in. We ran two sessions on the Tuesday, one early-ish in the morning for EMEA \u0026amp; APAC, and one late afternoon for EMEA \u0026amp; Americas. So that was 2,000 for just one client.\nThe other clients?
At least a few hundred for each client here and there.\nThe scale of engagement is clear. People care about this stuff. Not because they have to, but because they need to.\n\u0026ldquo;Data Protection Day should be every day\u0026rdquo; #A lot of people say this. And they’re right.\nWhich is why this kind of work isn’t just for one week a year.\nIt’s about continuous engagement. Making sure privacy, AI, and data protection aren’t just a box-ticking exercise. Embedding them into companies’ operational procedures and changing mindsets from top to bottom.\nThat’s exactly what my virtual comms service is designed to do.\nSo if you want to take data protection beyond just an annual event, let’s talk.\nBecause compliance isn’t just about rules. It’s about trust. And trust is built every day.\n","date":"31 January 2025","permalink":"/reflections-on-data-protection-week-nine-events-at-least-two-thousand-attendees-and-one-flu/","section":"Blog","summary":"","title":"Reflections on Data Protection Week: nine events, at least two thousand attendees, and one flu"},{"content":"Rapid Analysis Workshops help compliance leaders plan, execute, and move forward by focusing on what truly matters, enabling progress while a structured process manages the smaller, complex details that often slow teams down.\nFrom overwhelm to action: how Rapid Analysis Workshops unlock clarity for Data Protection Leaders #Data protection and GRC leaders have one of the toughest jobs in business. The rules keep changing; they are often perceived as \u0026lsquo;necessary evils\u0026rsquo;, have shoestring budgets, and often lack the help they desperately need. Add to that the overwhelming volume of information (legal texts, guidance notes, risk assessments) and it’s no surprise that many leaders find themselves stuck.\nClarity isn’t a luxury. It’s a necessity.\nThis is where our Rapid Analysis Workshops come in.
They’re designed for one purpose: to cut through complexity and empower leaders to move from overwhelm to action. It’s not about throwing more information into the mix, it’s about structuring the chaos into clear, actionable plans.\nPragmatism over perfection: leaders need results, not theories #One of the biggest traps data protection and GRC leaders fall into is chasing perfection. It’s an understandable instinct because getting it wrong can mean fines, reputational damage, and sleepless nights.\nHowever, waiting for a perfect solution often means doing nothing at all. Paralysis sets in. Months pass. And the risks don’t go away, they quietly grow. Day by day.\nRapid Analysis Workshops break this cycle. They aren’t about creating an academic masterpiece or solving every possible scenario. Instead, they focus on what you can do right now. What’s the next step you can take to reduce risk? Where can you make tangible improvements today?\nThe workshops operate on the principle of pragmatism: get it 80% right, then refine as you go. That’s the secret to staying ahead in an environment where the rules can shift overnight.\nFrom firefighting to future-proofing: redefining the role of Data Protection and GRC Leaders #Many leaders feel like firefighters, constantly reacting to the latest blaze. A new regulation gets introduced, and suddenly, the whole team is playing catch-up to interpret its impact. An audit lands, and everyone’s on high alert. It’s exhausting, and unsustainable.\nInstead of being stuck in a reactive loop, our workshops help leaders shift their focus upstream. They enable teams to anticipate challenges, build systems to handle them proactively, and create a framework for long-term success.\nSo, instead of dreading the next compliance update, you’re prepared for it. Instead of feeling blindsided, you’re in control. 
That’s what future-proofing looks like, and that’s what these workshops deliver.\nFrom problems to pathways: turning challenges into roadmaps #When faced with a complex regulatory challenge, many teams start with one question: “What’s the problem?”\nBe aware that the more you focus on the problem, the bigger it seems. Before you know it, the team is stuck debating details, and no one knows how to move forward.\nRapid Analysis Workshops take a different approach. They shift the focus from problems to pathways.\nInstead of obsessing over what’s wrong, the workshops ask:\nWhat can we do about this?\nWhat’s the first step?\nHow do we measure progress?\nThis mindset shift is powerful. It transforms the way teams approach challenges, breaking them into manageable steps and creating a clear roadmap for action. Suddenly, even the most intimidating issues feel solvable.\nThis approach doesn’t just deliver short-term solutions. It builds momentum. It empowers teams to tackle bigger challenges with confidence because they’ve seen how effective this framework can be.\nClarity is a leadership MUST HAVE #The issue leaders face isn’t a lack of data. It’s the lack of clarity. And without clarity, even the best leaders can’t make decisions with confidence.\nRapid Analysis Workshops are built to deliver that clarity. They strip away the noise, highlight what really matters, and leave leaders with a clear, actionable plan.\nHere’s what that looks like in practice:\nStructured problem-solving: The workshops follow a proven process that breaks challenges into digestible steps.\nIndependent facilitation: An experienced facilitator guides the process, ensuring focus, objectivity, and progress.\nActionable outcomes: Every workshop ends with a concrete plan. No fluff, no jargon, just clear steps to move forward.\nYour move #Leadership isn’t about having all the answers.
It’s about creating a system that helps your team find them.\nAnd Rapid Analysis Workshops are that system.\nStop waiting for the perfect moment. Stop letting complexity hold you back. Start moving forward, because clarity is just one workshop away.\n","date":"26 January 2025","permalink":"/plan-execute-move-forward-and-show-progress-focus-on-what-matters-and-let-the-process-handle-the-details/","section":"Blog","summary":"","title":"Plan, execute, move forward and show progress: focus on what matters and let the process handle the details"},{"content":"I’ve always believed that you have to get your hands dirty to understand the inner workings of a tool, or a process. Theoretical concepts are fine, but they pale in comparison to what you learn when you’re in the engine room. I\u0026rsquo;m talking about GenAI, where I\u0026rsquo;ve found the tools useful, powerful, and worrying.\nGenerative AI (GenAI) is being talked about as the ultimate tool for innovation, boosting creativity, speeding up workflows, and reshaping industries. As GenAI itself may say, it\u0026rsquo;s a \u0026lsquo;game-changer\u0026rsquo; for many. But is it a tool that brings value to you and/or your company, or just a clever mimic stealing ideas from its training data?\nThe ethical and legal challenges of GenAI touch everything from intellectual property to brand integrity, and as companies adopt these tools, there\u0026rsquo;s a lot of balancing that needs to be done.\nCreativity or copycat?\nGenAI doesn’t create in the traditional sense, it predicts. By analysing patterns from its training data, it generates outputs that resemble the information it’s been fed. That supposedly highly original logo design you\u0026rsquo;ve just approved? It\u0026rsquo;s just a remix of countless logos it has encountered. That catchy slogan you just shared on LinkedIn?
All it is, is a blend of phrases it has absorbed.\nThe blurred boundary lies here: while AI outputs may feel innovative, they’re building on existing works and the sweat and toil of the people who created them originally. Does that make them a form of creativity, or a repackaging of what’s already out there?\nMy AI learning curve\nWhen AI for Marketers was launched a few months ago by 42courses, I signed up immediately. Having discovered 42courses at Nudgestock during the pandemic, I became a big fan and even purchased their \u0026ldquo;free for life\u0026rdquo; membership last year. For anyone wanting to get inside the minds of their digital marketing team, or dive deep into behavioural economics, I can’t recommend Chris Rawlinson\u0026rsquo;s company enough.\nI also joined the AI Academy, run by my mate James Varnham and their excellent team of AI specialists. Their hands-on approach, teaching how companies use AI, automate processes, and apply tools, has been invaluable. If you’re curious about the real-world applications of AI, I highly recommend their courses.\nAnd then there are all the free courses that are also extremely good; the ones from SAP and IBM stand out.\nAll this reaffirmed something I’ve always believed in: you have to get your hands dirty. Theoretical concepts are fine, but they pale in comparison to what you learn when you’re in the trenches.\nI have created many custom GPTs that I find very useful in different contexts, use Perplexity for a lot of my searches, and enjoy messing around in Midjourney sprucing up my own photos, especially those from my old negative archives from the early 80s.\nThe ethical minefield: who owns the output?\nGenAI’s creative mimicry does raise serious ethical concerns. Most models are trained on datasets scraped from the internet, often without any legal basis.
This means that:\nArtists may find their unique styles replicated by GenAI without credit or compensation.\nProprietary content from one company could unintentionally influence outputs delivered to competitors.\nSo who owns AI-generated content?\nThe developers who built the AI?\nThe user who entered the prompt?\nThe creators whose work unknowingly influenced the output?\nThis lack of clarity leaves companies exposed to risks like copyright infringement and reputational damage.\nBut, as I discovered on the AI for Marketers course, many agencies in the UK have created their own GPTs, fed with their own IP, that they use as knowledge bases. This means they are less exposed to knowledge drain, and can bring new employees up to speed more easily and quickly.\nCreativity versus automation\nSupporters of GenAI argue that it democratises creativity, empowering those without formal design or writing skills to express themselves. It also saves time, automating repetitive tasks so creators can focus on higher-value work. But what about the downsides?\nDevaluation of human creativity: if GenAI becomes the default tool for creative work, will it erode the need for human ingenuity and will our imaginations diminish?\nOver-reliance on AI: companies might prioritise speed and cost-efficiency over originality, producing work that feels mechanical and standard.\nWhat companies can do\nFor companies embracing GenAI, ethical and strategic considerations are non-negotiable. Here are a few suggestions to help navigate the blurred boundaries:\nEducate your teams: encourage hands-on learning to understand the strengths and limitations of the tools.
As I mentioned above, courses like AI for Marketers or the AI Academy are a good starting point.\nChoose ethical AI tools: work with tool vendors that prioritise consent in their training datasets to avoid potential legal and ethical pitfalls.\nUpdate your company\u0026rsquo;s internal policies: define clear guidelines for how GenAI should be used within your company, from intellectual property considerations to brand alignment. Then make your policies living and breathing by using one of Purpose and Means\u0026rsquo; Interactive Policy Packs.\nStay updated, not just on the tech, but on laws and regulations: the legal environment around GenAI is also moving quickly. Companies must monitor changes to stay compliant and reduce risk. Consider taking a look at our Horizon Scanning and Analysis offering.\nWhen GenAI works and when it doesn’t\nImagine your marketing team using GenAI to brainstorm ideas for a product launch. The tool generates loads of catchy slogans, providing inspiration and saving hours of work. But what if one of those slogans closely resembles a competitor’s tagline? Or what if it subtly mirrors copyrighted material?\nOf course, these types of issues are not unique to the use of GenAI. I remember working for a global company years ago, top 3 globally in its sector; they re-branded with some smart slogans and were then immediately accused of re-hashing one of their competitors’ slogans from decades earlier!\nThe benefits and opportunities of GenAI are undeniable, but without oversight, the risks can outweigh them.\nWhere do we draw the line?\nAs I’ve learned through my hands-on journey, the best approach to GenAI is one that balances opportunity with responsibility. Theoretical debates are useful, but the real insights come when you actively engage with the tools, experiment, and uncover both their potential and their pitfalls.\nGenerative AI doesn’t think, it predicts. It doesn’t innovate, it mimics.
Whether it enhances creativity or steals it depends on how we use it, what boundaries we set, and how much oversight we apply.\n","date":"20 January 2025","permalink":"/blurred-boundaries-is-genai-enhancing-creativity-or-stealing-ideas/","section":"Blog","summary":"","title":"Blurred boundaries: is GenAI enhancing creativity or stealing ideas?"},{"content":"Most companies treat data protection like plugging leaks in a sinking ship. Breach here? Patch it. New regulation? Scramble to comply. New threat? Panic and buy a solution. This reactive approach is exhausting, and worse, it’s unsustainable.\n**Data protection is a huge task**, requiring multiple competences dealing with daily challenges, as well as trying to figure out what\u0026rsquo;s coming. Horizon scanning - predicting and preparing for what’s next - is a great strategy tool for companies stuck in firefighting mode. And yet, most companies completely ignore it because the risks you don’t see feel less urgent than the ones staring you in the face.\nThe invisible risks of staying reactive\nWhen you’re always reacting, you’re always on the back foot. You might think you can demonstrate compliance today, but what about when:\nA new law comes into force and you had no clue about it?\nA competitor launches a new data protection strategy, leaving yours looking outdated?\nA novel threat exploits your lack of foresight?\nThe problem with short-term thinking\nIn many companies, leaders and their teams are rewarded for solving immediate problems, not anticipating future challenges. This means companies end up spending more time and resources on damage control than they would on proactive planning.\nFor example, are you aware of the following (and I appreciate everybody\u0026rsquo;s context differs):\nQuantum computing is on the horizon, and when it arrives, today’s encryption methods could be rendered obsolete.
Is your company thinking about preparing for this, or are you betting the quantum wave won’t hit you?\nAI-driven attacks are evolving faster than most defences. Are you looking into adaptive security strategies, or will you wait until an AI-powered breach costs you millions of Euros?\nConsumers are becoming more and more aware of their choices and rights. Privacy-conscious consumers are demanding more transparency. Are you revamping your UI/UX to meet their expectations, or will you only react when they abandon your brand?\nHorizon scanning as a competitive edge\nProactive companies treat data protection as more than just compliance. They use horizon scanning to stay ahead of the curve, turning risk management into a strategic advantage.\nImagine being the company that:\nAnticipates how new regulations like AI governance laws will reshape your industry, and aligns early while competitors are caught off guard.\nIdentifies emerging tech that could optimise compliance while cutting operational costs.\nSpots cultural shifts, such as rising consumer demands for eco-friendly data processing practices, and adapts before it’s required.\nHorizon scanning isn’t about predicting the future perfectly, it’s about being ready for what’s probable.\nWhat most leaders overlook\nSome may think this thought is too provocative: failing to invest in horizon scanning isn’t just risky, it’s negligent.\nData protection isn’t just an issue for your legal and tech teams, it connects to your brand reputation and customer trust. Yet, too many leaders see horizon scanning as a luxury, something to do \u0026lsquo;if we have time\u0026rsquo; - a nice-to-have.\nUnfortunately, you don’t have time.
Threats emerge faster than your policies, and once you’re reacting, you’re already losing.\nThe cost of staying blind\nWithout horizon scanning, companies are setting themselves up for:\nUnnecessary spending: running around addressing issues after they arise is always more expensive.\nLost trust: customers aren\u0026rsquo;t stupid; they notice when you have sloppy practices, or when you’re late to comply with laws and regulations.\nStrategic irrelevance: competitors who embrace proactive planning will outpace you, leaving you struggling to catch up.\nTo conclude, the risks of tomorrow are already brewing. The question is whether you’ll see them coming.\nIf your approach to data protection feels like a never-ending game of catch-up, it’s time to make some changes, and horizon scanning is your chance to go from being reactive to proactive.\nHow Purpose and Means helps you see around corners\nAt Purpose and Means, we focus on helping companies escape the tiresome cycle of firefighting. Our horizon scanning service for data protection and digital leaders is designed to:\nSpot emerging risks: from upcoming laws and regulations to emerging cyber threats, we help you identify the risks your company will face tomorrow.\nUncover opportunities: not all change is bad, new tech and trends can positively change your approach to data protection. We help you exploit these opportunities.\nBespoke strategies: every business is unique.
We create actionable insights that align with your company’s goals.\nFor more information, take a look at our Horizon Scanning page where you can read about our approach and a couple of examples of radars.\nAlternatively, feel free to book a no-obligation call to discuss your requirements.\n","date":"14 January 2025","permalink":"/horizon-scanning-for-data-protection-are-you-future-proofing-or-just-firefighting/","section":"Blog","summary":"","title":"Horizon scanning for data protection: are you future-proofing or just firefighting?"},{"content":"Data protection and environmental sustainability are often seen as separate priorities, but they’re deeply connected, and the social component of ESG makes the relationship even more complex. The data we collect or generate, store, use, secure (and eventually delete) doesn’t just have an environmental impact, it also affects equity, access, and fairness.\nEvery email, photo, or click generates data, and storing it has a cost. Data centres require huge amounts of energy to operate. They run 24/7, consuming electricity and generating heat that demands additional cooling.\nData centres contribute approximately 2.5% to 3.7% of global greenhouse gas emissions, a figure that rivals the aviation industry. A 2023 report from Loughborough University estimated that by 2025, global data would exceed 180 zettabytes. Zettabyte - that was a new term for me, and now that 2025 is here, I wonder how close their estimate was (feel free to mail me if you have the latest figures).\nThe social angle: who pays the price?\nThe social implications of data protection and sustainability are less visible but just as critical. Communities near data centres often bear the brunt of their environmental impact. Water-intensive cooling systems strain local resources, and the reliance on non-renewable energy sources contributes to pollution in nearby areas.\nDigital equity becomes a critical issue.
Regions with lower technological infrastructure are disproportionately excluded from advancements in smart solutions. While richer organisations adopt eco-friendly practices and sophisticated measures, smaller businesses and underserved regions struggle to keep up, widening the global digital divide.\nThe tension between data protection and sustainability also highlights ethical questions. Are we designing systems that prioritise everyone’s well-being, or just those with the deep pockets to navigate and address complex regulations and environmental challenges?\nThe ESG dilemma: \u0026lsquo;joined-up thinking\u0026rsquo; is key\nCompanies often silo their ESG efforts, tackling \u0026lsquo;E,\u0026rsquo; \u0026lsquo;S,\u0026rsquo; and \u0026lsquo;G\u0026rsquo; independently. But real impact requires integrated thinking. Decisions about data protection can’t ignore their environmental costs, and sustainability strategies must address the social and ethical implications of digital transformation.\nFor example, \u0026lsquo;dark data\u0026rsquo; (data collected but never used) contributes significantly to the environmental burden. Identifying and eliminating dark data isn’t just a sustainability move, it’s a governance action that directly supports data protection requirements. Also, offering transparent reporting on both data usage and carbon emissions builds trust with stakeholders and reinforces the \u0026lsquo;S\u0026rsquo; and \u0026lsquo;G\u0026rsquo; pillars.\nAnd we can\u0026rsquo;t ignore recent political shifts, such as Donald Trump’s election victory. This has triggered an anti-ESG movement, leading some companies to deprioritise their ESG commitments. But with regulations (e.g. the EU\u0026rsquo;s Corporate Sustainability Reporting Directive (CSRD)) tightening and stakeholder expectations evolving, downscaling ESG is a short-sighted strategy.
The integration of data protection and sustainability within ESG frameworks isn’t just a moral imperative, it’s becoming a business necessity.\nFortunately, there are some positive examples of companies that are demonstrating their commitment to ESG goals:\n**Ecosia** - this eco-friendly search engine directs ad revenue to planting trees. Ecosia also runs on 100% renewable energy and doesn’t track user data, highlighting that data protection and sustainability can go hand in hand.\n**Posteo** - this Berlin-based email provider prioritises sustainability by using genuine green energy from Green Planet Energy, efficient hardware, and recycled paper, avoiding misleading \u0026lsquo;certified energy\u0026rsquo; claims.\n**Recycled Cloud** - this Swiss-based company (part of e-Durable) combines eco-friendly cloud services, delivered on repurposed servers, with strict data protection practices.\nHow Purpose and Means can help\nWe help businesses align their data protection efforts with their ESG goals using structured frameworks. Here\u0026rsquo;s one approach:\nThis approach ensures that data protection isn’t treated in isolation. It becomes an integral part of your ESG strategy, addressing environmental, social, and governance dimensions in a cohesive way.\nWith our framework, companies can integrate data protection into their ESG goals, creating a sustainable and socially responsible future.\nWant to know more, or see example ESG-data protection matrices? Feel free to book a no-obligation call.\n","date":"13 January 2025","permalink":"/dark-data-is-wasting-resources-and-your-esg-score-isnt-looking-good-either/","section":"Blog","summary":"","title":"Dark data is wasting resources and your ESG score isn’t looking good either"},{"content":"Static AI policies are dead ducks. They sit unused while AI evolves and creates risks in real time. 
Living and breathing governance, supported by our Interactive Policy Packs, can transform AI policies into actionable tools that truly guide ethical decision-making.\nIn some companies, AI governance is meaningless.\nNot because policies aren’t written, but because the way they\u0026rsquo;ve been written means they\u0026rsquo;ll sit untouched on shared drives, dusty PDFs no one reads.\nAbstract, generic frameworks nobody understands, not harmonised with related internal frameworks, so there\u0026rsquo;s inconsistent terminology, overlaps and conflicting statements.\nPeople sit on Governance Boards with no terms of reference. They might think it\u0026rsquo;s a great career move to be on the board, but they\u0026rsquo;re often struggling to connect the dots.\nYet, AI is making decisions that could cost your business millions, not just in regulatory fines but in reputational damage, ethical failures, and erosion of trust. How can static documents and fluffy frameworks govern something as dynamic, unpredictable, and high-stakes as AI?\nIf your AI governance isn’t living and breathing, actively shaping decisions and guiding behaviour, it’s just window dressing.\nAt Purpose and Means, we do not write policies for clients. We recommend that this be an internal task so you can take responsibility for it.\nWe are certainly not experts in AI, but we do have a knack for bringing frameworks alive through close collaboration with you and your colleagues who provide the business context.\nWe provide services that help businesses create effective governance frameworks that don’t just exist, they work. 
Our Interactive Policy Packs transform passive documents into tools for real engagement:\nDynamic visual explainers simplify complex rules.\nShort videos capture attention and make policies accessible.\nFAQs, quizzes, and dilemma scenarios make policies relevant and memorable.\nInteractive learning ensures employees and management alike understand the implications of AI decisions.\nWhy static governance fails #The problem with traditional governance is that it assumes policies are enough. They’re not. AI systems evolve. Context shifts. Employees come and go. A static document doesn’t adapt to these realities.\nAI governance often fails because it’s reactive, not proactive. It addresses problems after they occur, instead of preparing employees and leaders to prevent them in the first place.\nThe controversial truth #Most AI governance is written to tick a compliance box, not to guide real-world decisions. Businesses roll out AI systems to cut costs and gain efficiencies, but they don’t invest the same energy in ensuring those systems operate responsibly.\nAI doesn’t make ethical decisions. People do. If employees don’t understand the risks or the rules, how can they challenge unethical outputs, spot bias, or escalate concerns? They can’t.\nStatic policies give businesses plausible deniability when things go wrong. I recall a conversation I had with a lawyer last year who had recommended disciplining an employee for a relatively minor mistake: “It clearly states in the policy\u0026hellip;bla bla bla.” But that’s not governance. It’s abdication.\nGovernance that breathes life into AI decisions #Effective governance is active, not passive. It evolves alongside the technology and the people using it. 
It engages employees at every level, equipping them with the skills and knowledge to act.\nAt Purpose and Means, we focus on turning governance into action so that:\nEmployees spot risks before they escalate\nManagers make informed decisions\nAI operates ethically, with accountability \u0026lsquo;baked in\u0026rsquo;\nWe help you embed governance into the fabric of your company through visuals, scenarios, and real-world applications that make your policies practical and relevant. The result? A governance framework that actually works, not one that collects digital dust.\nThe stakes are too high to settle for governance in name only (GINO). If your policies don’t engage, educate, and evolve, they’re not governing anything.\nSo, is your AI governance alive, or already dead?\nWant to know more, or see a demo of an Interactive Policy Pack? Feel free to book a no-obligation call.\n","date":"12 January 2025","permalink":"/ai-governance-needs-to-be-living-and-breathing-not-a-dead-duck/","section":"Blog","summary":"","title":"AI Governance needs to be living and breathing, not a dead duck"},{"content":"Balancing AI progress with accountability, this post explores the risks of over-reliance and the need for strong governance to maintain control.\nAI is certainly changing how we work, and whether we like it or not, how we live. From robots operating construction equipment to algorithms optimising workflows, machines are taking on tasks once thought to require uniquely human skills. At first glance, this looks like undeniable progress. But is it really progress if over-reliance on these tools starts eroding our abilities and creating unforeseen consequences?\nThe issue with over-reliance isn’t just about convenience. It’s about what happens when we outsource too much - skills, judgment, and even responsibility. 
Consider these commonplace examples:\nMental arithmetic: once a dreaded subject at school (at least for me), mental arithmetic has largely been replaced by calculators. While calculators are indispensable for complex computations, their overuse has undermined confidence in basic mathematics. Many of us now second-guess simple sums without a device in hand. What happens if the tool isn’t available when we need it?\nNavigation apps: GPS has made getting around effortless, but at a cost. Before apps, we memorised routes, used landmarks, and developed spatial awareness. Now, even familiar routes feel daunting without step-by-step guidance. Over time, we lose our ability to navigate, becoming overly dependent on technology to tell us where to go.\nActivity trackers: fitness apps provide detailed insights into our physical activity, but they’ve shifted how we gauge effort. Runners used to listen to their bodies, relying on intuition to measure progress. I use a sleep tracking app. If the device says my sleep wasn’t \u0026lsquo;optimal,\u0026rsquo; I\u0026rsquo;m inclined to believe it - even if I feel great. The reverse is also true - I may have had a terrible night\u0026rsquo;s sleep, but the app tells me I can push myself harder! This undermines trust in our instincts and our bodies.\nThese tools, while valuable, do plant seeds of doubt. Over time, confidence erodes, and dependence grows. What’s worse, over-reliance often creates blind spots that make us vulnerable when things go wrong.\nWith AI, the stakes are even higher. Machines are taking on tasks like decision-making, problem-solving, and predictive analysis. But what happens when AI makes mistakes or fails entirely? If we’ve handed over too much control, do we still have the skills and judgment to intervene?\nThe consequences of over-reliance can be significant:\nLoss of critical skills: over-reliance on AI can lead to skill degradation. 
Once humans no longer practise or develop essential abilities, reacquiring them becomes much harder. Imagine trying to navigate without GPS after years of letting it do the thinking for you.\nReduced accountability: when decisions are outsourced to algorithms, accountability becomes murky. Who takes responsibility when an AI system makes a flawed decision - its developers, the user, or the machine itself?\nSystemic vulnerability: over-reliance creates single points of failure. If an AI system crashes or produces incorrect outputs, businesses and individuals alike may find themselves unprepared to adapt or recover quickly.\nErosion of trust: blind reliance on AI can lead to disillusionment when systems fail. Once trust in a technology is broken, rebuilding it becomes a significant challenge.\nEthical blind spots: AI doesn’t understand context, ethics, or nuance. Over-relying on it can lead to decisions that ignore important human considerations, from fairness to cultural sensitivity.\nProper governance is key to mitigating these risks. Businesses need people with the right skills, technical expertise, and ethical grounding to oversee AI deployment and ensure it’s used responsibly. Strong governance frameworks must answer questions like:\nAre we using AI to enhance human ability, or to replace it entirely?\nWhat safeguards are in place if systems fail?\nDo employees have the skills to step in when the technology falters?\nAnd context is critical, too. AI is invaluable in some situations, like improving safety in hazardous work environments or analysing vast datasets for medical diagnoses. But in other scenarios, it risks replacing judgment, intuition, and adaptability - qualities that only humans bring to the table.\nThe consequences of over-reliance aren’t just theoretical - they’re already playing out in everyday life. Progress isn’t inherently good if it comes at the cost of our capabilities. 
The real challenge lies in finding the balance: leveraging AI’s potential while ensuring we maintain control, accountability, and the skills that define us.\n","date":"9 January 2025","permalink":"/over-reliance-when-tools-stop-helping-and-self-doubt-creeps-in/","section":"Blog","summary":"","title":"Over-reliance: when tools stop helping and self-doubt creeps in"},{"content":"Lots of talk about ‘governance’ these days in various technology contexts, especially around AI. Lots of misunderstandings too.\nAnd is it just me, or is GRC coming back into vogue? About 10 years ago, Gartner said GRC was dead! Yet now I see many companies building up GRC functions and procuring GRC tools.\nI have a strong background in GRC stemming from the WorldCom and Enron scandals, and was first exposed to the concepts when integrating financial business processes and systems into an IBM-acquired company 20 years ago, followed by managing a compliance project at Carlsberg Group addressing the so-called ‘EuroSox’ EU directives. And then more GRC-related projects and programmes followed and I\u0026rsquo;ve never looked back.\nBack to the governance v management conundrum.\nMany of us are familiar with the rapidly evolving landscape of technology, especially in fields such as AI and data protection, so understanding the distinction between ‘governance’ and ‘management’ is critical for legal, AI, information, technology and data protection professionals (to name a few) to ensure effective oversight and operational success.\nWhile both governance and management are leadership roles, they each have their own unique responsibilities and functions.\nGovernance: big picture stuff #Governance is all about the big picture and long-term goals. This is the job of the board of directors. 
They focus on making sure everything the company does aligns with its mission and long-term objectives. Here are some key points about governance:\nEvaluating stakeholder needs: making sure the needs, conditions, and options of stakeholders are well understood to set balanced and agreed-upon goals.\nSetting strategic direction: deciding the direction of the company through prioritisation and decision-making.\nMonitoring performance and compliance: keeping an eye on how things are going compared to the agreed goals to ensure everything is on track.\nThe Board constantly asks whether the organisation is working towards its mission, having a positive impact, and being sustainable financially and operationally. They also decide the company’s risk appetite, set up accountability frameworks, and establish policies and procedures.\nManagement: getting things done #Management is about day-to-day operations and putting the strategic direction into action. Managers are the go-betweens for the board and employees, translating high-level plans into actionable goals. Here’s what management does:\nCommunicating expectations: making sure everyone knows the mission, strategy, and policies.\nManaging operations: planning, building, running, and monitoring activities to meet the company’s goals.\nReporting results: keeping the Board updated on progress and outcomes.\nKey differences #Focus: governance is strategic, looking at long-term objectives and overall direction. Management is tactical, focusing on daily operations and implementation.\nResponsibilities: governance sets the strategy and monitors compliance. 
Management plans and executes operations to meet those strategic goals.\nAccountability: the board is accountable for ensuring the organisation sticks to its mission and long-term goals, while management is responsible for achieving these goals through effective operations.\n","date":"31 July 2024","permalink":"/understanding-governance-versus-management-of-technology/","section":"Blog","summary":"","title":"Understanding Governance versus Management of Technology"},{"content":"If you\u0026rsquo;re striving for excellence in your data protection work, then don\u0026rsquo;t accept mediocrity. This is especially true for employee engagement activities where box-ticking often prevails. Data protection is about people and as a leader you need to build passion among your employees so they have the right mindset, knowledge and skills to live up to everything written in your policies. Generic eLearning is not effective. You need to meet employees in their daily context outlining actionable steps.\nIn DataGuidance\u0026rsquo;s latest issue of Data Protection Leader - published yesterday - I offer some from-the-heart tips to avoid trivialising employee engagement. You can find my contribution on pages 20-23. Download the issue from the resources section on DataGuidance\u0026rsquo;s website.\nMany thanks to Alexis Kateifides for inviting me to contribute.\nIn the past few years, Purpose and Means has continually supported global data protection leaders with business-aligned data protection strategies and highly contextual employee engagement programmes. 
Get in contact if you wish to raise the bar in your data protection or digital compliance work.\n","date":"9 February 2024","permalink":"/if-youre-striving-for-excellence-in-your-data-protection-work-then-dont-accept-mediocrity/","section":"Blog","summary":"","title":"If you're striving for excellence in your data protection work then don't accept mediocrity."},{"content":"Happy Data Protection Day!\nIf you\u0026rsquo;re a data protection leader and only just realised what today\u0026rsquo;s date is, and you did not organise any events in your company last week to mark the day, and nothing is arranged for the coming week, this graphic - or plan B event kit - is for you. Feel free to download it and make your own little event on your desktop with the various artifacts needed to make your own celebrations go with a swing, and it won\u0026rsquo;t involve taking funds from your budget.\nIn all seriousness, you could cut all the things out and arrange them as a \u0026lsquo;Data Protection Day Installation\u0026rsquo; in the canteen, or in your company\u0026rsquo;s reception area. Or why not move it round different parts of your offices each day this week?\nIt\u0026rsquo;s all about getting attention for your work, and the installation will get attention, I guarantee you.\n","date":"28 January 2024","permalink":"/happy-data-protection-day/","section":"Blog","summary":"","title":"Happy Data Protection Day!"},{"content":"Purpose and Means is here to help and make you and your data protection team shine. 
We have tons of content and shed loads of ideas. Our schedule is filling up but we still have some gaps:\n22-26 Jan: pre-recorded content only\n29 Jan-2 Feb: availability on 29, 31 Jan and 2 Feb for live delivery, otherwise pre-recorded content.\n","date":"5 January 2024","permalink":"/data-protection-dayweekmonth-2024/","section":"Blog","summary":"","title":"Data protection day/week/month 2024."},{"content":"The converging forces of huge AI growth and the coming EU Digital Product Passport are transforming a potential 5-million-ton e-waste crisis into a strategic opportunity, where precise data measurement turns end-of-life hardware into a valuable \u0026ldquo;second-life\u0026rdquo; asset.\nIn my last post of 2025, I discussed Convergence - in this context, the idea that topics often dealt with in isolation are best understood and addressed together.\nA call I had last week with connections in the UK prompted me to revisit my Fibres trend radar, where you can join the dots and see that the huge growth of AI is colliding with the circular economy, and the fallout is significant. A 2024 signal from MIT Technology Review warns that GenAI e-waste could reach 5 million metric tons by 2030.\nTypically, we frame this as an environmental issue - in fact, I plotted it that way on my radar. But a deeper look suggests it is actually a \u0026ldquo;lifecycle data\u0026rdquo; problem, and what\u0026rsquo;s coming from the EU is about to change how companies procure hardware.\nThe EU Digital Product Passport (DPP) is forcing a convergence between physical chips and digital accountability. The DPP is a digital record mandated by the EU\u0026rsquo;s Ecodesign for Sustainable Products Regulation (ESPR) that will provide sustainability, circularity, and compliance data for physical products sold in the EU. 
It will become mandatory from the beginning of 2027 for priority product groups, with a full rollout expected by 2030.\nThe DPP will be just like our own passports when we travel - creating a history of our movements, but in this case the DPP travels with every server rack and GPU throughout its life. It will record origin, material composition, repair history, and carbon footprint, and I think it\u0026rsquo;s going to have a huge impact on companies needing to demonstrate accountability. They won\u0026rsquo;t be able to just dispose of old hardware when it reaches end-of-life. The Passport will create a permanent, auditable chain of custody. By 2030, if your hardware doesn\u0026rsquo;t have a clean digital record of where it\u0026rsquo;s been, it potentially becomes a *toxic* liability on your company\u0026rsquo;s balance sheet. This image is GenAI\u0026rsquo;s impression of how the DPP works for a blade server:\nHow will companies survive in a world where every piece of hardware waste will be tracked? Well, they might want to consider not creating the waste in the first place!\nThere\u0026rsquo;s been quite a bit of coverage (such as this article from last month) saying that 3-year-old AI chips are obsolete. Hyperscalers refresh their hardware every 3 years in order to train the next model. And in this respect, there\u0026rsquo;s a technology trend on the radar that reveals that a GPU that is too slow for training, say, GPT-5, is still powerful enough for inferencing (running) enterprise apps.\nThe EU\u0026rsquo;s DPP incentivises companies to recognise and exploit this compute power, so instead of disposing of the asset (and detailing the waste disposal in the Passport), there are opportunities to resell it to the mid-market.\nI mentioned above that e-waste is not just an environmental issue; companies also need to be mindful of data obligations because the DPP requires granular hardware lifecycle data. 
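To make that concrete, here is a toy sketch of the kind of lifecycle arithmetic a Passport entry would rest on. Every figure and name below is hypothetical, invented purely for illustration - none of it comes from the DPP regulation itself:

```python
# Toy lifecycle arithmetic for a refurbished server rack.
# All figures are hypothetical placeholders, not real DPP data.

def embodied_carbon_saved(new_kg: float, refurb_kg: float) -> float:
    # Carbon avoided by refurbishing a rack instead of manufacturing a new one
    return new_kg - refurb_kg

def material_recovery_rate(recovered_kg: float, total_kg: float) -> float:
    # Share of a device's mass recovered for reuse or recycling
    return recovered_kg / total_kg

saved = embodied_carbon_saved(new_kg=1200.0, refurb_kg=150.0)
rate = material_recovery_rate(recovered_kg=38.0, total_kg=40.0)
print(f'Embodied carbon saved: {saved:.0f} kg CO2e')   # 1050 kg CO2e
print(f'Material recovery rate: {rate:.0%}')           # 95%
```

The arithmetic itself is trivial; the point is that the DPP forces the inputs to be measured and recorded per device, rather than estimated at fleet level.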
You can\u0026rsquo;t just claim you are sustainable - the Passport requires you to prove it.\nHow much embodied carbon was saved by refurbishing this rack?\nWhat is the exact material recovery rate?\nThe call I had last week with Ren and his team at InformU was an eye-opener, because I got the impression that they were making headway in tackling this very challenge. They have an ESG tool capable of quantifying these carbon impacts, and it\u0026rsquo;s not just another \u0026ldquo;nice-to-have\u0026rdquo; reporting utility. It can generate the data that allows the Passport to be stamped \u0026ldquo;verified.\u0026rdquo; It\u0026rsquo;s no longer about talking a good ESG game with nothing to back it up.\nThere are some friction points though, in particular data security. It\u0026rsquo;s common these days for CISOs to stipulate physically shredding functional drives because of concerns around wiping, but you can’t verify the circularity of a shredded drive in a Digital Passport. This \u0026ldquo;shred-first\u0026rdquo; mentality is an issue because in order to make the Passport work, the industry must converge on certified data sanitisation, i.e. proving that a device is secure and clean so it can be re-used. And in case you want to know the right terms to use, especially if you use deletion and erasure interchangeably, take a look at this infographic I made last year:\n","date":"28 October 1106","permalink":"/convergence-in-action-esg-data-protection-and-ai/","section":"Blog","summary":"","title":"Convergence in action: ESG, data protection and AI"}]