Welcome to our Monthly Insider Risk Recap – your briefing on the most significant insider risk incidents from the past month.
In this edition, we examine cases spanning telecoms, local government, national defence, and enterprise AI. From social engineering campaigns that bypass authentication controls, to lost devices containing sensitive employee data, to alleged espionage and AI-driven data exposure, each incident offers practical lessons for organisations navigating today’s evolving risk landscape.
In the coming pages we will break down what happened, highlight the underlying patterns and tactics, and outline what these developments could mean for organisational governance, access controls, and insider risk strategy. Let’s get into it!
On February 12th, telecom provider Odido was the target of a cyberattack aimed at stealing customer data from its systems.
The criminals who stole the data worked in two steps, leveraging the privileged access held by employees who became unintentional insiders. According to NOS, they first obtained the access credentials of customer service representatives via phishing. Then, using these credentials, they called other employees posing as Odido’s own IT department – a technique called voice phishing, or vishing – and persuaded them to approve their access request, thereby bypassing Multi-Factor Authentication (MFA) and gaining access to Odido’s Salesforce platform, the software where customer data was stored. The attackers reportedly exfiltrated data belonging to 6.2 million accounts, carrying out one of the biggest data leaks in Dutch history.
As companies strive to improve their cybersecurity posture by adopting ever safer technical solutions, criminals are shifting their modus operandi to circumvent new developments, focusing on the only elements that cannot be updated with a click – human beings. Companies seeking to reduce data leaks should therefore focus on safeguarding human-based authentication steps through activities such as phishing awareness training. Where preventative measures fail, layered defence mechanisms become essential in containing the threat – for example, access segmentation, so that a single fraudulently accessed account does not grant access to the entire system.
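As an illustration of the access-segmentation principle above, a least-privilege check can be as simple as an explicit allow-list of actions per role, so that a phished helpdesk account still cannot perform high-impact operations. This is a minimal sketch – the role and action names are hypothetical, not taken from any system mentioned in this recap:

```python
# Hypothetical least-privilege allow-list: each role is granted only the
# actions it explicitly needs, and everything else is denied by default.
ROLE_PERMISSIONS = {
    "customer_service": {"customer_profile:read"},
    "it_admin": {"customer_profile:read", "system_config:write"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A phished customer-service account can still read individual profiles...
print(is_allowed("customer_service", "customer_profile:read"))    # True
# ...but cannot bulk-export the whole customer database.
print(is_allowed("customer_service", "customer_db:bulk_export"))  # False
```

The design choice here is deny-by-default: an unknown role or an unlisted action is always refused, which limits the blast radius when any single set of credentials is compromised.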
On January 9th, an employee of the city of San José reportedly lost a USB drive containing sensitive personal information of fellow employees, including their Social Security Numbers. At the time of writing, it is unknown how many employees’ data were compromised. However, local outlet San José Spotlight reports that the device also contained records of former employees, some dating back to 2000, suggesting that the number of affected individuals may significantly exceed the current workforce.
City administrators launched an investigation after being informed of the incident on January 12th, but waited several days before alerting the police. Employees were notified only a month after the fact, according to reports, through letters dated February 9th.
Although it remains unclear whether the USB drive has been retrieved or exploited, the incident raises fundamental governance concerns. Data retention policies should be scrutinised, particularly the continued storage of decades-old employee records containing immutable identifiers such as Social Security numbers. Equally concerning is the delay in notification to both law enforcement and affected individuals. The city has offered one year of complimentary credit monitoring services. However, Social Security numbers do not expire, and the associated identity theft risk persists well beyond a one-year mitigation window. This case underscores the importance of minimising portable media usage, enforcing encryption and endpoint controls, implementing strict data minimisation policies, and ensuring rapid breach notification procedures.
In light of this incident, workers’ unions are calling into question the city government’s ability to handle its basic IT responsibilities and good practices. The city, however, remains determined to embrace new technology and incorporate the use of AI, especially for disaster risk and evacuation management.
On February 3, Polish authorities arrested a long-serving employee of the Polish defence ministry on suspicion of espionage. The man, who had worked within the ministry since the 1990s, reportedly held a mid-level position with access to documents concerning military planning and strategy and is alleged to have been in contact with eastern intelligence services. The suspect had been monitored for several months to build a case against him, and an investigation is currently underway. It is not yet known how long he had been collaborating with the foreign services. This case underscores the importance of thorough periodic re-screening.
Last week, Microsoft acknowledged – for the first time since its initial detection last month – a bug in its AI assistant, Copilot. Reportedly, the company “identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop”, but reassured users that access control and data protection policies remained correctly in place.
Experts highlight that issues like this are likely to become more frequent, given the speed of innovation in AI capabilities and organisations’ lack of tools and know-how to protect themselves from potential errors. Data leakage, whether intentional or not, is deemed almost inevitable, considering the pace at which AI tools are evolving and the natural lag between a bug’s introduction and its detection and correction. In other circumstances it might be enough to switch this type of assistant off and wait for governance to catch up, but with AI this is not possible due to the amount of hype surrounding it, the BBC reports.
Mitigation measures could then include making such tools private-by-default and opt-in only. Microsoft began rolling out fixes for the bug in early February and is currently monitoring deployment, without disclosing how many organisations were affected – so far, it is known that the British NHS was affected but has since resolved the problem.
This case flags an emerging insider-adjacent risk category in the form of AI-mediated data leakage, where information is disclosed not through malicious intent but through system design limitations and contextual overexposure. To mitigate this risk, organisations should implement clear AI usage policies that define what types of data may and may not be entered into AI tools – particularly confidential, regulated, or client-identifiable information. Employee training should move beyond generic awareness and include scenario-based guidance on data hygiene and the risks of oversharing sensitive content. Combined with technical controls – such as private-by-default configurations – structured governance and workforce education are essential to prevent AI from becoming an insider itself.
On February 19th, Oleksandr Didenko was convicted over a years-long laptop farm scheme. Laptop farms are collections of laptops placed in a home or office but operated remotely. This setup allows overseas users to appear as legitimate, locally based employees while concealing their true identity and geographic location. Didenko set the scheme up by paying individuals in Virginia, Tennessee, and California to host laptop farms at their residences. He then created a website where stolen identities of US citizens could be rented or bought by overseas IT workers seeking remote employment. By the time Didenko was arrested, he allegedly managed 871 proxy identities across three different laptop farms, placing fraudulent workers in 40 US companies.
From an insider risk perspective, this case highlights critical weaknesses in pre-employment screening and identity verification controls. Robust background screening – including identity validation, right-to-work verification, cross-checking of employment history, and device and geolocation consistency checks – may have disrupted or detected elements of this operation earlier. Organisations relying heavily on remote hiring models must assume that identity fraud and proxy work arrangements are credible threat vectors. Without layered vetting controls, companies risk onboarding individuals whose true identity, location, affiliations, and intentions remain obscured.
Beyond nefarious intent, applicants may lie on their resumes simply to get the job. While this practice has existed for quite some time, AI technology has enhanced it, creating a costly risk for organisations that believe they are hiring a professional when they are in fact dealing with someone with less experience than required. Read more about it, and how lying on your CV can cost you the position a few years down the line, from Signpost Six’s CEO Dennis Bijker in an interview with the Telegraaf at this link.
It should be noted that parts of the North Korean IT sector have also been involved in the inverse scam. Between 2022 and 2025, under various campaigns, North Korean hackers posed as recruiters and infected victims’ systems with malware using the ClickFix technique. The attackers created several fake websites impersonating finance entities, first establishing contact about a job position and later asking the applicant to carry out a skills assessment, at which point a fake error message would instruct them to copy and paste commands into a command-line window, compromising their devices. Most victims had some association with blockchain technologies and cryptocurrencies.
Given the fluidity and mobility of today’s job market, employees actively seeking new roles outside of the companies they are working for may present elevated insider risk exposure to their current employers. A compromised device used during a job assessment, like the one mentioned above, can provide adversaries with a foothold into corporate systems, particularly where behavioural monitoring and access segmentation are insufficient.
This month’s cases reinforce the notion that insider risk rarely stems from a single point of failure. Insiders, whether intentionally or not, derive their potential for harm from the intersection of trusted access, human factors, and technology – where a gap in one layer becomes a loophole in the whole.
Mitigation therefore requires a structural rather than a reactive response. As trusted access emerges as the common denominator across cases, organisations must move toward continuous verification frameworks, phishing-resistant authentication methods, enforcement of the least-privilege principle, and segmented access architectures that limit the blast radius of malicious intent or simple negligence. Background screening and periodic re-screening should be embedded into workforce lifecycle management, particularly for sensitive roles. Portable media usage should be minimised and encrypted by default, while data retention policies must align with clear business necessity rather than institutional habit. For AI deployments, private-by-default configurations and adequate testing should precede enterprise-wide rollout, with appropriate governance and oversight established first.
We hope you found this recap interesting – if so, stay tuned for the March edition! See you next month!
Disclaimer: The cases discussed in this publication are based solely on publicly available information at the time of writing. They are intended for educational and illustrative purposes and should not be interpreted as definitive investigative findings. In some instances, official investigations may still be ongoing, and information may emerge that could alter the understanding of the events described. Signpost Six makes no claims regarding the actions, intentions, or liability of any individuals or organisations mentioned. While every effort has been made to ensure accuracy, Signpost Six accepts no responsibility for any errors, omissions, or misinterpretations arising from the use of publicly sourced information.