Signpost Six Blog

Monthly insider risk news recap – March 2026

Written by Signpost Six | Apr 8, 2026 11:42:59 AM

Welcome to our Monthly Insider Risk Recap – your briefing on the most significant insider risk cases incidents from the past month.

In this edition, we examine cases spanning government data access, contractor extortion, AI hardware smuggling, autonomous AI misbehaviour, and a criminal investigation touching a national prosecution service. Each incident offers practical lessons for organisations navigating today's evolving risk landscape.

In the coming pages we will break down what happened, highlight the underlying patterns and tactics, and outline what these developments could mean for organisational governance, access controls, and insider risk strategy. Let's get into it!

DOGE Social Security data

On March 10th, The Washington Post first reported that a whistleblower from the US Department of Government Efficiency (DOGE) alleged a former colleague downloaded Social Security data with the intention of sharing it with their new employer in the private sector. Allegedly, a former software engineer, Mr. John Solly, who worked at DOGE until October 2025, had downloaded records of over 500 million dead and living US citizens on a thumb drive before switching jobs to go work for a government contractor, Leidos. The highly restricted data belonged to two databases, called “Numident” and “Master Death File”, and included social security numbers, dates and places of birth, citizenship, race and ethnicity, and parents’ names, according to reports.

The whistleblower did not provide a timeline as to when events unfolded. However, the Washington Post reports that after leaving the job, Mr. Solly allegedly told colleagues about having downloaded the two databases on a thumb drive and having kept his laptop and credentials. Later on in January 2026, according to the whistleblower, Mr. Solly had allegedly been telling former colleagues about his intentions – and asking for help to carry them out – saying he would “sanitise” the data before using it at the new job. Reportedly, with the credentials he retained the engineer also had “God-level” access privileges, which no other employee had in the Social Security administration (SSA), as the agency was set up to enable limited access to sensitive data to prevent leaks. What changed?

First, privileged access policies did. When set up, the Supreme Court granted unlimited access to DOGE employees since last summer. In addition, around August 2025, Mr. Chuck Borges, SSA’s chief data officer, filed another complaint accusing DOGE of uploading data to a cloud server, removing several security measures in the process. Secondly, it's possible that the agency failed to properly offboard Mr. Solly, by deactivating his credentials, and taking any devices back following the departure. In addition, DOGE may have failed to implement efficient reporting channels, or at least to ensure their proper functioning, as the whistleblower allegedly reported the matter internally in January and no follow-up action was taken. Soon after, The Washington post reached out to both the SSA internal watchdog agency and Leidos, but both denied having heard of the incident and later claimed to have not found evidence of any wrongdoing following an internal investigation.

There are several elements that could have been improved upon to mitigate the risks stemming from such a case. First, broad access privileges and vague oversight constitute a prime environment for any type of insider: last summer, the two databases were moved on a cloud platform lacking appropriate security controls, granting widespread access to in the process. Second, proper off-boarding measures and post-departure due diligence is fundamental, as without privileged access there would have been far less potential to cause harm. It is worth noting that in February 2025, Leidos had lost a contract established in 2023 with the SSA still worth over $230 million as part of DOGE’s cost cutting strategy. Solly went to work for Leidos after his departure from DOGE. Last, proper reporting mechanisms and follow-up procedures are essential to have in place. According to Borges’ August complaint, Solly was among the employees who requested that the databases be moved to the cloud and there are no reports of actions being taken as a result.

Insider attack nets $2,5M ransom in cyber extortion scheme

On March 19th, a former contractor of a DC-based international technology company was convicted for transmitting or willfully causing interstate communications with the intent to extort a victim company. Mr. Cameron Nicholas Curry was found guilty of stealing a plethora of data, inclusive of employee and compensation information, and used it to extort his employer, ultimately tallying a ransom of $2,5 million.

Mr. Curry leveraged his privileged access over the 2023 6-month period he was contracted to work for the company as a data analyst to remove corporate data. According to reports, he began preparing for the extraction scheme following notice that his contract would not be renewed. After his last day of employment, he began sending emails to over 60 employees threatening to disclose highly personal payroll data, framing the act as an effort to enforce salary transparency across the company. Curry claimed there was significant pay inequity amongst colleagues. In the emails to executives, he also threatened to disclose to employees how to pursue legal action to address pay discrimination. He also threatened to disclose the data breach to authorities, as companies are required to report such incidents as quickly as possible. The company notified the FBI on December 14th, 2023, and paid the ransom in January 2024.

The Curry case highlights the critical period between termination notice and an individual’s exit. When a contractor’s access remains unrestricted and unmonitored following significant employment changes, the organisation becomes vulnerable to a disgruntled insider. In this instance, the transition from data analyst with privileged access to extortionist, combined privileged access with perceived moral grievance regarding pay equity. This case serves as a strong reminder why offboarding is not simply an administrative task and should be approached as a high-priority security operation. Failure to link Human Resources to technical restrictions and monitoring proportionate to the assessed risk level can result in significant damage to an organisation.

Three Chinese nationals charged of export violation

On March 19th, the US DoJ unsealed an indictment charging three Chinese nationals – Yih-Shyan “Wally” Liaw, Ruei-Tsang “Steven” Chang, and Ting-Wei “Willy” Sun – of conspiring to evade US export laws to divert high-performance AI technology to China.

Mr. Liaw is a co-founder, board member and senior VP of a publicly traded US-based manufacturer of computer servers for AI cloud computing applications and GPUs called Super Micro. Mr. Chang is a general manager of the same company in their Taiwan office. Mr. Sun is a third-party broker.

The FBI investigation reports that the scheme worked as follows. Liaw and Chang’s company worked with third-party brokers to direct a Southeast Asia company (dubbed as Company-1) to buy servers from them. The servers were assembled in Super Micro’s US facilities and sent to their Taiwan facilities. From there, the shipment was then redirected elsewhere in Asia, as per normal procedure. However, in this case, the defendants were in collusion with Company-1, who used a third-party logistics company to repackage and conceal the servers in unmarked boxes prior to their final shipment to China. In preparation for the scheme, the defendants forged documents and records to pretend Company-1 was the final recipient of the servers. Other concealment measures included creating dummy replicas of the servers to elude Super Micro’s company compliance team and warehouse inspections.

This case illustrates how the infiltrated insider can act at the highest level of an organisation. In this case, high level executives and third-party brokers were highjacked by external threat actors. External threat actors – such as nation states – can intentionally subvert organisations, including high-performance technological companies. By leveraging their institutional authority, they could forge documents and subvert the broader supply chain to achieve their objectives. This case serves as a definitive reminder that insider risk can manifest as a co-ordinated effort to weaponise an organisations own infrastructure and supply chains to facilitate the geopolitical or criminal objectives of an external party.

AI: Artificial Insiders

In early March LLM researchers reported that an artificial intelligence agent, ROME, had autonomously began hijacking computing power to mine cryptocurrency. Developed by Alibaba research teams, it allegedly went rogue during routine training, bypassing firewalls without permission to divert computing power towards cryptocurrency mining during a reinforcement learning (RL) session. RL operates as a machine learning method based on rewarding target behaviours and punishing adverse one. This approach ultimately teaches the agent to act towards maximising performance reward signals over time. Consequently, it had concluded that the most efficient way to do so was through more computing power and capital, which it went on to acquire through the most efficient path – without human input.

This is not the first instance of such an incident where AI agents had their own objectives. In 2024, Air Canada had to reimburse tickets against company policy just because the AI chatbot offered to do so. In 2025, Anthropic disclosed that the latest Claude AI model had resorted to blackmail out of fear of being shut down.

More recently, a couple of weeks after the Alibaba case, Meta also disclosed a large data leak caused, at least in theory, by one of its AI agents. The leak happened following an employee’s request to the internal AI forum for guidance on an engineering problem, to which the AI responded with a solution.

During implementation, a large amount of sensitive company data was exposed to the engineers for two hours.

These incidents underscore a new frontier for the unintentional insider, where the negligent actor, is no longer human, but an autonomous agent. In these cases, the AI operates like a rule-bender by exposing sensitive information not out of malice, but because it is programmed for efficiency over policy constraints. Additionally, employees might fail to predict AI reactions resulting in unintended consequences – like data breaches. The Meta leak shows that even when employees act with the best intentions, and their AI to maximise its performance reward signals, significant vulnerabilities remain. When unintended consequences do occur, effective case management is essential to enable containment and recovery and ensure appropriate risk controls are in place in the future.

Dutch Public Prosecution Service employee arrested in drug investigation

The Public Prosecution Service (Openbaar Ministerie – OM) has reported that on March 27th an employee has been arrested along with her partner amid investigations for trafficking in and possession of narcotics. More details have not been disclosed yet as the investigation is still ongoing.

Authorities have instead let it be known that the employee had been absent from work from quite some time. Whether that absence was self-enacted – as it is often the case with insider risk – or whether the OM was suspicious of misconduct and consequently removed her is unclear. However, the OM has confirmed that she did not have access to OM premises or systems – hence data – during this period.

While this case did not threaten the OM directly, it serves as a reminder of the risks associated with institutional employees’ lifestyle and conduct. While a new employee may pass initial background checks, it is important to periodically conduct re-screenings and establish clear policies for staff in critical positions or with access to sensitive data. Should indicators of concerning behaviours emerge, organisations should also have in place effective mitigation measures to prevent the employee to reach the end of the Critical Pathway to Insider Risk and carry out malicious acts.

Key takeaways and what to watch

What stands out from this edition is the necessity of tailoring insider risk mitigation programs according to the type of threat at hand. However, the cases from last month also highlight the several different faces that insider risk may take. Temporary employees with privileged government access, seniors with the authority to override compliance controls, departing disgruntled contractors, or novel AI systems who decide for themselves all constitute facets of insider risk, albeit each in their own way. As a result, insider risk program cannot be one-size-fits-all – a malicious insider acts profoundly different from an overzealous AI agent. Given the repeated incidents in the last few months, AI in particular is a theme to watch, as the line between an AI assistant and an AI insider may be thinner than we thought.

 

Disclaimer: The cases discussed in this publication are based solely on publicly available information at the time of writing. They are intended for educational and illustrative purposes and should not be interpreted as definitive investigative findings. In some instances, official investigations may still be ongoing, and information may emerge that could alter the understanding of the events described. Signpost Six makes no claims regarding the actions, intentions, or liability of any individuals or organisations mentioned. While every effort has been made to ensure accuracy, Signpost Six accepts no responsibility for any errors, omissions, or misinterpretations arising from the use of publicly sourced information.