Lab Manager | Run Your Lab Like a Business
[Image: 3D render of raised cubes with binary code and padlocks on them. Credit: iStock, JuSun]

Silicon Pathogens: How Cybercriminals Are Targeting Laboratories

Along with insights from CERN on fending off these attacks

by Holden Galusha

Holden Galusha is the associate editor for Lab Manager. He was a freelance contributing writer for Lab Manager before being invited to join the team full-time.

In November 2023, Idaho National Laboratory (INL) suffered a data breach. Because INL is one of the largest employers in the state, the breach exposed the personal information of thousands of people, some of which was subsequently leaked.

The hacking group SiegedSec, self-described as “just a bunch of cats with internet,” claimed responsibility for the hack. They announced their exploit on X (formerly Twitter), posting screenshots from within INL’s internal systems and ending the post with a link to download a sample of the stolen data. When asked by another user on X why they targeted INL, SiegedSec simply responded, “We wanted to, so we did.” As one might surmise from the crude language, absurd humor, and general air of being “terminally online,” SiegedSec skews young. It’s suspected that its members range from 18 to 26 years old.


Strange as the story is, the INL breach is part of a growing pattern of cyberattacks on universities, research centers, and other knowledge institutions. In early 2023, for instance, Reuters reported that Russian attackers had targeted US nuclear laboratories much like INL. And this past February, Nature reported that the frequency of attacks against knowledge institutions has risen since 2015, including a notable 102 percent jump in distributed denial-of-service (DDoS) attacks in 2021 alone.

Thanks to recent innovations in malware and hacking techniques, the digital arms race is accelerating, and stories like those of INL will only become more common. To keep up, labs must take a holistic approach to security, reevaluating not just technical infrastructure, but culture.

Why are research institutions being targeted?

There are a few key reasons why universities and research organizations make prime targets for attackers. For one, they typically have massive digital surface areas comprising thousands of workstations, IoT devices, and instruments, leaving ample room for vulnerabilities. They also house highly valuable information: intellectual property, ongoing research, cutting-edge technology, and, as was the case with INL, the personal information of thousands of people.

Science is also a collaborative endeavor. Labs across academia and industry alike publish papers, speak at conferences, and help each other solve problems; without this openness, innovation would proceed far more slowly. But different scientific organizations follow different security practices and standards, which makes it difficult to keep confidential information protected throughout collaborative projects. This transparency is vital to scientific progress, yet reconciling it with cybersecurity best practices is an inherent challenge, and in the face of rapid innovation in cybercrime, that reconciliation is imperative.

Innovations in cybercrime

The growing threat of ransomware

Ransomware—a type of malware that, upon infecting a system, encrypts the victim’s files and demands a ransom, usually paid in cryptocurrency, to unlock them—is one of the most common attacks scientific institutions face. As reported in Malwarebytes’ 2024 ThreatDown State of Malware Report, ransomware attacks rose 68 percent in 2023, and the average ransom payment climbed sharply to $740,000—a 126 percent rise from the first quarter of 2023 to the second. Cybercriminals are also evolving the attack itself, “getting scrappier and more sophisticated to target a higher volume of targets at the same time,” the ThreatDown report says. One ransomware group ran a series of brief, automated campaigns, allowing it to extort hundreds of targets at once.

Beyond refining the attack itself, cybercriminals are advancing the business model of ransomware. Increasingly, these incidents originate from Ransomware-as-a-Service (RaaS), a relatively recent innovation that lowers the barrier to entry for carrying out ransomware attacks. The business model is simple:

  1. Attackers (called Initial Access Brokers, or IABs, in this context) gain access to an organization’s network.
  2. The IABs then sell that access to ransomware “affiliates” (those who wish to carry out an attack), who acquire the ransomware itself from vendors such as ALPHV or LockBit.
  3. Armed with the access and ransomware, the affiliates then extort their target, splitting the ransom with the RaaS operators who provided software, support, etc.

RaaS has democratized cybercrime, allowing younger, less experienced threat actors to execute attacks. Hacking has historically been a field of young but precocious people, much like the members of SiegedSec. Now that RaaS has leveled the playing field, more young criminals will surface to capitalize on the opportunity.

RaaS has already made cybercrime more accessible, and another advancement may accelerate the trend further.

The advent of black-hat generative AI

One of the great strengths of generative artificial intelligence (genAI) is that it can expedite software development. Software developers are finding that tools like GitHub’s Copilot allow them to code and debug faster. Malware developers are now experimenting with genAI as well. For instance, in April, PureAI reported that threat researchers had discovered a malicious PowerShell script bearing the telltale signs of AI generation. Ironically, one of those signs was grammatically correct, detailed comments explaining what the code does so that humans could easily follow the logic. With genAI lowering the skill and time required to produce working code, we may see more threat actors using it to develop malware faster and with greater sophistication.

GenAI can also be used to generate malware dynamically, on the fly, on a target’s device, bypassing security measures that would otherwise catch malicious code delivered conventionally. Earlier this year, HYAS Infosec Inc. published a whitepaper illustrating this concept with BlackMamba, an AI-generated keylogger. When the seemingly benign program is executed on the target’s computer, it contacts the OpenAI API and instructs a large language model to write keylogger code. The returned code is then executed by the program, recording the user’s keystrokes. What’s more, all traces of the keylogger are destroyed when the program closes—the malicious code remains “totally in-memory,” never written to long-term storage, making it extraordinarily difficult to detect. HYAS notes that BlackMamba was tested repeatedly against a leading security solution and was never detected.

So, faced with increased attention from cybercriminals and rapid innovations in cybercrime, how can laboratories protect themselves effectively? 

One research institution fights off such attacks every day, so far successfully: the European Organization for Nuclear Research, better known as CERN.

Lessons from CERN

Birthplace of the World Wide Web and home to the Large Hadron Collider (LHC), CERN is the subject of admiration and conspiracy theories alike. CERN hosts more than 17,000 collaborating scientists, as well as a core staff of 2,500 people. This large, ever-changing scientific community requires CERN to operate a Bring Your Own Device (BYOD) policy. “Scientists are used to BYOD, the freedom to use their own computer, free choice of operating system, any appropriate and legal software package, programming language, or IT tool,” says CERN’s chief information security officer (CISO) Stefan Lüders in an interview with Lab Manager. Suffice it to say, CERN has a massive digital surface area for potential attacks, and the prestige to attract attackers.

Yet Lüders and his team have an exceptional track record of fending off attacks. CERN hasn’t seen a newsworthy breach since 2008, when the auxiliary webpage of one LHC experiment was defaced. When asked what makes CERN secure in a time when research centers are under siege, Lüders explains:

“A mélange of many factors: International collaboration also in the realm of ‘security,’ in-depth defense, and early anticipation of potential threats as well as a portion of luck. We prepared CERN very early against the rising threat of ransomware-related breaches and deployed a plethora of overlapping protective means such [that] we can allow one layer to fail (or be incomplete) while the other layers hold firm.”

While most labs don’t have such luxuries as international collaboration to inform their cybersecurity strategy, there is one thing CERN does that any lab can also do: implement overlapping, redundant layers of security in a “defense-in-depth” approach. For example, a lab may mandate two-factor authentication and require users to connect through a virtual private network (VPN) before they can reach the organization’s network. It may also deploy hardware firewalls as the first line of defense, followed by in-depth mail filtering, malware quarantining, and workstation-level anti-malware protection.
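Conceptually, defense-in-depth means access is granted only when every independent layer agrees, so one layer can fail while the others hold firm. The sketch below illustrates the idea; the layer names and checks are hypothetical, not CERN’s actual configuration or any real product’s API.

```python
# Illustrative sketch of defense-in-depth access gating. Each layer is an
# independent check; a request must pass every layer to reach the network.
# All names here are hypothetical placeholders.
from typing import Callable, NamedTuple

class AccessRequest(NamedTuple):
    user: str
    on_vpn: bool            # connected through the VPN?
    passed_2fa: bool        # completed two-factor authentication?
    device_scan_clean: bool # workstation anti-malware scan passed?

# Layers are evaluated independently: a single failing layer blocks access
# even if every other layer would have allowed it.
LAYERS: list[tuple[str, Callable[[AccessRequest], bool]]] = [
    ("vpn", lambda r: r.on_vpn),
    ("two-factor", lambda r: r.passed_2fa),
    ("anti-malware", lambda r: r.device_scan_clean),
]

def grant_access(request: AccessRequest) -> tuple[bool, list[str]]:
    """Return (allowed, names of the layers that failed)."""
    failed = [name for name, check in LAYERS if not check(request)]
    return (not failed, failed)

# A user on the VPN who skipped two-factor is blocked by that one layer,
# regardless of the layers that passed.
allowed, failed_layers = grant_access(
    AccessRequest("bob", on_vpn=True, passed_2fa=False, device_scan_clean=True)
)
```

The key design point is that the layers share no state: a misconfigured firewall rule, for example, does not weaken the authentication checks behind it.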

Lüders also says that his team works to stay current with evolving threats. Over the last few years, it has been improving CERN’s protection against supply chain attacks, in which an attacker injects malicious code into third-party software used by the target—code that then runs whenever the software does. “. . . Supply chain attacks are the new, huge problem to be addressed,” Lüders notes. His concern was validated in March of this year, when XZ Utils, a widely used set of data compression tools for Linux, was found to contain a backdoor, leaving thousands of machines exposed. Fortunately, the malicious code was found before it could be distributed more widely.
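One basic, broadly applicable defense against tampered dependencies is verifying downloaded artifacts against a checksum published by the project before installing them. A minimal sketch using standard `sha256sum` follows; the file names are illustrative placeholders, and in practice the checksum file would be fetched from the project’s release page over a separate, trusted channel rather than created locally.

```shell
# Simulate a downloaded release tarball and its published SHA-256 checksum.
# (Hypothetical file names; normally the .sha256 file comes from the vendor.)
echo "example release contents" > tool-1.0.tar.gz
sha256sum tool-1.0.tar.gz > tool-1.0.tar.gz.sha256

# 'sha256sum -c' recomputes the hash and compares it to the recorded value;
# it exits non-zero if the file has been altered in transit or at rest.
sha256sum -c tool-1.0.tar.gz.sha256 && echo "checksum OK"

# Simulate tampering: any modification changes the hash, so the check fails.
echo "injected code" >> tool-1.0.tar.gz
sha256sum -c tool-1.0.tar.gz.sha256 || echo "checksum MISMATCH: do not install"
```

Checksums only prove integrity, not authorship—if an attacker controls the distribution point, as in the XZ Utils case, they can publish matching checksums—so signature verification (e.g., GPG-signed releases) adds a further layer.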

Technical safeguards can only take you so far; ultimately, computer security is a cultural concern. Many research institutions struggle to reconcile their cultures of transparency and collaboration with the caution that security best practices demand. “CERN is a challenging combination of two worlds: On one hand, CERN is academic. And in a university-like, open, and free environment, imposing cybersecurity from the top down comes with its own challenges . . . On the other hand, CERN is running industrial installations, which are basically [the] commercial devices of the ‘Internet of insecure things.’” These conditions make Lüders’ job a balancing act: he must secure CERN’s internet-connected industrial and computing equipment without stifling the open, academic environment that makes it a premier research institution.

To accomplish this, Lüders says that he distributes responsibility for security to the scientists themselves: “One important step forward is making every user of IT resources (scientists, programmer, engineers, operators) actively aware and responsible for their use of IT. They are the persons responsible for the security of their IT resources.” Lüders’ role is to offer support and guidance. It thus falls to users to mandate security training for their teams, keep their devices updated, develop software securely, and learn to handle data responsibly, among other measures. In this way, Lüders and his team have cultivated a holistic approach in which everyone shares responsibility and security is baked into day-to-day operations. By all indications, it is working very well.

Note: Generative artificial intelligence assisted in the content creation stage of this article, but everything contained within is true and original to the writer.

References:

  1. “Idaho National Laboratory experiences massive data breach; employee information leaked online.” https://www.eastidahonews.com/2023/11/idaho-national-laboratory-experiences-massive-data-breach-employee-information-leaked-online/.
  2. “Exclusive: Russian hackers targeted U.S. nuclear scientists.” https://www.reuters.com/world/europe/russian-hackers-targeted-us-nuclear-scientists-2023-01-06/.
  3. “Cyberattacks on knowledge institutions are increasing: what can be done?” https://www.nature.com/articles/d41586-024-00323-1.
  4. “Understanding Cyber Threats Against the Universities, Colleges, and Schools.” https://arxiv.org/pdf/2307.07755.
  5. “2024 ThreatDown State of Malware Report.” https://try.threatdown.com/2024-state-of-malware/?ref=pressrelease&_ga=2.192488042.1909936409.1719711840-1434983469.1717420778.
  6. “Ransom Monetization Rates Fall to Record Low Despite Jump In Average Ransom Payments.” https://www.coveware.com/blog/2023/7/21/ransom-monetization-rates-fall-to-record-low-despite-jump-in-average-ransom-payments.
  7. “AI Might Be Source of PowerShell Script Used in Phishing Attack: Researchers.” https://pureai.com/Articles/2024/04/11/AI-PowerShell-Phishing-Attack.aspx.
  8. “BlackMamba Research Whitepaper.” https://www.hyas.com/blackmamba-research-whitepaper.