<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Security, Decoded: Insights from Suzu Labs</title>
    <link>https://suzulabs.com/suzu-labs-blog</link>
    <description>Security, Decoded by Suzu Labs. Expert insights, analysis, and practical guidance on cybersecurity, risk, and digital trust.</description>
    <language>en</language>
    <pubDate>Fri, 06 Mar 2026 17:48:42 GMT</pubDate>
    <dc:date>2026-03-06T17:48:42Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>The Company Reviewing Your Meta Glasses Footage Has a Security Problem</title>
      <link>https://suzulabs.com/suzu-labs-blog/the-company-reviewing-your-meta-glasses-footage-has-a-security-problem</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/the-company-reviewing-your-meta-glasses-footage-has-a-security-problem" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/Gemini_Generated_Image_pdop5tpdop5tpdop-1.png" alt="The Company Reviewing Your Meta Glasses Footage Has a Security Problem" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Last week, Swedish journalists revealed that Meta sends video footage from Meta Ray-Ban smart glasses to human data annotators at Sama, a San Francisco-based outsourcing company that runs its annotation workforce out of Nairobi, Kenya. Workers described seeing footage of people in bathrooms, bedrooms, and intimate situations. The UK's Information Commissioner opened a probe. The story dominated privacy news for days.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Last week, Swedish journalists revealed that Meta sends video footage from Meta Ray-Ban smart glasses to human data annotators at Sama, a San Francisco-based outsourcing company that runs its annotation workforce out of Nairobi, Kenya. Workers described seeing footage of people in bathrooms, bedrooms, and intimate situations. The UK's Information Commissioner opened a probe. The story dominated privacy news for days.&lt;/p&gt; 
&lt;p&gt;Nobody asked the obvious follow-up question. How secure is Sama?&lt;/p&gt; 
&lt;p&gt;We did. And the answer isn't reassuring.&lt;/p&gt; 
&lt;h3&gt;Sama Credential Exposure on the Dark Web&lt;/h3&gt; 
&lt;p&gt;Suzu Labs ran dark web intelligence against Sama's corporate domain (sama.com) using our threat intelligence platform. Within the last 90 days alone, we identified 118 credential entries tied to sama.com circulating across Telegram channels, underground forums, and breach databases.&lt;/p&gt; 
&lt;p&gt;Of those 118 entries, 57 are unique email addresses. Twenty-two of them appear to be legitimate corporate employee accounts. The employee names are consistent with Sama's known operations in both the US and Kenya, and several match naming patterns typical of the company's Nairobi-based annotation workforce.&lt;/p&gt; 
&lt;p&gt;Eighty-three of those entries included plaintext passwords.&lt;/p&gt; 
&lt;h3&gt;Sama Employee Password Security Is Poor&lt;/h3&gt; 
&lt;p&gt;We analyzed the 32 unique plaintext passwords found in the dataset.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold;"&gt;88%&lt;/span&gt; fail basic complexity requirements (8+ characters with uppercase, lowercase, and a digit)&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold;"&gt;56%&lt;/span&gt; are under 10 characters&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold;"&gt;22%&lt;/span&gt; are under 8 characters, which wouldn't pass the minimum bar at most organizations&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold;"&gt;Only 9% &lt;/span&gt;include a special character&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold;"&gt;19% &lt;/span&gt;are digits only&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;The most reused password in the dataset appeared across &lt;span style="font-weight: bold;"&gt;10 separate entries&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These aren't passwords from 2015. The credential entries in our dataset were posted between December 2025 and February 2026. Some were shared on Telegram just weeks before the Swedish investigation broke the glasses story.&lt;/p&gt; 
&lt;h3&gt;Info-Stealer Malware Is the Primary Source&lt;/h3&gt; 
&lt;p&gt;Most of these credentials didn't come from some third-party breach where Sama employees happened to have accounts.&lt;/p&gt; 
&lt;p&gt;Roughly 87% came from info-stealer malware logs. That means malware was running on machines used by people with sama.com email addresses, pulling credentials and session tokens directly off the endpoint. The stealer takes everything on the machine. It doesn't filter by importance.&lt;/p&gt; 
&lt;p&gt;The stealer logs captured credentials for Google accounts, sales platforms, and ISP portals on those machines. If any of those infected endpoints were also used to access Sama's internal annotation platforms, the footage review pipeline could be exposed.&lt;/p&gt; 
&lt;p&gt;The remaining credentials appeared in named data breaches, including the Crunchbase breach and credential combo lists traded on BreachForums and Telegram distribution channels.&lt;/p&gt; 
&lt;h3&gt;Risk to AI Training Data and Other Sama Clients&lt;/h3&gt; 
&lt;p&gt;Sama isn't just a Meta contractor. The company is one of the largest data annotation providers in the world. Their clients have historically included some of the biggest names in AI. When you train a model, the training data goes through companies like Sama, and the people labeling that data operate on endpoints that, based on what we found, are not locked down.&lt;/p&gt; 
&lt;p&gt;The credential exposure we identified doesn't prove that Sama's annotation platform was compromised. But employee machines have been infected with info-stealer malware. The resulting credentials are being traded on the dark web right now. And the password hygiene across those accounts is poor. For an organization trusted with intimate video footage from millions of consumers, that should concern every client they have.&lt;/p&gt; 
&lt;h3&gt;What Meta and Sama Should Do Now&lt;/h3&gt; 
&lt;p&gt;Meta should be asking Sama hard questions about endpoint security and whether any of the compromised accounts have access to the annotation pipeline. If Meta conducted a third-party security assessment of Sama before handing over user footage, the results should be reexamined given what's now circulating on the dark web.&lt;/p&gt; 
&lt;p&gt;Sama should be running its own leaked credential monitoring. Every one of the accounts we found needs a forced password reset and MFA verification. The endpoints those credentials were stolen from need to be checked for active infections. Info-stealer logs from Sama employee machines are circulating freely. That's not a hypothetical risk. It already happened.&lt;/p&gt; 
&lt;p&gt;For other companies using third-party data annotation services, your vendor's security is your security. If you're sending sensitive data to an annotation provider and you haven't checked whether their employees' credentials are already on the dark web, you're making assumptions you can't afford to make.&lt;/p&gt; 
&lt;h3&gt;How We Did This&lt;/h3&gt; 
&lt;p&gt;We identified these credentials through dark web intelligence research. Password analysis was performed on the extracted plaintext credentials. No accounts were accessed, tested, or exploited during this research.&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fthe-company-reviewing-your-meta-glasses-footage-has-a-security-problem&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Threat Intelligence</category>
      <category>Infostealers</category>
      <category>Data Privacy</category>
      <category>Dark Web</category>
      <category>Sama</category>
      <category>Credential Exposure</category>
      <category>Meta Ray-Ban</category>
      <category>Vendor Security</category>
      <pubDate>Fri, 06 Mar 2026 14:30:00 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/the-company-reviewing-your-meta-glasses-footage-has-a-security-problem</guid>
      <dc:date>2026-03-06T14:30:00Z</dc:date>
      <dc:creator>Mike Bell</dc:creator>
    </item>
    <item>
      <title>The Death of the CTF: How Agentic AI Is Reshaping Competitive Hacking</title>
      <link>https://suzulabs.com/suzu-labs-blog/the-death-of-the-ctf-how-agentic-ai-is-reshaping-competitive-hacking</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/the-death-of-the-ctf-how-agentic-ai-is-reshaping-competitive-hacking" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/image%20(3).png" alt="The Death of the CTF: How Agentic AI Is Reshaping Competitive Hacking" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h3 style="font-weight: bold; text-align: center;"&gt;&lt;a href="https://infograph.venngage.com/pl/MifTplDvNc?flipBook=1"&gt;&lt;span style="color: #0d7d94;"&gt;View White Paper&lt;/span&gt;&lt;/a&gt;&lt;/h3&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Abstract:&lt;/span&gt;&lt;/h3&gt;</description>
      <content:encoded>&lt;h3 style="font-weight: bold; text-align: center;"&gt;&lt;a href="https://infograph.venngage.com/pl/MifTplDvNc?flipBook=1"&gt;&lt;span style="color: #0d7d94;"&gt;View White Paper&lt;/span&gt;&lt;/a&gt;&lt;/h3&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Abstract:&lt;/span&gt;&lt;/h3&gt;  
&lt;p&gt;Agentic AI systems are compressing competitive hacking timelines faster than the cybersecurity community has acknowledged. This paper analyzes first blood data from 423 Hack The Box machines released between March 2017 and October 2025, finding that root blood times have declined approximately 16% per year in log-space (p &amp;lt; 1e-10), with the sharpest drops concentrated after the emergence of large language models and agentic exploitation frameworks. All four difficulty tiers show statistically significant compression in the Post-LLM era (p &amp;lt; 0.05), with magnitude scaling from 27% at Hard to 67% at Insane. &lt;br&gt;&lt;br&gt;The implications extend well beyond competitive scoreboards as AI-driven acceleration reshapes penetration testing economics, lowers the barrier to offensive capability, and positions CTF platforms as de facto benchmarks for national AI cyber capabilities. Drawing on the historical transition to engine assisted play in chess, the paper argues for instrumentation of AI usage, the creation of separate competition tracks and standardized benchmarking of AI-augmented offensive capability. It also outlines the redesign of security training and certification pipelines to reflect compressed skill acquisition timelines.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;1. Introduction&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Capture The Flag competitions occupy a unique position in cybersecurity. From DEF CON's storied finals to Hack The Box's weekly machine releases, CTFs serve simultaneously as training grounds, hiring filters, and competitive arenas. At their core, they test a specific set of cognitive abilities, the capacity to parse unfamiliar systems, synthesize information from disparate sources, recall relevant techniques from a vast body of knowledge, and chain observations into a working exploit. For over two decades, performance in this environment has been primarily a function of unaided cognition. The scoreboard measured who among a field of human competitors was the most elite operator.&lt;/p&gt; 
&lt;p&gt;Agentic AI systems possess structural advantages in each of these capacities. They parse output without misreading. They have access to the entirety of publicly documented vulnerability research, tooling documentation, and exploit techniques, not as something to be recalled under pressure, but as something perpetually available. They synthesize across information sources without cognitive fatigue, context-switching cost, or the bandwidth limitations of human working memory. This paper argues that these structural advantages are sufficient to fundamentally reshape the nature of CTF competition, shifting the axis of differentiation from who is the best hacker to who designs the best agentic AI system.&lt;/p&gt; 
&lt;p&gt;This shift does not eliminate the human element. It redefines it. The competitors who will succeed in an AI-augmented CTF landscape are not those who resist the technology but those who learn to leverage it, directing AI agents toward the tasks where they hold categorical advantages while focusing human effort on the areas where intuition, creativity, and adversarial reasoning remain superior. The hacker in the loop remains the ultimate differentiator, but the nature of the loop changes. The skill being tested is no longer purely operational. It becomes architectural, and the question is: how effectively can a competitor design, configure, and orchestrate an AI system to do the work that was once done by hand?&lt;/p&gt; 
&lt;p&gt;The implications extend beyond individual competitors. As this transition accelerates, CTFs themselves will shift in character. Challenges that once tested a human operator's ability to enumerate, exploit, and pivot will increasingly function as benchmarks for AI agent systems. The competition between hackers becomes, in significant part, a competition between the agentic architectures and tools they bring to the table. CTF platforms will serve as proving grounds not just for human skill but for AI capability, and the leaderboard will reflect engineering and development decisions as much as operational technique.&lt;/p&gt; 
&lt;p&gt;We examine this thesis through multiple lenses. We analyze first blood time data across 423 competitively released Hack The Box machines spanning March 2017 through October 2025, stratified by difficulty and operating system, and find that both user and root first blood times are declining at approximately 16% per year on a multiplicative basis (log-linear model, p &amp;lt; 1e-10), with a statistically significant step-down in the Post-LLM era across all four difficulty tiers. We situate this analysis within a rapidly growing body of research in which agentic systems have won dedicated CTF competitions, autonomously exploited real-world vulnerabilities, and achieved meaningful solve rates on platforms like Hack The Box without human intervention. We conclude with a set of recommendations for CTF platform operators, training organizations, certification bodies, and policymakers, drawing on lessons from chess's two-decade experience with the same problem and anchored by a proposal for voluntary AI usage instrumentation designed to establish ground truth while the distinction between human and AI-augmented performance is still observable.&lt;span&gt; &lt;/span&gt;&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;2. Author Background&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The author holds the OSCE3 certification and works professionally in offensive security and AI system development. He has extensive practical experience with the Hack The Box platform, including participation across all difficulty tiers. This dual perspective informs the framing of the problem examined in this paper; however, all empirical claims are derived from the longitudinal dataset and statistical analysis presented in the following sections.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;3. Related Work: AI in Offensive Security&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The research trajectory over the past three years documents a rapid escalation in AI offensive capability, from early demonstrations that LLMs could assist with security tasks, to autonomous systems that competed with and outperformed human players, to production-grade offensive tooling built on the same foundations. While these studies primarily reported solve rates and task completion rather than time-to-compromise, they establish the capability context in which the longitudinal trends presented in this paper should be interpreted.&lt;/p&gt; 
&lt;p&gt;The first wave established that large language models were not just passively useful for security work but actively underutilized. Palisade Research demonstrated that a simple ReAct-based agent architecture with plan-and-solve prompting achieved 95% on the InterCode-CTF benchmark, up from a prior state of the art of 72%, fully solving the General Skills, Binary Exploitation, and Web Exploitation categories [2]. Their central finding was that LLM capabilities in this domain were underelicited, not fundamentally limited. Stanford's CyBench benchmark confirmed this from a different angle, as strong models solved professional-level CTF tasks at speeds comparable to 11-minute human solves [7]. The AIRTBench autonomous red teaming benchmark reported Claude 3.7 Sonnet achieving a 61% solve rate on black-box challenges, with models solving tasks in minutes where humans required hours [12].&lt;/p&gt; 
&lt;p&gt;The second wave moved from benchmarks to real vulnerabilities. The Fang et al. research series established that LLM agents can autonomously exploit real-world CVEs. Given vulnerability descriptions, GPT-4 successfully exploited 87% of 15 one-day vulnerabilities, while in their evaluation, every other tested system, including open-source models, ZAP, and Metasploit, scored zero [14]. A follow-up demonstrated that teams of LLM agents using hierarchical planning outperformed single agents by up to 4.3x on 14 real zero-day vulnerabilities [15]. Google's Project Zero and DeepMind collaboration produced Big Sleep, an LLM-powered system that discovered the first publicly documented AI-found exploitable bug in real-world software, a stack buffer underflow in SQLite, a vulnerability in production code that had evaded human auditors and traditional fuzzing tools [27].&lt;/p&gt; 
&lt;p&gt;The third wave saw benchmarks and exploits converge into competition. In 2025, the Cybersecurity AI agent won the Neurogrid CTF with 41 of 45 flags and a $50,000 prize pool, reached Rank 1 early at the Dragos OT CTF before finishing 6th after being paused, and was the top-performing AI team in Hack The Box's "AI vs Humans" competition [1]. The authors concluded that Jeopardy-style CTFs are effectively solved by well-engineered AI agents. Meanwhile, the translation into practical offensive tooling accelerated, as PentestGPT demonstrated a 228.6% improvement in task completion over GPT-3.5 baselines on Hack The Box machines (USENIX Security 2024, Distinguished Artifact) [16]. D-CIPHER achieved a 44% solve rate on Hack The Box with 65% more MITRE ATT&amp;amp;CK coverage than prior approaches [17]. xOffense, built on a fine-tuned Qwen3-32B model, reached 79.17% subtask completion [18]. And DARPA's AI Cyber Challenge yielded four open-source Cyber Reasoning Systems across the competition [29], and the fourth-place team alone autonomously discovered 28 real-world vulnerabilities including six zero-days [34].&lt;/p&gt; 
&lt;p&gt;Taken together, these results show a rapid progression from LLM-assisted workflows to autonomous competitive performance. In under three years, the field moved from "LLMs can help with CTF challenges" to "agentic systems can win CTF competitions outright." These results establish that AI systems now operate at timescales comparable to or faster than expert human solves, providing a plausible mechanism for the longitudinal compression in first-blood times measured in this study.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;4. Data and Methodology&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The dataset consists of 423 competitively released Hack The Box machines spanning March 2017 through October 2025, covering all four difficulty tiers, Easy (124), Medium (143), Hard (98), and Insane (58). The operating system split is predominantly Linux (288) and Windows (121). For each machine, two first-blood metrics were collected from timestamps scraped from 0xdf's publicly archived writeups [42], specifically user blood (time from machine release to the first recorded foothold) and root blood (time from release to full system compromise), both measured in minutes. Machines with missing or inconsistent timestamps were excluded. The gap between them is effectively the privilege escalation time for the fastest competitor on that challenge.&lt;/p&gt; 
&lt;p&gt;To structure the analysis around AI capability, two eras were defined. The Pre-LLM era covers everything before ChatGPT's public launch in November 2022, giving us 286 machines representing the pre-LLM baseline. The Post-LLM era covers November 2022 onward, encompassing both the initial period of LLM-assisted work and the subsequent emergence of dedicated agentic exploitation frameworks, covering 137 machines. An earlier version of this analysis used a three-era split (Pre-LLM, LLM, and Agentic), but the Agentic era's small sample sizes at higher difficulty tiers (Hard n=17, Insane n=5) produced era-level comparisons that lacked statistical power. Consolidating into two eras yields sample sizes sufficient for significance testing across all four difficulty tiers. For visual analysis, finer milestone markers for individual model releases are overlaid on the time-series charts.&lt;/p&gt; 
&lt;p&gt;Linear regression on log-transformed blood times establishes the overall trend and yields a directly interpretable metric, percentage change in solve time per year. Era medians quantify the magnitude of change between periods. Non-parametric significance tests (Mann-Whitney U, one-sided, reflecting the a priori hypothesis of decreasing solve times) determine whether observed differences are statistically significant. Everything is run independently on both user and root blood, and stratified by operating system. More fundamentally, first-blood times identify the earliest successful solve but do not reveal the methods used. The data establishes temporal correlation with AI capability milestones, not direct causation. Given multiple stratified comparisons, p-values are interpreted conservatively. That limitation is what motivates the "Solved with AI" proposal in Section 8.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;5. Results&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Across all 423 machines, both user and root first blood times (in minutes) show a statistically significant downward trend over the eight-and-a-half-year period. Root blood times are declining at 16.5% per year on a multiplicative basis (linear regression of log-transformed solve time against release date, R² = 0.12, p = 1.7e-13). User blood times decline at 16.0% per year under the same log-linear model (R² = 0.09, p = 2.7e-10). Every difficulty tier shows a negative longitudinal slope, indicating that the downward trend is not driven by a single category.&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image%20(10).png?width=900&amp;amp;height=532&amp;amp;name=image%20(10).png" width="900" height="532" alt="image (10)" style="height: auto; max-width: 100%; width: 900px;"&gt;&lt;/p&gt; 
&lt;p&gt;The scatter plot tells the story visually. Each dot is a machine, colored by difficulty tier, with trendlines fit per tier. The vertical dashed lines mark major LLM and agent capability milestones. The downward slope is visible across all four tiers, with a visibly steeper trend in the Post-LLM period.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Era Comparison&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;The era-level breakdown is where the compression becomes concrete. Median root blood times by era&lt;/p&gt; 
&lt;table style="border-collapse: collapse;"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&lt;strong&gt;Pre-LLM (n)&lt;/strong&gt;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Post-LLM (n)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Change&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;p-value&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Easy&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;54.8 min (81)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;28.9 min (43)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-47%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0001&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Medium&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;106.0 min (96)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;72.5 min (47)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-32%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0011&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Hard&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;261.1 min (66)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;191.1 min (32)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-27%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0279&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Insane&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;926.6 min (43)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;302.5 min (15)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-67%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0350&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Median user blood times by era&lt;/span&gt;&lt;/p&gt; 
&lt;table style="border-collapse: collapse;"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&lt;strong&gt;Pre-LLM (n)&lt;/strong&gt;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Post-LLM (n)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Change&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;p-value&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Easy&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;21.5 min (81)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;12.7 min (43)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-41%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0018&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Medium&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;56.0 min (96)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;40.3 min (47)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-28%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.0050&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Hard&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;107.1 min (66)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;110.9 min (32)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;+4%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.19 (n.s.)&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Insane&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;271.5 min (43)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;211.9 min (15)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-22%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;0.22 (n.s.)&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image%20(11).png?width=900&amp;amp;height=458&amp;amp;name=image%20(11).png" width="900" height="458" alt="image (11)" style="height: auto; max-width: 100%; width: 900px;"&gt;&lt;/p&gt; 
&lt;p&gt;For root blood, the Post-LLM era is faster across all four tiers, with compression scaling by difficulty. Easy machines dropped 47%. Medium dropped 32%. Hard dropped 27%. Insane machines dropped 67%, from a median of over 15 hours to approximately 5 hours. All four root blood tiers reach statistical significance at p &amp;lt; 0.05 for the Pre-LLM vs Post-LLM comparison (Mann-Whitney U, one-sided), with Easy and Medium reaching p &amp;lt; 0.002. User-blood compression is significant at Easy (-41%, p = 0.002) and Medium (-28%, p = 0.005) but does not reach significance at Hard or Insane.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;User Blood vs. Root Blood&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The dual blood metric reveals something the aggregate numbers miss. User blood captures the foothold phase, specifically reconnaissance, vulnerability identification, initial exploitation. Root blood captures the full kill chain including privilege escalation. Comparing the two across eras isolates how each phase is compressing independently.&lt;/p&gt; 
&lt;p&gt;An important methodological note is that on Hack The Box, user blood and root blood are separate races. Different competitors frequently win each. One player might find an initial foothold fastest while a different player achieves full compromise first via a different approach. This means naively subtracting one from the other does not measure a single competitor's privilege escalation time. To isolate privilege escalation behavior, per-machine privesc time was computed as root blood minus user blood. For this analysis only, 37 machines where this value was zero or negative were excluded; these machines remain in all other analyses. A zero-minute cutoff was used to remove race artifacts while retaining conventional two-stage solves. Negative privesc indicates that root was blooded before user, a race artifact in which a different competitor bypassed the foothold step entirely or took a single exploit chain directly to root.&lt;/p&gt; 
&lt;p&gt;After filtering, the privilege escalation compression becomes clean and consistent across all four tiers&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image%20(12).png?width=900&amp;amp;height=675&amp;amp;name=image%20(12).png" width="900" height="675" alt="image (12)" style="height: auto; max-width: 100%; width: 900px;"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;i&gt;Median implied privilege escalation time by era (machines with privesc &amp;gt; 0 min)&lt;/i&gt;&lt;/p&gt; 
&lt;table style="border-collapse: collapse;"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&lt;strong&gt;Pre-LLM (n)&lt;/strong&gt;&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Post-LLM (n)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Change&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Easy&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;15.2 min (71)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;11.1 min (43)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-27%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Medium&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;34.0 min (87)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;22.2 min (47)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-35%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Hard&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;105.1 min (60)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;61.5 min (32)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-41%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;Insane&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;175.9 min (35)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;108.0 min (11)&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;-39%&lt;/p&gt; &lt;/td&gt; 
   &lt;td style="vertical-align: middle;"&gt; &lt;p&gt;&amp;nbsp;&lt;/p&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;The privilege escalation phase is compressing faster than the foothold phase. After excluding machines where privesc time was zero or negative (race artifacts where root was blooded before user), median privesc times dropped 27% at Easy, 35% at Medium, 41% at Hard, and 39% at Insane from the Pre-LLM to the Post-LLM era. Meanwhile, user blood (the foothold phase) only compresses significantly at Easy (-41%, p=0.002) and Medium (-28%, p=0.005). At Hard and Insane difficulty, foothold times show no statistically significant change.&lt;span&gt; &lt;/span&gt;&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Operating System Breakdown&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;&lt;span style="color: #0d7d94;"&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image%20(7).png?width=900&amp;amp;height=403&amp;amp;name=image%20(7).png" width="900" height="403" alt="image (7)" style="height: auto; max-width: 100%; width: 900px;"&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;Stratifying by operating system reveals that the compression is not platform-uniform. Windows machines show steeper declines than their Linux counterparts at nearly every difficulty tier. Pre-LLM to Post-LLM, Windows Medium machines dropped from a median of 118.6 minutes to 45.9 minutes, a 61% reduction that is statistically significant (p=0.0003). Windows Easy dropped 22% (p=0.037). Linux machines show consistent but smaller compression, with Linux Easy dropped 36% (p=0.003), Linux Medium 12%, and Linux Hard 11%. Linux Insane dropped 77% (p=0.018), though with small Post-LLM samples.&lt;/p&gt; 
&lt;p&gt;One plausible explanation is that Windows and Active Directory environments contain more extensively documented, repeatable attack patterns such as Kerberoasting, token impersonation, and service misconfigurations. Linux privilege escalation tends to involve more heterogeneous, system-specific configurations. The possible relationship between attack-surface structure and amenability to AI is examined in Section 6.&lt;/p&gt; 
&lt;p&gt;The data tells a consistent story across every analytical lens applied to it. Solve times are compressing. The compression scales with difficulty. It affects both the foothold and privilege escalation phases, with privesc compressing faster. It is more pronounced on Windows than Linux. And the step-change at the Pre-LLM to Post-LLM boundary is statistically significant across all four difficulty tiers for root blood times.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;6. Discussion&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The longitudinal trends in Section 5 establish that time-to-compromise is decreasing across all difficulty tiers. The remaining question is which mechanisms are capable of producing a reduction of this magnitude and structure over the observed period. Each candidate explanation implies a different phase structure for time-to-compromise.&lt;/p&gt; 
&lt;p&gt;Several alternative explanations must be considered.&lt;/p&gt; 
&lt;p&gt;Community growth increases the number of attempts per machine at release and therefore the probability of an early solve. This mechanism predicts a roughly uniform acceleration across difficulty tiers, because the number of competitors affects all machines equally. Instead, the data shows compression that scales with difficulty, with the largest proportional reduction at the Insane tier. A participation-driven model does not naturally produce this pattern.&lt;/p&gt; 
&lt;p&gt;The expansion of publicly available writeups improves pattern recognition and reduces time to initial foothold when new machines resemble previously documented vulnerabilities. This mechanism predicts the strongest effect in the foothold phase. The observed data shows the opposite structure, as privilege escalation compresses more rapidly than foothold acquisition. A writeup-driven explanation is therefore incomplete.&lt;/p&gt; 
&lt;p&gt;Improvements in non-AI tooling contribute to the gradual downward trend visible throughout the Pre-LLM period. Enumeration frameworks and automated analysis tools reduce the time required to collect and interpret system state. However, the largest inflection in the time-series data occurs after the public release of high-capability language models rather than at the introduction of specific enumeration or post-exploitation tools. Tooling alone explains the baseline trend but not the post-2022 acceleration.&lt;/p&gt; 
&lt;p&gt;These factors are also not independent of AI capability. The rate at which writeups are produced, documentation is generated, and tools are developed has itself increased through AI-assisted workflows. Treating these as separate variables understates the total effect of AI on the ecosystem in which CTF performance occurs.&lt;/p&gt; 
&lt;p&gt;Taken together, the alternative mechanisms account for portions of the observed trend but do not reproduce its full structure. The participation hypothesis does not explain scaling by difficulty. The writeup hypothesis predicts faster foothold compression than privilege escalation. The tooling hypothesis explains the long-term baseline but not the post-LLM inflection. The remaining explanation that is consistent with the timing, magnitude, and phase specific structure of the data is the introduction of high-capability AI systems, amplified by secondary effects that are themselves partially AI-enabled.&lt;/p&gt; 
&lt;p&gt;This does not establish causation at the level of individual solves. Direct measurement of AI-assisted performance would require platform telemetry that is not currently available. What the analysis shows is that the observed compression follows the pattern expected from AI-accelerated workflows and is not fully explained by previously identified factors.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;7. Implications&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The data presented in this paper describes what is happening on a CTF scoreboard. But CTFs do not exist in a vacuum. They are simplified models of real offensive operations, and the forces reshaping competition on Hack The Box are the same forces reshaping enterprise security, the security workforce, and ultimately the global landscape of cyber capability.&lt;/p&gt; 
&lt;p&gt;The following implications are interpretive and extend beyond the scope of the measured dataset. A sustained 16% annual reduction in time-to-compromise implies a roughly sixfold decrease within a decade, fundamentally altering any security model that assumes human scale attacker timelines.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;For Competitors&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;From inside the competition, this is not surprising. At the highest levels of competition, first-blood performance increasingly requires AI-assisted workflows. This is the natural progression of a dynamic that has always existed at the top level, the real game was never about manually running commands. It was always about building optimized automation scripts for enumeration and exploitation. The best competitors have always been the ones who automated the repetitive phases and focused their human attention on the creative gaps. AI agents are the next iteration of that same competitive logic, dramatically more powerful and dramatically more accessible.&lt;/p&gt; 
&lt;p&gt;Consider the privilege escalation data. A 40% compression in privesc times at Hard and Insane difficulty does not mean the competitors got substantially better at privilege escalation in three years. It means someone figured out how to hand the post-foothold enumeration to an agent that does not miss output, does not forget to check a file it found twenty minutes ago, and does not lose focus at 3 AM during a weekend release. The human contribution shifts from executing the methodology to designing it.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;For the Security Industry&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The compression visible in CTF solve times is a leading indicator of what is coming for enterprise security. AI agents already excel at the enumeration phase of real engagements, including massive parallelization of low-impact commands, systematic service discovery, automated credential checking, configuration analysis. The literature reviewed in Section 3 confirms that agentic systems can find real vulnerabilities in real environments today. They are still poor at operational security, at the subtle tradecraft decisions that determine whether an attacker is detected, but those limitations are narrowing with every model generation.&lt;/p&gt; 
&lt;p&gt;The practical consequence is that the defender's one structural advantage, time, is eroding. Security has always been asymmetric, as defenders must be right everywhere, attackers only need to find one gap. But defenders have historically benefited from the fact that attackers are slow. Human operators conducting reconnaissance, pivoting through a network, escalating privileges, all of which take hours or days. That timeline is the window in which detection, response, and containment happen. When an agentic system collapses the kill chain from days to hours to minutes, the window for human defenders to detect and respond shrinks proportionally. Dwell time assumptions built into detection architectures need to be revisited.&lt;/p&gt; 
&lt;p&gt;These trends suggest that the penetration testing model will deliver decreasing marginal security value. As dwell times for real threat actors remain long enough that point in time assessments offer diminishing insight into actual defensive readiness, the gap between what a periodic assessment measures and what an organization actually faces widens. The logical industry response is a shift toward continuous threat hunting and assume breach postures rather than periodic exploitation engagements. And any vulnerability that is openly exploitable should increasingly be assumed exploitable nearly instantly. The implicit assurance that a network "survived a skilled human spending X hours" means less every year when the threat model includes agents that do not sleep, do not context-switch, and do not stop enumerating.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;For the Workforce&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The democratization of offensive capability is perhaps the most consequential real-world implication of AI-accelerated CTFs, and it cuts both ways. Historically, becoming genuinely dangerous as an offensive operator required years of study, practice, and failure. The learning curve was steep. That steep curve served as a natural barrier that kept the population of highly capable attackers relatively small. AI is flattening that curve rapidly.&lt;/p&gt; 
&lt;p&gt;The capability gap between a junior operator with a well-designed agent pipeline and a senior operator working manually is collapsing. And the tooling is compounding, and an AI agent integrated with a Ghidra MCP server for binary analysis is a categorically different capability than a standalone model answering questions about disassembly. Each new tool integration multiplies the effective skill of the human operator, regardless of their experience level.&lt;/p&gt; 
&lt;p&gt;This is positive for the security industry in one sense, as it lowers the barrier to entry for security careers and makes scarce expertise more accessible. It is dangerous in another, as it lowers the same barrier for threat actors. Both of these are true simultaneously, and the policy response needs to account for both.&lt;/p&gt; 
&lt;p&gt;The workforce itself will undergo the same kind of transformation that previous industrial revolutions imposed on other skilled trades. The trajectory is from operators to automation engineers. Junior security professionals will not spend their early careers learning to run tools manually. They will learn to operate AI agents, to understand what the agents are doing well enough to intervene when something goes wrong, and to design the workflows that the agents execute. The analogy to modern aviation is apt, as pilots are still in the cockpit, but they are there primarily for the situations that automation cannot handle. The day-to-day operation is managed by systems they supervise. Over the next decade, the offensive security field appears likely to follow this exact trajectory, with operators progressively moving from direct users of frameworks and tools to managers of AI agents that themselves create and use the underlying tooling.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;For Platform Operators&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The competitive dynamics of CTF platforms are changing whether platform operators acknowledge it or not. First blood times are compressing. The leaderboard increasingly reflects who has the best AI tooling as much as who has the best hacking skills. A platform that ignores this does not avoid the shift. It just has no data on how the shift is unfolding.&lt;/p&gt; 
&lt;p&gt;The practical consequence is that platform operators who do not instrument AI usage will find their competitive metrics becoming uninterpretable. If first blood times continue to compress at 16% per year, within a few years the Easy and Medium tiers will hit floor effects where blood times are dominated by network latency and spawning delays rather than solve skill. At that point the leaderboard measures infrastructure, not hacking. Platforms that have been tracking AI usage throughout this period will be able to contextualize these trends. Platforms that have not will simply watch their competitive signal degrade without understanding why. Section 8 details specific instrumentation and track separation proposals that address this directly.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;For Security Hiring&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Hiring managers who use CTF performance as a signal face a growing calibration problem. Consider a candidate who presents a top-100 Hack The Box ranking. In 2020, that ranking almost certainly reflected deep manual exploitation skill, including enumeration discipline, creative privilege escalation, comfort across multiple operating systems and attack surfaces. In 2026, that same ranking might reflect those skills, or it might reflect that the candidate built an excellent agentic pipeline that handles reconnaissance and structured exploitation while the human focuses on the creative leaps. Both are genuinely valuable in a professional security context. But they are different skills, and a hiring pipeline that assumes CTF performance maps only to traditional security expertise is increasingly miscalibrated. Organizations that want to hire for manual exploitation skill specifically will need to test for it specifically, rather than treating a leaderboard rank as a proxy. The certification redesign discussed in Section 8 addresses how the industry can build assessments that distinguish between these skill sets.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;CTFs as Global AI Capability Benchmarks&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The implications extend beyond the security community entirely. As CTF competitions shift from purely human contests to contests between AI-augmented operators, they become something they were never designed to be, standardized benchmarks for offensive AI capability.&lt;/p&gt; 
&lt;p&gt;This is already happening in a research context. CyBench [7], InterCode-CTF [9], and the NYU CTF Benchmark [8] all use CTF-style challenges to evaluate agentic AI systems. But public platforms like Hack The Box provide something these closed benchmarks cannot, a live, continuously updated, adversarially designed evaluation environment with a global participant pool and real-time scoring. A research benchmark can be overfitted. A platform releasing new machines every week, designed by human creators actively trying to challenge the best players in the world, cannot.&lt;/p&gt; 
&lt;p&gt;The geopolitical dimension of this is underappreciated. AI export restrictions are increasingly fragmenting the global model landscape. Operators in the United States work with systems like GPT, Claude, and Grok. Operators in China work with DeepSeek, Qwen, and other domestically developed models. European competitors may increasingly work with models like Mistral that fall under different regulatory frameworks. As AI-augmented competition becomes the norm on global CTF platforms, the leaderboard becomes a de facto comparison of these different AI ecosystems applied to a common adversarial task. A country whose models consistently underperform on offensive security benchmarks has a measurable signal about a gap in its AI capabilities that extends well beyond CTF.&lt;/p&gt; 
&lt;p&gt;This is not hypothetical. The AI Cyber Challenge at DEF CON, funded by DARPA with $29.5 million in prizes [29], already frames autonomous cyber capability as a national security priority. The winning teams' cyber reasoning systems are required to be open-sourced, and DARPA has allocated additional funding to integrate the technology into real critical infrastructure. Public CTF platforms provide a continuous, peacetime version of the same signal. The death of the CTF as a purely human competition is simultaneously the birth of the CTF as a global AI capability benchmark, and the policy implications of that transformation deserve attention from audiences well beyond the infosec community. Section 8 explores how platforms can formalize this benchmarking role and how governments can leverage it through structured competition programs.&lt;/p&gt; 
&lt;h3 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;8. Recommendations&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The measured compression in time-to-compromise implies that existing competition, training, and evaluation models will lose interpretability unless they are redesigned for an AI-augmented environment. The implications outlined above are broad, but they converge on a set of concrete actions that CTF platforms, training organizations, and the security community can take now rather than after the shift is complete.&lt;/p&gt; 
&lt;h4 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Learn from Chess&lt;/span&gt;&lt;/h4&gt; 
&lt;p&gt;Chess confronted the same fundamental problem two decades earlier and its response is instructive. After Garry Kasparov lost to Deep Blue in 1997, the chess community did not pretend engines did not exist. It adapted. Three distinct competitive categories emerged, pure human chess, pure engine chess, and "advanced" or "centaur" chess where human-computer teams compete. Kasparov himself introduced the advanced chess format in 1998, arguing that the combination of human intuition and computer calculation would produce the highest quality games. Research confirmed this, as freestyle (centaur) teams consistently outperformed both solo humans and solo engines in tournament play through the mid-2000s.&lt;/p&gt; 
&lt;p&gt;The enforcement problem followed immediately. Chess.com has spent over a decade building a Fair Play system that analyzes over 100 gameplay factors per game using statistical models to detect engine-assisted play. The system closes large numbers of accounts for fair-play violations each month and estimates that fewer than 1% of players cheat online. For prize events, Chess.com now requires all participants to run Proctor, a monitoring program on the player's machine, and has reported significantly fewer instances of engine cheating since making it mandatory.&lt;/p&gt; 
&lt;p&gt;The critical parallel for CTFs is that chess had a structural advantage that CTFs do not, in that every move is recorded and analyzable after the fact. A chess engine's influence can be detected because the move sequence itself contains the signal. In a CTF, the platform has almost no visibility into how a competitor arrived at a solution. There is no move log. There is no replay. The platform sees a flag submission and a timestamp. This means the chess approach of statistical detection after the fact is largely unavailable to CTF platforms. Given the absence of move level telemetry, meaningful enforcement of human-only competition appears feasible primarily in proctored environments where the competitor's screen and tooling can be directly observed. For online platforms, the honest conclusion is that AI-assisted competition cannot be meaningfully prevented. It can only be embraced, structured, and measured.&lt;/p&gt; 
&lt;h4 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Instrument AI Usage&lt;/span&gt;&lt;/h4&gt; 
&lt;p&gt;The first and most immediately actionable recommendation is for CTF platforms to implement a voluntary "Solved with AI" self-report mechanism. This is not about enforcement. It is about data collection. A simple toggle at flag submission, where the competitor indicates whether AI tools assisted their solve, produces ground truth that does not currently exist anywhere. The data does not need to be perfectly accurate to be valuable. Self-reporting will be imperfect and subject to strategic misreporting, but even noisy adoption data is strictly more informative than the current absence of ground truth. Even a rough self-reported signal would allow platforms to track adoption rates over time, correlate AI usage with solve speed, compare AI-assisted and unassisted performance distributions, and identify which challenge categories are most affected. This is the data that is currently unavailable in platform telemetry and that motivated the correlation-not-causation limitation in this study.&lt;/p&gt; 
&lt;h4 style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Separate the Tracks&lt;/span&gt;&lt;/h4&gt; 
&lt;p&gt;As AI-assisted competition becomes the norm, platforms should consider formalizing the distinction that chess formalized decades ago. An unranked or separately ranked "AI-augmented" track, running alongside the traditional human track, would let competitors who want to push the boundaries of human-AI teaming do so without distorting the signal for competitors who want to test their manual skills. This is not about stigmatizing AI use. It is about preserving the interpretability of both signals. A first blood on the human track means something specific. A first blood on the augmented track means something different but equally valuable. Conflating the two helps no one.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Standardize AI Capability Benchmarking&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;The market for standardized evaluation of AI offensive tools is emerging rapidly and CTF platforms are uniquely positioned to serve it. Hack The Box launched its AI Range in December 2025 [40], the first controlled environment specifically designed to benchmark autonomous AI security agents against live adversarial challenges. The platform tests models including Claude, GPT-5, Gemini, and Mistral against real challenges, running each model ten times per challenge with fresh instances to account for non-deterministic behavior. Early meta-benchmarks have already revealed a significant gap between AI models' security knowledge and their practical multi-step adversarial capabilities.&lt;/p&gt; 
&lt;p&gt;This is where the commercial opportunity meets the research need. Organizations evaluating AI security products currently have no standardized way to compare them. A buyer considering two AI-powered penetration testing tools has no equivalent of a SPEC benchmark or an MLPerf score. CTF platforms can fill this gap by offering structured evaluation environments where vendors and researchers run their systems against a common, continuously updated set of challenges under controlled conditions. The data produced is valuable to everyone, as vendors get credible third-party validation, buyers get comparison data, researchers get reproducible benchmarks, and the platforms themselves gain a new revenue stream and strategic position in the AI security ecosystem.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Redesign Training Pipelines&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;Security training programs need to acknowledge that the skill they are developing is changing. Bootcamps and entry-level certification programs that teach people to run nmap, linpeas, and manual exploitation workflows are training for the role as it existed five years ago. The junior security professional of 2028 needs to understand what those tools do, but their primary operational skill will be designing, configuring, and orchestrating AI agent pipelines that run those tools. Training programs should be teaching students to build agentic workflows, to evaluate when an agent is producing useful output versus hallucinating, and to intervene effectively when automation breaks down.&lt;/p&gt; 
&lt;p&gt;The certification landscape reflects this tension. Proctored, human-only examinations remain one of the few mechanisms for directly verifying manual exploitation skill. That signal is still valuable, and proctored human-only certifications should continue to exist for roles where manual skill matters. But the industry also needs certifications that test what real-world operators actually do, leveraging every available tool, including AI, to achieve objectives under realistic constraints. Hack The Box's certification model already moves in this direction. Their exams impose no AI restrictions. Instead, they make the assessment genuinely difficult, requiring extensive documentation and demonstrating that the candidate understands the full attack chain at a depth that cannot be faked by an unsupervised agent. The result tests real-world competency more faithfully than a controlled environment where half the candidate's actual toolkit is prohibited.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;Government and Institutional Adoption&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;DARPA's AI Cyber Challenge demonstrated that the competition model works for accelerating AI security development at national scale [29]. A $29.5 million prize pool attracted seven world-class teams whose cyber reasoning systems collectively discovered 28 unique vulnerabilities, including six zero-days, in real open-source software, with the winning systems required to be open-sourced for public benefit. This is a model for how governments can outsource AI capability development to the private sector through structured competition rather than traditional procurement.&lt;/p&gt; 
&lt;p&gt;The opportunity extends beyond one-off competitions. Public CTF platforms, operating continuously with global participation, provide something that a periodic DARPA challenge cannot, a standing, adversarially maintained evaluation environment that produces real-time signal about the state of AI offensive and defensive capability. Governments and defense organizations that formalize relationships with these platforms, whether through funded challenge series, benchmark partnerships, or structured data sharing agreements, gain continuous visibility into a capability domain that is evolving faster than any traditional assessment cycle can track.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;9. Conclusion&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;The CTF is not dead. But what it measures is changing, and the pace of that change is accelerating.&lt;/p&gt; 
&lt;p&gt;Across 423 Hack The Box machines spanning over eight years, both user and root first blood times are declining at approximately 16% per year. The compression is universal across every difficulty tier. It scales with complexity, and the hardest machines, once requiring the best competitors in the world over 15 hours, now show a median of about 5 hours in the Post-LLM era, a 67% reduction that is statistically significant (p=0.035). The privilege escalation phase, the most structured and information-dense portion of the kill chain, is compressing faster than the foothold phase. Windows machines, with their well-documented and repeatable attack surfaces, show steeper declines than Linux. And the inflection points in all of these trends visually align with the public availability of increasingly capable AI systems. A sustained 16% annual reduction corresponds to a roughly sixfold decrease in time-to-compromise within a decade, a rate that fundamentally breaks security models built around human-speed operations.&lt;/p&gt; 
&lt;p&gt;None of this proves that AI is the sole cause. But the timing, magnitude, and structure of the observed patterns are consistent with AI-driven acceleration and are difficult to explain through previously identified factors alone, and the confounding factors most commonly cited, community growth, writeup availability, better tooling, are themselves increasingly AI-enabled. The confounders are not independent of the primary variable.&lt;/p&gt; 
&lt;p&gt;The implications reach well beyond the scoreboard. The same capabilities compressing CTF solve times are compressing real-world attack timelines, eroding the defender's window to detect and respond, and lowering the barrier to expert-level offensive capability for both security professionals and threat actors. The observed compression implies a shift in the security workforce from direct tool operation toward the design and supervision of automated workflows. Certifications and training programs designed for the previous era need to adapt or become irrelevant. And CTF platforms, whether they intended it or not, are becoming the world's first continuously maintained, adversarially designed benchmarks for national AI cyber capability. This paper does not measure AI capability directly. It measures the rate at which the attacker's clock is shrinking.&lt;/p&gt; 
&lt;p&gt;This analysis is limited to a single platform and a single performance metric, and direct measurement of AI-assisted solves remains an open data problem. The title of this paper is deliberately provocative. The CTF as a purely human competition between operators is ending. What replaces it is not nothing. It is a richer, more complex, and more consequential competition, between the humans who design AI systems and the AI systems they build, tested against challenges that grow harder every week, on platforms that span the globe. The organizations, competitors, and governments that recognize this transition earliest and adapt most effectively will define the next era of cybersecurity. The ones that pretend nothing has changed will be left wondering why the scoreboard no longer makes sense.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #0d7d94;"&gt;10. References&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="color: #0d7d94;"&gt;&lt;strong&gt;AI Agents Solving CTFs&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;[1] Cybersecurity AI team, "Cybersecurity AI: The World's Top AI Agent for Security CTF," arXiv:2512.02654, 2025. https://arxiv.org/abs/2512.02654&lt;/p&gt; 
&lt;p&gt;[2] Palisade Research, "Hacking CTFs with Plain Agents," arXiv:2412.02776, 2024. https://arxiv.org/abs/2412.02776&lt;/p&gt; 
&lt;p&gt;[3] Abramovich et al., "EnIGMA: Interactive Tools Substantially Assist LM Agents in Finding Security Vulnerabilities," ICML 2025 (arXiv:2409.16165). https://arxiv.org/abs/2409.16165&lt;/p&gt; 
&lt;p&gt;[4] Amazon Science, "CTF-Dojo: Training Language Model Agents to Find Vulnerabilities," arXiv:2508.18370, 2025. https://arxiv.org/abs/2508.18370&lt;/p&gt; 
&lt;p&gt;[5] Tsinghua / Huazhong University, "Multi-Agent Framework for CTF Challenges," MDPI Applied Sciences, 2025. https://www.mdpi.com/2076-3417/15/13/7159&lt;/p&gt; 
&lt;p&gt;[6] Evolve-CTF team, "Capture the Flags: Family-Based Evaluation via Semantics-Preserving Transformations," arXiv, 2025. https://arxiv.org/html/2602.05523v1&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;CTF Benchmarks&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[7] Andy K. Zhang, Neil Perry et al., "CyBench: Framework for Evaluating Cybersecurity Capabilities of Language Models," arXiv:2408.08926, 2024. https://arxiv.org/abs/2408.08926&lt;/p&gt; 
&lt;p&gt;[8] NYU-LLM-CTF team, "NYU CTF Bench: Scalable Open-Source Benchmark for LLMs in Offensive Security," NeurIPS 2024 (arXiv:2406.05590). https://arxiv.org/abs/2406.05590&lt;/p&gt; 
&lt;p&gt;[9] "InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback," NeurIPS 2023. https://openreview.net/forum?id=fvKaLF1ns8&lt;/p&gt; 
&lt;p&gt;[10] "CTFusion: CTF-based Benchmark for LLM Agent Evaluation," OpenReview / ICLR, 2026. https://openreview.net/forum?id=2zQJHLbyqM&lt;/p&gt; 
&lt;p&gt;[11] "CTFTiny," arXiv:2508.05674, 2025. https://arxiv.org/abs/2508.05674&lt;/p&gt; 
&lt;p&gt;[12] "AIRTBench: Autonomous AI Red Teaming Benchmark," arXiv:2506.14682, 2025. https://arxiv.org/abs/2506.14682&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;LLM Exploit Capabilities&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[13] Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, Daniel Kang, "LLM Agents can Autonomously Hack Websites," arXiv:2402.06664, 2024. https://arxiv.org/abs/2402.06664&lt;/p&gt; 
&lt;p&gt;[14] Fang, Bindu, Gupta, Kang, "LLM Agents can Autonomously Exploit One-day Vulnerabilities," arXiv:2404.08144, 2024. https://arxiv.org/abs/2404.08144&lt;/p&gt; 
&lt;p&gt;[15] Zhu, Kellermann, Gupta, Li, Fang, Bindu, Kang, "Teams of LLM Agents can Exploit Zero-Day Vulnerabilities," arXiv:2406.01637, 2024. https://arxiv.org/abs/2406.01637&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Automated Penetration Testing&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[16] Deng et al., "PentestGPT: Evaluating and Harnessing LLMs for Automated Penetration Testing," USENIX Security 2024 (Distinguished Artifact), arXiv:2308.06782. https://arxiv.org/abs/2308.06782&lt;/p&gt; 
&lt;p&gt;[17] "D-CIPHER: Dynamic Collaborative Intelligent Multi-Agent System for Offensive Security," arXiv:2502.10931, 2025. https://arxiv.org/abs/2502.10931&lt;/p&gt; 
&lt;p&gt;[18] "xOffense: AI-Driven Autonomous Penetration Testing," arXiv:2509.13021, 2025. https://arxiv.org/abs/2509.13021&lt;/p&gt; 
&lt;p&gt;[19] "VulnBot: Autonomous Penetration Testing Multi-Agent Framework," arXiv:2501.13411, 2025. https://arxiv.org/abs/2501.13411&lt;/p&gt; 
&lt;p&gt;[20] "PentestAgent: Incorporating LLM Agents to Automated Penetration Testing," AsiaCCS 2025 (arXiv:2411.05185). https://arxiv.org/abs/2411.05185&lt;/p&gt; 
&lt;p&gt;[21] "Construction and Evaluation of LLM-based Agents for Semi-Autonomous Penetration Testing," arXiv, 2025. https://arxiv.org/html/2502.15506v1&lt;/p&gt; 
&lt;p&gt;[22] "AutoPentest / Structured Attack Trees on HackTheBox," arXiv:2505.10321, 2025. https://arxiv.org/abs/2505.10321&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Automated Exploit Generation&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[23] Harbin Institute of Technology, "PwnGPT: Automatic Exploit Generation Based on Large Language Models," ACL 2025. https://aclanthology.org/2025.acl-long.562/&lt;/p&gt; 
&lt;p&gt;[24] UNSW Sydney / CSIRO, "Good News for Script Kiddies? Evaluating LLMs for Automated Exploit Generation," arXiv:2505.01065, 2025. https://arxiv.org/abs/2505.01065&lt;/p&gt; 
&lt;p&gt;[25] "From Rookie to Expert: Manipulating LLMs for Automated Vulnerability Exploitation," arXiv:2512.22753, 2025. https://arxiv.org/abs/2512.22753&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Google Project Zero&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[26] Google Project Zero, "Project Naptime: Evaluating Offensive Security Capabilities of LLMs," 2024. https://projectzero.google/2024/06/project-naptime.html&lt;/p&gt; 
&lt;p&gt;[27] Google Project Zero + DeepMind, "From Naptime to Big Sleep: Using LLMs to Catch Vulnerabilities in Real-World Code," 2024. https://projectzero.google/2024/10/from-naptime-to-big-sleep.html&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;DARPA Competitions&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[28] Avgerinos, Brumley et al. (ForAllSecure), "The Mayhem Cyber Reasoning System," IEEE Security &amp;amp; Privacy, 2016/2018. https://users.umiacs.umd.edu/~tudor/courses/ENEE657/Fall19/papers/Avgerinos18.pdf&lt;/p&gt; 
&lt;p&gt;[29] DARPA, "AI Cyber Challenge (AIxCC)," 2023-2025. https://www.darpa.mil/research/programs/ai-cyber&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Capability Assessments&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[30] Bhatt, Chennabasappa et al. (Meta), "Purple Llama CyberSecEval (1, 2, 3)," arXiv:2312.04724, 2023-2024. https://arxiv.org/abs/2312.04724&lt;/p&gt; 
&lt;p&gt;[31] MITRE, "OCCULT: Evaluating LLMs for Offensive Cyber Operation Capabilities," arXiv:2502.15797, 2025. https://arxiv.org/abs/2502.15797&lt;/p&gt; 
&lt;p&gt;[32] "Catastrophic Cyber Capabilities Benchmark (3CB)," arXiv:2410.09114, 2024. https://arxiv.org/abs/2410.09114&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Vulnerability Detection&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[33] "LLM4Vuln: Unified Evaluation for LLMs' Vulnerability Reasoning," arXiv:2401.16185, 2024. https://arxiv.org/abs/2401.16185&lt;/p&gt; 
&lt;p&gt;[34] "All You Need Is A Fuzzing Brain," arXiv:2509.07225, 2025. https://arxiv.org/abs/2509.07225&lt;/p&gt; 
&lt;p&gt;[35] "SecLLMHolmes: LLMs Cannot Reliably Identify Security Vulnerabilities (Yet?)," IEEE S&amp;amp;P 2024, arXiv:2312.12575. https://arxiv.org/abs/2312.12575&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Competitions and Datasets&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[36] Debenedetti, Rando et al. (ETH Zurich), "Dataset and Lessons from 2024 SaTML LLM Capture-the-Flag Competition," NeurIPS 2024 (arXiv:2406.07954). https://arxiv.org/abs/2406.07954&lt;/p&gt; 
&lt;p&gt;[37] Schulhoff et al., "HackAPrompt: Exposing Systemic Vulnerabilities of LLMs via Global Prompt Hacking Competition," EMNLP 2023. https://arxiv.org/abs/2311.16119&lt;/p&gt; 
&lt;p&gt;[38] "DEF CON 31 AI Village CTF," Kaggle, 2023. https://www.kaggle.com/competitions/ai-village-capture-the-flag-defcon31/&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Sociology of CTFs&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[39] "Cybersecurity Knowledge and Skills Taught in CTF Challenges," Computers &amp;amp; Security (Elsevier), 2020. https://www.sciencedirect.com/science/article/abs/pii/S0167404820304272&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;HackTheBox + AI&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[40] Hack The Box, "HTB AI Range Launch," 2025. https://www.hackthebox.com/blog/htb-ai-range-launch&lt;/p&gt; 
&lt;p&gt;[41] "Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security," arXiv:2406.07561, 2024. https://arxiv.org/abs/2406.07561&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Data Source&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;[42] 0xdf, "Hack The Box Writeups," 0xdf.gitlab.io. https://0xdf.gitlab.io/ (First blood data collected for 423 machines, March 2017 -- October 2025)&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fthe-death-of-the-ctf-how-agentic-ai-is-reshaping-competitive-hacking&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>CTF</category>
      <category>AIAgent</category>
      <category>Competitive Hacking</category>
      <pubDate>Tue, 03 Mar 2026 18:58:57 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/the-death-of-the-ctf-how-agentic-ai-is-reshaping-competitive-hacking</guid>
      <dc:date>2026-03-03T18:58:57Z</dc:date>
      <dc:creator>Jacob Krell</dc:creator>
    </item>
    <item>
      <title>Anthropic and Claude: 2026 AI Powerhouse</title>
      <link>https://suzulabs.com/suzu-labs-blog/anthropic-and-claude-2026-ai-powerhouse</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/anthropic-and-claude-2026-ai-powerhouse" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/Claude%20and%20Anthropic%20Standoff-1.png" alt="Anthropic and Claude: 2026 AI Powerhouse" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In early 2026, the image of Anthropic as a cautious, safety-oriented "research lab" has effectively been replaced by its reality: a $380 billion enterprise software powerhouse.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In early 2026, the image of Anthropic as a cautious, safety-oriented "research lab" has effectively been replaced by its reality: a $380 billion enterprise software powerhouse.&lt;/p&gt; 
&lt;p&gt;Between a massive release of new models and a high-stakes standoff with the U.S. government, Anthropic is currently navigating the most volatile month in its history. Here is the breakdown of what is actually happening.&lt;/p&gt; 
&lt;h3&gt;The Tech: Claude Now "Operates" Computers&lt;/h3&gt; 
&lt;p&gt;Anthropic just released &lt;span style="font-weight: normal;"&gt;Claude 4.6 (Sonnet and Opus)&lt;/span&gt;, and the focus has shifted from conversation to &lt;span style="font-weight: normal;"&gt;Computer Use.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;While previous models could browse the web,&amp;nbsp;Claude 4.6 is built to visually interpret a computer desktop. It doesn’t just call an API; it looks at a screen, moves the cursor, and clicks buttons.&lt;/p&gt; 
&lt;p&gt;To support this, Anthropic acquired &lt;strong&gt;Vercept&lt;/strong&gt; this week, a startup that specialized in AI perception. The goal is to move Claude toward 100% reliability in navigating standard office software like spreadsheets, internal databases, and CRM tools that were not&amp;nbsp;built for AI. Current benchmarks show Claude’s success rate in these environments has jumped from 15% to over 72% in the last year.&lt;/p&gt; 
&lt;h3&gt;The Business: A $14 Billion Run-Rate&lt;/h3&gt; 
&lt;p&gt;The company recently closed a &lt;span style="font-weight: normal;"&gt;$30 billion Series G funding round&lt;/span&gt;, valuing it at &lt;span style="font-weight: normal;"&gt;$380 billion&lt;/span&gt;. This puts Anthropic in the same valuation league as SpaceX and Coca-Cola.&lt;/p&gt; 
&lt;p&gt;The growth is driven by a pivot toward workhorse&amp;nbsp;tools:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Claude Code:&lt;/strong&gt; This specialized engineering tool is now generating $2.5 billion in annual revenue, doubling its size since January 1st.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Ad-Free Commitment:&lt;/strong&gt; Anthropic made a point to announce that Claude will remain ad-free, positioning itself as a clean&amp;nbsp;infrastructure provider for enterprises that don't want their data used to fuel a marketing engine.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;The 1M Token Context:&lt;/strong&gt; Both new 4.6 models now support a one-million-token window, allowing businesses to feed entire technical libraries or legal archives into a single prompt with high retrieval accuracy.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;The Conflict: The Pentagon Standoff&lt;/h3&gt; 
&lt;p&gt;The most significant story right now isn't about code; it's about national security. Anthropic is currently the only AI provider allowed on the U.S. military’s classified networks, and tensions have reached a breaking point.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;The Venezuela Incident:&lt;/strong&gt; Reports surfaced that Claude was utilized during the operation to capture Nicolás Maduro in January. This sparked a conflict between Anthropic’s red lines, specifically its ban on using AI for autonomous targeting or domestic surveillance and the Pentagon’s desire for "unfettered access."&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;The Ultimatum:&lt;/strong&gt; Defense Secretary Pete Hegseth has given Anthropic a deadline of Friday, 5:00 PM, to lift these restrictions. If they don't, the government has threatened to label Anthropic a "supply chain risk," which would effectively bar them from any federal contracts and potentially disrupt their private sector partnerships.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;The Policy: Responsible Scaling 3.0&lt;/h3&gt; 
&lt;p&gt;In the midst of this, Anthropic updated its &lt;span style="font-weight: normal;"&gt;Responsible Scaling Policy (RSP) to Version 3.0.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;The new version is noticeably more pragmatic. It explicitly states that Anthropic will no longer delay development of powerful models unilaterally if it believes its competitors are moving forward. It’s a "competitor-contingent" policy: they will be as safe as the market allows them to be without losing their lead.&lt;/p&gt; 
&lt;p&gt;Critics have called this a safety loophole,&amp;nbsp;but for Anthropic, it’s a necessary adjustment to a 2026 reality where AI is now central to global economic and military power.&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fanthropic-and-claude-2026-ai-powerhouse&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Supply Chain Security</category>
      <category>AI Security</category>
      <category>Government Security</category>
      <pubDate>Thu, 26 Feb 2026 18:02:17 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/anthropic-and-claude-2026-ai-powerhouse</guid>
      <dc:date>2026-02-26T18:02:17Z</dc:date>
      <dc:creator>Hannah Perez</dc:creator>
    </item>
    <item>
      <title>Simply Offensive Podcast: Exploring AI Vulnerabilities in Cybersecurity with Mike Bell of Suzu Labs</title>
      <link>https://suzulabs.com/suzu-labs-blog/simply-offensive-podcast-exploring-ai-vulnerabilities-in-cybersecurity-with-mike-bell</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/simply-offensive-podcast-exploring-ai-vulnerabilities-in-cybersecurity-with-mike-bell" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/SO%20Podcast%20Image.jpg" alt="Simply Offensive Podcast: Exploring AI Vulnerabilities in Cybersecurity with Mike Bell of Suzu Labs" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In today’s rapidly evolving technological landscape, the convergence of artificial intelligence (AI) and cybersecurity is becoming increasingly significant. In this episode of Simply Offensive, host Phillip Wylie converses with Mike Bell, CEO and founder of Suzu Labs, an innovative firm specializing in cybersecurity consulting and AI software. Together, they explore pressing issues in AI security and share invaluable insights for businesses looking to fortify their defenses.&lt;br&gt;&lt;br&gt;Understanding Cybersecurity in the AI Era:&lt;br&gt;Mike Bell begins by discussing the current state of the consulting business, particularly in the fourth quarter when companies scramble to finalize budgets and secure their assets. He emphasizes the importance of maintaining an accurate inventory of applications and assets, which is crucial for effective security measures. As Bell notes, "the first thing in any security program should be an accurate asset inventory of whatever you're trying to secure."&lt;br&gt;&lt;br&gt;The Evolution of Security Threats:&lt;br&gt;As a former military personnel with extensive experience in cyber and IT, Bell shares his journey from military service to building AI systems. He highlights the convergence of security and AI, where many companies are either focusing on one or the other. At Suzu Labs, they strive to bridge this gap, offering clients a comprehensive perspective on both fields. Bell’s technical background, reinforced by certifications like OSCP, allows him to engage deeply with both the coding and strategic aspects of cybersecurity.&lt;br&gt;&lt;br&gt;The OWASP Top 10 for LLMs:&lt;br&gt;A significant portion of the discussion revolves around the OWASP Top 10 for Large Language Models (LLMs). Bell explains that OWASP, the Open Web Application Security Project, has developed a list of vulnerabilities that AI systems can face, which now includes prompt injection, training data poisoning, and sensitive information disclosures among others. He elaborates on the concept of prompt injection, particularly indirect prompt injection, where attackers manipulate AI behavior through crafted inputs to extract unauthorized data. This highlights the critical need for robust defenses against such vulnerabilities.&lt;br&gt;&lt;br&gt;RAG Systems and Their Vulnerabilities:&lt;br&gt;Bell introduces the concept of Retrieval Augmented Generation (RAG), which combines vector databases with LLMs to enhance the AI's contextual understanding. However, he warns that this approach can introduce vulnerabilities, especially if the RAG database contains poisoned data. "Attackers don’t necessarily need to control the user’s input; they just need to inject poisoned data into the database," Bell explains. This emphasizes the importance of securing not just the AI model itself, but also the data it utilizes.&lt;br&gt;&lt;br&gt;Key Takeaways:&lt;br&gt;As businesses increasingly rely on AI technologies, understanding the associated security risks becomes paramount. Maintaining a comprehensive asset inventory is essential for effective cybersecurity. The OWASP Top 10 for LLMs provides crucial guidance on potential vulnerabilities that organizations must address. Additionally, the integration of systems like RAG can enhance capabilities but also requires careful consideration of data integrity and security measures.&lt;br&gt;&lt;br&gt;Conclusion:&lt;br&gt;In conclusion, the intersection of AI and cybersecurity presents both opportunities and challenges for organizations. As highlighted by Mike Bell, proactive measures and continuous vigilance are vital in navigating this complex landscape. By understanding the latest security threats and implementing robust strategies, businesses can better protect themselves against the evolving nature of cyber threats.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In today’s rapidly evolving technological landscape, the convergence of artificial intelligence (AI) and cybersecurity is becoming increasingly significant. In this episode of Simply Offensive, host Phillip Wylie converses with Mike Bell, CEO and founder of Suzu Labs, an innovative firm specializing in cybersecurity consulting and AI software. Together, they explore pressing issues in AI security and share invaluable insights for businesses looking to fortify their defenses.&lt;br&gt;&lt;br&gt;Understanding Cybersecurity in the AI Era:&lt;br&gt;Mike Bell begins by discussing the current state of the consulting business, particularly in the fourth quarter when companies scramble to finalize budgets and secure their assets. He emphasizes the importance of maintaining an accurate inventory of applications and assets, which is crucial for effective security measures. As Bell notes, "the first thing in any security program should be an accurate asset inventory of whatever you're trying to secure."&lt;br&gt;&lt;br&gt;The Evolution of Security Threats:&lt;br&gt;As a former military personnel with extensive experience in cyber and IT, Bell shares his journey from military service to building AI systems. He highlights the convergence of security and AI, where many companies are either focusing on one or the other. At Suzu Labs, they strive to bridge this gap, offering clients a comprehensive perspective on both fields. Bell’s technical background, reinforced by certifications like OSCP, allows him to engage deeply with both the coding and strategic aspects of cybersecurity.&lt;br&gt;&lt;br&gt;The OWASP Top 10 for LLMs:&lt;br&gt;A significant portion of the discussion revolves around the OWASP Top 10 for Large Language Models (LLMs). Bell explains that OWASP, the Open Web Application Security Project, has developed a list of vulnerabilities that AI systems can face, which now includes prompt injection, training data poisoning, and sensitive information disclosures among others. He elaborates on the concept of prompt injection, particularly indirect prompt injection, where attackers manipulate AI behavior through crafted inputs to extract unauthorized data. This highlights the critical need for robust defenses against such vulnerabilities.&lt;br&gt;&lt;br&gt;RAG Systems and Their Vulnerabilities:&lt;br&gt;Bell introduces the concept of Retrieval Augmented Generation (RAG), which combines vector databases with LLMs to enhance the AI's contextual understanding. However, he warns that this approach can introduce vulnerabilities, especially if the RAG database contains poisoned data. "Attackers don’t necessarily need to control the user’s input; they just need to inject poisoned data into the database," Bell explains. This emphasizes the importance of securing not just the AI model itself, but also the data it utilizes.&lt;br&gt;&lt;br&gt;Key Takeaways:&lt;br&gt;As businesses increasingly rely on AI technologies, understanding the associated security risks becomes paramount. Maintaining a comprehensive asset inventory is essential for effective cybersecurity. The OWASP Top 10 for LLMs provides crucial guidance on potential vulnerabilities that organizations must address. Additionally, the integration of systems like RAG can enhance capabilities but also requires careful consideration of data integrity and security measures.&lt;br&gt;&lt;br&gt;Conclusion:&lt;br&gt;In conclusion, the intersection of AI and cybersecurity presents both opportunities and challenges for organizations. As highlighted by Mike Bell, proactive measures and continuous vigilance are vital in navigating this complex landscape. By understanding the latest security threats and implementing robust strategies, businesses can better protect themselves against the evolving nature of cyber threats.&lt;/p&gt;  
&lt;p style="text-align: center;"&gt;&lt;span&gt;&lt;a href="https://youtu.be/9Of_8xhfT60?si=Kssvwh1t5jsQi83P"&gt;YouTube&lt;/a&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p style="text-align: center;"&gt;&lt;span&gt;&lt;a href="https://open.spotify.com/episode/62aUIVkzLe5SvTf6MkxZ6X?"&gt;Spotify&lt;/a&gt;&lt;/span&gt;&lt;span&gt;&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fsimply-offensive-podcast-exploring-ai-vulnerabilities-in-cybersecurity-with-mike-bell&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Cybersecurity</category>
      <category>Prompt Injection</category>
      <category>RAG Systems</category>
      <category>AI Security</category>
      <category>OWASP</category>
      <pubDate>Thu, 12 Feb 2026 18:08:04 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/simply-offensive-podcast-exploring-ai-vulnerabilities-in-cybersecurity-with-mike-bell</guid>
      <dc:date>2026-02-12T18:08:04Z</dc:date>
      <dc:creator>Phillip Wylie</dc:creator>
    </item>
    <item>
      <title>Under Armour Breach: What The Forum Data Actually Shows</title>
      <link>https://suzulabs.com/suzu-labs-blog/under-armour-breach-what-the-forum-data-actually-shows</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/under-armour-breach-what-the-forum-data-actually-shows" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/Lock%20Blog%20Image.jpg" alt="Under Armour Breach: What The Forum Data Actually Shows" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;On January 18, 2026, the Everest ransomware group made good on their threat and released Under Armour customer data to BreachForums. Two months earlier, Everest had added Under Armour to their leak site with a seven-day deadline. The company didn't pay. Now&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href="https://haveibeenpwned.com/Breach/UnderArmour"&gt;72.7 million email addresses are sitting in Have I Been Pwned&lt;/a&gt;, and Under Armour still hasn't publicly acknowledged the incident.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;On January 18, 2026, the Everest ransomware group made good on their threat and released Under Armour customer data to BreachForums. Two months earlier, Everest had added Under Armour to their leak site with a seven-day deadline. The company didn't pay. Now&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href="https://haveibeenpwned.com/Breach/UnderArmour"&gt;72.7 million email addresses are sitting in Have I Been Pwned&lt;/a&gt;, and Under Armour still hasn't publicly acknowledged the incident.&lt;/p&gt;  
&lt;p&gt;We analyzed the leaked data and the forum discussion around it. Here's what we found.&lt;/p&gt; 
&lt;h2&gt;The Initial Announcement&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image-png-2.png?width=3707&amp;amp;height=1127&amp;amp;name=image-png-2.png" width="3707" height="1127" style="width: 3707px; height: auto; max-width: 100%;" alt="Under Armor BreachForums"&gt;&lt;br&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The forum post from user "thelastwhitehat" claimed 343 GB of sensitive data including "full names, email addresses, geographic locations, genders, purchase histories and preferences, employee contact details, and more." Everest's original claims were even broader, including phone numbers, physical addresses, loyalty program details, and preferred stores.&lt;/p&gt; 
&lt;p&gt;That's a significant amount of PII if accurate. But forum users who actually downloaded and analyzed the data found something different.&lt;/p&gt; 
&lt;h2&gt;What's Actually In The Leak&lt;/h2&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image-png.png?width=2551&amp;amp;height=1206&amp;amp;name=image-png.png" width="2551" height="1206" style="width: 2551px; height: auto; max-width: 100%;" alt="Actually in the leak"&gt;&lt;/p&gt; 
&lt;p&gt;Within 24 hours of the data hitting the forums, users started reporting discrepancies. User "ThinkingOne" noted: "there do not appear to be any phone numbers in here. There is a phone number header in some of the files, but no actual phone numbers. Also, few/no last names, no addresses."&lt;/p&gt; 
&lt;p&gt;That's a meaningful distinction. Headers exist for sensitive fields, but the data isn't there. What Everest claimed and what they actually exfiltrated are two different things.&lt;/p&gt; 
&lt;h2&gt;The File Structure&lt;/h2&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/image-png-1.png?width=2182&amp;amp;height=1449&amp;amp;name=image-png-1.png" width="2182" height="1449" style="width: 2182px; height: auto; max-width: 100%;" alt="Breach file structure"&gt;&lt;/p&gt; 
&lt;p&gt;The leaked archive contains 29 CSV files totaling 191,577,361 records. The largest files are mobile push notification exports (69M and 71M records respectively), followed by Bluecore marketing exports and loyalty customer data.&lt;/p&gt; 
&lt;p&gt;The file naming convention tells the story. These are marketing system exports, not production database dumps:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Bluecore exports&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Email marketing platform data&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;MobilePush_TotalGenderData&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Push notification targeting&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;NorthAmerica_MasterPush_Segmentation&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Marketing segmentation&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Customer_360_SFMC_Preferred_Store&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Salesforce Marketing Cloud data&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;RatingsAndReviews_SourceData&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Customer review data&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;RetailPurchases_Last30_SourceData&lt;/strong&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;- Recent transaction data&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This is marketing tech&amp;nbsp;infrastructure. Email addresses, purchase behavior, marketing preferences. Valuable for targeted phishing campaigns, but not the full PII profiles Everest advertised.&lt;/p&gt; 
&lt;h2&gt;What This Means For Affected Customers&lt;/h2&gt; 
&lt;p&gt;The 72.7 million unique email addresses are real. If you've shopped at Under Armour, your email is likely in this dataset along with your purchase history and marketing preferences. That's enough for convincing phishing attempts that reference your actual buying behavior.&lt;/p&gt; 
&lt;p&gt;What's probably not exposed: your home address, phone number, or payment information. The forum analysis suggests those fields either weren't captured by the marketing systems that were compromised, or weren't populated in the exports Everest obtained.&lt;/p&gt; 
&lt;h2&gt;What This Means For Under Armour&lt;/h2&gt; 
&lt;p&gt;Two months of silence while a class action lawsuit gets filed and millions of customers check breach notification services is a communications failure regardless of what's in the data. Customers are making assumptions, and those assumptions are probably worse than reality.&lt;/p&gt; 
&lt;p&gt;More importantly, this breach reveals something about their security posture. Marketing platforms sit at the edge of the network. They integrate with everything. They often get stood up by marketing teams without going through normal security review processes. Most security programs have better visibility into production databases than they do into martech, and that blind spot is exactly what Everest exploited here.&lt;/p&gt; 
&lt;p&gt;Understanding where the initial access came from and why these systems were accessible is what prevents the next incident. Cleanup without root cause analysis is just waiting for round two.&lt;/p&gt; 
&lt;h2&gt;About Everest&lt;/h2&gt; 
&lt;p&gt;Everest has been operating since 2020, making them unusually long-lived for a ransomware group.&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href="https://www.theregister.com/2026/01/21/under_armour_everest/"&gt;According to security researchers&lt;/a&gt;, they run three parallel revenue streams: double extortion ransomware, network access brokerage (selling access to other crews), and an insider recruitment program. Under Armour was one target in a portfolio that includes aerospace contractors, power grid operators, and government agencies.&lt;/p&gt; 
&lt;h2&gt;Timeline&lt;/h2&gt; 
&lt;table style="border-collapse: collapse; table-layout: fixed; margin-left: auto; margin-right: auto; border: 1px solid #99acc2;"&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Date&lt;/th&gt; 
   &lt;th&gt;Event&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;November 15, 2025&lt;/td&gt; 
   &lt;td&gt;Breach occurs (per forum claims)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;November 2025&lt;/td&gt; 
   &lt;td&gt;Everest adds Under Armour to leak site with 7-day deadline&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;November 24, 2025&lt;/td&gt; 
   &lt;td&gt;Class action lawsuit filed&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;January 18, 2026&lt;/td&gt; 
   &lt;td&gt;Data published on BreachForums&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;January 19, 2026&lt;/td&gt; 
   &lt;td&gt;Forum users report phone number fields are empty&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;January 21, 2026&lt;/td&gt; 
   &lt;td&gt;HIBP ingests 72.7M records&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Present&lt;/td&gt; 
   &lt;td&gt;Under Armour has not publicly acknowledged the incident&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Funder-armour-breach-what-the-forum-data-actually-shows&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Threat Intelligence</category>
      <category>Cybersecurity</category>
      <pubDate>Fri, 30 Jan 2026 16:09:25 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/under-armour-breach-what-the-forum-data-actually-shows</guid>
      <dc:date>2026-01-30T16:09:25Z</dc:date>
      <dc:creator>Mike Bell</dc:creator>
    </item>
    <item>
      <title>Brightspeed Breach: Crimson Collective and the Infostealer Problem</title>
      <link>https://suzulabs.com/suzu-labs-blog/brightspeed-breach-crimson-collective-and-the-infostealer-problem</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/brightspeed-breach-crimson-collective-and-the-infostealer-problem" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/Netflix%20Spotufy%20hacker%20blog%20image.jpg" alt="Brightspeed Breach: Crimson Collective and the Infostealer Problem" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Recently Crimson Collective claimed they breached Brightspeed and grabbed 1 million+ customer records. The list of data they claim to have accessed includes names, billing addresses, partial payment data, and more. There was a class action filed three days later. Brightspeed says they're investigating. No confirmation of data exfiltration yet.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Recently Crimson Collective claimed they breached Brightspeed and grabbed 1 million+ customer records. The list of data they claim to have accessed includes names, billing addresses, partial payment data, and more. There was a class action filed three days later. Brightspeed says they're investigating. No confirmation of data exfiltration yet.&lt;/p&gt; 
&lt;p&gt;Most coverage stops there, but understanding who Crimson Collective is tells us more about what's actually at risk than the headline numbers.&lt;/p&gt; 
&lt;p&gt;&amp;nbsp;&lt;/p&gt; 
&lt;h2&gt;Crimson Collective's Track Record&lt;/h2&gt; 
&lt;p&gt;This group emerged in September 2025 and hit Red Hat's internal GitLab the following month. 570 GB from 28,000+ repositories. The alleged haul included Customer Engagement Reports with infrastructure designs, authentication tokens, and database connection strings. They've gone after Nintendo and Nissan with similar attacks.&lt;/p&gt; 
&lt;p&gt;Based on our research they target cloud-hosted environments and development infrastructure, not customer databases. They're after the systems that build and maintain applications, not the applications themselves.&lt;/p&gt; 
&lt;p&gt;If the Brightspeed claims are legitimate, Crimson Collective probably didn't limit themselves to exporting a customer table. The attack surface could extend into operational infrastructure. That's a different kind of problem than a standard PII breach.&lt;/p&gt; 
&lt;h2&gt;The Infostealer Correlation Problem&lt;/h2&gt; 
&lt;p&gt;Vidar infostealer logs with Brightspeed customer credentials were already circulating on Russian Market before Crimson Collective posted anything. Discord logins, Netflix, Verizon Wireless, Spotify, Roblox. Credentials harvested from compromised customer devices over the past year.&lt;/p&gt; 
&lt;p&gt;Now those same people potentially have their billing addresses and service records exposed in a separate incident.&lt;/p&gt; 
&lt;p&gt;Cross-reference the two datasets and you've got everything needed for targeted phishing. The infostealer logs tell you what services someone uses. The breach data tells you where they live and how they pay. Attackers know how to combine data sources. Defenders often don't think about the correlation problem until it's too late.&lt;/p&gt; 
&lt;p&gt;This is the compounding effect that doesn't make headlines. A breach looks bad. A breach combined with existing credential leaks is worse. The people affected aren't just dealing with one exposure. They're dealing with a more complete picture of their digital lives being assembled from multiple sources.&lt;/p&gt; 
&lt;h2&gt;Infrastructure Red Flags&lt;/h2&gt; 
&lt;p&gt;Brightspeed IP addresses are appearing in active SOCKS proxy lists being sold on dark web forums.&lt;/p&gt; 
&lt;p&gt;This could mean a few things:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Compromised customer devices being used as proxy nodes.&lt;/li&gt; 
 &lt;li&gt;Broader infrastructure compromise beyond customer data.&lt;/li&gt; 
 &lt;li&gt;Residential proxy networks leveraging Brightspeed's network for anonymization.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Any of these scenarios warrants investigation beyond standard breach response. If customer devices are being recruited into proxy networks, that's an ongoing problem that doesn't end when the breach investigation closes. Those devices stay compromised. The proxy operators keep using them.&lt;/p&gt; 
&lt;h2&gt;The Investigation Trap&lt;/h2&gt; 
&lt;p&gt;Brightspeed is stuck in a difficult position. They can't confirm or deny without completing forensics. But every day of silence lets the narrative build. Crimson Collective knows this. The Telegram posts and data samples are designed to create pressure.&lt;/p&gt; 
&lt;p&gt;The company has to balance thorough investigation against reputational damage from appearing unresponsive. There's no clean answer. Move too fast and you risk making statements you have to walk back. Move too slow and the court of public opinion renders its verdict without you.&lt;/p&gt; 
&lt;p&gt;The class action filing three days after unverified claims is aggressive. Brightspeed hasn't confirmed data exfiltration. The plaintiffs are either confident the breach is real or they're positioning early to lead the litigation when confirmation comes.&lt;/p&gt; 
&lt;p&gt;Either way, it puts pressure on Brightspeed to disclose faster than their forensics team might want or deserve.&lt;/p&gt; 
&lt;h2&gt;Why Telecom Providers Are Targets&lt;/h2&gt; 
&lt;p&gt;Telecom providers sit on valuable data by default. They have billing relationships with millions of customers. Names, addresses, payment methods, service records, appointment history. All in one place.&lt;/p&gt; 
&lt;p&gt;That data is useful for fraud. The customer base is large enough that even unverified breach claims generate headlines and regulatory attention. And unlike a retailer where customers might shop once and disappear, telecom customers have ongoing relationships. Monthly billing. Service calls. Equipment records. Years of data on each customer.&lt;/p&gt; 
&lt;p&gt;The attackers know this. The regulators know this. The class action lawyers know this. Telecom providers need breach response playbooks that account for all of these pressures simultaneously balancing&amp;nbsp;threat actor tactics, legal exposure, regulatory requirements, and public perception. Technical forensics is one piece. Managing the other three is where most organizations struggle.&lt;/p&gt; 
&lt;h2&gt;What Happens Next&lt;/h2&gt; 
&lt;p&gt;If Crimson Collective's claims are legitimate, expect additional data samples to surface. That's their playbook. Pressure through incremental disclosure. Each new sample extends the news cycle and increases pressure on Brightspeed to respond.&lt;/p&gt; 
&lt;p&gt;Brightspeed customers should assume their data is compromised until proven otherwise. That means watching for phishing attempts that reference their service details. Being skeptical of calls claiming to be from Brightspeed support. Monitoring financial accounts for fraud.&lt;/p&gt; 
&lt;p&gt;For organizations watching this unfold, the lesson is about data correlation. Your customers' exposure doesn't exist in isolation. It exists alongside every other breach, every infostealer log, every credential dump they've been caught in. Defenders need to think about that aggregate picture, not just the breach in front of them.&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fbrightspeed-breach-crimson-collective-and-the-infostealer-problem&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Threat Intelligence</category>
      <category>Infostealers</category>
      <category>Credential Theft</category>
      <pubDate>Tue, 20 Jan 2026 00:07:21 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/brightspeed-breach-crimson-collective-and-the-infostealer-problem</guid>
      <dc:date>2026-01-20T00:07:21Z</dc:date>
      <dc:creator>Mike Bell</dc:creator>
    </item>
    <item>
      <title>When Grid Data Goes Dark Web</title>
      <link>https://suzulabs.com/suzu-labs-blog/when-grid-data-goes-dark-web</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/when-grid-data-goes-dark-web" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/Critical%20Infrasturcutre%20Blog%20Image.jpg" alt="When Grid Data Goes Dark Web" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Inside a threat actor's critical infrastructure targeting&lt;/h2&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;In January 2026, 139 gigabytes of engineering data from a U.S. power infrastructure company appeared for sale on an underground forum. The seller wanted 6.5 Bitcoin. The data included LiDAR point clouds of transmission line corridors, substation configurations, and vegetation mapping for three major utilities.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;h2&gt;Inside a threat actor's critical infrastructure targeting&lt;/h2&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;In January 2026, 139 gigabytes of engineering data from a U.S. power infrastructure company appeared for sale on an underground forum. The seller wanted 6.5 Bitcoin. The data included LiDAR point clouds of transmission line corridors, substation configurations, and vegetation mapping for three major utilities.&lt;/span&gt;&lt;/p&gt;  
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;The seller explicitly noted the data was "suitable for infrastructure analysis, modeling, risk assessment, or specialized research."&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;That language matters. The actor understands exactly what this data enables.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;What the Data Contains&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;The breach targeted an engineering firm that provides surveying and design services to electric utilities. The stolen files include:&lt;/span&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;800+ LiDAR point cloud files mapping transmission corridors&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;High-resolution orthophotos of substations&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;MicroStation design files with line configurations&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;Vegetation analysis along rights-of-way&lt;/span&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;For a utility or engineering firm, this is operational data. For an adversary, this is reconnaissance gold. The files map exactly where power lines run, how they're configured, what vegetation threatens them, and where substations connect to the grid.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Grid infrastructure has become a high-value target. Physical attacks on substations have increased in recent years. Cyber-physical attacks that combine digital intrusion with physical action remain a persistent concern in the intelligence community.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Data like this enables detailed planning. An adversary could identify vulnerable transmission corridors, understand redundancy patterns, or map critical interconnection points. The threat model here extends beyond financial cybercrime.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/Gemini_Generated_Image_66a0xl66a0xl66a0.png?width=414&amp;amp;height=414&amp;amp;name=Gemini_Generated_Image_66a0xl66a0xl66a0.png" width="414" height="414" alt="Gemini_Generated_Image_66a0xl66a0xl66a0" style="height: auto; max-width: 100%; width: 414px;"&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;&amp;nbsp;&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;The Access Method&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;This wasn't a sophisticated attack on industrial control systems. It wasn't a supply chain compromise or zero-day exploit. According to public reporting on the same threat actor, the likely access method was testing infostealer-harvested credentials against cloud file-sharing platforms.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Someone at the company had their browser credentials stolen by commodity malware. Those credentials weren't protected by MFA. The threat actor logged in and extracted 139GB of sensitive engineering data.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;The Pricing Signal&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;At 6.5 Bitcoin (roughly $600,000 at current prices), this is the highest-value individual listing we’ve observed from this actor. Compare that to a law firm breach listed at 0.09 Bitcoin or a furniture manufacturer at $1,500.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;The pricing reflects what the actor believes the data is worth to potential buyers. Critical infrastructure data commands a premium. The buyer pool for this data includes parties with resources and motivations beyond simple financial crime.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Defensive Lessons&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Organizations handling sensitive infrastructure data should treat that data like it's already being targeted. Specific recommendations:&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Segment sensitive project data.&lt;/strong&gt; Engineering files for critical infrastructure shouldn't sit on the same file-sharing platform as general corporate documents.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Enforce MFA without exception.&lt;/strong&gt; Especially for any system accessible from the internet. The credential that got tested was probably years old. MFA would have made it worthless.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Monitor access patterns.&lt;/strong&gt; Bulk downloads of sensitive files should trigger alerts. 139GB doesn't exfiltrate quietly unless no one is watching.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Vet third-party security.&lt;/strong&gt; Utilities often rely on engineering contractors who have weaker security postures. Your security extends to everyone with access to your data.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Assume the perimeter is porous.&lt;/strong&gt; Design controls assuming credentials will eventually leak. Because they will.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/Gemini_Generated_Image_bxevrkbxevrkbxev.png?width=370&amp;amp;height=370&amp;amp;name=Gemini_Generated_Image_bxevrkbxevrkbxev.png" width="370" height="370" alt="Gemini_Generated_Image_bxevrkbxevrkbxev" style="height: auto; max-width: 100%; width: 370px;"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;&amp;nbsp;&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;The Broader Pattern&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;This actor has listed data from 50+ organizations across 15 countries. Aviation. Healthcare. Government. Construction. Critical infrastructure is one target category among many. The common thread is opportunistic access via stolen credentials and absent MFA.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;The infostealer economy doesn't discriminate. It harvests everything. Threat actors like Zestix specialize in identifying the high-value targets within that ocean of compromised credentials.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Critical infrastructure organizations need to understand they're operating in this environment. The threat isn't hypothetical adversaries with nation-state resources. It's financially motivated actors selling grid data to the highest bidder.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Mike Bell&lt;/strong&gt; is Founder and CEO of Suzu Labs, building AI-powered platforms for meeting intelligence, business intelligence, and secure document processing. He brings a security-first perspective to threat analysis based on over two decades in cybersecurity spanning penetration testing, incident response, security architecture, and AI security.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fwhen-grid-data-goes-dark-web&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Power Grid</category>
      <category>Critical Infrastructure</category>
      <category>Threat Intelligence</category>
      <pubDate>Mon, 19 Jan 2026 23:15:56 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/when-grid-data-goes-dark-web</guid>
      <dc:date>2026-01-19T23:15:56Z</dc:date>
      <dc:creator>Mike Bell</dc:creator>
    </item>
    <item>
      <title>The $150,000 Password</title>
      <link>https://suzulabs.com/suzu-labs-blog/the-150000-password</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://suzulabs.com/suzu-labs-blog/the-150000-password" title="" class="hs-featured-image-link"&gt; &lt;img src="https://suzulabs.com/hubfs/cyber%20google%20newpress.jpg" alt="The $150,000 Password" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;How one threat actor turned stolen credentials into a global breach portfolio&lt;/h2&gt; 
&lt;p&gt;Between December 2025 and January 2026, a single threat actor posted &lt;span style="background-color: #ffffff;"&gt;25 data sales listings on a Russian-language cybercrime forum. The victims spanned 15 countries and every major sector from aviation to critical infrastructure. Prices ranged from free to $150,000.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;h2&gt;How one threat actor turned stolen credentials into a global breach portfolio&lt;/h2&gt; 
&lt;p&gt;Between December 2025 and January 2026, a single threat actor posted &lt;span style="background-color: #ffffff;"&gt;25 data sales listings on a Russian-language cybercrime forum. The victims spanned 15 countries and every major sector from aviation to critical infrastructure. Prices ranged from free to $150,000.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;The actor goes by "Zestix." And despite the sophisticated pricing and global reach, the attack method is almost embarrassingly simple.&lt;/p&gt; 
&lt;p&gt;No zero-days. No advanced malware. No chained exploits. Zestix parses old infostealer logs for cloud credentials and tests each one until something works. When MFA is absent, they walk right in through the front door.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;The Infostealer Economy&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;Infostealers like RedLine, Lumma, and Vidar have become commodity malware. An employee downloads a pirated game or clicks a malicious link. The malware quietly harvests every saved password from their browser. Those logs get sold in bulk on underground markets. Buyers like Zestix sift through them looking for corporate file-sharing URLs.&lt;/p&gt; 
&lt;p&gt;ShareFile. Nextcloud. OwnCloud. These platforms hold sensitive documents. They're often exposed to the internet. And they're frequently protected by nothing more than a username and password.&lt;/p&gt; 
&lt;p&gt;The barrier to entry is essentially zero. Parse the logs for corporate URLs, try the credentials, take whatever access you get.&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/Gemini_Generated_Image_9yevp79yevp79yev.png?width=417&amp;amp;height=417&amp;amp;name=Gemini_Generated_Image_9yevp79yevp79yev.png" width="417" height="417" alt="Gemini_Generated_Image_9yevp79yevp79yev" style="height: auto; max-width: 100%; width: 417px;"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;&amp;nbsp;&lt;/strong&gt;&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;The Victim Portfolio&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;The scale is remarkable. We’ve seen Zestix's forum activity since the public reports emerged in early 2026 spanning back to just September of 2025. The confirmed victims include:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;A U.S. engineering firm with LiDAR mapping data for major power utilities (listed for 6.5 Bitcoin, roughly $600,000)&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;A European airline with 77GB of aircraft maintenance programs and fleet configurations ($150,000)&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;A Canadian transit infrastructure project with geotechnical reports and construction risk assessments (1.8 Bitcoin)&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;A Brazilian military police healthcare provider (2.3 terabytes of medical records)&lt;/span&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;span style="background-color: #ffffff;"&gt;An Algerian logistics company with 123GB of customer data&lt;/span&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The common thread across all 50+ confirmed breaches? Lack of MFA on externally accessible file-sharing platforms.&lt;br&gt;&lt;br&gt;&lt;/p&gt; 
&lt;h4&gt;&lt;span style="background-color: #ffffff;"&gt;&lt;strong&gt;Beyond Credential Theft&lt;/strong&gt;&lt;/span&gt;&lt;/h4&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;Forum intelligence reveals Zestix operates at multiple capability tiers. The credential harvesting is volume play. But the actor has also shared detailed EDR evasion techniques for bypassing SentinelOne, provides operational support for investment fraud schemes, and claims to run real-time deepfake systems for video call social engineering.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="background-color: #ffffff;"&gt;The credential harvesting funds the operation. The higher-tier capabilities are available for high-value targets.&lt;br&gt;&lt;br&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;The Uncomfortable Truth&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;These companies weren't hacked by nation-state actors with unlimited resources. They weren't targeted by custom malware or sophisticated exploit chains. They were compromised because an employee's device got infected with commodity malware, and the organization never rotated the password or enabled a second factor.&lt;/p&gt; 
&lt;p&gt;Every single breach was preventable with basic security hygiene.&lt;br&gt;&lt;br&gt;&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;What Organizations Should Do&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;Enable MFA everywhere. Not SMS-based. FIDO2 keys or passkeys. Every externally accessible system, no exceptions.&lt;/p&gt; 
&lt;p&gt;Rotate passwords regularly. Some credentials in Zestix's portfolio sat in infostealer logs for years before exploitation. A malware infection from 2022 became a data breach in 2025.&lt;/p&gt; 
&lt;p&gt;Monitor for credential exposure. Services exist that scan dark web markets and infostealer dumps for your organization's credentials. When exposure is detected, immediate password reset and session revocation.&lt;/p&gt; 
&lt;p&gt;Assume breach. If you're running cloud file-sharing without MFA, operate under the assumption that someone already has the password.&lt;/p&gt; 
&lt;p&gt;The infostealer economy has made credential theft scalable and cheap. The only defense is making those credentials worthless through proper authentication controls.&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://suzulabs.com/hs-fs/hubfs/Gemini_Generated_Image_yj5jiyj5jiyj5jiy.png?width=403&amp;amp;height=403&amp;amp;name=Gemini_Generated_Image_yj5jiyj5jiyj5jiy.png" width="403" height="403" alt="Gemini_Generated_Image_yj5jiyj5jiyj5jiy" style="height: auto; max-width: 100%; width: 403px;"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Mike Bell&lt;/strong&gt; is Founder and CEO of Suzu Labs, building AI-powered platforms for meeting intelligence, business intelligence, and secure document processing. With over two decades in cybersecurity spanning penetration testing, incident response, security architecture, and AI security, he brings a security-first perspective to threat analysis.&lt;/p&gt;  
&lt;img src="https://track-na2.hubspot.com/__ptq.gif?a=243748608&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fsuzulabs.com%2Fsuzu-labs-blog%2Fthe-150000-password&amp;amp;bu=https%253A%252F%252Fsuzulabs.com%252Fsuzu-labs-blog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Critical Infrastructure</category>
      <category>Threat Intelligence</category>
      <category>Infostealers</category>
      <category>Credential Theft</category>
      <pubDate>Mon, 19 Jan 2026 23:08:12 GMT</pubDate>
      <guid>https://suzulabs.com/suzu-labs-blog/the-150000-password</guid>
      <dc:date>2026-01-19T23:08:12Z</dc:date>
      <dc:creator>Mike Bell</dc:creator>
    </item>
  </channel>
</rss>
