DARPA launched AIxCC to determine whether AI-driven cybersecurity tools could be developed to automatically secure critical infrastructure at the scale required. The AIxCC Semifinal Competition (ASC), held in 2024, proved that AI systems are capable of not only identifying but also successfully patching vulnerabilities, a major advancement in cybersecurity. During the competition, one team’s system uncovered a real-world zero-day vulnerability and inspired an additional vulnerability discovery by the cybersecurity community.
How it worked
Leading up to the ASC, teams developed Cyber Reasoning Systems (CRSs): advanced tools that combine established cybersecurity techniques with the latest AI technology. AIxCC received 42 CRS submissions and tested each against an identical corpus of challenge projects, each based on a real-world, open-source project: Jenkins, the Linux kernel, Nginx, SQLite3, and Apache Tika.
The AIxCC challenge projects (CPs) and challenge project vulnerabilities (CPVs) vary in size, scale, and complexity. To communicate the game details to a variety of audiences—including competitors, security researchers, policymakers, and students—the AIxCC Organizers developed code visualization videos to illustrate the scope of the code involved, the impact of vulnerability exposures, and the difficulty of the tasks faced by competitors.
To learn more, watch the playlist below and pay particular attention to the color-coded functions highlighted in each CP and CPV: green shows the full extent of the code in a particular CP; yellow narrows to code reachable from an interface exposed to competitors; and red isolates the path on which a CPV is triggered, showing the detailed diagnostic information typically used to exploit or patch such vulnerabilities.
The AIxCC Challenge Project development team discusses their approach to developing their specific Challenge Projects for the Semifinal Competition.
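The green/yellow/red classification described above boils down to reachability analysis over a call graph. The sketch below is purely illustrative (it is not AIxCC tooling, and every function name in it is hypothetical): it walks a toy call graph from an exposed interface to find reachable code, then colors functions the way the visualization videos do.

```python
# Illustrative sketch, not AIxCC code: color functions in a call graph
# green (in the CP), yellow (reachable from the exposed interface),
# or red (on the CPV trigger path). All names are made up.
from collections import deque

def reachable(call_graph, entry):
    """Return every function reachable from `entry` via BFS."""
    seen = {entry}
    queue = deque([entry])
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def color_functions(call_graph, entry, trigger_path):
    """Map each function to 'red', 'yellow', or 'green'."""
    yellow = reachable(call_graph, entry)
    colors = {}
    for fn in call_graph:
        if fn in trigger_path:
            colors[fn] = "red"        # on the path that triggers the CPV
        elif fn in yellow:
            colors[fn] = "yellow"     # reachable from the exposed interface
        else:
            colors[fn] = "green"      # part of the CP, but not reachable
    return colors

# Toy call graph: `parse_request` is the exposed interface; the CPV is
# triggered along parse_request -> decode_header -> memcpy_unchecked.
graph = {
    "main": ["init", "parse_request"],
    "init": [],
    "parse_request": ["decode_header", "log_request"],
    "decode_header": ["memcpy_unchecked"],
    "log_request": [],
    "memcpy_unchecked": [],
    "unused_helper": [],
}
colors = color_functions(
    graph, "parse_request",
    trigger_path={"parse_request", "decode_header", "memcpy_unchecked"},
)
```

Real CRSs work at vastly larger scale (the Linux kernel alone has tens of thousands of functions), but the same reachability framing tells a system where fuzzing and patching effort should be focused.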
The AI Cyber Challenge Semifinals
During the AIxCC Semifinals, 42 teams vied for a spot in the Finals by discovering and patching software vulnerabilities. Competitors used advanced Large Language Models to tackle challenges in complex systems such as the Linux kernel, SQLite, Nginx, Jenkins, and Tika.
Forty-two teams submitted novel Cyber Reasoning Systems and competed in the ASC. Seven of those were selected for the Small Business Track, and each received $1 million to enable entrepreneurial innovation. The top seven scoring teams at the ASC received $2 million each and will compete in the Final Competition.
Small Business Track teams
42-b3yond-6ug
LACROSSE
PANACEA
Shellphish
Trail of Bits
VERSATIL
Zellic
Finalist teams
42-b3yond-6ug
all_you_need_is_a_fuzzing_brain
LACROSSE
Shellphish
Team Atlanta
Theori
Trail of Bits
All Semifinal competitors
42-b3yond-6ug
AI Guard
Aitomatic_DAIC_Collaboration
Alien Illuminati R Us
all_you_need_is_a_fuzzing_brain
Autonomous Cyber
BlueWaterWhiteSmoke
Buffalo
CASAndra
CipherTen
Composure
Debug Dragons
DSU Trojans
Fire Eye
FoodieBoys
Fuzzland
HammerAI
Healing Touch
illinoiSE
IntelliAgents
Interceptive
Kalibr Research
KORIA
Kri Labs
LACROSSE
Lauretta Labs
LoneStarFuzz
MADBUGS
Mindrake
Mistborn_417
PANACEA
PicoloLabs
SashiKode
Shellphish
SODIUM-24 AI
Team Atlanta
Theori
Trail of Bits
VERSATIL
WASPS
wpAppDev
Zellic