// SEEKING TO JOIN OWASP GENAI SECURITY PROJECT AS A PHYSICAL AI INITIATIVE //
AI is escaping the data center. It's in warehouses, hospitals, roads, and living rooms.
Physical AI systems act in the world — and the security community hasn't caught up.
THREAT LANDSCAPE
Physical AI refers to AI systems that perceive and act upon the real world through sensors, actuators, and robotic platforms. Unlike chatbots and content generators, these systems have physical consequences — a misconfigured robot arm doesn't hallucinate text, it moves steel.
The attack surface is entirely different. Prompt injection in a robot's visual pipeline. Adversarial sensor spoofing. Firmware tampering in servo controllers. Unsafe envelope overrides through malicious I2C commands. These aren't theoretical — they're engineering realities today.
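One of the vectors above, unsafe envelope overrides, has a concrete defensive counterpart: enforce the motion envelope in software before any command reaches the bus, independent of firmware the attacker may control. The sketch below is illustrative only; the joint names and limit values are hypothetical, and real limits come from the mechanical datasheet.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JointLimits:
    """Software-side safe envelope for one joint (degrees, deg/s)."""
    min_pos: float
    max_pos: float
    max_vel: float

# Hypothetical limits for a three-joint arm (illustrative values only).
LIMITS = {
    "base":  JointLimits(-170.0, 170.0, 90.0),
    "elbow": JointLimits(-120.0, 120.0, 60.0),
    "wrist": JointLimits(-360.0, 360.0, 180.0),
}

def validate_command(joint: str, target_pos: float, target_vel: float) -> bool:
    """Reject any actuator command outside the envelope; fail closed."""
    lim = LIMITS.get(joint)
    if lim is None:
        return False  # unknown joint: refuse rather than pass through
    return (lim.min_pos <= target_pos <= lim.max_pos
            and 0.0 <= target_vel <= lim.max_vel)
```

The key design choice is that the check fails closed: a command for an unknown joint, or one outside the envelope, is dropped rather than forwarded.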
The security frameworks we have weren't written for systems that can tip over a shelf, block an exit, or pick a lock. That's the gap this initiative exists to close.
FRAMEWORK
Adapting STRIDE, PASTA, and attack tree methodologies for cyber-physical systems. Mapping sensor spoofing, actuator hijacking, and protocol abuse to real attack paths against robot arms, autonomous vehicles, and industrial control systems.
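An attack tree for a cyber-physical target can be kept as a small AND/OR structure and evaluated mechanically. The sketch below is a minimal, hypothetical example, not an artifact of the initiative; the goal names and leaf feasibility assessments are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """Attack-tree node: a goal reached via AND/OR of sub-goals, or a leaf."""
    name: str
    gate: str = "LEAF"            # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)
    feasible: bool = False        # leaf assessment from threat modeling

def is_feasible(node: Node) -> bool:
    """A goal is feasible if its gate's condition holds over its children."""
    if node.gate == "LEAF":
        return node.feasible
    results = [is_feasible(c) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Hypothetical tree for "hijack robot arm trajectory".
tree = Node("hijack trajectory", "OR", [
    Node("spoof vision pipeline", "AND", [
        Node("inject adversarial marker", feasible=True),
        Node("defeat sensor plausibility check", feasible=False),
    ]),
    Node("tamper servo firmware", "AND", [
        Node("reach update interface", feasible=True),
        Node("bypass signature check", feasible=True),
    ]),
])
```

Here the vision path fails (one AND leaf is blocked) while the firmware path succeeds, so the root goal is feasible and the firmware update interface is the control to prioritize.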
When an AI system causes physical harm, who responds? What's the containment procedure? How do you preserve evidence on an embedded controller? The Physical AI Appendix in the OWASP GenAI IR Guide 1.0 starts answering these questions.
Safety interlocks are not security controls. Procurement checklists for physical AI vendors. Policy language that covers autonomous physical action. Regulatory alignment for an environment that moves faster than legislation.
PUBLISHED WORK
The OWASP GenAI Incident Response Guide 1.0 includes a dedicated Physical AI Appendix — the first published OWASP guidance specifically addressing AI systems that interact with the physical world.
It covers containment procedures for physical systems, evidence preservation on embedded hardware, coordination between cybersecurity responders and operational safety teams, and the key questions IR teams need to ask when an AI with actuators is involved in an incident.
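Evidence preservation on embedded hardware starts with hashing the acquired image before any analysis tool touches it, and recording how it was taken. The sketch below is a generic illustration, not a procedure from the guide; the metadata fields and acquisition method shown are assumptions.

```python
import hashlib
import time

def preserve_image(path: str, acquisition_method: str) -> dict:
    """Hash a firmware/flash dump and record basic chain-of-custody metadata.

    Hash the original once, then work only on copies verified against it.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return {
        "artifact": path,
        "sha256": h.hexdigest(),
        "acquired_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "method": acquisition_method,  # e.g. "dump via debug port" (example)
    }
```

The record ties the hash to a timestamp and acquisition method, so a later copy can be verified bit-for-bit against what was seized.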
READ THE GUIDE → genai.owasp.org
COMMUNITY
FIND US IN PERSON
THE INITIATIVE
The goal is to establish a dedicated Physical AI Security initiative under genai.owasp.org —
bringing threat modeling, incident response, and governance frameworks to AI systems
that act in the physical world.
If you want to be notified as this initiative is formalised — and only for that purpose —
add your name below.
YOUR EMAIL WILL ONLY BE USED TO NOTIFY YOU AS THIS OWASP INITIATIVE IS FORMALISED — NOTHING ELSE. DOUBLE OPT-IN. UNSUBSCRIBE ANY TIME.