Security Considerations and Potential Implications of AI
Large language models (LLMs) are the foundation of the current wave of AI products, most notably chatbots such as ChatGPT. As LLM-based features are increasingly being built into many types of software, from content generators to development environments and even operating systems, a significant security concern arises: prompt injection attacks.
Join this thought-provoking discussion and learn:
- Known types of prompt injection
- The dangers they can bring
- Approaches to minimize risk
Join Invicti Chief Architect, Dan Murphy and Invicti CTO and Head of Security Research, Frank Catucci

Dan Murphy has 20+ years of experience in the cybersecurity space, specializing in web security, distributed systems, and software architecture. As a distinguished architect at Invicti, his focus is on ensuring that Invicti products across the entire organization work together to provide a scalable, performant, and secure dynamic analysis experience.

Frank Catucci is a global application security technical leader with over 20 years of experience, designing scalable application security specific architecture, partnering with cross-functional engineering and product teams. Frank is a past OWASP Chapter President and contributor to the OWASP bug bounty initiative and most recently was the Head of Application & Product Security at Data Robot. Prior to that role, Frank was the Sr. Director of Application Security & DevSecOps and Security Researcher at Gartner, and was also the Director of Application Security for Qualys. Outside of work and hacking things, Frank and his wife maintain a family farm. He is an avid outdoors fan and loves all types of fishing, boating, watersports, hiking, camping and especially dirt bikes and motorcycles.