Lead Product Security Engineer – Multimodal & Generative AI

Luma AI · Palo Alto, California, United States · Onsite


About Luma Labs


At Luma Labs, we’re pioneering the next generation of multimodal generative AI, enabling models to create hyper-realistic videos and images from natural language and other rich input modalities. Our products empower creators, developers, and companies to generate content that was previously impossible, instantly and intelligently.

As we scale our AI platform and reach millions of users, we are hiring our first Product Security Engineer to set the foundation for security across everything we build. This is a critical role that blends hands-on security engineering with strategic leadership, ideal for someone who thrives in fast-paced, high-impact environments and wants to shape security from day one.

Role Overview


You will be Luma Labs’ first dedicated security engineering hire. As the Product Security Engineer, you’ll own the security posture of our products, services, and generative AI systems. You’ll work directly with engineering, ML, infrastructure, and leadership to proactively design and implement secure systems, with a strong focus on the unique risks and opportunities in multimodal video and image generation.

This is a leadership-track position with both strategic ownership and deep technical execution.

What You’ll Do


  • Own Product & Application Security: Define and drive Luma’s approach to secure product development from design reviews to automated scanning to runtime protections.
  • Secure GenAI Systems: Analyze and secure the full lifecycle of generative models (image, video, multimodal), including data ingestion, model inference, and API surface.
  • Lead Threat Modeling & Reviews: Run deep security reviews on new features, architectures, and model capabilities, with a focus on abuse prevention, data leakage, and content safety.
  • Build Security Infrastructure: Stand up tools and systems for static analysis, dependency scanning, secrets detection, and CI/CD hardening.
  • Define Misuse & Abuse Guardrails: Partner with ML and product teams to mitigate prompt injection, jailbreaks, adversarial inputs, and misuse of generative outputs.
  • Own Incident Detection & Response: Lead investigations and forensics for security incidents, vulnerabilities, and model abuse cases.
  • Influence Org-wide Security Culture: Establish best practices, run internal training, and serve as a go-to security expert across Luma’s growing technical teams.
  • Build the Function: Help hire and grow a high-caliber security team as the company scales.

Requirements


Must-Have


  • Deep experience in security engineering and application security.
  • Successful track record of getting products through security certifications, particularly SOC 2.
  • Proven ability to operate as a hands-on engineer and technical leader.
  • Strong understanding of generative AI systems or high-complexity ML applications.
  • Hands-on experience with generative models (e.g., diffusion, transformers, vision-language) and related risks (e.g., prompt injection, data leakage).
  • Proficiency in secure development; Python preferred.
  • Experience securing cloud-native environments (e.g., AWS, GCP, Oracle Cloud) and common container and orchestration platforms (e.g., Docker, Kubernetes).
  • Experience with threat modeling, secure design, and modern application security tooling (SAST, DAST, IaC scanning, etc.) and automation.
  • Ability to make fast, thoughtful decisions and execute in a fast-moving startup environment.
  • Experience successfully leading cross-functional teams to drive security initiatives.
  • Excellent written and verbal communication skills; comfortable collaborating across research, product, engineering, and leadership.

Bonus / Nice-to-Have


  • Exposure to red teaming, adversarial ML, or AI safety frameworks.
  • Public speaking, open-source contributions, or research in security or AI fields.

Why This Role is Unique


  • Greenfield Security: You’ll be defining the security architecture of one of the most advanced generative AI stacks in the world, from the ground up.
  • Cross-Disciplinary Impact: Collaborate directly with ML researchers, creative technologists, infrastructure engineers, and product designers.
  • Fast Path to Leadership: This is a founding role with direct access to leadership and influence over future hires and the security roadmap.
  • Deep Tech with Real Users: Work on dynamic, cutting-edge video and image generation tools already in production and scaling fast.
