Product Security Engineer - AI
Crusoe
Product Security Engineer - AI at Crusoe, focused on securing AI/LLM ecosystems, defining long-term AI security architecture, and delivering secure MLOps and tooling across cloud infrastructure.
Qualifications
- 3+ years of professional experience building and maintaining production systems with strong Python programming skills across the stack (backend/frontend).
- Deep expertise in advanced Generative AI techniques, including implementing Retrieval-Augmented Generation (RAG), designing AI agents and Model Context Protocol (MCP) integrations, and building with workflow orchestration frameworks.
- Proven ability to own the entire model lifecycle by designing and managing robust MLOps pipelines; experience with Docker, virtualization (VMs), and cloud platforms (AWS, GCP, Azure) is a plus.
- Experience designing, implementing, and fine-tuning custom LLMs, with a solid understanding of NLP fundamentals, transformer architectures, PyTorch/TensorFlow, and data structures.
- Strong curiosity about security and familiarity with threat modeling, secure SDLC practices, and privacy-by-design considerations.
- Ability to collaborate with cross-functional teams and mentor engineers on secure development in the GenAI domain.
- Excellent communication skills and a proactive approach to security improvements across AI systems.
Responsibilities
- Act as the AI Security SME and strategic partner, defining the long-term security architecture roadmap for AI/LLM security and driving cross-functional initiatives.
- Own LLM architecture and design for secure Generative AI solutions, focusing on Retrieval-Augmented Generation (RAG) patterns.
- Architect and implement custom, AI-powered security tooling to automate threat detection, vulnerability analysis, and data access control, scaling from PoC to production.
- Establish governance and processes for secure MLOps pipelines, defining standards for model versioning, deployment, and monitoring to meet compliance and security requirements.
- Lead threat modeling exercises for novel AI systems, apply advanced security and privacy best practices, and mentor senior engineers on secure GenAI development.
- Drive the entire lifecycle of critical AI security projects, ensuring system-level ownership and delivery.