
Forward Deployed Engineer - ML

Modal · New York
Full-time · USD 180,000 – 250,000 per year

About Us

Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. We have thousands of customers who rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.

We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B (https://modal.com/blog/announcing-our-series-b) at a $1.1B valuation. Our investors include Lux Capital (https://www.luxcapital.com/), Redpoint Ventures (https://www.redpoint.com/), Amplify Partners (https://www.amplifypartners.com/), and Elad Gil (https://eladgil.com/).

Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects (e.g., Seaborn (https://github.com/mwaskom/seaborn) and Luigi (https://github.com/spotify/luigi)), academic researchers, international olympiad medalists, and engineering and product leaders with decades of experience.

The Role

  • Work hands-on with companies like Suno, Lovable, Cognition, and Meta to architect and optimize production AI workloads on Modal
  • Contribute to open-source projects — members of the team are active contributors to SGLang — and publish technical content that demonstrates Modal's capabilities across the AI stack
  • Collaborate with Modal's product and sales teams, contributing to the platform as both an engineer and a product stakeholder
  • Build trusted relationships with technical leaders (CTOs, VPs of Engineering, ML leads) at companies doing frontier AI work
  • Conduct technical demos, experiments, and proof-of-concepts that make Modal's performance advantages tangible

Requirements

  • 2+ years of professional ML engineering experience, ideally with hands-on work in inference optimization, model training, GPU programming, or ML infrastructure
  • Familiarity with serving toolchains (e.g., vLLM, SGLang) and training toolchains (e.g., slime, verl, TRL). You don't need all of these, but you should be able to go deep on at least one.
  • Strong communicator who can go deep on technical architecture with an engineering team and clearly articulate tradeoffs to technical leadership
  • Genuine interest in working directly with customers — you find it energizing to understand someone else's problem and help them solve it
  • Bonus: side projects, open-source contributions, or published work you're proud of in ML or systems performance
  • Willing to work in-person in New York City, San Francisco, or Stockholm
