ML Ops Engineer — Agentic AI Lab (Founding Team)

Fabrion · San Francisco Bay Area

Agentic AI Lab is seeking a full-time ML Ops Engineer to join its founding team in the San Francisco Bay Area. The role bridges ML research and production systems: automating model training, deployment, and observability pipelines for AI applications. The company is backed by 8VC and aims to build intelligent infrastructure on open-source LLMs and advanced AI techniques.

Qualifications

  • 4+ years in MLOps, ML platform engineering, or infra-focused ML roles.
  • Deep familiarity with model lifecycle management tools: MLflow, Weights & Biases, DVC, HuggingFace Hub.
  • Experience with large model deployments (open-source LLMs preferred): LLaMA, Mistral, Falcon, Mixtral.
  • Proficient with Terraform, Helm, K8s, and container orchestration.
  • Experience with CI/CD for ML (e.g. GitHub Actions + model checkpoints).
  • Familiarity with LangChain, LangGraph, LlamaIndex or similar RAG/agent orchestration tools.
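
To illustrate the model-lifecycle-management experience the role asks for, here is a minimal, self-contained sketch of the kind of versioning and lineage metadata that tools like MLflow or Weights & Biases capture. The `Registry` and `ModelVersion` classes, their field names, and the sample run values are all hypothetical stand-ins, not part of any real tool's API.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelVersion:
    """One registered checkpoint with lineage metadata."""
    name: str
    version: int
    params: dict
    metrics: dict
    parent: Optional[int] = None  # version this run was fine-tuned from

    def fingerprint(self) -> str:
        # Deterministic hash of the training params, used to check
        # that a rerun with identical config reproduces the same entry.
        blob = json.dumps(self.params, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

class Registry:
    """In-memory stand-in for a registry such as MLflow or the HF Hub."""
    def __init__(self):
        self._versions: dict = {}

    def register(self, name, params, metrics, parent=None) -> ModelVersion:
        versions = self._versions.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, params, metrics, parent)
        versions.append(mv)
        return mv

    def latest(self, name) -> ModelVersion:
        return self._versions[name][-1]

registry = Registry()
base = registry.register("llama-sft", {"lr": 2e-5, "epochs": 3}, {"loss": 1.12})
tuned = registry.register("llama-sft", {"lr": 1e-5, "epochs": 1}, {"loss": 0.94},
                          parent=base.version)
print(tuned.version, tuned.parent)  # → 2 1
```

A production registry would persist these records, attach artifact URIs, and gate promotion on the captured evaluation metrics rather than keeping everything in memory.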

Responsibilities

  • Build and maintain secure, scalable, and automated pipelines for LLM fine-tuning (SFT, LoRA, RLHF, DPO).
  • Manage hybrid compute infrastructure (cloud, on-prem, GPU clusters) for training and inference workloads using Kubernetes, Ray, and Terraform.
  • Containerize models and agents using Docker, with reproducible builds and CI/CD via GitHub Actions or ArgoCD.
  • Implement and enforce model governance: versioning, metadata, lineage, reproducibility, and evaluation capture.
  • Create and manage evaluation and benchmarking frameworks (e.g. OpenLLM-Evals, RAGAS, LangSmith).
  • Integrate with security and access control layers (OPA, ABAC, Keycloak) to enforce model policies per tenant.
  • Instrument observability for model latency, token usage, performance metrics, error tracing, and drift detection.
  • Support deployment of agentic apps with LangGraph, LangChain, and custom inference backends (e.g. vLLM, TGI, Triton).
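
The observability responsibility above can be sketched in a few lines: a decorator that records per-call latency, a token-count proxy, and error counts for an inference function. The `METRICS` sink, the `observe` decorator, and the `fake_inference` model call are hypothetical; a real deployment would export to Prometheus or OpenTelemetry and use the model's actual tokenizer.

```python
import time
from collections import defaultdict
from functools import wraps

# Hypothetical in-process metrics sink; a real system would export
# these series to Prometheus, OpenTelemetry, or similar.
METRICS = defaultdict(list)

def observe(model_name):
    """Record latency, token usage, and errors for each inference call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt, *args, **kwargs):
            start = time.perf_counter()
            try:
                output = fn(prompt, *args, **kwargs)
            except Exception:
                METRICS[f"{model_name}.errors"].append(1)
                raise
            METRICS[f"{model_name}.latency_s"].append(
                time.perf_counter() - start)
            # Crude token proxy (whitespace split); swap in a real tokenizer.
            METRICS[f"{model_name}.tokens"].append(
                len(prompt.split()) + len(output.split()))
            return output
        return wrapper
    return decorator

@observe("mistral-7b")
def fake_inference(prompt: str) -> str:
    return "echoed: " + prompt  # stand-in for a vLLM/TGI backend call

fake_inference("hello agentic world")
print(METRICS["mistral-7b.tokens"])  # → [7]
```

Drift detection would build on the same hooks, comparing the distribution of recent inputs and outputs against a training-time baseline.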
