About the Role
As our Machine Learning Engineer (Inference), you'll push the limits of serving frameworks, refine our agent architecture, and build the benchmarks that define performance at scale. You'll help take our frontier models from the lab into lightning-fast, production-ready services. If you relish experimenting with the latest serving research, building optimizations, and shipping infrastructure for researchers, we invite you to apply!
Responsibilities
- Architect and optimize high-performance inference infrastructure for large foundation models
- Benchmark and improve latency, throughput, and agent responsiveness
- Work with researchers to deploy new model architectures and multi-step agent behaviors
- Implement caching, batching, and prioritization to handle high-volume requests
- Build monitoring and observability into inference pipelines
Qualifications
- Strong experience in distributed systems and low-latency ML serving
- Skilled with profiling and performance-optimization tools and techniques, with a track record of delivering meaningful performance gains
- Hands-on experience with vLLM, SGLang, or equivalent inference frameworks
- Familiarity with GPU optimization, CUDA, and model parallelism
- Comfort working in a high-velocity, ambiguity-heavy startup environment
What Makes Us Interesting
- Small, elite team of ex-founders, researchers from leading AI labs, standout CS graduates, and engineers from top companies
- True ownership: you won't be blocked by bureaucracy, and you'll ship meaningful work within weeks rather than months
- Serious momentum: we're well-funded by top investors, moving fast, and focused on execution
What We Do
- Ship consumer products powered by cutting-edge AI research
- Build infrastructure that accelerates both research and product development
- Pursue frontier research that opens up new forms of consumer products
The Details
- Full-time, onsite role in Menlo Park
- Startup hours apply
- Generous salary, with additional benefits to be discussed during the hiring process