
ML Infra Engineer (TPU/Jax/Optimization)

Location: San Francisco, California, United States
Type: Onsite
Sub: Software Engineer
In this role you will help scale and optimize our training systems and core model code. You’ll own critical infrastructure for large-scale training, from managing GPU/TPU compute and job orchestration to building reusable and efficient JAX training pipelines. You’ll work closely with researchers and model engineers to translate ideas into experiments—and those experiments into production training runs.
This is a hands-on, high-leverage role at the intersection of ML, software engineering, and scalable infrastructure.

The Team


The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. The team works closely with research, data, and platform engineers to ensure models can scale from prototype to production-grade training runs.

In This Role You Will


- Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging.
- Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction.
- Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization.
- Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments.
- Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost.
- Partner with researchers: Translate research needs into infra capabilities and guide best practices for training at scale.
- Contribute to core training code: Evolve JAX model and training code to support new architectures, modalities, and evaluation metrics.
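To give a concrete flavor of the JAX work described above, here is a minimal sketch of a jitted training step for a toy linear model, using `jax.value_and_grad` and a pytree parameter update. This is a generic illustration, not code from the team; in the role itself, a step like this would be sharded across TPU/GPU devices and wrapped with checkpointing, metrics, and orchestration.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Simple linear model and mean squared error.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # XLA-compiles the full update into one fused computation
def train_step(params, x, y, lr=0.1):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    # Plain SGD, applied leaf-wise over the parameter pytree.
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

# Synthetic regression data for the sketch.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 3))
true_w = jnp.array([1.0, -2.0, 0.5])
y = x @ true_w + 3.0

params = {"w": jnp.zeros(3), "b": jnp.zeros(())}
losses = []
for _ in range(100):
    params, loss = train_step(params, x, y)
    losses.append(float(loss))
```

At cluster scale, the same step is typically distributed with JAX's sharding APIs rather than run on a single device, which is where the infrastructure concerns in this posting (device utilization, synchronization, checkpointing) come in.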

What We Hope You’ll Bring


- Strong software engineering fundamentals and experience building ML training infrastructure or internal platforms.
- Hands-on large-scale training experience in JAX (preferred) or PyTorch.
- Familiarity with distributed training, multi-host setups, data loaders, and evaluation pipelines.
- Experience managing training workloads on cloud platforms (e.g., SLURM, Kubernetes, GCP TPU/GKE, AWS).
- Ability to debug and optimize performance bottlenecks across the training stack.
- Strong cross-functional communication and an ownership mindset.

Bonus Points If You Have


- Deep ML systems background (e.g., training compilers, runtime optimization, custom kernels).
- Experience operating close to hardware (GPU/TPU performance tuning).
- Background in robotics, multimodal models, or large-scale foundation models.
- Experience designing abstractions that balance researcher flexibility with system reliability.

