
Storage Protocols Engineering Manager

Lambda · San Francisco Office · Onsite

Lambda is the #1 GPU Cloud for ML/AI teams training, fine-tuning, and inferencing AI models, where engineers can easily, securely, and affordably build, test, and deploy AI products at scale. Lambda's product portfolio includes on-prem GPU systems, hosted GPUs across public and private clouds, and managed inference services, serving governments, researchers, startups, and enterprises worldwide.
If you'd like to build the world's best deep learning cloud, join us.

*Note: This position requires presence in our San Francisco office location 4 days per week; Lambda's designated work-from-home day is currently Tuesday.

Engineering at Lambda is responsible for building and scaling our cloud offering. Our scope includes the Lambda website, cloud APIs and systems, as well as internal tooling for system deployment, management, and maintenance.

In the world of distributed AI, raw GPU and CPU horsepower is just a part of the story. High-performance networking and storage are the critical components that enable and unite these systems, making groundbreaking AI training and inference possible.

The Lambda Infrastructure Engineering organization forges the foundation of high-performance AI clusters by welding together the latest in AI storage, networking, GPU, and CPU hardware. Our expertise lies at the intersection of:

  • High-Performance Distributed Storage Solutions and Protocols: We engineer the protocols and systems that serve massive datasets at the speeds demanded by modern clustered GPUs.
  • Dynamic Networking: We design advanced networks that provide multi-tenant security and intelligent routing without compromising performance, using the latest in AI networking hardware.
  • Compute Virtualization: We enable cutting-edge virtualization and clustering that allows AI researchers and engineers to focus on AI workloads, not AI infrastructure, unleashing the full compute bandwidth of clustered GPUs.

About the Role:


We are seeking an experienced Software Engineering Manager with a track record of developing storage protocols and distributed storage systems to lead a team of Storage Software Engineers and Distributed Systems Engineers in the design, development, and optimization of cutting-edge distributed storage solutions. Your team will be responsible for building high-performance, scalable, and reliable implementations of object, block, and file protocols, specifically tailored to serve performance-demanding AI training and inference workloads.

This is a unique opportunity to work at the intersection of large-scale distributed systems and the rapidly evolving field of artificial intelligence infrastructure. You will be building the foundational infrastructure that powers some of the most advanced AI research and products in the world.

What You’ll Do


  • Team Leadership & Management:
      • Hire, grow, lead, and mentor a top-talent team of high-performing software engineers focused on delivering distributed storage protocols.
      • Foster a high-velocity culture of innovation, technical excellence, and collaboration.
      • Conduct regular one-on-one meetings, provide constructive feedback, and support career development for team members.
      • Drive outcomes by managing project priorities, deadlines, and deliverables using Agile methodologies.
  • Technical Strategy & Execution:
      • Drive the technical vision and strategy for our distributed storage protocols (e.g., S3, NFS, iSCSI) and their underlying distributed systems.
      • Oversee the development of highly optimized storage solutions designed to meet the performance demands of AI/ML workloads (e.g., high throughput, low latency, and optimization for AI workload access patterns).
      • Lead the team in tackling complex distributed systems challenges, including concurrency, consistency, fault tolerance, and data durability across multiple data centers.
      • Guide the engineering team in problem identification, requirements gathering, solution ideation, and stakeholder alignment on engineering RFCs.
      • Develop a deep understanding of the performance bottlenecks of existing storage systems and guide the team in building innovative solutions to overcome them.
      • Lead the team in supporting customers.
  • Cross-Functional Collaboration:
      • Work closely with AI/ML research and product teams to understand customers' storage needs and translate them into technical requirements.
      • Work closely with the product engineering team to deliver high-quality products that meet customers' unique needs.
      • Collaborate with product management to define the product roadmap and prioritize features.
      • Work closely with the HPC Architecture, Networking, Compute, and Storage Engineering teams to deploy high-performance distributed storage protocols that serve AI/ML workloads.
      • Partner with the fleet engineering and platform teams to ensure seamless deployment, monitoring, and maintenance of the distributed storage protocols.
      • Work in lockstep with the Storage Engineering team to provide reliable storage products on top of a variety of physical storage solutions.
  • Innovation & Research:
      • Stay current with the latest trends and research in distributed systems, storage technologies, and AI/ML hardware/software advancements.
      • Work with the Lambda product team to uncover new trends in the AI inference and training product category.
      • Encourage and support the team in exploring new technologies and approaches to improve system performance and efficiency.

You


  • Experience:
      • 10+ years of experience in software development, with at least 5 years in a management or lead role in storage software engineering.
      • Demonstrated experience leading a team of software engineers on complex, cross-functional projects in a fast-paced startup environment.
      • Extensive hands-on experience designing and implementing distributed storage systems.
      • Experience with storage protocols serving storage volumes at a scale greater than 20PB.
      • Experience developing and tuning distributed storage protocols across scaling challenges using namespacing, sharding, and caching strategies.
      • Familiarity with deploying and running applications on Kubernetes or other container orchestration systems (e.g., AWS ECS, HashiCorp Nomad).
      • Strong project management skills, leading high-confidence planning, project execution, and delivery of team outcomes on schedule.
  • Technical Skills:
      • Knowledge of one or more of the following storage protocols: object storage (e.g., S3), block storage (e.g., iSCSI), or file storage (e.g., NFS, SMB, Lustre).
      • Professional individual-contributor experience in languages such as C++, Go, Rust, or Python.
      • Familiarity with modern storage technologies (e.g., NVMe, RDMA) and their role in optimizing performance.
      • Experience with containerization technologies (e.g., Docker, Kubernetes) and their integration with storage solutions.
  • Distributed Systems Knowledge:
      • Solid understanding of distributed systems concepts, including consensus algorithms (e.g., Raft, Paxos), distributed caching, failure recovery, consistency models (e.g., eventual consistency), fault tolerance, data replication, and load balancing.
  • People Management:
      • Experience building a high-performance team through deliberate hiring, upskilling, planned skills redundancy, performance management, and expectation setting.

Nice to Have


  • Experience:
      • Demonstrated delivery of distributed storage protocols at a CSP (cloud service provider), NCP (neo-cloud provider), HPC infrastructure integrator, or AI infrastructure company.
      • Experience with storage protocols serving storage volumes at a scale greater than 100PB.
      • Implementation of distributed storage protocols backed by a variety of storage solutions, performance-tuned for AI/ML workloads.
      • Experience driving cross-functional engineering management initiatives (coordinating deployments, strategic planning, and coordinating large projects).
  • Technical Skills:
      • Deep expertise in one or more of the following storage protocols: object storage (e.g., S3), block storage (e.g., iSCSI), or file storage (e.g., NFS, SMB, Lustre).
      • Strong programming skills in languages such as C++, Go, Rust, or Python.
      • In-depth knowledge of operating system internals, including file systems, caching, and I/O scheduling.
  • AI/ML Domain Knowledge:
      • Experience working with AI/ML training and inference frameworks (e.g., TensorFlow, PyTorch).
      • Understanding of the unique data access patterns and performance requirements of AI workloads.
  • Distributed Systems Knowledge:
      • Proven ability to design and debug highly concurrent and fault-tolerant systems.
  • People Management:
      • Experience driving organizational improvements (processes, systems, etc.).
      • Experience training or managing managers.

Salary Range Information


The annual salary range for this position has been set based on market data and other factors. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda


  • Founded in 2012, ~400 employees (2025) and growing fast
  • We offer generous cash & equity compensation
  • Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, Crescent Cove.
  • We are experiencing extremely high demand for our systems, with quarter-over-quarter and year-over-year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • Health, dental, and vision coverage for you and your dependents
  • Wellness and Commuter stipends for select roles
  • 401k Plan with 2% company match (USA employees)
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:


You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer


Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

Compensation Range: $330K - $495K

Life at Lambda

Lambda provides computation to accelerate human progress. We're a team of Deep Learning engineers building the world's best GPU workstations and servers. Our products power engineers and researchers at the forefront of human knowledge. Our customers include Apple, MIT, Los Alamos National Lab, Microsoft, Tencent, Kaiser Permanente, Stanford, Harvard, Caltech, and the Department of Defense.
