Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled AI/ML Engineer to join our innovative and dynamic team.
AI/ML Engineer | About You
As an AI/ML Engineer, you are responsible for building the infrastructure, services, and tooling that bring machine learning models into scalable, production-ready environments. You thrive at the intersection of software engineering and applied ML, designing systems that enable data scientists and researchers to deliver high‑impact models. You have a strong understanding of distributed systems, modern ML pipelines, and cloud-native compute environments. You care deeply about building reliable, well-tested, and maintainable machine learning systems.
You are motivated by accelerating discovery and enabling teams to operationalize ML at scale.
AI/ML Engineer | Day-to-Day
- Design, build, and maintain scalable ML pipelines, services, and infrastructure to support model training, deployment, and lifecycle management.
- Collaborate with data scientists to productionize models, ensuring strong performance, reliability, and seamless integration with research workflows.
- Implement containerized ML workloads and orchestrate them with Kubernetes to support training, inference, and scaling needs.
- Develop and maintain APIs and microservices that expose ML capabilities and facilitate cross-platform interoperability.
- Monitor ML systems in production, analyzing performance, drift, and operational metrics to ensure model health and stability.
- Troubleshoot complex model deployment, data pipeline, and distributed-system issues to ensure smooth operation of ML-driven applications.
AI/ML Engineer | Skills & Experience
- 3+ years of experience delivering ML or AI systems into production, with strong software engineering fundamentals.
- Proficiency in Python (preferred for ML workflows) and experience with at least one compiled language such as Go, Rust, Java, or C++.
- Hands-on experience deploying ML workloads using containers, Kubernetes, and cloud-native infrastructure.
- Strong understanding of ML model deployment patterns, including REST/gRPC serving, batch inference, and real-time inference architectures.
- Experience applying test-driven development, automated testing, and CI/CD practices to ML systems and data pipelines.
- Solid understanding of distributed systems, including performance tuning, scaling strategies, and building reliable, high-throughput ML services.