Onebridge, a Marlabs Company, is a global AI and data analytics consulting firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government organizations across the globe. We have an exciting opportunity for a highly skilled Data Engineer to join our innovative and dynamic team.
Data Engineer | About You
As a Data Engineer, you are responsible for designing and delivering scalable, production-grade data solutions that power scientific analysis and decision-making. You thrive in complex data environments and enjoy building pipelines, integrating heterogeneous sources, and optimizing systems for performance and reliability. You are comfortable working with modern cloud-native architectures and distributed query engines to support large-scale datasets. You communicate effectively across technical and non-technical teams, ensuring data is accurate, accessible, and well-governed.
You take pride in building infrastructure that enables researchers and analysts to work faster and smarter.
Data Engineer | Day-to-Day
- Design, build, and optimize data pipelines and ETL processes to integrate scientific data from numerous heterogeneous sources.
- Develop and maintain Lakehouse architectures on AWS (S3, Glue, Athena) supporting high-volume, multibillion-record datasets.
- Build federated query capabilities using distributed engines such as Trino to enable unified access across diverse platforms.
- Implement data harmonization solutions to standardize compound, assay, and experimental data across multiple scientific modalities.
- Optimize performance for PostgreSQL, Apache Iceberg tables, and other analytical data stores using tuning, caching, and query optimization techniques.
- Implement data quality checks, validation frameworks, and governance practices to ensure accurate, compliant, and well-documented datasets.
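To give a flavor of the data quality work described above, here is a minimal sketch of a record-level validation check. The field names (`compound_id`, `assay`, `value`) are illustrative assumptions for scientific assay data, not a schema from an actual Onebridge project:

```python
def validate_records(records, required=("compound_id", "assay", "value")):
    """Split records into rows that pass basic checks and per-row error notes.

    Checks are deliberately simple: required fields must be present and
    non-empty, and the measured value must be numeric.
    """
    valid, errors = [], []
    for i, row in enumerate(records):
        # Flag any required field that is absent or blank.
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
            continue
        # Reject non-numeric measurement values (e.g. "n/a" placeholders).
        if not isinstance(row["value"], (int, float)):
            errors.append((i, "non-numeric value"))
            continue
        valid.append(row)
    return valid, errors


rows = [
    {"compound_id": "C-001", "assay": "IC50", "value": 12.5},
    {"compound_id": "", "assay": "IC50", "value": 3.1},         # blank id
    {"compound_id": "C-002", "assay": "EC50", "value": "n/a"},  # bad value
]
ok, errs = validate_records(rows)
```

In a production pipeline, checks like this would typically run inside a validation framework (e.g. as pipeline steps with logged metrics) rather than as ad hoc functions, but the pattern of separating passing rows from annotated failures is the same.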
Data Engineer | Skills & Experience
- 5+ years of experience in data engineering, data warehousing, or related roles with a proven track record of production-grade data pipeline development.
- Strong proficiency in Python and SQL, including experience with libraries such as pandas or PySpark for data manipulation.
- Deep experience working with relational databases (e.g., PostgreSQL, Oracle) and modern cloud data warehouses (e.g., Snowflake, Redshift).
- Hands-on experience with AWS services including S3, Glue, Athena, Lambda, and RDS, supporting scalable data platforms.
- Strong knowledge of distributed processing tools and query engines such as Spark, Trino, or Presto.
- Proficiency in ETL/ELT development, version control with Git, and experience with visualization tools such as Power BI or Spotfire.