About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable a high return on investment.
Infinitive has been named “Best Small Firms to Work For” by Consulting Magazine 7 times, most recently in 2024.
Infinitive has also been named a Washington Post “Top Workplace”, Washington Business Journal “Best Places to Work”, and Virginia Business “Best Places to Work.”
About the Role:
We are seeking a skilled DevOps Engineer with data engineering experience to join our dynamic team. The ideal candidate will have expertise in Elasticsearch, CI/CD, Git, and Infrastructure as Code (IaC), along with hands-on data engineering experience. You will be responsible for designing, automating, and optimizing infrastructure, deployment pipelines, and data workflows. This role requires close collaboration with data engineers, software developers, and operations teams to build scalable, secure, and high-performance data platforms.
Key Responsibilities:
DevOps & Infrastructure Management:
- Design, deploy, and manage Elasticsearch clusters, ensuring high availability, scalability, and performance for search and analytics workloads (see the sketch after this list).
- Develop and maintain CI/CD pipelines for automating build, test, and deployment processes using tools like Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD.
- Manage and optimize version control workflows using Git, ensuring best practices for branching, merging, and release management.
- Implement Infrastructure as Code (IaC) solutions using Terraform, CloudFormation, or Ansible for cloud and on-prem infrastructure.
- Automate system monitoring, alerting, and incident response using tools such as Prometheus, Grafana, Elastic Stack (ELK), or Datadog.
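To make the Elasticsearch responsibility above concrete, here is a minimal sketch of index provisioning and a health check, assuming the official elasticsearch-py 8.x client; the endpoint, index name, and mappings are hypothetical placeholders, not a description of our actual stack.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://es.example.internal:9200")  # hypothetical endpoint

    # Three primary shards spread writes; one replica per shard covers a node loss.
    es.indices.create(
        index="app-logs",
        settings={"number_of_shards": 3, "number_of_replicas": 1},
        mappings={"properties": {
            "timestamp": {"type": "date"},
            "message": {"type": "text"},
        }},
    )

    # Cluster health should report "green" once all replicas are assigned.
    print(es.cluster.health()["status"])

Explicit shard and replica counts are the usual starting point for balancing write throughput against availability.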
Data Engineering & Pipeline Automation:
- Collaborate with data engineering teams to design and deploy scalable ETL/ELT pipelines using Apache Kafka, Apache Spark, Kinesis, Pub/Sub, Dataflow, Dataproc, or AWS Glue (a streaming sketch follows this list).
- Optimize data storage and retrieval for large-scale analytics and search workloads using Elasticsearch, BigQuery, Snowflake, Redshift, or ClickHouse.
- Ensure data pipeline reliability and performance, implementing monitoring, logging, and alerting for data workflows.
- Automate data workflows and infrastructure scaling for high-throughput real-time and batch processing environments.
- Implement data security best practices, including access controls, encryption, and compliance with industry standards such as GDPR, HIPAA, or SOC 2.
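As a rough illustration of the ETL/ELT work above, the following PySpark Structured Streaming sketch reads JSON events from Kafka and lands them as Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker, topic, schema, and S3 paths are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Hypothetical event schema; real pipelines would source this from a registry.
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("ts", TimestampType()),
        StructField("payload", StringType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Checkpointing gives the stream exactly-once sink semantics on restart.
    query = (
        events.writeStream.format("parquet")
        .option("path", "s3a://bucket/events/")            # hypothetical sink
        .option("checkpointLocation", "s3a://bucket/checkpoints/events/")
        .start()
    )
    query.awaitTermination()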
Required Skills & Qualifications:
- 3+ years of experience in DevOps, Data Engineering, or Infrastructure Engineering.
- Strong expertise in Elasticsearch, including cluster tuning, indexing strategies, and scaling.
- Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD.
- Proficiency in Git for version control, branching strategies, and code collaboration.
- Experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, Ansible, or Pulumi (a Pulumi sketch follows this list).
- Solid experience with cloud platforms (AWS, GCP, or Azure) and cloud-native data engineering tools.
- Proficiency in Python, Bash, or Scala for automation, data processing, and infrastructure scripting.
- Hands-on experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Experience with data engineering tools, including Apache Kafka, Spark Streaming, Kinesis, Pub/Sub, Dataflow, Dataproc, or AWS Glue.
- Strong understanding of ETL/ELT workflows and distributed data processing frameworks.
- Strong communication skills to collaborate with cross-functional teams.
- Ability to work in a fast-paced environment and adapt to changing priorities.
- Passion for learning new technologies and staying up-to-date on industry trends.
- Willingness to take ownership of projects and drive results.
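For the IaC requirement, here is a minimal sketch using Pulumi's Python SDK (one of the tools listed above). The resource names are hypothetical, and the same bucket could equally be declared in Terraform, CloudFormation, or Ansible.

    import pulumi
    import pulumi_aws as aws

    # A versioned bucket for pipeline artifacts; versioning guards against
    # accidental overwrites of deployed artifacts.
    artifacts = aws.s3.Bucket(
        "pipeline-artifacts",
        versioning=aws.s3.BucketVersioningArgs(enabled=True),
    )

    pulumi.export("artifacts_bucket", artifacts.id)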
Preferred Qualifications:
- Experience working with data warehouses and lakes (BigQuery, Snowflake, Redshift, ClickHouse, S3, GCS).
- Knowledge of monitoring and logging solutions for data-intensive applications (a minimal metrics sketch follows this list).
- Familiarity with security best practices for data storage, transmission, and processing.
- Understanding of event-driven architectures and real-time data processing frameworks.
- Certifications such as AWS Certified DevOps Engineer, Google Cloud Professional Data Engineer, or Certified Kubernetes Administrator (CKA).
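For the monitoring qualification above, a minimal sketch using the prometheus_client library to expose a gauge that a Prometheus server could scrape; the metric name and the random stand-in value are hypothetical placeholders for a real lag lookup.

    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Gauge a Prometheus server can scrape; alert rules would fire on high lag.
    lag = Gauge("pipeline_consumer_lag_records", "Records the pipeline is behind by")

    if __name__ == "__main__":
        start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
        while True:
            lag.set(random.randint(0, 100))  # stand-in for a real lag lookup
            time.sleep(15)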
Life at Infinitive
Infinitive enables global companies to master the digital world, specializing in marketing and advertising solutions, customer data & analytics, and digital and business transformation.
Infinitive is included on the latest LUMA Partners Knowledge LUMAscape and is regularly featured by a range of leading industry publications, including AdAge, MediaPost, AdExchanger, and more.
Our strong workplace culture has received recognition from Inc. magazine, The Washington Post, Consulting Magazine, Washington Business Journal and other top media outlets and awards programs.
Thrive Here & What We Value
1. Named “Best Small Firms to Work For” by Consulting Magazine 7 times, most recently in 2024
2. Recognized as a “Top Workplace” by The Washington Post and a “Best Places to Work” by Washington Business Journal and Virginia Business
3. Living the Infinitive values: Do the Right Thing, Strive to Be Great, Honor Commitments, Think and Act Like an Owner, Have a Bias for Action
4. Enabling global brands through insights, innovation, and efficiency
5. Deep industry and technology expertise to drive adoption of new capabilities
6. Matching our people and personalities to clients' culture while bringing the right mix of talent and skills for high ROI
7. A dynamic team environment with opportunities for growth and development
8. Equal Opportunity Employer
9. Passionate and motivated individuals