
AI Engineer - Casera

Location: Seattle, Washington, United States
Type: Remote, Hybrid, Onsite

AI / Data Engineer 


About the Role 


We’re seeking a highly skilled AI/Data Engineer to design, build, and optimize data pipelines, machine learning infrastructure, and intelligent applications that turn complex data into actionable insights. You’ll collaborate closely with data scientists, ML engineers, and software developers to deploy scalable, production-ready AI systems and ensure data quality, observability, and performance across the stack. 

Key Responsibilities 


  • Design and implement robust data ingestion and transformation pipelines (batch and streaming) using tools such as Airflow, Spark, Databricks, or AWS Glue. 
  • Develop and maintain ETL/ELT workflows for structured and unstructured data from multiple sources (APIs, event streams, databases, third-party services). 
  • Collaborate with data scientists to operationalize ML models, including feature engineering, model serving, and real-time inference pipelines. 
  • Deploy, monitor, and maintain machine learning models using MLflow, SageMaker, Vertex AI, or similar frameworks. 
  • Implement MLOps best practices, including CI/CD for model retraining, versioning, and testing. 
  • Optimize performance and scalability of data storage solutions (e.g., Redshift, BigQuery, Snowflake, or Delta Lake). 
  • Ensure data quality, lineage, and governance through monitoring, validation, and documentation. 
  • Contribute to infrastructure-as-code (IaC) setups for reproducible deployments using Terraform, CDK, or CloudFormation. 
  • Collaborate with cross-functional teams to support AI-driven analytics, dashboards, and decision intelligence applications. 

Qualifications 


  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. 
  • 5+ years of experience in data engineering, ML engineering, or backend systems. 
  • Strong programming skills in Python, SQL, Scala, Java, or other object-oriented (OOP) languages. 
  • Proficiency with cloud platforms (AWS, GCP, or Azure) and their data/AI services. 
  • Experience with ML pipelines, including feature stores, model registries, and inference APIs. 
  • Familiarity with containerization and orchestration (Docker, Kubernetes). 
  • Solid understanding of data modeling, warehousing, and schema design. 
  • Knowledge of modern AI frameworks (PyTorch, TensorFlow, scikit-learn) and vector databases (Pinecone, Weaviate, FAISS) is a plus.
  • Understanding of data privacy, security, and compliance (HIPAA, GDPR, SOC 2) preferred. 

Nice to Have 


  • Experience implementing LLM-powered systems (e.g., retrieval-augmented generation, embeddings, prompt optimization). 
  • Knowledge of real-time analytics and event-driven architectures (Kafka, Kinesis, Pub/Sub). 
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry). 
  • Contributions to open-source or AI research projects. 

What We Offer 


  • Opportunity to build scalable AI systems that drive measurable business impact. 
  • Collaborative environment working alongside data scientists, ML researchers, and software engineers. 
  • Flexible hybrid/remote work culture. 
  • Competitive compensation, benefits, and growth opportunities.

