
Data Engineer, Active Grid Response

Company: Gridware
Location: San Francisco, California, United States
Type: Hybrid, Onsite
Sub-categories: Software Engineer, Backend Developer

About Gridware


Gridware is a San Francisco-based technology company dedicated to protecting and enhancing the electrical grid. We pioneered a new class of grid management called Active Grid Response (AGR), which monitors the electrical, physical, and environmental conditions that affect grid reliability and safety. The AGR platform uses high-precision sensors to detect potential issues early, enabling proactive maintenance and fault mitigation. This comprehensive approach improves safety, reduces outages, and keeps the grid operating efficiently.

The company is backed by climate-tech and Silicon Valley investors. For more information, please visit www.Gridware.io.

Role Overview


As a Data Engineer at Gridware, you’ll help build and maintain the pipelines and data systems powering our Active Grid Response platform. You’ll work closely with cross-functional engineers to ensure telemetry, sensor data, and operational information flow reliably through our Lakehouse and into analytics and monitoring tools. This is a hands-on, high-growth role ideal for engineers ready to deepen their expertise in distributed data systems.

Responsibilities


  • Building ETL/ELT pipelines that ingest transformer, pole, and sensor telemetry into Gridware’s Data Lake and Lakehouse
  • Developing and maintaining real-time and batch ingestion processes using Python, SQL, Databricks, and Spark
  • Implementing data quality checks, validation rules, and automated testing for stable operations
  • Collaborating with Software, Firmware, and Data Science teams to define ingestion schemas and transformations
  • Working with cloud-native tools to optimize pipeline throughput and cost efficiency
  • Monitoring pipelines for reliability, troubleshooting issues, and contributing to on-call rotations
  • Writing documentation for data processes, models, and metadata

Required Skills


  • 2–4 years of experience as a Data Engineer (or Backend Engineer with heavy data exposure)
  • Strong proficiency in Python and SQL
  • Familiarity with data warehouses, Lakehouse platforms, or big data tools (Databricks, Spark, or equivalent)
  • Experience with pipeline orchestration tools (Airflow, Dagster, Prefect, etc.)
  • Understanding of event-driven systems or streaming platforms (Kafka, Kinesis, Pub/Sub)
  • Solid foundation in data modeling, testing, and version control
  • Ability to work collaboratively in a high-autonomy, fast-paced environment

Bonus Skills


  • Experience with IoT, telemetry ingestion, or time-series data
  • Exposure to Unity Catalog, governance, or schema enforcement
  • Understanding of Protobuf, Avro, Parquet, or serialization formats
  • Hands-on experience with observability tools (Grafana, OpenTelemetry)

This describes the ideal candidate; many of us have picked up this expertise along the way. Even if you meet only part of this list, we encourage you to apply!

Benefits


  • Health, Dental & Vision (Gold and Platinum plans fully covered with some providers)
  • Paid parental leave
  • Alternating day off (every other Monday)
  • “Off the Grid”, a two-week paid break per year for all employees
  • Commuter allowance
  • Company-paid training
