Description
We are looking for a Google Data Engineer with 6–8 years of experience.
Required Technical Skill Set: PySpark and GCP (BigQuery, Dataflow, Dataproc, GCS, Pub/Sub, Composer); experience developing scalable solutions.
Key Responsibilities:
- Build and manage data ingestion (batch/streaming) using Pub/Sub, Cloud Functions, and Data Fusion.
- Design and optimise BigQuery data models, including partitioning and clustering, and tune BigQuery query performance.
- Bring strong coding and technical architecture experience in Google data solutions.
- Guide a new team in building algorithms and data products, resolve blockers and bugs, and act as code custodian.
- Work across the full data pipeline on projects (ingestion, data modelling, data warehousing, data quality validation).
- Apply knowledge of CI/CD pipelines.
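For illustration, the ingestion responsibility above often includes a validation step between a Pub/Sub subscription and a BigQuery staging table. The sketch below is a minimal, hypothetical example of that step; the message schema (`event_id`, `event_ts`, `payload`) is assumed, not part of this role description.

```python
import json
from datetime import datetime, timezone

# Hypothetical message schema for a Pub/Sub-style ingestion feed.
REQUIRED_FIELDS = {"event_id", "event_ts", "payload"}


def validate_message(raw: bytes):
    """Parse one raw message; return a clean record, or None if it fails validation."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not REQUIRED_FIELDS.issubset(record):
        return None
    # Normalise the event timestamp to UTC ISO-8601, e.g. for a time-partitioned table.
    try:
        ts = datetime.fromisoformat(record["event_ts"])
    except (TypeError, ValueError):
        return None
    record["event_ts"] = ts.astimezone(timezone.utc).isoformat()
    return record


def batch_validate(messages):
    """Split a batch into loadable rows and rejects (dead-letter candidates)."""
    good, bad = [], []
    for raw in messages:
        rec = validate_message(raw)
        if rec is not None:
            good.append(rec)
        else:
            bad.append(raw)
    return good, bad
```

In practice the `good` rows would be loaded to a staging table and the `bad` messages routed to a dead-letter topic for inspection.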
Good-to-Have:
- Excellent communication and collaboration skills.
- Commitment to continuous learning and improvement.
Responsibilities:
- Hands-on scheduling of data pipelines with dependency flow using cloud services such as Airflow/Composer or other schedulers.
- Troubleshoot and resolve technical issues promptly.
- Communicate effectively with both technical and non-technical stakeholders.
- Architect cloud-based database solutions tailored to business needs, ensuring scalability, reliability, and performance.
- Continuously learn and adopt new Google Cloud features and best practices to improve database systems.
- Deploy and maintain databases on Google Cloud services like Cloud SQL, Cloud Spanner, and Bigtable.
- Apply encryption, access controls, and other security measures to protect sensitive data and comply with regulations.
- Migrate data from on-premises or other cloud platforms to Google Cloud with minimal downtime while preserving data integrity.
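The "dependency flow" scheduling responsibility above is, at its core, topological ordering of a task graph, which is what an Airflow/Composer DAG resolves before running tasks. The sketch below illustrates that idea with Python's standard-library `graphlib`; the task names are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how upstream dependencies are declared in an Airflow DAG.
pipeline = {
    "extract_gcs": set(),              # land raw files from GCS
    "load_staging": {"extract_gcs"},   # load into a BigQuery staging table
    "dq_checks": {"load_staging"},     # run data-quality validations
    "transform": {"dq_checks"},        # build curated models
    "publish": {"transform"},          # expose to downstream consumers
}


def run_order(dag: dict) -> list:
    """Return one valid execution order for the task graph (raises on cycles)."""
    return list(TopologicalSorter(dag).static_order())
```

A scheduler such as Composer does the same resolution continuously, also handling retries, schedules, and parallel execution of independent tasks.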