We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and ETL processes. The ideal candidate must have extensive experience with Azure services and strong SQL skills.
Roles and Responsibilities
- Build data pipelines, data validation frameworks, and job schedules with an emphasis on automation and scale
- Contribute to the overall architecture, frameworks, and design patterns used to store and process high data volumes
- Design and implement features in collaboration with product owners, reporting analysts / data analysts, and business partners within an Agile / Scrum methodology
Preferred Skills
- Experience in data projects with focus on data integration and ingestion
- Must have experience with Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse
- Azure experience must be focused on Azure Data Factory, Azure storage solutions (such as Blob Storage and Azure Data Lake Storage Gen2), and Azure data pipelines
- Good hands-on experience with Azure Databricks
- Should have good experience with PySpark
- Experience with PowerShell, shell scripting, and Python
- Experience in building data pipelines for large volumes of data across disparate data sources
- Experience working with Agile/Scrum methodologies
- Knowledge of and experience with big data technologies, the Data Vault methodology, and dbt
Education
- Bachelor's or Master's degree in Engineering or Computer Science