
Data Engineer

Location

Hyderabad, India

Job Category
Software Engineering
Experience
3-5 years
Type
Work From Office
Posted Date
Jan 24, 2025
Mode
Full Time

About PalTech

PalTech is a leading offshore IT consulting firm specializing in delivering transformative digital solutions and services to small, medium, and large organizations across diverse industries. With a legacy spanning over two decades, we have established ourselves as pioneers in driving innovation through advanced technology and an empathetic understanding of client needs.



Why Choose PalTech

Expertise

Our team comprises seasoned experts with extensive experience across various domains.

Client-Centric Approach

Our approach focuses on understanding and meeting our clients' needs.

Global Reach

We operate on a global scale, offering services worldwide.

Commitment to Excellence

We aim to maintain and deliver top-notch quality in every aspect of our work.

Job Description

We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and ETL processes. The ideal candidate must have extensive experience with Azure services and strong SQL skills.

Roles and Responsibilities

  • Build data pipelines, data validation frameworks, and job schedules, with an emphasis on automation and scale
  • Contribute to the overall architecture, frameworks, and design patterns used to store and process high data volumes
  • Design and implement features in collaboration with product owners, reporting/data analysts, and business partners within an Agile/Scrum methodology

Preferred Skills

  • Experience in data projects with a focus on data integration and ingestion
  • Must have experience with Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse
  • Azure experience must be focused on Azure Data Factory, Azure storage solutions (such as Blob Storage and Azure Data Lake Gen2), and Azure data pipelines
  • Good experience with Azure Databricks
  • Good experience with PySpark
  • Experience with PowerShell, shell scripting, and Python
  • Experience building data pipelines for large volumes of data across disparate data sources
  • Experience working with Agile/Scrum methodologies
  • Knowledge of and experience with big data, the Data Vault methodology, and dbt

Education

  • Bachelor's or Master's degree in Engineering or Computer Science

Application

Accepted file types: PDF, DOC, DOCX. Maximum file size: 210 MB.