
Databricks Solution Architect

Location
Hyderabad, India
Job Category
Sales
Experience
7+ Years
Type
Work From Office
Posted Date
Mar 19, 2025
Mode
Full Time

About PalTech

PalTech is a leading offshore IT consulting firm specializing in delivering transformative digital solutions and services to small, medium, and large organizations across diverse industries. With a legacy spanning over two decades, we have established ourselves as pioneers in driving innovation through advanced technology and empathetic understanding of client needs.



Why Choose PalTech

Expertise

Our team comprises seasoned experts with extensive experience across various domains.

Client-Centric Approach

Our approach centers on understanding and meeting our clients' needs.

Global Reach

We operate on a global scale, offering services worldwide.

Commitment to Excellence

We aim to maintain and deliver top-notch quality in every aspect of our work.

Job Description

We are looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. The role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence. The ideal candidate has over 7 years of data engineering experience, including 3+ years of hands-on Databricks experience, strong cloud platform knowledge, and expertise in Spark, Delta Lake, Python, Scala, and SQL, with a focus on delivering scalable, optimized data architectures. If this sounds like you, we would love to hear from you!

Roles and Responsibilities

  • Design and develop migration strategies and processes. 
  • Collaborate with stakeholders to understand business requirements and technical challenges. 
  • Analyze current data and scope for optimization during the migration process. 
  • Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks. 
  • Design, implement, and optimize scalable, high-performance data architectures using Databricks. 
  • Build and manage data pipelines and workflows within Databricks. 
  • Ensure that best practices for security, scalability, and performance are followed. 
  • Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads. 
  • Oversee the technical aspects of the migration process, from planning through to execution. 
  • Work closely with engineering and data teams to ensure proper migration of ETL processes, data models, and analytics workloads. 
  • Troubleshoot and resolve issues related to migration, data quality, and performance. 
  • Create documentation of the architecture, migration processes, and solutions. 
  • Provide training and support to teams post-migration to ensure they can leverage Databricks. 

Required Skills and Qualifications: 

  • Expertise in Databricks architecture and best practices for data processing. 
  • Strong knowledge of Spark, Delta Lake, Delta Live Tables (DLT), Lakehouse architecture, and other current Databricks components. 
  • Proficiency in Databricks Asset Bundles. 
  • Expertise in designing and developing migration frameworks using Databricks. 
  • Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks. 
  • Familiarity with data governance, security, and compliance in cloud environments. 
  • Solid understanding of cloud-native data solutions and services.

Experience

  • 7+ years of experience in data engineering, cloud architecture, or related fields. 
  • 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and optimizing workloads. 
  • Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks. 
  • Experience in end-to-end data migration projects involving large-scale data infrastructure. 
  • Familiarity with ETL tools, data lakes, and data warehousing solutions.

Application

"*" indicates required fields

Accepted file types: pdf, doc, docx. Max. file size: 210 MB.