Job Type: Contract
Job Category: IT

Job Description

Job Title: Azure Databricks Developer
Location: Louisville, KY (Day 1 Onsite)
Employment Type: Contract

 

Job Summary:

We are seeking a Senior Data Engineer with extensive experience in building enterprise-grade data platforms and scalable data pipelines. The ideal candidate will have deep expertise in Azure, Databricks, Big Data technologies, and data engineering best practices, particularly in healthcare or large-scale enterprise environments. This role requires a strong balance of technical depth, leadership, and collaboration with cross-functional teams.

 

Key Responsibilities:

  • Architect, design, and develop scalable, resilient, and high-performance data pipelines for enterprise-level data platforms.
  • Lead the build-out of Azure Data Lake leveraging Databricks and other modern data technologies.
  • Develop robust data pipelines for data ingestion, validation, normalization, enrichment, and business-specific processing of healthcare and enterprise datasets.
  • Partner with engineering, product management, program management, and operations teams to deliver pipeline platforms and data lake solutions.
  • Drive technology and business transformation initiatives through effective data architecture and automation.
  • Implement data governance, metadata management, and data privacy best practices.
  • Ensure end-to-end data quality, observability, and reliability across all ETL processes.
  • Collaborate in Agile delivery models, contributing to sprint planning, backlog grooming, and continuous improvement.
  • Drive innovation and optimization in data engineering processes, tools, and infrastructure.

 

Required Skills & Experience:

  • 10+ years of experience in data processing, ETL, and big data engineering.
  • Strong hands-on expertise with Databricks (4+ years) and Python (essential).
  • Experience in Azure Cloud services – Azure Data Lake, Azure Synapse, Azure Data Factory, Azure DevOps, and related data components.
  • Solid background in Big Data technologies – Hadoop, Cloudera, Spark, Hive, Scala, Java, Kafka, and NoSQL databases (MongoDB, Cassandra, etc.).
  • Proven ability to design and optimize data pipelines, data ingestion frameworks, and ETL workflows.
  • Deep understanding of data warehousing, data modeling, reporting, and analytics concepts.
  • Experience with metadata management, data governance, and data privacy frameworks.
  • Strong exposure to CI/CD pipelines, version control (Git), and automated data deployment frameworks.
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or Physical Sciences.

 

Nice to Have Skills:

  • Experience leading design and development of large, enterprise-scale systems.
  • Proven full-stack data development experience (end-to-end lifecycle).
  • Strong understanding of DevOps and containerization (Docker/Kubernetes).
  • Experience with streaming technologies such as Kafka, Spark Streaming, or Event Hubs.
  • Knowledge of API integration for data exchange and microservices architecture.
  • Experience in healthcare data processing (HIPAA, HL7, FHIR standards preferred).
  • Excellent communication, stakeholder management, and cross-functional collaboration skills.
  • Demonstrated ability to mentor junior engineers and advocate for engineering best practices.
  • Familiarity with data visualization tools such as Power BI, Tableau, or Looker.

 

Soft Skills:

  • Strong analytical and problem-solving mindset.
  • Ability to operate independently with minimal supervision.
  • Business acumen with technical depth.
  • Excellent time management and multitasking skills.
  • Clear and concise executive-level communication.
