Job Type: Full Time
Job Category: IT

Job Description

Role: Hadoop, PySpark, Python, Apache Kafka

Location: Charlotte, NC / New York, NY / Dallas, TX / Jersey City, NJ

FTE only

 


Must-Have Technical/Functional Skills

 

Primary Skill: Hadoop ecosystem (HDFS, Hive, Spark), PySpark, Python, Apache Kafka

Secondary Skill: UI – Angular.

Experience: Minimum 9 years

 

Roles & Responsibilities

 

Architectural Leadership:

• Define end-to-end architecture for data platforms, streaming systems, and web applications.

• Ensure alignment with enterprise standards, security, and compliance requirements.

• Evaluate emerging technologies and recommend adoption strategies.

 

Data Engineering:

• Design and implement data ingestion, transformation, and processing pipelines using Hadoop, PySpark, and related tools.

• Optimize ETL workflows for large-scale datasets and real-time streaming.

• Integrate Apache Kafka for event-driven architectures and messaging.

 

Application Development:

• Build and maintain backend services using Python and a microservices architecture.

• Develop responsive, dynamic front-end applications using Angular.

• Implement RESTful APIs and ensure seamless integration between components.

 

Collaboration & Leadership: 

• Work closely with product owners, business analysts, and DevOps teams.

• Mentor junior developers and data engineers.

• Participate in agile ceremonies, code reviews, and design discussions.

 

Required Skills & Qualifications: 

 

Technical Expertise: 

• Strong experience with the Hadoop ecosystem (HDFS, Hive, Spark).

• Proficiency in PySpark for distributed data processing.

• Advanced programming skills in Python.

• Hands-on experience with Apache Kafka for real-time streaming.

• Front-end development using Angular (TypeScript, HTML, CSS).

 

Architectural Skills: 

• Expertise in designing scalable, secure, and high-performance systems.

• Familiarity with microservices, API design, and cloud-native architectures.

 

Additional Skills: 

• Knowledge of CI/CD pipelines and containerization (Docker/Kubernetes).

• Exposure to cloud platforms (AWS, Azure, GCP).

 

Education: 

• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

 

Experience: 

• 9+ years in software development, with at least 4+ years in architecture and Big Data technologies.

 

Preferred Qualifications: 

• Experience in the BFSI domain or with large-scale enterprise systems.

• Understanding of data governance, security, and compliance standards.

 

Soft Skills: 

• Strong analytical and problem-solving abilities.

• Excellent communication and leadership skills.

• Ability to thrive in a fast-paced, agile environment.

Required Skills
Data / Python Engineer, Python / DevOps Engineer
