Job Type: Contract
Job Category: IT
Job Description
Role: IBM DataStage Developer with SQL & Databricks
Location: Pittsburgh, PA (Onsite Only)
Long-term Contract (C2C/W2)
Job Overview:
We are seeking a skilled IBM DataStage Developer with strong proficiency in SQL and Azure Databricks for a data integration and modernization project. The ideal candidate will work closely with cross-functional teams to design, build, and maintain robust ETL processes for large-scale data systems.
Key Responsibilities:
- Develop, enhance, and support ETL solutions using IBM InfoSphere DataStage.
- Design and implement complex data pipelines integrating SQL and Databricks workflows.
- Perform data extraction, transformation, and loading from multiple source systems into target data lakes/warehouses.
- Collaborate with data architects and business analysts to ensure scalable and efficient data integration.
- Optimize DataStage jobs and Databricks notebooks for performance and reliability.
- Conduct unit testing, integration testing, and participate in code reviews.
- Create and maintain technical documentation for ETL and data flow processes.
Required Skills:
- Overall IT Experience: 8+ Years
- 5+ years of hands-on experience with IBM DataStage (v11.x or higher)
- Strong experience with SQL (T-SQL, PL/SQL) for data manipulation, queries, and stored procedures
- 2+ years of working knowledge of Azure Databricks / PySpark / Delta Lake
- Solid understanding of data warehousing, data lakes, and data integration patterns
- Experience working with relational databases (e.g., SQL Server, Oracle, or Snowflake)
- Excellent debugging and performance-tuning skills
Nice to Have:
- Experience with Azure Data Factory, Synapse, or Data Lake Gen2
- Background in banking, healthcare, or insurance domains
- Familiarity with Agile methodologies and DevOps practices