Data Engineer III
Company: Reuben Cooley, Inc.
Location: Morristown, NJ 07960
Description:
JOB SUMMARY:
We are seeking an experienced Data Engineer III to design, build, and maintain scalable data pipelines, ensuring efficient data processing and integration across multiple platforms. The ideal candidate will have expertise in SQL, Python, PySpark, Databricks, ETL pipelines, Database Administration, and ServiceNow. You will collaborate with cross-functional teams to enhance data infrastructure, optimize performance, and support data-driven decision-making.
KEY RESPONSIBILITIES:
- Design, develop, and optimize ETL pipelines to process large-scale data efficiently.
- Build and manage data workflows in Databricks using PySpark and SQL.
- Administer and optimize databases (SQL, NoSQL, or cloud-based solutions) for high performance and scalability.
- Ensure data integrity, quality, and security across systems and pipelines.
- Automate data ingestion, transformation, and orchestration processes.
- Monitor and troubleshoot data pipelines, ensuring high availability and reliability.
- Work closely with Data Scientists, Analysts, and Business teams to understand data needs and provide solutions.
- Implement best practices for database administration and performance tuning.
- Utilize ServiceNow for ticketing, issue tracking, and workflow automation related to data engineering tasks.
- Stay updated with the latest industry trends in big data, cloud computing, and data engineering technologies.
REQUIRED QUALIFICATIONS:
- 5+ years of experience in Data Engineering or a related field.
- Strong proficiency in SQL for database management and query optimization.
- Hands-on experience with Python and PySpark for data processing.
- Expertise in Databricks for big data analytics and ETL workflows.
- Solid understanding of ETL pipeline development, data modeling, and data warehouse architecture.
- Experience in database administration (SQL Server, PostgreSQL, MySQL, or cloud-based databases).
- Familiarity with ServiceNow for incident management and workflow automation.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills, ability to work independently, and excellent communication skills.