Data Engineer
Company: My3Tech
Location: Appleton, WI 54915
Description:
Must have: PySpark, Python, Databricks
Job Description:
We are building out an Enterprise Data Platform (EDP) entirely in the cloud, using Databricks on AWS. The project involves two teams focused on data ingestion, with an ingestion framework already established in Python and Spark. The platform uses DBT for data modeling. Our goal is to enhance data transformation capabilities while maintaining high standards for testing and data quality.
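As a rough illustration of the kind of Python/Spark ingestion code this framework implies, here is a minimal sketch of a Databricks-style job that lands a source extract from S3 into a Delta table. The bucket, path, schema, and table names are hypothetical placeholders, not the client's actual objects.

# Minimal raw-landing ingestion sketch (all names hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edp_raw_ingest").getOrCreate()

# Read a hypothetical source extract; real options depend on the feed format.
raw_df = (
    spark.read
    .option("header", "true")
    .csv("s3://example-edp-landing/customers/")
)

# Stamp ingestion metadata so downstream layers can trace lineage.
ingested_df = (
    raw_df
    .withColumn("_ingest_ts", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
)

# Append to a Delta table in the raw layer (schema assumed to exist).
ingested_df.write.format("delta").mode("append").saveAsTable("raw.customers")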
Key Responsibilities:
- Lead Data Ingestion: Oversee and guide the data ingestion process, ensuring efficient and accurate data integration.
- Mentorship: Mentor junior engineers, promoting engineering excellence and improving delivery processes.
- Data Expertise: Utilize extensive knowledge of various data structures (Oracle, DB2, Mainframe, MongoDB, DynamoDB) to optimize data ingestion and transformation.
- Data Profiling: Analyze structured, unstructured, and semi-structured data to determine the best methods for profiling and ingestion (a small profiling sketch follows this list).
- Ownership: Take full ownership of data sources and datasets, determining the best methods for ingestion and understanding the data thoroughly.
- Collaboration: Work closely with staff engineers and other team members to schedule and conduct interviews, ensuring a collaborative and proactive work environment.
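As a loose illustration of the profiling work described above, and not the client's actual framework, the sketch below computes per-column null rates and distinct counts on a Spark DataFrame; the source table name is hypothetical.

# Hypothetical profiling pass: null rates and distinct counts per column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edp_profile").getOrCreate()

df = spark.read.table("bronze.customers")  # hypothetical source table
total = df.count()

profile = []
for col_name in df.columns:
    null_count = df.filter(F.col(col_name).isNull()).count()
    distinct_count = df.select(col_name).distinct().count()
    profile.append(
        (col_name, total, null_count, null_count / total if total else 0.0, distinct_count)
    )

profile_df = spark.createDataFrame(
    profile, ["column", "row_count", "null_count", "null_rate", "distinct_count"]
)
profile_df.show(truncate=False)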
Required Skills and Experience:
- Technical Proficiency: Strong experience with Databricks, AWS, Python, Spark, and DBT.
- Cloud Migration: Proven track record in cloud migration projects, particularly moving on-premises systems to the cloud.
- Data Modeling: Expertise in data modeling and transformation, with a focus on the Medallion (bronze/silver/gold) architecture; a brief sketch follows this list.
- Leadership: Demonstrated ability to lead teams, mentor junior engineers, and drive engineering excellence.
- Data Structures: In-depth knowledge of various data structures, including Oracle, DB2, Mainframe, MongoDB, and DynamoDB.
- Data Profiling: Experience with profiling structured, unstructured, and semi-structured data to optimize ingestion processes.
- Soft Skills: Accountable, collaborative, and proactive, with strong communication and interpersonal skills.
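To make the Medallion framing concrete, here is a minimal PySpark sketch of a bronze-to-silver step: deduplicating raw records on a business key and standardizing fields into a curated Delta table. This is an assumed illustration, not the client's pipeline; all table and column names are hypothetical, and in practice the gold layer would typically be modeled in DBT on top of tables like this.

# Hypothetical bronze-to-silver transformation in the Medallion pattern.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("edp_silver_customers").getOrCreate()

bronze = spark.read.table("bronze.customers")  # raw, append-only landing data

# Keep the latest record per business key and standardize basic fields.
latest = Window.partitionBy("customer_id").orderBy(F.col("_ingest_ts").desc())

silver = (
    bronze
    .withColumn("_rn", F.row_number().over(latest))
    .filter(F.col("_rn") == 1)
    .drop("_rn")
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .filter(F.col("customer_id").isNotNull())
)

# Overwrite the curated silver table; gold models build on top of this.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.customers")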
Interview Process:
- Round 1: Technical interview with staff engineers.
Required Skills : Python/PySpark, AWS, DBT
Basic Qualification :
Additional Skills :
Background Check : Yes
Drug Screen : No
Notes :
Selling points for candidate :
Project Verification Info : MSA: Blanket Approval Received; Client Letter: Will Provide
Candidate must be your W2 Employee :No
Exclusive to Apex :No
Face to face interview required :No
Candidate must be local :No
Candidate must be authorized to work without sponsorship :Yes
Interview times set : No
Type of project :
Master Job Title :
Branch Code :