Senior Data Engineer
Company: Pinnacle Living
Location: Columbus, OH 43230
Description:
Location: Columbus, OH 43240 (Hybrid 3-Days Onsite)
Duration: 6 Months with Right to Hire
**No C2C**
GC & USC only
Candidates should have Python, PySpark, and AWS (EC2, S3, Lambda, EMR).
All resources will be:
- Creating pipelines and loading data into the enterprise platform, including the Data Lake and Databricks.
- Migrating from the legacy platform to the new platform.
- Candidates' accomplishments will factor into moving to the next step.
Skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience; bachelor's degree required. Python programming experience.
- Strong skills around object-oriented analysis and design (OOAD), data structures, algorithms, design patterns. Hands-on practical experience in system design, application development, testing, and operational stability.
- Advanced understanding of agile methodologies and practices such as CI/CD, application resiliency, and security.
- Proficient in coding in Python, PySpark, Java 17/21, Spring Boot, and SQL databases.
- Prior hands-on experience with cloud technologies such as AWS and Azure.
- Hands-on experience with AWS, with exposure to ECS/EKS, SQS, S3, Lambda, EC2, RDS, and Step Functions.
- Prior experience with big data platforms such as Hadoop and Databricks.
- Advanced in two or more of the following: functional programming, microservices, RESTful web services development, Kafka, Hibernate.
- Cloud: strong hands-on experience with cloud-native architecture (AWS, containerization/Kubernetes).
- Experience in data analysis, physical database design, and relational and NoSQL databases.
- Prior experience with Splunk is preferred.
- Excellent communication (oral and written), interpersonal, and organizational skills.
Responsibilities:
- Analyze data to drive design on assigned projects.
- Develop APIs and data ingestion pipelines using AWS services, and create notebooks in Databricks.
- Design, develop, and modify Databricks notebooks using PySpark.
- Perform root cause analysis for long-running jobs and implement solutions.
- Understand the current data flow architecture and rewrite code to process data on the AWS/Databricks platform.
- Strong experience with an Agile SDLC is strongly preferred.
- Ingest data into the AWS data platform for various use cases, including real-time streaming and batch processing.
- Maintain technical acumen by pursuing formal and informal learning opportunities about technology, JPMorgan Chase products, and financial services.
Qualifications:
- Experience in AWS/big data required.
- Experience working with any cloud platforms (design and development).
- Experience in Python or Java programming (both is a plus).
- Experience in large scale data integration.
- Experience with AWS services like ECS/EKS, SQS, S3, Lambda, EC2 etc.
- Experience working with NoSQL databases.
- Exposure to reporting tools.
Pay Range: $65 - $75
The specific compensation for this position will be determined by a number of factors, including the scope, complexity and location of the role as well as the cost of labor in the market; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision as well as 401K contributions.