Data Engineer 3
Company: Intelliswift
Location: Ridgefield Park, NJ 07660
Description:
Job ID: 25-07104
Pay rate range: $65/hr. to $70/hr. on W2
Top Skills:
GCP BigQuery, Python, ETL pipelines
Summary:
The main function of the Data Engineer is to develop, evaluate, test, and maintain architectures and data solutions within our organization.
The typical Data Engineer executes plans, policies, and practices that control, protect, deliver, and enhance the value of the organization's data assets.
Qualifications:
5-7 years of experience designing and implementing large-scale data processing, storage, and distribution systems
Extensive hands-on experience working with large data sets, designing and building robust Big Data solutions using the Spark framework, GCP Big Data services, and industry-standard frameworks
Ability to work with multi-technology, cross-functional teams and key stakeholders to guide and manage a full life-cycle solution
Extensive experience with relational and MPP database platforms (e.g., GCP BigQuery, Hive, Cloud SQL)
Experience with the open-source Hadoop stack and the Spark framework
Strong understanding of Big Data Analytics platforms and ETL in the context of Big Data (see the orchestration sketch after this list)
Excellent problem-solving, hands-on engineering, and communication skills
Broad understanding of and experience with real-time analytics
Participation in the full Software Development Life Cycle (SDLC) of the Big Data solution
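As a concrete illustration of the ETL and lifecycle experience described above, the following is a minimal sketch of a daily pipeline orchestrated with Airflow that submits a PySpark job to Dataproc. The project, region, cluster, and script paths are hypothetical placeholders, not details of this role.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import (
        DataprocSubmitJobOperator,
    )

    with DAG(
        dag_id="daily_events_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; earlier versions use schedule_interval
        catchup=False,
    ) as dag:
        # Submit a PySpark transform to an existing Dataproc cluster.
        # All identifiers below are hypothetical placeholders.
        run_etl = DataprocSubmitJobOperator(
            task_id="run_spark_etl",
            project_id="example-project",
            region="us-east1",
            job={
                "placement": {"cluster_name": "example-cluster"},
                "pyspark_job": {
                    "main_python_file_uri": "gs://example-bucket/jobs/etl.py"
                },
            },
        )

In practice the DAG would add data-quality checks and downstream load tasks; this sketch only shows the scheduling and Dataproc submission pattern the qualifications imply.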
Technical Skills Required
Any combination of the following technical skills (a PySpark example follows the list):
Hadoop: HDFS, MapReduce, Hive, Airflow
DW: BigQuery, Hive
Languages: Python, PySpark, Shell Scripting, SQL Scripting
Cloud: GCP Big Data native services
Any RDBMS/DWBI technologies
Spark (mandatory): Spark on GCP Dataproc
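To make the listed stack concrete, here is a minimal PySpark sketch of the kind of job this posting implies: extract raw files from Cloud Storage, transform with Spark on Dataproc, and load into BigQuery via the spark-bigquery connector (bundled on Dataproc images). All bucket, dataset, table, and column names are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sample-etl").getOrCreate()

    # Extract: raw CSV events landed in Cloud Storage (hypothetical path;
    # the columns event_ts and event_type are assumed for illustration).
    raw = spark.read.option("header", True).csv("gs://example-bucket/raw/events/")

    # Transform: derive the event date and aggregate daily counts per event type.
    daily = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "event_type")
           .count()
    )

    # Load: write to BigQuery through the spark-bigquery connector, which
    # stages data via a GCS bucket (hypothetical dataset, table, and bucket).
    (daily.write.format("bigquery")
        .option("table", "example_dataset.daily_event_counts")
        .option("temporaryGcsBucket", "example-staging-bucket")
        .mode("overwrite")
        .save())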
Roles & Responsibilities
Position Activities and Tasks
Design complex, high-performance data architectures
Develop and maintain strong relationships with senior executives, developing new insights into the client's business model and pain points and delivering actionable, high-impact results
Participate in and lead key engagements, developing plans and strategies for data management processes and IT programs for the business, and providing hands-on assistance with data modeling and the technical implementation of Big Data solutions
Facilitate, guide, and influence clients and teams toward the right information technology architecture, serving as the interface between business leadership, technology leadership, and the delivery teams
Lead and mentor other developers within the team
Identify and resolve performance bottlenecks
Work effectively within a team
Produce high-quality work products under pressure and within deadlines
Coordinate with developers, other architects, stakeholders, and cross-functional teams across the organization
Education:
Bachelor's degree