Sr. Data Engineer (New York, 5 days onsite & Local only)
Company: InfiCare Software Technologies
Location: New York, NY 10025
Description:
Job Title: Sr. Data Engineer
Location: New York (5 days onsite & Candidates MUST be local)
Duration: 12+ Months
Job Description:
Required Skills: Minimum 10 years of experience required.
1. Proficiency in data engineering programming languages (preferably Python; alternatively Scala or Java)
2. Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
3. Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
4. Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
5. Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and streaming), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
Strong organizational, problem-solving, and critical-thinking skills; strong documentation skills
