Data Engineer - Snowflake and Databricks
Company: Expedite Technology Solutions
Location: Seattle, WA 98115
Description:
Primary skills:
DBT Labs
Snowflake
Secondary skills:
Databricks
Databricks Unity Catalog
Responsibilities:
Develop, optimize, and maintain data pipelines using Azure Data Factory (ADF), DBT Labs, Snowflake, and Databricks.
Develop reusable jobs and a configuration-based integration framework to streamline development and improve scalability (see the first sketch after this list).
Manage data ingestion for structured and unstructured data (landing/lakehouse: ADLS; sources: DLS, Salesforce, SharePoint document libraries; partner data: Client, IHME, WASDE, etc.).
Implement and optimize ELT processes, source-to-target mappings, and transformation logic in DBT Labs, Azure Data Factory, Databricks Notebooks, SnowSQL, etc.
Collaborate with data scientists, analysts, data engineers, report developers and infrastructure engineers for end-to-end support.
Co-develop CI/CD best practices, automation, and deployment pipelines with infrastructure engineers using GitHub Actions (see the second sketch after this list).
Automate the workflow from source-to-target mappings through data pipelines to data lineage in Collibra.
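To illustrate the configuration-based integration framework mentioned above, here is a minimal PySpark sketch: one config entry per source feed, so onboarding a new feed is a config change rather than a new job. The config contents, ADLS paths, and table names (the abfss:// container and the lakehouse.raw schema) are illustrative assumptions, not specifics from this posting.

```python
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config_driven_ingest").getOrCreate()

# Illustrative config: one entry per source feed. In practice this would
# live in the repo or a control table, not inline in the job.
SOURCES = json.loads("""
[
  {"name": "salesforce_accounts",
   "path": "abfss://landing@examplelake.dfs.core.windows.net/salesforce/accounts/",
   "format": "parquet",
   "target": "lakehouse.raw.salesforce_accounts"},
  {"name": "wasde_reports",
   "path": "abfss://landing@examplelake.dfs.core.windows.net/partner/wasde/",
   "format": "csv",
   "target": "lakehouse.raw.wasde_reports"}
]
""")

for src in SOURCES:
    # Read whatever landed in ADLS for this source...
    df = (spark.read.format(src["format"])
          .option("header", "true")  # used by csv, ignored by parquet
          .load(src["path"]))
    # ...and append it to the corresponding lakehouse table.
    df.write.format("delta").mode("append").saveAsTable(src["target"])
```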
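And a hedged sketch of the deploy step a GitHub Actions workflow might call for DBT Labs deployments, assuming dbt is installed on the runner and credentials arrive via environment variables; the workflow YAML itself is omitted, and the target names are hypothetical.

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run a command and fail the CI job on a non-zero exit code."""
    print("+ " + " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)

if __name__ == "__main__":
    # The workflow could pass "dev" for pull requests and "prod" on merge.
    target = sys.argv[1] if len(sys.argv) > 1 else "dev"
    run(["dbt", "deps"])                       # install dbt package dependencies
    run(["dbt", "build", "--target", target])  # run and test models in one pass
```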
Required Experience:
Hands-on experience building pipelines with ADF, Snowflake, Databricks, and DBT Labs.
Expertise in Azure Cloud with Databricks, Snowflake, and ADLS Gen2 integration.
Data Warehousing and Lakehouse Knowledge: Proficient with ELT processes, *** Tables, and External Tables for structured/unstructured data (see the sketch after this list).
Experience with Databricks Unity Catalog and data sharing technologies.
Strong skills in CI/CD (Azure DevOps, GitHub Actions) and version control (GitHub).
Strong cross-functional collaboration and technical support experience for data scientists, report developers and analysts.
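As context for the External Tables requirement, a minimal sketch using the Snowflake Python connector to register an external table over Parquet files staged in ADLS; the stage name, table name, and connection details are illustrative assumptions.

```python
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",   # hypothetical warehouse/database/schema
    database="LAKEHOUSE",
    schema="RAW",
)

# An external table exposes staged files as queryable rows without loading
# them; for Parquet, fields are read from the VARIANT column named VALUE.
conn.cursor().execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS EXT_PARTNER_WASDE
    WITH LOCATION = @ADLS_LANDING_STAGE/partner/wasde/
    FILE_FORMAT = (TYPE = PARQUET)
    AUTO_REFRESH = FALSE
""")
conn.close()
```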