Azure Fabric Data Architect
Company: Cynet Systems
Location: Newark, NJ 07104
Job Description:
Pay Range: $75/hr - $80/hr
Responsibilities:
Data Architecture:
- Design end-to-end data architecture leveraging Microsoft Fabric's capabilities.
- Design data flows within the Microsoft Fabric environment.
- Implement OneLake storage strategies.
- Configure Synapse Analytics workspaces.
- Establish Power BI integration patterns.
- Architect data integration patterns with analytics using Azure Data Factory and Microsoft Fabric.
- Implement medallion architecture (Bronze/Silver/Gold layers).
- Ability to configure real-time data ingestion patterns.
- Establish data quality frameworks.
- Implement modern data lakehouse architecture using Delta Lake, ensuring data reliability and performance.
- Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.
- Data Integration: Combining and cleansing data from various sources.
- Data Pipeline Management: Creating, orchestrating, and troubleshooting data pipelines.
- Analytics Reporting: Building and delivering detailed reports and dashboards to derive meaningful insights from large datasets.
- Data Visualization Techniques: Representing data graphically in impactful and informative ways.
- Optimization and Security: Optimizing queries, improving performance, and securing data.
- Apache Spark Proficiency: Utilizing Spark for large-scale data processing and analytics.
- Data Engineering: Building and managing data pipelines, including ETL (Extract, Transform, Load) processes.
- Delta Lake: Implementing Delta Lake for data versioning, ACID transactions, and schema enforcement.
- Cluster Management: Configuring and managing Databricks clusters for optimized performance.
- Integration with Azure Services: Integrating Databricks with other Azure services like Azure Data Lake, Azure SQL Database, and Azure Synapse Analytics.
- Configure Microsoft Purview policies.
- Establish data masking for sensitive information.
- Design audit logging mechanisms.
- Design scalable data pipelines using Azure Databricks for ETL/ELT processes and real-time data integration.
- Implement performance tuning strategies for large-scale data processing and analytics workloads.
- Optimize Spark configurations.
- Implement partitioning strategies.
- Design caching mechanisms.
- Establish monitoring frameworks.