Senior Python Platform Engineer (Data Platforms)

Company: Egen

Location: Naperville, IL 60540

Description:

Egen is a fast-growing and entrepreneurial company with a data-first mindset. We bring together the best engineering talent working with the most advanced technology platforms, including Google Cloud and Salesforce, to help clients drive action and impact through data and insights. We are committed to being a place where the best people choose to work so they can apply their engineering and technology expertise to envision what is next for how data and platforms can change the world for the better. We are dedicated to learning, thrive on solving tough problems, and continually innovate to achieve fast, effective results.

You will join a team of insatiably curious data engineers, software architects, and product experts who never settle for "good enough". Our Data Engineering teams build scalable data platforms and pipelines for modern analytics and AI services using Python, Airflow, and AWS, GCP, or Azure data storage and warehousing services. The pipelines we build typically integrate with technologies such as Kafka, Storm, and Elasticsearch. We are working on a continuous deployment pipeline that leverages rapid on-demand releases. Our developers work in an agile process to efficiently deliver high-value applications and product packages.

As a Senior Data Platform Engineer, you will build data platforms to enable scalable and modern analytics and machine learning on rich datasets.

Responsibilities
    • Own architecture approaches such as twelve-factor apps, microservices, and well-formed APIs to allow our architecture to scale and enable deployment of modern AI applications.
    • Collaborate with internal teams and customers to ensure data pipelines are built in the best and most scalable ways.
    • Document development phases and monitor systems.
    • Ensure software stays up to date with the latest technologies.
    • Lead and develop passionate engineering teams that strive to AMAZE!


What we're looking for:
    • Minimum of a Bachelor's Degree or its equivalent in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering, or a related field.
    • Previous experience as a data engineer writing production code to transform data between data models and formats, preferably in Python (Spark or PySpark is a bonus).
    • You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
    • Experience building cloud-native applications and supporting technologies / patterns / practices including: AWS/GCP/Azure, Docker, CI/CD, DevOps, and microservices is a plus.
    • You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.
    • You value the importance of defining data contracts, and have experience writing specifications including REST APIs.