DevOps Engineer
Company: Mothership
Location: San Antonio, TX 78228
Description:
Tools & Technologies:
- Apache Kafka (Self-managed or MSK)
- Amazon Managed Service for Apache Flink
- Amazon EC2, S3, RDS, and VPC
- Terraform/CloudFormation
- Docker, Kubernetes (EKS)
- ELK Stack, CloudWatch
- Python, Bash
Skills and Expertise:
- AWS Managed Services:
  - Proficiency in AWS services such as Amazon MSK (Managed Streaming for Apache Kafka), Amazon Kinesis, AWS Lambda, Amazon S3, Amazon EC2, Amazon RDS, Amazon VPC, and AWS IAM.
  - Ability to manage infrastructure as code with AWS CloudFormation or Terraform.
- Apache Flink:
  - Understanding of Apache Flink for real-time stream processing and batch data processing.
  - Familiarity with Flink's integration with Kafka or other messaging services.
  - Experience managing Flink clusters on AWS (on EC2, EKS, or managed services).
- Kafka Broker (Apache Kafka):
  - Deep knowledge of Kafka architecture, including brokers, topics, partitions, producers, consumers, and ZooKeeper.
  - Proficiency with Kafka management, monitoring, scaling, and optimization.
  - Hands-on experience with Amazon MSK or self-managed Kafka clusters on EC2.
- DevOps & Automation:
  - Strong experience automating deployments and infrastructure provisioning.
  - Familiarity with CI/CD pipelines using tools such as Jenkins, GitLab, GitHub Actions, or CircleCI.
  - Experience with Docker and Kubernetes, especially for containerizing and orchestrating applications in cloud environments.
- Programming & Scripting:
  - Strong scripting skills in Python, Bash, or Go for automation tasks.
  - Ability to write and maintain code that integrates data pipelines with Kafka, Flink, and other data sources.
- Monitoring & Performance Tuning:
  - Knowledge of CloudWatch, Prometheus, Grafana, or similar monitoring tools to observe the health of Kafka, Flink, and AWS services.
  - Expertise in optimizing real-time data pipelines for scalability, fault tolerance, and performance.
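The Kafka fundamentals listed above (brokers, topics, partitions, keyed producers) boil down to one invariant: records with the same key always land on the same partition, which is what preserves per-key ordering. A minimal, dependency-free sketch of keyed partition assignment (note: Kafka's real default partitioner uses murmur2 hashing, not the CRC32 used here for illustration):

```python
import zlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    """Illustrative stand-in for Kafka's keyed partition assignment:
    hash the record key, then take it modulo the partition count.
    Kafka's default partitioner uses murmur2; CRC32 is used here only
    to keep the sketch dependency-free."""
    return zlib.crc32(key) % num_partitions

# The same key always maps to the same partition, so a single
# consumer sees all records for that key in order.
p1 = assign_partition(b"user-42", 6)
p2 = assign_partition(b"user-42", 6)
print(p1 == p2)  # same key, same partition
```

Changing `num_partitions` generally remaps keys, which is why expanding a keyed topic's partition count breaks per-key ordering guarantees for in-flight data.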
Responsibilities:
- Infrastructure Design & Implementation:
  - Design and deploy scalable, fault-tolerant real-time data processing pipelines using Apache Flink and Kafka on AWS.
  - Build highly available, resilient infrastructure for data streaming, including Kafka brokers and Flink clusters.
- Platform Management:
  - Manage and optimize the performance and scaling of Kafka clusters (MSK or self-managed).
  - Configure, monitor, and troubleshoot Flink jobs on AWS infrastructure.
  - Oversee the deployment of data processing workloads, ensuring low-latency, high-throughput processing.
- Automation & CI/CD:
  - Automate infrastructure provisioning, deployment, and monitoring using Terraform, CloudFormation, or other tools.
  - Integrate new applications and services into CI/CD pipelines for real-time processing.
- Collaboration with Data Engineering Teams:
  - Work closely with Data Engineers, Data Scientists, and DevOps teams to ensure smooth integration of data systems and services.
  - Ensure the data platform's scalability and performance meet the needs of real-time applications.
- Security and Compliance:
  - Implement proper security mechanisms for Kafka and Flink clusters (e.g., encryption, access control, VPC configurations).
  - Ensure compliance with organizational and regulatory standards, such as GDPR or HIPAA, where necessary.
- Optimization & Troubleshooting:
  - Optimize Kafka and Flink deployments for performance, latency, and resource utilization.
  - Troubleshoot issues related to Kafka message delivery, Flink job failures, or AWS service outages.
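Troubleshooting the transient failures mentioned above (Kafka delivery timeouts, throttled AWS API calls, brief service outages) usually means retrying with capped exponential backoff rather than failing fast. A minimal sketch; the function name and parameters are illustrative, not tied to any specific library:

```python
def backoff_delays(base: float, factor: float, max_retries: int, cap: float) -> list:
    """Compute capped exponential backoff delays (in seconds) for a
    retry loop: each attempt waits `factor` times longer than the
    last, never exceeding `cap`. Jitter is omitted for clarity, though
    production retry loops typically add it to avoid thundering herds."""
    delays = []
    delay = base
    for _ in range(max_retries):
        delays.append(min(delay, cap))
        delay *= factor
    return delays

# Five retries, doubling from 1s and capped at 10s:
print(backoff_delays(1, 2, 5, 10))  # [1, 2, 4, 8, 10]
```

The cap matters operationally: without it, a long outage would push wait times into minutes or hours, delaying recovery well past the point the downstream service is healthy again.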