Principal Engineer
Company: nLine
Location: Berkeley Springs, WV 25411
Description:
Company Overview
nLine is a technology company dedicated to improving electricity reliability in developing countries through innovative data collection and analysis. We develop and deploy advanced sensor technologies and analytics platforms to provide utilities, regulators, and policymakers with accurate, actionable insights into power grid performance. Our work spans multiple countries, particularly in Sub-Saharan Africa, where we collect and analyze granular data on power quality and outages. By leveraging cutting-edge hardware and software solutions, nLine enables data-driven decision-making for infrastructure investments and operational improvements.
Work Environment
nLine operates with a flat organizational structure, valuing independence, rigor, and versatility. Our work involves developing and maintaining scalable systems that transform raw sensor data into insights that ultimately contribute to enhancing electricity access and reliability in underserved regions. Scientific integrity is therefore held in the highest regard: a successful candidate should never be comfortable shipping code that degrades the quality of the data nLine collects and presents to partners. Each person on the team owns their core technical areas while collaborating across our data processing and infrastructure projects. The ideal candidate can handle both high-level architectural decisions and hands-on development tasks, is comfortable working autonomously, learns quickly, and delivers high-quality work with minimal supervision. They should also be comfortable in a small-team environment and able to adapt to a range of services and platforms.
Base Requirements
- Education and Experience
- Bachelor's degree in Computer Science, Engineering, or related field
- 3+ years of experience as a software, infrastructure, or DevOps engineer
- Technical Proficiency
- Experience with cloud computing platforms, preferably Google Cloud
- Strong proficiency in Python
- Experience with big data processing and distributed computing
- Experience with data visualization and dashboard creation
- Soft Skills
- Experience leading small engineering/data teams is a plus
- Ability to work independently and collaboratively in a small team, remote environment
- Passion for leveraging technology to solve real-world problems
- Cloud and Infrastructure
- Deploying infrastructure with Infrastructure-as-Code practices (e.g., Terraform, Pulumi); see the sketch below
- Deploying microservices on Kubernetes clusters (with Docker, Helm, etc.)
- Using at least one major cloud provider (e.g., GCP, AWS)
- Deploying, managing, and optimizing relational databases (e.g., PostgreSQL, TimescaleDB)
- Storing tabular data in data lakes
- Architecting and composing multiple microservices to support a cohesive product
- Developing data backup and restore strategies and weighing risk/cost tradeoffs
- Implementing thoughtful security practices (e.g., storing and distributing secrets to microservices, managing granular resource access for team members)
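For illustration, here is a minimal Infrastructure-as-Code sketch using Pulumi's Python SDK, one of the tools named above; the resource names and settings are hypothetical, and a Terraform equivalent would express the same resource in HCL:

    import pulumi
    import pulumi_gcp as gcp

    # Hypothetical: a GCS bucket for raw sensor payloads, declared as code
    # so changes are reviewable and environments are reproducible.
    raw_bucket = gcp.storage.Bucket(
        "raw-sensor-data",                  # logical name; illustrative only
        location="US",
        uniform_bucket_level_access=True,
    )

    pulumi.export("raw_bucket_url", raw_bucket.url)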
- Programming and Data Processing
- Python (Advanced, ideally including experience with parallel processing frameworks such as PySpark or Dask; see the sketch below)
- JavaScript (Intermediate, for maintaining existing JS microservices)
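As a flavor of that parallel-processing work, a minimal PySpark sketch that rolls raw readings up to daily per-sensor summaries; the paths, schema, and column names are assumptions for illustration:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-voltage-summary").getOrCreate()

    # Hypothetical input: one row per sensor reading, with
    # "sensor_id", "timestamp", and "voltage" columns.
    readings = spark.read.parquet("gs://example-bucket/readings/")

    daily = (
        readings
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("sensor_id", "day")
        .agg(
            F.avg("voltage").alias("avg_voltage"),
            F.count("*").alias("n_samples"),
        )
    )

    # Partitioning the output by day keeps later reads selective.
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "gs://example-bucket/daily-summaries/"
    )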
- Data Visualization and Analytics
- Experience displaying or visualizing data (e.g., in Grafana or Plotly); see the sketch below
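For example, a small Plotly sketch of the kind of reliability chart this might produce; the data here is made up:

    import pandas as pd
    import plotly.express as px

    # Illustrative outage totals; real figures would come from the pipeline.
    df = pd.DataFrame({
        "site": ["Site A", "Site B", "Site C"],
        "outage_hours": [12.5, 3.2, 7.8],
    })

    fig = px.bar(df, x="site", y="outage_hours",
                 title="Monthly outage hours by site (illustrative data)")
    fig.show()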
- IoT & Sensors
- Receiving and storing data from remote devices
- Optimizing data usage in protocol design (see the sketch below)
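As an illustration of data-usage optimization in protocol design, a hypothetical fixed-width binary record format; the field layout and units are assumptions, chosen to contrast with a far larger JSON equivalent on a metered cellular link:

    import struct

    # Hypothetical wire format: 4-byte unix timestamp + 2-byte voltage in
    # decivolts = 6 bytes per reading, versus ~50+ bytes as JSON.
    RECORD = struct.Struct("<IH")

    def encode(readings):
        """readings: iterable of (unix_ts, voltage_in_volts) pairs."""
        return b"".join(RECORD.pack(ts, round(v * 10)) for ts, v in readings)

    def decode(payload):
        for ts, decivolts in RECORD.iter_unpack(payload):
            yield ts, decivolts / 10.0

    packet = encode([(1700000000, 229.8), (1700000060, 231.2)])
    assert list(decode(packet)) == [(1700000000, 229.8), (1700000060, 231.2)]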
- General Engineering Tools
- Git, Bash, Unix, etc.
- Misc
- Experience working remotely and independently, across time zones and with international teams
- Familiarity with power grid operations and electricity reliability metrics
- Nice-to-have experience
- Experience with Databricks
- Familiarity with Helm charts
- Knowledge of low-level Python static analysis
- Experience with Delta Lake and Parquet file formats
- Experience with Plotly (or other visualization libraries)
- Performance Optimization
- Understanding of read and write optimization via chunking/partitioning for both data-lake-based and relational datastores (see the sketch below)
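As a minimal sketch of that partitioning idea using PyArrow (the dataset layout and column names are assumptions): writing a data lake partitioned on the query key means a filtered read only touches the matching files:

    import pyarrow as pa
    import pyarrow.dataset as ds

    # Illustrative readings table; real data would be far larger.
    table = pa.table({
        "day": ["2024-01-01", "2024-01-01", "2024-01-02"],
        "sensor_id": [1, 2, 1],
        "voltage": [229.8, 231.2, 230.4],
    })

    # Partition files by "day" so per-day queries skip unrelated files.
    by_day = ds.partitioning(pa.schema([("day", pa.string())]))
    ds.write_dataset(table, "readings_lake", format="parquet",
                     partitioning=by_day,
                     existing_data_behavior="overwrite_or_ignore")

    # A filter on the partition column prunes to the matching directory.
    subset = ds.dataset("readings_lake", format="parquet",
                        partitioning=by_day).to_table(
        filter=(ds.field("day") == "2024-01-01"))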
Responsibilities
- Maintenance and co-development of core infra products
- Manage and supervise cloud computing resources
- Lead infrastructure planning and deployment using Terraform
- Develop and maintain data processing and visualization systems
- Contribute to backend development and integration of auxiliary services (e.g., survey capture)
- Troubleshooting and Improvement
- Troubleshoot and resolve issues related to ongoing deployment projects (cloud provider resources, corrections to survey data)
- Assist in the design and implementation of data storage solutions
- Contribute to the ongoing improvement of the company's technical stack
- Enhancing Analysis Data Pipeline
- Implement and improve caching mechanisms for better performance (see the sketch below)
- Architect and oversee the data analysis pipeline from ingestion to visualization
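As a minimal sketch of the caching idea (the function names and data shapes are hypothetical, and a production pipeline would likely use a shared cache such as Redis rather than per-process memory):

    from functools import lru_cache

    def load_readings(sensor_id: str, day: str) -> list[dict]:
        # Stand-in for an expensive datastore query; returns fake rows.
        return [{"voltage": 230.0 + i * 0.1} for i in range(10)]

    # Memoize per-(sensor, day) summaries so repeated dashboard requests
    # for the same slice do not recompute from raw readings.
    @lru_cache(maxsize=4096)
    def daily_summary(sensor_id: str, day: str) -> tuple:
        volts = [row["voltage"] for row in load_readings(sensor_id, day)]
        return (sum(volts) / len(volts), len(volts))   # (avg_voltage, n)

    daily_summary("sensor-42", "2024-01-01")   # computed on first call
    daily_summary("sensor-42", "2024-01-01")   # served from the cache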