Position:
Data Engineer
Job description:
DATA ENGINEER

Do you have a passion for building data architectures that enable smooth, seamless product experiences? Are you an all-around data enthusiast with a knack for ETL? We're hiring Data Engineers to help build and optimize the foundational architecture of our product's data.

We've built a strong data engineering team to date, but we have a lot of work ahead of us, including:
• Migrating from relational databases to a streaming and big data architecture, including a complete overhaul of our data feeds
• Defining the streaming event data feeds required for real-time analytics and reporting
• Leveling up our platform, including enhancing our automation, test coverage, observability, alerting, and performance

As a Data Engineer, you will work with the development team to construct a data streaming platform and data warehouse that serve as the data foundation for our product. Help us scale our business to meet the needs of our growing customer base and develop new products on our platform. You'll be a critical part of our growing company, working on a cross-functional team to implement best practices in technology, architecture, and process. You'll have the chance to work in an open, collaborative environment, receive hands-on mentorship, and have ample opportunities to grow and accelerate your career!

Responsibilities:
• Build our next-generation data warehouse
• Build our event stream platform
• Translate user requirements for reporting and analysis into actionable deliverables
• Enhance the automation, operation, and expansion of our real-time and batch data environments
• Manage numerous projects in an ever-changing work environment
• Extract, transform, and load complex data into the data warehouse using cutting-edge technologies
• Build processes for top-notch security, performance, reliability, and accuracy
• Provide mentorship and collaborate with fellow team members
Requirements:
Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, Operations Research, or a related field (required)
• 3+ years of experience building data pipelines
• 3+ years of experience building data frameworks for unit testing, data lineage tracking, and automation
• Fluency in Scala (required)
• Working knowledge of Apache Spark
• Familiarity with streaming technologies (e.g., Kafka, Kinesis, Flink)

Nice-to-haves:
• Experience with machine learning
• Familiarity with Looker
• Knowledge of additional server-side programming languages (e.g., Golang, C#, Ruby)
Salary:
Commensurate with experience
Contact:
maria.cr@alliedglobal.com
Company:
Allied Global Technology Services
Publication date: