Data Engineer (Senior)
We are #VLteam – tech enthusiasts constantly striving for growth. The team is our foundation, which is why we care most about a friendly atmosphere, plenty of self-development opportunities, and good working conditions. Trust and autonomy are the two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself!
About the role
- Design and implement data monitoring pipelines to proactively identify and resolve data quality issues before they impact downstream products
- Build data ingestion & processing pipelines supporting our customer-facing Data Products
- Collaborate with stakeholders to define requirements, develop metrics for data pipeline quality, negotiate data quality SLAs on behalf of downstream data product owners, and create monitoring solutions using Python, Spark, and Airflow (a minimal sketch follows this list)
- Refine the processes that maintain data quality and data ingestion so they run in a cost- and compute-efficient, best-practice manner
- Innovate and develop new methodologies to enhance access to trustworthy data, accelerating the value provided by the product data team
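To give a flavour of the day-to-day work, here is a minimal, purely illustrative Airflow sketch of a data-quality check (assuming Airflow 2.4+; the table name is hypothetical and fetch_row_count is a stub standing in for a real warehouse query):

```python
# Illustrative sketch only: a minimal DAG with one daily data-quality task.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def fetch_row_count(table: str) -> int:
    # Stub for illustration; a real implementation would query
    # BigQuery/Snowflake for yesterday's partition of `table`.
    return 42


def check_daily_row_count() -> None:
    rows = fetch_row_count("analytics.daily_events")  # hypothetical table
    if rows == 0:
        # Failing the task surfaces the issue in Airflow before the bad
        # data reaches downstream data products.
        raise ValueError("analytics.daily_events received no rows yesterday")


with DAG(
    dag_id="data_quality_monitoring",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
):
    PythonOperator(
        task_id="check_daily_row_count",
        python_callable=check_daily_row_count,
    )
```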
The core mission of our client is to equip every company with a complete, 360-degree view of their ideal customer, enhancing every phase of their go-to-market (GTM) strategy and boosting their success in achieving business targets. In practice, that means:
- Auditing business products through pain point analysis and KPI verification to enhance performance
- Developing reliable Apache Airflow pipelines to support metrics-driven solutions
- Building an intuitive Metric Data Catalog accessible to all stakeholders (sketched after this list)
- Optimizing data warehouse architecture for cost-effective, scalable solutions
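As a rough illustration of what a metric catalog entry could look like, here is a small Python sketch; the field names and the example metric are hypothetical, not the client's actual schema:

```python
# Illustrative sketch only: one way to model a metric catalog entry so
# metric definitions are discoverable by all stakeholders.
from dataclasses import dataclass, field


@dataclass
class MetricDefinition:
    name: str                      # stable identifier, e.g. "daily_active_accounts"
    description: str               # plain-language definition for stakeholders
    owner: str                     # team accountable for the metric
    source_table: str              # warehouse table the metric is computed from
    sql: str                       # canonical query defining the metric
    tags: list[str] = field(default_factory=list)


catalog = [
    MetricDefinition(
        name="daily_active_accounts",
        description="Distinct accounts with at least one event per day",
        owner="product-data",
        source_table="analytics.daily_events",
        sql="SELECT event_date, count(DISTINCT account_id) "
            "FROM analytics.daily_events GROUP BY 1",
        tags=["engagement", "daily"],
    )
]
```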
What we expect in general
- Hands-on experience with Python (4+ years as a Data Engineer)
- Proven experience with data warehouse solutions (e.g., BigQuery, Snowflake)
- Strong background in data modeling, data catalog concepts, data formats, and data pipelines/ETL design, implementation and maintenance
- Ability to thrive in an Agile environment, collaborating with team members to solve complex problems with transparency
- Proficient with AWS/GCP cloud services, including: GCS/S3, EMR/Dataproc, MWAA/Composer
- Experience working in ecosystems that need improvement, and the drive to establish best practices as a long-term process
- Experience with data migration from data warehouse solutions (e.g., BigQuery, Snowflake) to cost-effective alternatives is an advantage
- Familiarity with an Iceberg lakehouse architecture queried through Trino is a plus (see the sketch after this list)
- Familiarity with Starburst is a plus
- Experience with Infrastructure as Code practices, particularly Terraform, is an advantage
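For context on the Iceberg/Trino item above, a minimal, purely illustrative Python sketch of querying an Iceberg table through Trino follows; the host, catalog, schema, and table names are all hypothetical:

```python
# Illustrative sketch only: a row-count check against an Iceberg table via
# Trino, using the trino Python client. The same query could feed an
# Airflow data-quality task like the one sketched earlier.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # hypothetical Trino coordinator
    port=8080,
    user="data-engineer",
    catalog="iceberg",
    schema="analytics",
)
cur = conn.cursor()
cur.execute(
    "SELECT count(*) FROM daily_events "
    "WHERE event_date = current_date - interval '1' day"
)
print(cur.fetchone()[0])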
Don’t worry if you don’t meet all the requirements. What matters most is your passion and willingness to develop. Moreover, B2B does not have to be the only form of cooperation. Apply and find out!
A few perks of being with us
Apply now
"*" indicates required fields
