Data Engineer (Senior)
We are #VLteam – tech enthusiasts constantly striving for growth. The team is our foundation, which is why we care most about a friendly atmosphere, plenty of self-development opportunities, and good working conditions. Trust and autonomy are two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself!
About the role
You will participate in defining the requirements and architecture for the new platform, implement the solution, and remain involved in its operations and maintenance post-launch. Your work will introduce data governance and management, laying the foundation for accurate and comprehensive reporting that was previously impossible.
You will adhere to and actively promote engineering best practices, data governance standards, and the use of open standards. You will build data ingestion and processing pipelines, and collaborate with stakeholders to define requirements, develop data pipelines, and establish data quality metrics.
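To make “data quality metrics” concrete, here is a minimal, purely illustrative Python sketch of the kind of check such a pipeline might run after ingestion. The claims table, column names (policy_id, premium), and threshold are hypothetical examples, not taken from the actual project.

```python
# Illustrative sketch only: a tiny post-ingestion data quality check.
# Table, column names, and the threshold are hypothetical.
import pandas as pd


def null_rate(df: pd.DataFrame, column: str) -> float:
    """Fraction of missing values in a column (0.0 = fully populated)."""
    return df[column].isna().mean()


def check_quality(df: pd.DataFrame) -> dict:
    """Compute a few simple data quality metrics for a claims-like table."""
    return {
        "row_count": len(df),
        "duplicate_policy_ids": int(df["policy_id"].duplicated().sum()),
        "premium_null_rate": null_rate(df, "premium"),
    }


if __name__ == "__main__":
    claims = pd.DataFrame(
        {
            "policy_id": ["P-1", "P-2", "P-2"],
            "premium": [1200.0, None, 950.0],
        }
    )
    metrics = check_quality(claims)
    print(metrics)
    # A real pipeline would publish these metrics and fail the run
    # if they breached thresholds agreed with stakeholders.
    assert metrics["premium_null_rate"] < 0.5
```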
Our client is a company that specialises in insurance, reinsurance, and asset management. It focuses on delivering scalable capital solutions that cater to various areas, including property, casualty, and speciality insurance lines.
Currently in its startup phase, it is actively strategising to enhance its operations and expand its service capabilities. A key aspect of their plans involves the adoption of modern technologies, which will enable the company to streamline processes and increase overall efficiency in their offerings through improved management of data. By leveraging modern technology, our client aims to position itself as a competitive player in the insurance industry while also addressing the evolving needs of its clients.
As part of this transformation, VirtusLab (VL) will accelerate progress on the client’s roadmap: building a modern data platform with scalable compute capabilities, enhancing reporting and workflow automation, and embedding cloud-native engineering practices.
Azure, Databricks, SQL, Python, Power BI
The project involves building a comprehensive reporting and analytics platform from the ground up. Key challenges include integrating data from multiple complex sources, ensuring high data quality and consistency, and designing scalable data models that support both operational and analytical reporting. It also requires close collaboration with business stakeholders to understand reporting needs and translate them into effective data solutions.
The team is small but highly motivated, taking on a broad scope of responsibilities as the platform is built and expanded.
We are architecting a modern Data Platform for a fast-scaling client in the Insurance sector. Our work consolidates fragmented legacy systems, organises data from a vast number of sources, and establishes a standardised, governed, and future-proof data foundation.
We aim to unlock the full value of the company’s data, enabling faster, informed decision-making and providing the backbone for business growth and AI readiness.
SQL, Python, Snowflake, dbt, Data modelling, Data quality, Power BI, Azure, Terraform
The primary objective is to deliver a robust data foundation and enable AI capabilities for a client that has grown organically. The work focuses on several key areas:
- Establishing a production-ready, fully operational Snowflake environment and driving operational excellence.
- Translating complex business logic into accurate data models to ensure the platform truly reflects business reality (see the sketch after this list).
- Integrating diverse data sources to build reliable data products and comprehensive data dictionaries.
- Managing the full Data Engineering and Data Science lifecycle to support production ML and AI experimentation.
- Taking ownership from concept to deployment.
- Cultivating an engineering mindset by promoting automation, CI/CD, and rigorous standards.
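As a purely illustrative example of what “translating business logic into data models” can look like day to day, the sketch below queries a hypothetical modelled fact table in Snowflake using the official Python connector. The environment variables, warehouse/database names, and the analytics.fct_policies table with its columns are assumptions made for the example, not the client’s actual schema.

```python
# Purely illustrative sketch: exercising business logic captured in a
# modelled Snowflake table from Python (snowflake-connector-python).
# All names below are placeholders.
import os

import snowflake.connector


def gross_written_premium_by_line(conn):
    """Aggregate a modelled fact table by line of business."""
    sql = """
        SELECT line_of_business,
               SUM(gross_written_premium) AS gwp
        FROM analytics.fct_policies
        GROUP BY line_of_business
        ORDER BY gwp DESC
    """
    cur = conn.cursor()
    try:
        cur.execute(sql)
        return cur.fetchall()
    finally:
        cur.close()


if __name__ == "__main__":
    # Credentials are read from the environment; never hard-code them.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
    )
    try:
        for line_of_business, gwp in gross_written_premium_by_line(conn):
            print(f"{line_of_business}: {gwp}")
    finally:
        conn.close()
```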
We are building a small (4-6 people), agile, cross-functional team capable of delivering the complete data platform, from initial architecture to production operations.
Roles involved: DevOps, Data Engineer, Snowflake Specialist, MLOps/AI Engineer, Business Analyst (BA). The team will collaborate closely with business stakeholders to ensure effective knowledge transfer and strict alignment with strategic goals.
What we expect in general
- Hands-on experience with Python
- Proven experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake)
- Experience with Databricks or data lakehouse platforms
- Strong background in data modelling, data catalogue concepts, data formats, and data pipelines/ETL design, implementation and maintenance
- Ability to thrive in an Agile environment, collaborating with team members to solve complex problems with transparency
- Experience with AWS/GCP/Azure cloud services, including: GCS/S3/ABS, EMR/Dataproc, MWAA/Composer or Microsoft Fabric, ADF/AWS Glue
- Experience in ecosystems requiring improvements and the drive to implement best practices as a long-term process
- Experience with Infrastructure as Code practices, particularly Terraform, is an advantage
- Proactive approach
- Familiarity with Spark is a plus
- Familiarity with Streaming tools is a plus
Don’t worry if you don’t meet all the requirements. What matters most is your passion and willingness to develop. Apply and find out!
A few perks of being with us
Apply now
"*" indicates required fields