Machine Learning Engineer
We foster a dynamic culture rooted in strong engineering, a sense of ownership, and transparency, empowering our team. As part of the expanding VirtusLab Group, we offer a compelling environment for those seeking to make a substantial impact in the software industry within a forward-thinking organization.
About the role
Join our team to drive business innovation with production-ready machine learning pipelines. You will play a key role in deploying and maintaining ML workflows, leveraging Azure for cloud computing and on-prem clusters for ETLs. Collaborating closely with Data Scientists, you will contribute to AI-powered projects while shaping the organization’s technical culture.
As an ML Engineer in Forecasting and Commodities, you will be involved in projects that support critical decision-making processes by applying your Python, PySpark, Kubernetes and Cloud (Azure) skills. You will be working in a technically mature ecosystem, implementing new features and covering new use cases. Part of your responsibilities will be the design and implementation of a data science innovation framework, as well as contributing to the organization's overall engineering best practices.
- Developing libraries, tools, and frameworks that standardise and accelerate development and deployment of machine learning models.
- Working in an Azure cloud environment, developing model training code in AzureML. Building and maintaining cloud infrastructure with IaC (infrastructure as code).
- Working with distributed data processing tools such as Spark to parallelise computation for machine learning (see the sketch after this list).
- Diagnosing and resolving technical issues, ensuring availability of high-quality solutions that can be adapted and reused.
- Collaborating closely with different engineering and data science teams, providing advice and technical guidance to streamline daily work.
- Championing best practices in code quality, security, and scalability by leading by example.
- Making your own informed decisions that move the business forward.
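To give a flavour of the distributed data processing mentioned above, here is a minimal, hypothetical PySpark sketch of a feature pipeline. The storage paths, table and column names are illustrative assumptions, not the actual project schema.

```python
# Minimal, hypothetical PySpark feature pipeline; paths and column names are
# illustrative assumptions, not the actual project schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-features").getOrCreate()

# Load raw transactional data (placeholder path on a data lake).
sales = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/sales/")

# Aggregate large raw data into a small, model-ready feature set in parallel.
features = (
    sales
    .withColumn("date", F.to_date("transaction_ts"))
    .groupBy("store_id", "product_id", "date")
    .agg(
        F.sum("quantity").alias("units_sold"),
        F.avg("unit_price").alias("avg_price"),
    )
)

# Persist the features for downstream modelling and analytics.
features.write.mode("overwrite").partitionBy("date").parquet(
    "abfss://features@datalake.dfs.core.windows.net/daily_sales/"
)
```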
Tech stack: Python, PySpark, Airflow, Docker, Kubernetes, Azure (incl. Azure ML), pandas, scikit-learn, NumPy, GitHub Actions, Azure DevOps, Terraform, Git, GitHub.
- Building a system that delivers accurate, up-to-date business forecasts by providing a set of tools that data scientists and analysts can easily leverage.
- Streamlining the onboarding, deployment and patching of new ML pipelines (see the orchestration sketch after this list).
- Collaborating with cross-functional teams to enhance customer experiences through innovative technologies.
- Employing DevOps practices for reproducible patterns across multiple business domains.
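As a rough illustration of how such pipeline steps could be orchestrated, here is a minimal, hypothetical Airflow DAG using the TaskFlow API (assuming Airflow 2.4+). The task names, schedule and return values are placeholders, not the actual production workflow.

```python
# Minimal, hypothetical Airflow DAG sketching ML pipeline orchestration;
# task contents, names and schedule are placeholders only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["ml"])
def forecast_pipeline():
    @task
    def build_features() -> str:
        # e.g. trigger the PySpark feature job and return the output path
        return "features/latest"

    @task
    def train_model(features_path: str) -> str:
        # e.g. submit an Azure ML training job against the prepared features
        return "models/forecast:latest"

    @task
    def deploy_model(model_uri: str) -> None:
        # e.g. roll out the registered model to the serving endpoint
        print(f"deploying {model_uri}")

    # TaskFlow chaining defines the dependencies: features -> train -> deploy.
    deploy_model(train_model(build_features()))


forecast_pipeline()
```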
Team size: 3 engineers.
We provide price optimisation solutions in close collaboration with a major UK retailer's data science team. Together, we build projects that enable quick exploration and productionisation of ML models and the respective optimisation algorithms in a hybrid-cloud environment. The end goal is to provide APIs for the optimiser to solve this class of pricing problems across multiple business domains.
- Implementation of the end-to-end machine learning lifecycle, from data preparation through experimentation to continuous monitoring of data and models in production (see the training-job sketch below).
- Building PySpark data pipelines that load and transform large volumes of data into smaller, meaningful feature sets used in modelling and analytics.
- Provisioning all cloud resources that support the model development in AzureML with IaC using Terraform.
- Selecting the best architectural patterns to solve business problems.
- Building robust and maintainable code in the cloud and on-prem to bring models to production quickly and reliably.
- Establishing a mature DevOps culture and working on solutions that are reusable in multiple business domains.
Tech stack: Python (PySpark, Airflow, Azure SDK), Spark on a K8s cluster, Azure ML and IaC with Terraform, CI/CD with GitHub Actions, Docker.
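To give a sense of how training jobs land in Azure ML from Python, here is a minimal, hypothetical sketch using the Azure ML SDK v2 (azure-ai-ml). The subscription, workspace, environment and compute names are placeholders; the real resources would normally be provisioned via Terraform.

```python
# Minimal, hypothetical Azure ML SDK v2 sketch; all IDs and names below are
# placeholders, not the actual workspace configuration.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to a workspace (provisioned elsewhere, e.g. via Terraform).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define a command job: code folder, entry command, environment, compute target.
job = command(
    code="./src",
    command="python train.py --epochs 10",
    environment="azureml:training-env:1",  # placeholder environment registered in the workspace
    compute="cpu-cluster",                 # placeholder compute cluster name
    display_name="forecast-model-training",
)

# Submit the job and stream its logs until completion.
returned_job = ml_client.jobs.create_or_update(job)
ml_client.jobs.stream(returned_job.name)
```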
There are many pricing problems in the global retailer world which share similar structures and constraints. We focus on building robust solutions that are reusable across multiple domains, leveraging a hybrid on-prem and cloud infrastructure, and ensuring top quality while maintaining quick iterations.
The core engineering team consists of 5-6 talented people in Poland. We collaborate closely with the client’s product management and data science team.
What we expect in general:
- 5+ years of hands-on machine learning engineering experience
- The ability to work in a hybrid model from the Krakow office (1 day per week) is a must.
- Hands-on experience in deploying Python projects.
- Strong experience in writing high-quality Python code.
- Experience with orchestration tools such as Airflow.
- Knowledge of Spark or other distributed data processing tools.
- Experience with the Kubernetes ecosystem as a user.
- Strong experience with Azure and Docker.
- Ability to work in a team and participate in the design process.
- Good command of English (B2/C1).
- Strong communicator
- Team player with mentoring ability
- Proactive and responsible
- Strategic thinker with big-picture perspective
Seems like lots of expectations, huh? Don’t worry! You don’t have to meet all the requirements.
What matters most is your passion and willingness to develop. Apply and find out!
A few perks of being with us
Apply now
"*" indicates required fields