Data Engineer

Job description

What we are doing

Kaizo aims to actively guide employees towards achieving their goals and making an impact at their companies.

We are building a performance development platform for customer support teams that leverages gamification and AI to improve operational efficiency and to elevate team performance and retention through actionable goals. We are a product-led, fast-growing SaaS company with a diverse team and a globally active customer base.


  • Kaizo is leveraging gamification and machine learning to make the daily work experience for customer support agents more engaging, fun and productive.
  • We run a microservice-based stream processing platform that processes 200+ million events every day using Akka Streams and Kafka (see the pipeline sketch after this list).
  • Those services are deployed to Kubernetes on Google Cloud and feed data into Elasticsearch and MongoDB.
  • Our systems are designed to be reactive, i.e. responsive, resilient, elastic, and message-driven.
  • We are building a real-time machine learning engine that continuously adjusts the gamification parameters that keep our users motivated and productive.
  • We are always scaling up to handle more data at lower latency.
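
To give you a flavour of the stack, below is a minimal sketch (not production code) of the kind of service that makes up our platform: an Akka Streams graph consuming from Kafka via the Alpakka Kafka connector. The broker address, consumer group, and topic name are illustrative placeholders, not our real configuration:

  import akka.actor.ActorSystem
  import akka.kafka.scaladsl.Consumer
  import akka.kafka.{ConsumerSettings, Subscriptions}
  import akka.stream.scaladsl.Sink
  import org.apache.kafka.clients.consumer.ConsumerConfig
  import org.apache.kafka.common.serialization.StringDeserializer

  object TicketEventConsumer extends App {
    implicit val system: ActorSystem = ActorSystem("stream-platform")

    // Broker address, group id, and topic are placeholders for this sketch.
    val settings =
      ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("localhost:9092")
        .withGroupId("ticket-event-aggregator")
        .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

    // Consume the event stream and push each record downstream.
    Consumer
      .plainSource(settings, Subscriptions.topics("support-ticket-events"))
      .map(_.value())                 // in production: decode into a domain event
      .runWith(Sink.foreach(println)) // in production: index into Elasticsearch / MongoDB
  }

A real service in our platform adds schema-aware deserialization, backpressure-friendly batching, and offset management on top of this skeleton.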


Your role in our team

We are looking for a Data Engineer with a software engineering background who will help us build and maintain scalable data pipelines and code-intensive data processes, take machine learning models from notebooks to production, train them at scale, and build endpoints that serve thousands of predictions in real time.


You will focus on


  • Building efficient processes that implement the full data science cycle
  • Processing, cleaning, and aggregating data used for analysis
  • Improving and extending the features used by our existing systems
  • Performing ad-hoc analysis and presenting results in a clear manner
  • Creating automated systems and constantly tracking their performance
  • Building endpoints that will serve thousands of predictions in real time (see the sketch after this list)
  • Collaborating with other teams within the company to improve decision making and drive product development
  • Communicating results and ideas to key decision-makers
  • Experimenting with different tools and technologies, and evaluating new data science approaches for the business
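
As one illustration of the prediction-serving work, a real-time endpoint in our stack might look like the minimal Akka HTTP sketch below. The route, port, and score function are hypothetical stand-ins, not our actual API:

  import akka.actor.ActorSystem
  import akka.http.scaladsl.Http
  import akka.http.scaladsl.server.Directives._

  object PredictionEndpoint extends App {
    implicit val system: ActorSystem = ActorSystem("prediction-api")

    // Hypothetical scoring stub; a real service would load a trained model
    // from the MLOps registry and keep it warm in memory.
    def score(agentId: String): Double = 0.5

    // GET /predict/<agentId> returns a small JSON payload with the score.
    val route =
      (get & path("predict" / Segment)) { agentId =>
        complete(s"""{"agentId":"$agentId","score":${score(agentId)}}""")
      }

    // Bind the server; in production this would sit behind a Kubernetes Service.
    Http().newServerAt("0.0.0.0", 8080).bind(route)
  }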

Job requirements

What you bring to the table

  • 5+ years of hands-on industry experience in data engineering
  • Experience with Kafka and stream processing is a must
  • Good scripting and programming skills in Python
  • Experience with Scala 
  • Experience with Airflow
  • Experience building ETL tools
  • Experience with an MLOps tool such as Kubeflow, Argo, or MLflow
  • Experience with the following tools and technologies is a plus: Google Cloud Platform, GitHub, Docker, Kubernetes, MongoDB, Elasticsearch
  • The drive to learn and master new technologies and techniques


You will work with some of the best global talent to build a tool used by great companies like Booking.com, Marley Spoon, Foot Locker, SoundCloud, Tripaneer, and WeTransfer. You will be part of a diverse and passionate team with a culture that empowers great work.


What we bring to the table

We do everything we can to make sure you feel motivated and supported by offering:

  • Teamwork & fun perks when full remote work is required (weekly team games & drinks, morning coffee chats and more)
    • Investment into your personal development using our network of internal and external mentors
    • New laptop & tools
    • Free lunch (even while working from home)
    • Flexible working hours and unlimited holiday policy
    • Remote possible within the EU time zone
    • Workations (2019: Tuscany, 2020: Zoom 😢, 2021: ???)