
Data Engineer (m/f/d)
Berlin, Germany

Job Title

Data Engineer (m/f/d)

Job Description

We are looking for a passionate Data Engineer (m/f/d) who is eager to learn, will take on responsibility within the engineering team, and will contribute directly to the growth and success of Coya. Please apply directly through the link.

Some of the activities you’ll be involved in:

  • Data pipeline management - integrate third-party systems and APIs (inbound and outbound) into our internal systems and data lake using Airflow and Python scripts
  • Data lake governance - own data sourcing, validation, processing & ingestion into data lake (S3)
  • Define and implement data reconciliation solutions between data sources and data warehouse
  • Collaborate with BI Analysts on data warehouse management
  • Build data quality monitoring and alerting for incoming integrations to ensure validity within our system
  • Refactor an existing project into Python (validation and processing of incoming data with alerting, plus Lambda functions that integrate with outside APIs)
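The validation, processing, and alerting work described in the bullets above might look, in outline, like the following stdlib-only Python sketch. The record schema (`policy_id`, `event_type`, `timestamp`) and the alerting approach are illustrative assumptions, not Coya's actual pipeline.

```python
"""Sketch of incoming-data validation with alerting (assumed schema)."""
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

# Hypothetical required fields for an incoming record.
REQUIRED_FIELDS = {"policy_id", "event_type", "timestamp"}


def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "timestamp" in record and not isinstance(record["timestamp"], (int, float)):
        problems.append("timestamp must be numeric")
    return problems


def process(raw_lines):
    """Split raw JSON lines into valid records and alerts for invalid ones."""
    valid, alerts = [], []
    for line in raw_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            alerts.append(f"unparseable line: {exc}")
            continue
        problems = validate(record)
        if problems:
            # In a real pipeline this could page on-call or post to a channel.
            log.warning("rejected record: %s", problems)
            alerts.append("; ".join(problems))
        else:
            valid.append(record)
    return valid, alerts
```

In a production setup this step would typically run as a task inside an Airflow DAG, with valid records landing in the S3 data lake and alerts routed to a monitoring channel.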


Restrictions

  • Telecommuting is OK
  • No agencies, please


Requirements

  • Experience working with data pipelines and data management systems
  • Experience building and maintaining backend Python services
  • Experience building and deploying Dockerized applications
  • Experience with AWS services (CLI, S3, Redshift, Lambda)
  • (nice to have) Experience with Terraform
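For the Dockerized-applications requirement above, a minimal image for a Python service might be sketched like this; the module name `service` and the `requirements.txt` layout are placeholder assumptions:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "-m", "service"]
```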

About the Company

At Coya, the engineering team works passionately and tirelessly to support the intricate requirements of a complex, scaling digital insurance business. At the core of our architecture, running on Kubernetes, lies a functional back end built in Scala, with distributed microservices that communicate via Kinesis event streams and REST API calls. Our front-end customer acquisition funnel, built in React, allows for flexibility, easy configurability, and openness to experimentation as we quickly adapt to changing market needs. We aim to be highly data-driven in our development process; to that end, we built an internal ETL process that enables our BI engineers to analyse our data thoroughly and help our teams make better-informed product decisions.

Contact Info
