Tasks:

  • Design, implement, and maintain data pipelines for extracting, transforming, and loading (ETL) data from various sources into a data warehouse
  • Monitor and optimize ETL processes to ensure they run error-free and efficiently
  • Manage databases and data warehouses to ensure the integrity, availability, and security of the data
  • Collaborate with data analysts, data scientists, and other teams to ensure data availability
  • Improve data quality and data infrastructure to increase efficiency of data analysis
  • Support data teams in developing new models for various analysis projects

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, or a related field
  • Experience in developing and implementing ETL/ELT processes
  • In-depth knowledge of databases, SQL, and at least one scripting language (such as Python)
  • Experience with cloud-based data warehouse platforms such as Microsoft Azure Synapse, Google BigQuery, or Databricks
  • Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus
  • Basic knowledge of dbt (data build tool) is highly advantageous
  • Experience working with Linux systems and using shell scripts
  • Good problem-solving skills and the ability to work in a team
  • Fluent in German, English, or Italian, both written and spoken

Your benefits:

  • Bright, modern offices
  • Central location
  • Flexible working hours
  • 40 hours per week
  • Exciting environment
  • Young team
  • Lunch-Check & Reka money
  • Fleet discount