Data Engineer

Region: Brussels / Gent / Antwerpen

Itenium is looking for a Data Engineer to join our growing team! Are you passionate about transforming data into actionable insights and building robust, scalable data solutions? If you thrive in a dynamic, innovative environment and are eager to tackle complex challenges, we want to hear from you.

At Itenium, you'll be at the forefront of digital innovation, contributing to the design and development of future-proof data platforms and solutions for our diverse clients across various industries. We value empowerment, continuous learning, and a collaborative spirit.

Please note: You must be a Dutch native speaker for this role.
Your Mission:

As a Data Engineer at Itenium, you will be instrumental in:

  • Designing and Implementing Data Pipelines: Build and maintain scalable, efficient, and reliable ETL/ELT pipelines using cutting-edge tools and cloud platforms.

  • Optimizing Data Infrastructure: Manage and optimize data lakes and warehouses, ensuring data quality, accessibility, and performance.

  • Developing Data Models: Design and implement robust data models for reporting, analytics, and machine learning initiatives.

  • Collaborating & Innovating: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data needs and drive innovative solutions.

  • Automating Workflows: Automate recurring data workflows and contribute to CI/CD pipelines for seamless integration and deployment.

What We're Looking For (Required Skills & Experience):

  • Experience: 5+ years of experience in data engineering or a related role.

  • Programming Languages: Strong proficiency in SQL and Python.

  • Cloud Platforms: Hands-on experience with at least one major cloud platform (e.g., Azure, AWS, or GCP).

  • Data Pipeline Tools: Experience with tools such as Microsoft Fabric, Azure Data Factory, Databricks, Spark, or Kafka.

  • Data Warehousing & Data Lakes: Solid experience with data warehousing concepts and platforms (e.g., Snowflake, Google BigQuery), data lakes (e.g., Azure Data Lake Storage), and ETL/ELT processes.

  • Data Modeling: Solid data modeling skills for reporting and analytics use cases.

  • DevOps: Familiarity with DevOps concepts and CI/CD practices.

  • Adaptability & Proactiveness: Eager to learn new technologies and adapt to evolving client needs and project environments.

  • Client & Stakeholder Engagement: Strong communication and client-facing skills, with the ability to work effectively with diverse stakeholders and adapt to various client contexts.

  • Problem-Solving: Analytical mindset with excellent problem-solving abilities and attention to detail.

Nice to Have:

  • Experience with Power BI or other data visualization tools.

  • Exposure to machine learning workflows or AI solutions.

  • Relevant certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, Databricks Certified Data Engineer Professional).

What We Offer:

  • Challenging Projects: A challenging and varied role working on complex, impactful data projects with top-tier clients, ensuring a continuous learning curve and broad experience.

  • Professional Growth: Opportunities for continuous skill development through training, certifications, and career path support, with access to an internal knowledge base and expert community for mutual learning and growth.

  • Competitive Package: A competitive salary and comprehensive benefits package tailored to your experience and potential.

  • Flexible Work: Hybrid work possibilities with flexible hours and remote options.

  • Collaborative Culture: A warm, open, and supportive company culture with a focus on teamwork and collaboration.

  • Innovation: Work with the latest technologies and contribute to cutting-edge solutions in a forward-thinking organization.

Join Our Team!

If you're ready to make a significant impact and grow over the long term with a company that values your expertise and ambition across diverse project environments, apply today! We are an equal opportunity employer and welcome candidates of all backgrounds.