Azure Data Engineer

Location: Paris, Hybrid (on-site 6 days per month)
Industry: Financial market infrastructure
Contract: 12 months (extendable)
Rate: Negotiable
Start Date: ASAP
Reference: CR/592720_1778840023

Job Description
Our client is a leading international financial market infrastructure organisation operating large-scale, mission‑critical platforms across Europe. The business is investing heavily in modern cloud and data technologies and is building a versatile data engineering team where engineers own data products end‑to‑end.

The environment is collaborative and engineering‑led, with a strong focus on quality, scalability, and long‑term platform evolution.

Role Overview:
We are looking for an Azure Data Engineer to design, build, and maintain modern data products across cloud and on‑prem platforms. You will work across the full data lifecycle - from ingestion and transformation through to analytics and visualisation - within a modern lakehouse architecture.

This is a hands‑on role suited to engineers who enjoy moving across the stack: cloud infrastructure, Spark pipelines, SQL modelling, and analytics layers.

Key Responsibilities:

Azure Cloud & On‑Prem Platforms:

  • Design and orchestrate data pipelines on Azure (ADF, Functions, Event Hub, Lakehouse).
  • Build scalable data transformations using Databricks / Spark.
  • Integrate cloud and on‑prem data platforms.

Data Engineering:

  • Model and optimise datasets using SQL, Python, and Scala/Java.
  • Implement CI/CD, data quality checks, governance, and performance optimisation.
  • Contribute to lakehouse‑based data architectures.

Analytics & Consumption:

  • Build curated semantic models and datasets for downstream consumption.
  • Create Power BI datasets and dashboards where required (optional but valued).

Required Skills & Experience:

  • Strong experience with Azure data engineering (Data Lake, ADF, Key Vault, Functions, Event Hub).
  • Hands‑on Databricks + Spark, including structured streaming and performance tuning.
  • Advanced SQL for data modelling and optimisation.
  • Strong Python development skills.
  • Java or Scala (nice to have but highly valued).
  • Experience working with lakehouse architectures.
  • Understanding of CI/CD, data governance, and reliability in enterprise environments.
  • Comfortable switching between Spark code, SQL modelling, and analytics layers.

FAQs

What happens when I apply?
Taking the time to apply is a big step, and when you do, your details go directly to the consultant who is sourcing for this role. Due to demand, we may not be able to respond to every applicant. However, we keep your CV and details on file, and when we see similar roles or skillsets that organisations are looking for, we will reach out to discuss the opportunity.

Should I apply if this role isn't a perfect match?
Yes. Applying allows us to understand your expertise and ambitions, ensuring you're on our radar for the right opportunity when it arises.

Do you advertise all of your roles?
We work in several ways. We advertise available roles on our site, although for confidentiality reasons we may not post all of them. We also work with clients who are more focused on skills and on understanding what is required to future-proof their business.

That's why we recommend registering your CV, so you can be considered for roles that have yet to be created.

Do you help with CV and interview preparation?
Yes. From customised support on optimising your CV to interview preparation and compensation negotiations, we advocate for you throughout your next career move.
