Senior Python Market Data Engineer | Multi-Strat Hedge Fund
A leading global multi‑strategy hedge fund is expanding its U.S. market data engineering team and is hiring a Senior Python Market Data Engineer to help design and build the next generation of historical tick‑data infrastructure supporting its systematic trading platform.
This role offers the opportunity to own large‑scale data pipelines that process massive volumes of exchange‑level tick data used by portfolio managers, quantitative researchers, and trading teams across the firm. You'll work on high‑performance ingestion, cleaning, normalization, and storage systems that serve as the foundation for model development and research.
The ideal candidate has deep experience with tick‑by‑tick market data, strong Python engineering skills, and a background building scalable data platforms leveraging cloud technologies and modern analytical storage formats. You'll collaborate closely with quants, data engineering, and platform teams to deliver reliable, high‑quality datasets to users across the business.
Qualifications:
- 8+ years of experience engineering market data systems or large‑scale financial data pipelines
- Strong proficiency in Python for data processing, transformation, and validation
- Hands‑on experience working with tick‑level exchange data
- Experience with Parquet‑based storage formats for analytical workflows (required)
- Experience with cloud platforms such as AWS, GCP, or Azure
- Exposure to time‑series databases (kdb+, OneTick, etc.)
- Familiarity with containerization/orchestration (Kubernetes) in production environments
- (Preferred) Experience with Apache Iceberg or modern data lake architectures
- (Plus) C++ experience or familiarity with trading and market microstructure concepts
Responsibilities:
- Build, maintain, and optimize historical tick‑data pipelines supporting systematic research
- Develop high‑throughput ingestion and transformation workflows in Python
- Implement data quality, validation, and error‑handling frameworks to ensure clean, reliable datasets
- Manage large‑scale Parquet‑based storage for multi‑asset research and analytics
- Leverage cloud compute and storage to scale data processing and distribution
- Work with time‑series databases for fast querying and analysis workloads
- Improve throughput, reliability, and data availability across the historical platform
- Collaborate closely with quants, PMs, and engineering teams to deliver research‑ready datasets
- Contribute to long‑term architectural decisions around data modeling, storage, and platform scalability
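To make the ingestion and validation responsibilities above concrete, here is a minimal, purely illustrative sketch of the kind of step such a pipeline might contain: validating a batch of tick records and preparing them for Parquet storage with pandas. The column names (`ts`, `symbol`, `price`, `size`) and validation rules are hypothetical, not taken from the firm's actual platform.

```python
# Illustrative sketch only: validate a small batch of tick records and
# define a Parquet write step. Column names and rules are assumptions.
import pandas as pd


def validate_ticks(df: pd.DataFrame) -> pd.DataFrame:
    """Drop malformed tick rows (non-positive price/size) and sort by timestamp."""
    df = df[(df["price"] > 0) & (df["size"] > 0)].copy()
    return df.sort_values("ts").reset_index(drop=True)


def write_partition(df: pd.DataFrame, path: str) -> None:
    # In a real pipeline this would partition by symbol/date; here, one file.
    df.to_parquet(path, index=False)


ticks = pd.DataFrame({
    "ts": pd.to_datetime([
        "2024-01-02 09:30:00.001",
        "2024-01-02 09:30:00.000",
        "2024-01-02 09:30:00.002",
    ]),
    "symbol": ["AAPL"] * 3,
    "price": [187.15, 187.14, -1.0],  # last row is deliberately malformed
    "size": [100, 200, 50],
})

clean = validate_ticks(ticks)
print(len(clean))  # the malformed row is dropped, two rows remain
```

In practice, a frame like `clean` would then be handed to `write_partition` (or a pyarrow dataset writer) to land symbol/date‑partitioned Parquet files for downstream research queries.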
FAQs
Will I hear back after applying?
We appreciate that taking the time to apply is a big step. When you apply, your details go directly to the consultant sourcing for the role. Due to demand, we may not be able to respond to every applicant; however, we always keep your CV and details on file, so when we see similar roles or skill sets that drive growth in organisations, we will reach out to discuss opportunities.
Should I apply even if I don't meet every requirement?
Yes. Even if this role isn't a perfect match, applying allows us to understand your expertise and ambitions, ensuring you're on our radar for the right opportunity when it arises.
Do you advertise all of your roles?
We work in several ways. We advertise available roles on our site, although due to confidentiality we may not post all of them. We also work with clients who are more focused on skills and on understanding what is required to future-proof their business. That's why we recommend registering your CV, so you can be considered for roles that have yet to be created.
Do you offer support with CVs and interviews?
Yes, we help with CV and interview preparation. From customised support on optimising your CV to interview preparation and compensation negotiations, we advocate for you throughout your next career move.
