Data Engineer (Contract)
About the Role
My client is looking for an experienced Data Engineer to join their team in Boston on an initial 12-month contract. This is a hands-on, delivery-focused role designed to carry a critical body of work through to completion. The successful candidate will work close to the data every day: diagnosing issues, building and optimising pipelines, and delivering incremental progress within an existing codebase and data landscape.
This is not a strategy or architecture role. My client needs an executor: someone technically strong, comfortable in the detail, and capable of making consistent forward progress with limited oversight.
Key Responsibilities
- Write, optimise, and performance-tune complex SQL queries against large financial datasets
- Develop and maintain Python-based data pipelines and processing workflows
- Work with PostgreSQL, Parquet, and modern data engineering patterns on a daily basis
- Diagnose and resolve data quality issues, ensuring pipeline resiliency and reliability
- Apply orchestration best practices, including job partitioning to prevent full reruns on pipeline failures
- Handle financial and investment data including portfolios, reference data, positions, and analytics inputs
- Track tasks, manage your own workload, and keep progress moving in ambiguous or evolving situations
- Collaborate with internal stakeholders and operate independently within existing systems and documentation
Required Experience
- Strong, demonstrable SQL skills including performance tuning against large datasets, across PostgreSQL and/or SQL Server
- Solid Python experience (3+ years) for data processing and pipeline development
- Hands-on experience with PostgreSQL, Parquet, and medallion architecture
- Familiarity with Apache Spark and distributed data processing
- Exposure to Snowflake, including features such as Time Travel and partitioning strategies
- Proven experience working with financial or investment data: portfolios, reference data, positions, or analytics inputs
- Ability to operate independently within an existing codebase and data environment
- Data quality tooling experience, for example Great Expectations or equivalent
Highly Desirable
- FactSet experience, particularly across multiple modules including Portfolio Analytics (PA)
- Experience handling delayed or late-arriving data in financial or insurance workflows
- Familiarity with IBOR data and related import/partitioning patterns
- Comfort working in-office in a collaborative team environment
FAQs
What happens after I apply?
We understand that taking the time to apply is a big step. When you apply, your details go directly to the consultant sourcing talent for this role. Due to demand, we may not be able to get back to every applicant. However, we keep your CV and details on file, so when we see similar roles or skillsets that drive growth in organisations, we will reach out to discuss opportunities.
Should I apply if this role isn't a perfect match?
Yes. Even if this role isn't a perfect match, applying allows us to understand your expertise and ambitions, ensuring you're on our radar for the right opportunity when it arises.
Do you advertise all of your roles?
We work in several ways. We advertise available roles on our site, although due to confidentiality we may not post them all. We also work with clients who are focused on skills and on understanding what is required to future-proof their business. That's why we recommend registering your CV, so you can be considered for roles that have yet to be created.
Do you offer support during the application process?
Yes, we help with CV and interview preparation. From customised support on optimising your CV to interview preparation and compensation negotiations, we advocate for you throughout your next career move.
