As a Data Engineering Lead, you will develop, maintain, test and evaluate Big Data solutions within our cloud-based Big Data Platform.
You should have 15+ years of software development experience, including agile methodology, continuous integration, and automated releases.
Experience engineering software platforms (commercial or open source) and large-scale data infrastructures, preferably on the cloud, is essential.
- Architect and build a distributed, highly parallelised Big Data ingestion and processing pipeline capable of handling massive volumes of structured and unstructured data in near real-time.
- Evaluate and deploy data quality frameworks to measure and improve data quality across the platform.
- Engage with internal and external stakeholders, both producers and consumers of data, to design data pipelines.
- Lead a team of Data Engineers and Site Reliability Engineers to deliver against platform product roadmaps in a continuous delivery environment.
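To illustrate the near-real-time ingestion responsibility above, here is a minimal, hypothetical sketch in Python of micro-batched stream processing; the function name and callback shape are illustrative only and not tied to any specific stack named in this posting:

```python
from typing import Any, Callable, Iterable, List

def micro_batch(records: Iterable[Any], batch_size: int,
                process: Callable[[List[Any]], None]) -> int:
    """Group an incoming record stream into fixed-size micro-batches
    and hand each batch to a processing callback. Returns the number
    of batches processed (including a final partial batch)."""
    buffer: List[Any] = []
    batches = 0
    for record in records:
        buffer.append(record)
        if len(buffer) >= batch_size:
            process(buffer)   # in practice: write to a sink, enrich, etc.
            buffer = []
            batches += 1
    if buffer:                # flush the final partial batch
        process(buffer)
        batches += 1
    return batches

# Example usage: the callback simply records each batch's size.
sizes: List[int] = []
n = micro_batch(range(10), 4, lambda b: sizes.append(len(b)))
```

In a real deployment this micro-batching pattern is what engines such as Spark Structured Streaming provide out of the box, with distribution and fault tolerance handled by the framework.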
- 15+ years in Data engineering / Platform Engineering
- Demonstrable ability to write production code in Scala and/or Python
- Good understanding of, and hands on experience with, the latest data technologies
- Experience with container technologies (OpenShift, Docker, or Kubernetes)
- Comfortable in environments with a strong Site Reliability Engineering culture
- Proven leadership experience from within a financial services or pure technology company
- Knowledge in design and implementation of data infrastructure on AWS or GCP
- Apache Spark and Apache NiFi
- Delivering data platforms
- In-memory data processing
- Implementation of data security capabilities such as encryption and anonymisation
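On the last point, a minimal sketch of field-level anonymisation via salted hashing, assuming a hypothetical record schema; a production system would typically source the salt/key from a KMS and use a vetted crypto library rather than this bare-stdlib illustration:

```python
import hashlib

def anonymise(record: dict, pii_fields: set, salt: bytes) -> dict:
    """Replace PII field values with salted SHA-256 digests so records
    remain joinable on the pseudonym while the raw value is not stored."""
    out = dict(record)
    for field in pii_fields:
        if field in out and out[field] is not None:
            digest = hashlib.sha256(salt + str(out[field]).encode("utf-8"))
            out[field] = digest.hexdigest()
    return out

# Example usage with an illustrative record: PII fields are pseudonymised,
# non-PII fields pass through unchanged.
record = {"user_id": "u123", "email": "a@example.com", "amount": 42}
anon = anonymise(record, {"user_id", "email"}, salt=b"demo-salt")
```

Because the hash is deterministic for a given salt, the same input always maps to the same pseudonym, which preserves joins across datasets; rotating the salt severs that linkage.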