Summary
In detail, the position encompasses the following duties and responsibilities:
You will lead the production of a comprehensive roadmap for the global trade surveillance rollout, manage the impact of new business initiatives, and oversee BAU support on behalf of global Compliance. As the lead, you will guide a team of data engineers, ensuring that architecture and pipeline solutions align with organizational goals while overseeing the design, development, and optimization of data ingestion frameworks across the application portfolio.
The ideal candidate will have a strong background in leading teams to develop scalable data pipelines and frameworks that support trade surveillance analysts in decision-making and strategy. The team is in an exciting phase of enhancing its data lakehouse and ingestion architecture on the Azure Synapse stack while transitioning to Databricks.
Responsibilities:
- Lead and manage a team of data engineers focused on the design, development, and optimization of data ingestion pipelines for trade surveillance.
- Oversee the development of scalable and efficient data architectures to support global trade surveillance operations.
- Drive the execution of the global trade surveillance vision and strategy, ensuring alignment with business objectives and compliance requirements.
- Collaborate with global teams to define and implement a roadmap for trade surveillance rollout and business integration.
- Guide the team through a phase of hardening and re-engineering the data lake and ingestion pipelines to improve speed, ease of use, and data quality.
- Ensure seamless integration of real-time data processing capabilities by extending pipelines toward a Kappa-like architecture (a minimal sketch follows this list).
- Serve as the technical point of contact for stakeholders, providing leadership in both strategic and technical decision-making.
- Foster an agile, high-performance team culture that embraces innovation and adapts quickly to organizational changes and shifting priorities.
- Ensure the quality of work by enforcing high standards and attention to detail across all data engineering tasks.
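To make the Kappa-style extension mentioned above concrete, here is a minimal sketch of a single streaming ingestion path in PySpark Structured Streaming, where reprocessing is done by replaying the same stream rather than running a separate batch job. All names in it (the `trades` topic, broker endpoint, schema, and storage paths) are illustrative assumptions, not details of the actual pipeline.

```python
# Minimal Kappa-style sketch: one streaming job serves both real-time and
# replayed data. Topic names, endpoints, paths, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("trade-ingestion-sketch").getOrCreate()

# Hypothetical schema for incoming trade events.
trade_schema = StructType([
    StructField("trade_id", StringType()),
    StructField("instrument", StringType()),
    StructField("price", DoubleType()),
    StructField("executed_at", TimestampType()),
])

# Read from a single append-only log; reprocessing means replaying the same
# stream from an earlier offset instead of maintaining a parallel batch path.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "trades")                     # assumed topic
    .option("startingOffsets", "earliest")             # replay == reprocess
    .load()
)

trades = (
    raw.select(from_json(col("value").cast("string"), trade_schema).alias("t"))
       .select("t.*")
)

# Land parsed events in a Delta table that surveillance queries read from.
query = (
    trades.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/trades")  # assumed path
    .outputMode("append")
    .start("/mnt/lakehouse/bronze/trades")                    # assumed path
)
```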
Skills:
- Bachelor's degree in Information Systems, Computer Science, Engineering, or a related field, or an equivalent combination of education, training, and experience.
- Proven experience leading data engineering teams, including managing team dynamics and delivering on technical goals.
- Strong communication and interpersonal skills, with the ability to translate complex technical issues to both technical and non-technical audiences.
- Solid experience in Agile methodologies, particularly working in Scrum or Kanban environments.
- Proven ability to quickly learn new technical and business concepts and adapt to organizational changes.
- Strong problem-solving capabilities, with a quick-thinking mindset under pressure.
- Ability to set and enforce data management standards, ensuring best practices across the organization.
- Experience conducting solution and design reviews, providing assurance on development approaches.
- Ability to manage multiple project deliveries simultaneously with outstanding attention to detail.
- Willingness to provide guidance and support to wider data teams as required.
- Strong skills in Python, PySpark, Azure Synapse, Databricks, or similar technologies.
- Extensive experience with data lakehouse practices, particularly using SQL, Databricks, and Azure Data Factory (ADF).
- Proficient in SQL for data manipulation and query optimization.
- Experience with Data Pipelines, Big Data technologies, and data architectures.
- Knowledge of CI/CD processes and related tools.
- Experience with tools like PyTest for test automation (a brief example follows this list).
- Experience with or understanding of commodity trading operations, particularly in oil, gas, shipping, and related sectors.
- Familiarity with tools like Jenkins, Octopus, and Git for version control and continuous deployment.
- Experience with Docker and containerized environments.
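As an illustration of the kind of PyTest-based test automation mentioned above, here is a minimal sketch. The function under test, `normalize_trade`, is hypothetical, standing in for any pure transformation step factored out of an ingestion pipeline so it can be tested without a Spark session.

```python
# Minimal PyTest sketch for a pipeline transformation step.
# normalize_trade is a hypothetical example, not part of any actual codebase.
import pytest

def normalize_trade(record: dict) -> dict:
    """Uppercase the instrument code and reject records missing a trade id."""
    if not record.get("trade_id"):
        raise ValueError("trade_id is required")
    return {**record, "instrument": record["instrument"].upper()}

def test_normalize_trade_uppercases_instrument():
    result = normalize_trade({"trade_id": "T1", "instrument": "brent"})
    assert result["instrument"] == "BRENT"

def test_normalize_trade_rejects_missing_id():
    with pytest.raises(ValueError):
        normalize_trade({"trade_id": "", "instrument": "brent"})
```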