Join a Data Engineering team working on Crypto Secure, a risk assessment platform that provides visibility into crypto spend, transaction volumes, and anti-money laundering (AML) risk.
Design, build, and optimise scalable data platforms that process large, complex financial datasets.
Work with modern data technologies including Snowflake, Hadoop, Spark, NiFi, and cloud-based data services.
Play a key role in delivering high-quality data solutions that directly impact fintech and regulatory risk products.
Build, maintain, and optimise robust data pipelines and ETL workflows for large-scale financial data.
Implement data architectures based on technical designs and evolving business requirements.
Write clean, maintainable, production-quality code aligned with engineering best practices.
Contribute to discussions on system design, performance tuning, and optimisation.
Develop and enforce data quality checks, monitoring, and alerting to ensure reliable data delivery.
Support metadata management and data governance initiatives.
Participate actively in agile delivery, including sprint planning, estimation, backlog refinement, and retrospectives.
Collaborate closely with Product, Data Science, and Software Engineering teams to deliver integrated solutions.
Support issue triage and troubleshooting, and help maintain the ongoing reliability of production data systems.
Mentor junior engineers and contribute to team knowledge sharing and technical standards.
Continuously learn and apply emerging data engineering tools, techniques, and best practices.
8+ years of experience in data engineering or a closely related technical role.
Strong hands-on experience with Snowflake, including performance tuning, security features, and advanced SQL development.
Expert-level SQL skills, with a proven ability to design and optimise complex queries.
Solid experience building and optimising data pipelines using big data technologies such as Hadoop, Spark, Databricks, Kafka, and NiFi.
Proficiency in Python or a comparable object-oriented or functional programming language.
Experience working with cloud-based data platforms and services.
Knowledge of stream processing, message queuing, and distributed data processing concepts.
Experience with data pipeline orchestration and workflow management tools.
Ability to work with structured and unstructured datasets, extracting insights from complex data sources.
Familiarity with data governance, metadata management, and data quality frameworks.
Experience supporting production data systems in fast-paced, high-availability environments.
Strong understanding of modern software development practices, including version control, testing, and CI/CD.
Excellent problem-solving and analytical skills with strong attention to detail.
Proven ability to work independently while collaborating effectively in agile, cross-functional teams.
Strong communication skills, with the ability to engage both technical and non-technical stakeholders.
