We’re looking for an experienced Data Engineer to join our growing data function and play a key role in designing, building and optimising scalable data solutions. The ideal candidate will bring strong experience working with Databricks, alongside Apache Spark and SQL, and be comfortable operating within Azure or another cloud environment. Experience with DevOps practices - such as CI/CD, infrastructure as code, or platform automation - would be a major advantage, though it’s not essential.
If you enjoy working with large datasets, modern data tooling, and solving complex engineering challenges in a collaborative environment, we’d love to hear from you.
We work in a hybrid way, with a requirement to travel into our Walsall office 2-3 times a week to work with the team.
The Business Intelligence and Data department is responsible for providing reporting from key business systems to other areas of the business, and for the integrity and maintenance of the master data held on those systems, ensuring there is sufficient governance around set-up processes and exception monitoring in place to identify potential issues.
As a Data Engineer at HomeServe, you will play a key role in building and maintaining our data infrastructure and pipelines. You will work closely with data scientists, analysts, and other stakeholders to ensure that data is available, accessible, and ready for analysis.
PRINCIPAL ACCOUNTABILITIES:
- Data Pipeline Development: Develop and maintain data pipelines to ingest, transform, and load data from various sources into our data warehouse or data lake.
- Data Integration: Integrate and consolidate data from diverse sources, ensuring data quality, consistency, and reliability.
- ETL Processes: Build and optimise ETL (Extract, Transform, Load) processes to clean, transform, and enrich data for analysis.
- Performance Optimization: Review and resolve performance bottlenecks and issues within data pipelines and ETL processes.
- Data Security: Implement data security measures to protect sensitive information and ensure compliance with data privacy regulations.
- Monitoring and Maintenance: Implement monitoring and alerting solutions to identify and address data pipeline issues.
- Documentation: Maintain documentation for data pipelines, ETL processes, and best practices.
- Testing: Undertake testing activities on developed changes ahead of release.