Contract Databricks Data Engineer
Job Description
Databricks Data Engineer (Contract) - Mid to Senior Level
Location: London (Hybrid - 2 days per week onsite)
Contract Length: Initial 6 months
Day Rate: Flexible (Inside & Outside IR35 considered - final determination pending)
We're partnering with a client seeking a skilled Databricks Data Engineer to support the build and evolution of a modern data platform. This is a hands-on contract role, ideally suited to someone who thrives working across both new platform development and ongoing production support within a Databricks-driven environment.

The Role
This role sits at the heart of a Databricks Lakehouse implementation, where you'll be responsible for building, optimising, and maintaining scalable data pipelines and datasets to support analytics and business intelligence.
You'll work closely with technical and business stakeholders to deliver high-quality, performant data solutions while also ensuring the stability and reliability of existing workloads.

Key Responsibilities
Design, build, and optimise data pipelines within Azure Databricks using PySpark and Spark SQL
Develop and manage Delta Lake-based data models, supporting a structured Lakehouse / Medallion architecture (Bronze, Silver, Gold layers)
Support the ingestion and transformation of large-scale data from multiple sources into Databricks
Contribute to the modernisation and migration of legacy SQL Server workloads into Databricks
Monitor, troubleshoot, and improve the performance of existing data pipelines and jobs
Work closely with stakeholders to ensure data is reliable, well-structured, and ready for analytics and reporting
Integrate and orchestrate workflows using tools such as Azure Data Factory or Databricks Workflows
Collaborate with BI teams to ensure datasets are optimised for Power BI and downstream consumption
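For candidates less familiar with the Medallion pattern referenced above, the Bronze → Silver → Gold layering can be sketched in miniature. The snippet below is illustrative only: it uses plain Python structures in place of Spark DataFrames and Delta tables, and all names (`RAW_EVENTS`, `to_silver`, `to_gold`) are hypothetical.

```python
# Illustrative Medallion (Bronze/Silver/Gold) flow using plain Python
# structures -- a real pipeline would use PySpark and Delta Lake tables.

RAW_EVENTS = [  # Bronze: raw ingested records, kept as landed
    {"order_id": "1", "amount": "10.50", "region": "UK"},
    {"order_id": "2", "amount": "bad", "region": "UK"},
    {"order_id": "3", "amount": "4.25", "region": "DE"},
]

def to_silver(bronze):
    """Silver: cleanse and conform -- cast types, drop malformed rows."""
    silver = []
    for row in bronze:
        try:
            silver.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "region": row["region"],
            })
        except ValueError:
            continue  # skip rows that fail type casts
    return silver

def to_gold(silver):
    """Gold: business-level aggregate -- revenue per region."""
    gold = {}
    for row in silver:
        gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]
    return gold

print(to_gold(to_silver(RAW_EVENTS)))  # {'UK': 10.5, 'DE': 4.25}
```

In a production Databricks environment, each layer would typically be a Delta table, with the Silver layer handling schema enforcement and deduplication and the Gold layer holding aggregates optimised for Power BI consumption.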
Required Skills & Experience
Strong commercial experience working with Azure Databricks as a core data processing platform
Deep expertise in:
PySpark / Apache Spark for distributed data processing
Delta Lake (ACID transactions, optimisation, data versioning)
Spark SQL and advanced SQL techniques
Python for data engineering and pipeline development
Hands-on experience designing and implementing Lakehouse architectures
Experience migrating data platforms from on-premises systems into cloud-based Databricks environments
Solid understanding of data modelling, ETL/ELT design, and performance optimisation
Familiarity with orchestration tools such as Azure Data Factory, Airflow, or Databricks Jobs
Experience working in production environments with both project delivery and BAU responsibilities
Desirable Experience
Exposure to Unity Catalog, data governance, and fine-grained access control in Databricks
Experience implementing CI/CD pipelines for Databricks (e.g. via Azure DevOps or Git integration)
Knowledge of streaming data pipelines (Spark Structured Streaming / Kafka)
Experience working with cloud-native data architectures in Azure
Prior exposure to Power BI or similar BI/reporting tools
Please Note: This is a contract role for UK residents only. The role does not offer sponsorship; you must have the right to work in the UK with no restrictions. Some of our roles may be subject to successful background checks, including a DBS and credit check.

TRG are the go-to recruiter for Power BI and Azure Data Platform roles in the UK, offering more opportunities across the country than any other. We're the proud sponsor and supporter of SQLBits, the Power Platform World Tour, the London Power BI User Group, the Newcastle Power BI User Group, and the Newcastle Data Platform and Cloud User Group. To find out more and speak confidentially about your job search or hiring needs, please contact me directly via email.