Senior Azure Data Engineer / Databricks Developer / Python

The Data Engineer will work proactively with solution architects, business owners, business representatives, and other systems team members to understand business and governance requirements and to implement solutions that address them. The Data Engineer will design, configure, develop, and deploy data ingestion pipelines and transformations. You will develop data pipelines using ARM templates so that they can be deployed automatically to other environments or rebuilt when needed. You will develop data transformations in Azure Databricks using Python and on Azure SQL using T-SQL, deploying them with ARM templates. These transformations will be used to combine and curate data and to shape it into dimensions and facts, so strong knowledge of standard BI concepts is mandatory.
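For context, the kind of dimension-and-fact transformation described above can be sketched as follows. This is an illustrative example only: the function names (`build_dimension`, `build_facts`) and the sample `sales` data are hypothetical, and plain Python dictionaries stand in for the PySpark DataFrames a Databricks notebook would actually use.

```python
# Hypothetical sketch of curating source rows into a star schema:
# a surrogate-keyed customer dimension plus a fact table that
# references it. In Databricks this would use PySpark DataFrames.

def build_dimension(rows, natural_key):
    """Assign a surrogate key to each distinct natural-key value."""
    dim = {}
    for row in rows:
        value = row[natural_key]
        if value not in dim:
            dim[value] = len(dim) + 1  # surrogate keys start at 1
    return dim

def build_facts(rows, dim, natural_key, measure):
    """Replace the natural key with the dimension's surrogate key."""
    return [
        {"customer_sk": dim[row[natural_key]], measure: row[measure]}
        for row in rows
    ]

# Sample (fabricated) source data
sales = [
    {"customer": "acme", "amount": 120.0},
    {"customer": "globex", "amount": 75.5},
    {"customer": "acme", "amount": 30.0},
]

customer_dim = build_dimension(sales, "customer")
fact_sales = build_facts(sales, customer_dim, "customer", "amount")
# customer_dim → {"acme": 1, "globex": 2}
# fact_sales[0] → {"customer_sk": 1, "amount": 120.0}
```

The same pattern — deduplicate into a dimension, then join facts back to surrogate keys — scales directly to Spark `dropDuplicates` and `join` operations on curated-layer tables.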

Key Responsibilities

  • Writes technical designs
  • Installs the integration runtime on servers to integrate with Azure
  • Develops / configures data ingestion / transformation pipelines
  • Develops / configures data ingestion / transformation maintenance routines (e.g. concatenation of files, pipeline monitoring, reconciliation)
  • Deploys data ingestion / transformation pipelines
  • Develops & deploys Microsoft Power BI dashboards
  • Establishes and monitors data quality initiatives 
  • Works with the Business data owners to document data definitions and business rules 
  • Identifies areas for data quality improvements and helps resolve data quality problems
  • Develops data visualisations using Microsoft Power BI

Skills & Experience

  • B.Sc., business degree, or equivalent
  • 5+ years of experience in elements of data governance: quality, analysis, administration, architecture
  • Experience working as a Data Engineer on data warehousing solutions and cloud technology implementations
  • Experience working with data visualisation tools, e.g. Power BI, Tableau, etc.
  • Detailed knowledge and experience working with the following:
    • Microsoft Azure Data Factory V2 – including connectors to SQL Server, SAP and custom connectors. Other connectors may also be used depending on final source systems selected
    • Microsoft Azure Data Lake Store Gen2
    • Microsoft Azure ARM Templates / Azure Blueprints
    • Databricks (PySpark) – including integration of Azure Data Factory with Databricks
    • T-SQL
    • PowerShell or equivalent
    • Data formats (AVRO and Parquet)
    • Strong knowledge of BI concepts, e.g. enterprise data models, ER mapping, data warehouse design (dimensions & facts)
    • Azure DevOps
    • Integration Runtime
  • Superior attention to detail 
  • Strong analytical, problem-solving and organizational skills
  • Ability to compare technologies and make recommendations to senior members of the team
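To illustrate the ARM-template-driven deployments listed above: Data Factory resources are typically parameterised so the same template deploys to each environment. The fragment below is a hypothetical, heavily trimmed sketch (the `factoryName` parameter and its usage are illustrative), not a complete or production-ready template.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "factoryName": {
      "type": "string",
      "metadata": { "description": "Per-environment Data Factory name" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories",
      "apiVersion": "2018-06-01",
      "name": "[parameters('factoryName')]",
      "location": "[resourceGroup().location]",
      "identity": { "type": "SystemAssigned" }
    }
  ]
}
```

Supplying a different parameter file per environment (dev, test, prod) is what lets Azure DevOps pipelines promote the same template automatically between environments.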

Seniority level
Senior / Manager level

Job function
Information Technology

Appointments will be made in alignment with Integrove’s Employment Equity plan. If you have not been contacted by the company within two weeks of applying, please consider your application unsuccessful.

POPI Clause:

All the information submitted through the application process shall be used for the purpose of processing your application for employment at Integrove. Integrove undertakes to ensure that appropriate security control measures are implemented to protect all the personal information you have disclosed.