21 Data Pipelines jobs in South Africa
Lead Developer: Data Architecture
Posted today
Job Description
Lead Developer
3 Months Renewable Contract
Available Immediately
Banking
Role Overview
We are seeking a highly skilled Lead Developer with strong expertise in data architecture, data engineering, data modelling, and SQL to join our team in the banking sector. The successful candidate will play a critical role in designing, building, and optimizing scalable data solutions while providing technical leadership and mentorship to a junior team of developers and analysts. This role requires both hands-on technical expertise and the ability to guide and upskill a growing team.
Key Responsibilities
- Lead the design, development, and implementation of robust data architectures to support banking applications and analytics.
- Develop, optimize, and maintain ETL/ELT pipelines, data warehouses, and data lakes.
- Drive the design and enforcement of data modelling standards to ensure accuracy, consistency, and scalability across systems.
- Write efficient and optimized SQL queries, stored procedures, and database scripts for complex data processing.
- Collaborate with stakeholders across business and IT to translate requirements into scalable technical solutions.
- Ensure data solutions are compliant with regulatory, governance, and security requirements in the banking sector.
- Provide hands-on technical leadership, code reviews, and best practices to elevate the technical quality of deliverables.
- Mentor, coach, and upskill a junior team of developers and analysts, fostering a culture of knowledge sharing and continuous improvement.
- Stay current with industry trends, emerging technologies, and best practices in data engineering and architecture.
Key Skills & Competencies
- Strong leadership and mentorship abilities, with proven experience developing junior talent.
- Excellent communication and stakeholder management skills, with the ability to explain complex technical concepts to non-technical audiences.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Ability to work under pressure in a regulated and fast-paced banking environment.
Technical Requirements
- Proven expertise in SQL development (query optimization, stored procedures, performance tuning).
- Strong knowledge of data architecture and data modelling principles (relational, dimensional, and NoSQL).
- Experience with data engineering frameworks and technologies (e.g., Python, Spark, Kafka, Airflow, or similar); see the pipeline sketch after this list.
- Experience designing and managing data warehouses and/or data lakes (e.g., Snowflake, Redshift, BigQuery, or equivalent).
- Proficiency in ETL/ELT design and implementation.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services.
- Strong understanding of data governance, data quality, and security practices within financial services.
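As a rough illustration of the orchestration tooling named above, here is a minimal sketch of a daily ETL pipeline, assuming a recent Airflow 2.x (2.4+) install; the DAG name, tasks, and data sources are hypothetical and not taken from this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull the day's transactions from a source system (hypothetical).
    print(f"extracting batch for {context['ds']}")


def transform(**context):
    # Apply modelling standards: typing, deduplication, conformed keys.
    print("transforming batch")


def load(**context):
    # Load the conformed batch into the warehouse.
    print("loading batch")


with DAG(
    dag_id="daily_transactions_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    # Linear dependency chain: extract -> transform -> load.
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```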
Qualifications & Experience
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field (Master's advantageous).
- 8+ years of experience in data engineering, data architecture, and SQL development.
- 3+ years in a leadership or senior developer role, with mentorship experience.
- Prior experience in the banking or financial services sector is strongly preferred.
Data Engineering and Architecture Lead
Posted today
Job Description
At X, bigly labs, we're Dis-Chem's high-performance innovation hub, where bold ideas meet data, design, and radical customer focus. Our mission is simple: power the future of healthcare by lowering costs, improving outcomes, and unlocking new possibilities. We're driven by one big question: how do we use data + technology today to create healthier lives tomorrow?

Here, we don't just imagine the future, we build it. From cutting-edge digital solutions to smarter, patient-focused experiences, we're reimagining health tech to make breakthroughs possible.

Welcome to X, bigly labs. This is healthcare, reimagined.

From retail and pharmacy to clinics, insurance and beyond, we're applying machine learning and smart systems to reimagine healthcare pathways and create personalised experiences at scale. Here, your work doesn't sit in a notebook or on a whiteboard. It becomes real. It shapes decisions. It improves lives. And we do it all the Bigly way: questioning, challenging, and building audaciously, together.
We are seeking a visionary Data Engineering and Architecture Lead to lead the design, development, and optimisation of our enterprise data ecosystem. This role is responsible for building scalable, secure, and high-performance data platforms that power analytics, AI, and operational excellence across Dis-Chem and its affiliated businesses. You will be the architect of our data assets, ensuring that our data infrastructure is not only robust and compliant, but also agile enough to support innovation in healthcare, retail, insurance, and beyond.
WHAT WE'RE LOOKING FOR
Minimum
- Bachelor's degree (or equivalent) in Computer Science, Data Engineering, or a related field
- 7+ years of experience in data engineering or architecture, with 3+ years in a leadership role
- Deep expertise in SQL, Python and/or Spark, cloud platforms (Azure or AWS), and cloud-native tools (e.g., Databricks)
- Proven experience designing, implementing, and scaling enterprise data platforms using the Medallion Architecture (see the sketch after this list)
- Strong understanding of data governance, security, and compliance frameworks
- Excellent leadership, communication, and stakeholder engagement skills
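Purely as an illustration of the Medallion Architecture mentioned above, here is a minimal PySpark/Delta sketch that moves data through bronze (raw), silver (cleaned), and gold (aggregated) layers; the paths, columns, and source file are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw source data as-is (hypothetical CSV source and lake paths).
raw = spark.read.option("header", True).csv("/landing/sales.csv")
raw.write.format("delta").mode("overwrite").save("/lake/bronze/sales")

# Silver: conform types, drop duplicates, and filter out unusable rows.
bronze = spark.read.format("delta").load("/lake/bronze/sales")
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/sales")

# Gold: business-level aggregate ready for analytics and BI.
gold = silver.groupBy("store_id").agg(F.sum("amount").alias("revenue"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/revenue_by_store")
```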
Advantageous
- Experience in healthcare, retail, or insurance data ecosystems
- Familiarity with data mesh, lakehouse architecture and real-time data processing
- Certifications in cloud architecture or data engineering
WHAT YOU WILL BE DOING
- Enterprise Data Architecture
- Data Engineering Leadership
- Platform Enablement & Innovation
- Stakeholder Engagement & Governance
- Financial & Vendor Management
WHO YOU ARE
- Structured thinking and systems problem-solving
- Commercial fluency and ability to articulate value levers
- Strategic clarity balanced with practical execution
- Able to co-create solutions with technical teams
- Influences without authority and facilitates decision forums
- Drives initiatives independently with high standards of quality
Our values aren't just ideals, they're the through-lines in how we think, build, and make decisions that impact real lives. From bold experimentation in digital solutions to platforms built on integrity, we're shaping a culture designed for progress that lasts. It's a culture that designs for the future, asks better questions, and answers them with care, urgency, and systems that scale.

Think you've got the energy, the curiosity, and the guts? Stay close, bigly things are ahead.
Data Engineering and Architecture Lead
Posted 10 days ago
Job Description
We’re building the next-generation data backbone for a dynamic organisation.
We create scalable, secure, cloud-first platforms that power analytics, AI, and business intelligence across the enterprise.
We are currently searching for a Data Engineering and Architecture Lead with deep AWS Cloud expertise and a passion for solving complex data challenges.
Requirements:
- Bachelor’s degree in Computer Science, Information Systems, or a related field
- 10+ years’ experience in data engineering or architecture
- 5+ years leadership experience
- 5+ years architectural experience
- Proven experience with AWS services (S3, Glue, Redshift, Lambda, EMR, Athena)
- Expertise in SQL, Python, and ETL/ELT development
- Knowledge of data modelling (Kimball, Data Vault) and data governance
- Leadership experience managing technical teams
Responsibilities:
- Lead design and implementation of enterprise data architecture and integration strategies
- Build and manage scalable, high-performance data pipelines (see the sketch after this list)
- Ensure data availability and quality to support analytics, BI, and AI initiatives
- Collaborate with business and technology teams to translate requirements into solutions
- Define best practices, enforce standards, and mentor a team of data engineers
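To ground the AWS stack this role names (S3, Glue, Redshift, Lambda, EMR, Athena), here is a minimal boto3 sketch of one common pipeline step: running an Athena query and polling for its result. The region, database, query, and results bucket are hypothetical.

```python
import time

import boto3  # AWS SDK for Python

athena = boto3.client("athena", region_name="af-south-1")  # hypothetical region

# Kick off a query against a hypothetical curated database in the lake.
response = athena.start_query_execution(
    QueryString="SELECT store_id, SUM(amount) AS revenue FROM sales GROUP BY store_id",
    QueryExecutionContext={"Database": "curated"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
query_id = response["QueryExecutionId"]

# Poll until Athena reports a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} data rows")  # the first row is the header
```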
Reference number for this position is GZ60852 which is a permanent position based in Melrose Arch offering a cost to company salary of R1.8m per annum negotiable on experience and ability. Contact Garth on or call him on to discuss this and other opportunities.
Are you ready for a change of scenery? E-Merge IT Recruitment is a specialist niche recruitment agency. We offer our candidates options so that we can successfully place the right developers with the right companies in the right roles. Check out the E-Merge website for more great positions.
Do you have a friend who is a developer or technology specialist? We pay cash for successful referrals!
Data Integration
Posted today
Job Description
The role of the Senior Data Integration Analyst encompasses many activities, including (but not limited to):
- Designing and implementing advanced data integration pipelines and ETL (Extract, Transform, Load) processes; see the sketch after this list.
- Managing complex integrations across multiple systems and platforms to ensure seamless data flow.
- Collaborating with stakeholders to understand and define data integration requirements.
- Overseeing data governance and ensuring data integrity throughout integration processes.
- Mentoring and providing technical guidance and support.
- Troubleshooting and optimizing integration workflows for performance and reliability.
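As a compact, hedged illustration of the SQL + ETL + API combination this role calls for, here is a minimal Python sketch of one extract-transform-load pass; the endpoint, fields, and the SQLite staging target are hypothetical stand-ins for real systems.

```python
import sqlite3

import requests

# Extract: pull records from a hypothetical REST endpoint.
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
orders = resp.json()

# Transform: keep only the fields the target model needs, normalise types.
rows = [
    (o["id"], o["customer_id"], float(o["total"]))
    for o in orders
    if o.get("id") is not None
]

# Load: upsert into a local staging table (stand-in for the real warehouse).
conn = sqlite3.connect("staging.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, customer_id TEXT, total REAL)"
)
conn.executemany(
    "INSERT OR REPLACE INTO orders (id, customer_id, total) VALUES (?, ?, ?)", rows
)
conn.commit()
conn.close()
```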
Minimum Qualification:
A degree in an Information and Communication Technology (ICT) field, including (but not limited to) Computer Science, Software Engineering, or Information Systems.
Minimum Experience:
- Minimum of 5 years' experience working with SQL, ETL, and APIs; 3 years in data integration and an agile-scrum environment within enterprise organisations of more than 1,000 users.
Data Integration Analyst
Posted today
Job Description
- Design and implement advanced integration pipelines and ETL processes.
- Manage complex integrations across systems to ensure seamless data flow.
- Collaborate with stakeholders to define and understand data integration requirements.

For senior roles:
- Oversee data governance and data integrity.
- Mentor in technical integration and troubleshoot to optimise integration performance.

Requirements:
- Degree in an ICT-related field (e.g., Computer Science, Information Systems).
- 3 years' experience in ETL, SQL, APIs, integration, and analysis.
- Experience working in a large enterprise with a headcount of at least 1,000 users and a multi-project environment.
- 2-3 years' experience in web-based application environments and Microsoft Office Professional.
- Experience defining and implementing product/integration requirements in a Sprint, Scrum/Agile environment.
- Use of integration tools such as Azure Data Factory, Informatica, or Talend.

Experience level: between 3 and 5 years.
Senior Data Integration Engineer
Posted today
Job Description
Job Title: Senior Data Integration Engineer (Salesforce, Databricks & MuleSoft)
Location: Johannesburg (Hybrid)
Employment Type: Contract
Contract Tenure: 6 to 12 months
Job Summary
We are seeking a highly experienced and strategic Senior Data Integration Engineer to architect, build, and manage the data pipelines that power our customer intelligence ecosystem. In this critical role, you will be the subject matter expert responsible for designing and implementing robust integrations between our core platforms: Salesforce Data Cloud, Databricks, and MuleSoft.
You will be responsible for creating a seamless flow of data, enabling advanced analytics, and empowering our business to activate real-time customer insights. The ideal candidate is a hands-on expert who can translate complex business requirements into scalable, secure, and high-performance technical solutions.
Required Skills & Experience
- 6+ years of professional experience in a data engineering, integration development, or data architecture role.
- Proven hands-on experience with MuleSoft: Demonstrable expertise in designing, building, and managing APIs using the Anypoint Platform (API-led connectivity, DataWeave, connectors).
- Strong proficiency in Databricks: Hands-on experience developing data pipelines using PySpark, SQL, Delta Lake, and job orchestration; see the sketch after this list.
- Demonstrable experience with Salesforce Data Cloud: In-depth knowledge of its data model, ingestion methods (Connectors, Ingestion API), identity resolution, segmentation, and activation capabilities.
- Expert SQL & Python skills: Ability to write complex, efficient SQL queries and Python code for data manipulation and automation.
- Solid understanding of data modeling principles and experience designing and working with ETL/ELT processes.
- Experience working with major cloud platforms (AWS, Azure, or GCP).
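As a small illustration of the Databricks-side work described above, here is a PySpark sketch that normalises and deduplicates customer records before they are mapped into Salesforce Data Cloud; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("customer-prep-sketch").getOrCreate()

# Hypothetical raw customer table landed from source systems.
raw = spark.table("raw.customers")

# Normalise the fields typically used as match keys downstream.
cleaned = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("phone", F.regexp_replace("phone", r"[^0-9+]", ""))
    .filter(F.col("email").isNotNull())
)

# Keep only the most recent record per email before ingestion.
latest_first = Window.partitionBy("email").orderBy(F.col("updated_at").desc())
deduped = (
    cleaned.withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Persist a Delta table that a Data Cloud connector or Ingestion API job can read.
deduped.write.format("delta").mode("overwrite").saveAsTable("silver.customers_for_datacloud")
```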
Preferred Qualifications
- Certifications:
- Salesforce Data Cloud Consultant
- MuleSoft Certified Developer / Architect
- Databricks Certified Data Engineer Professional
- Experience with other Salesforce clouds (e.g., Marketing Cloud, Sales Cloud).
- Knowledge of CI/CD and DevOps practices in a data context.
- Familiarity with streaming data technologies (e.g., Kafka).
Senior Data Integration Engineer
Posted today
Job Description
Job Summary
We are seeking a highly experienced and strategic Senior Data Integration Engineer to architect, build, and manage the data pipelines that power our customer intelligence ecosystem. In this critical role, you will be the subject matter expert responsible for designing and implementing robust integrations between our core platforms: Salesforce Data Cloud, Databricks, and MuleSoft.
You will be responsible for creating a seamless flow of data, enabling advanced analytics, and empowering our business to activate real-time customer insights. The ideal candidate is a hands-on expert who can translate complex business requirements into scalable, secure, and high-performance technical solutions.
Key Responsibilities
- Architect Integration Solutions: Lead the design and architecture of data integration patterns and end-to-end data flows between source systems, MuleSoft, Databricks, and Salesforce Data Cloud.
- Develop MuleSoft APIs: Design, develop, and deploy reusable, API-led integration solutions using MuleSoft's Anypoint Platform to ingest data into the ecosystem and to syndicate data to downstream systems.
- Build Advanced Data Pipelines in Databricks: Implement complex data transformation, cleansing, and enrichment pipelines using PySpark and SQL within the Databricks Lakehouse Platform. Prepare and model data for ingestion into Salesforce Data Cloud and for advanced analytics use cases.
- Master Salesforce Data Cloud: Configure and manage Salesforce Data Cloud, including setting up data streams, performing data mapping and harmonization, defining identity resolution rules (illustrated after this list), and creating insightful calculated metrics.
- Enable Data Activation: Collaborate with marketing, sales, and service teams to build and activate complex audience segments from Salesforce Data Cloud for use in personalization and campaign execution.
- Ensure Governance and Performance: Implement data quality checks, error handling, and performance monitoring across all platforms. Ensure solutions adhere to data governance policies, security standards, and privacy regulations.
- Mentorship and Best Practices: Act as a senior technical resource for the team, establishing best practices for integration and data engineering. Provide guidance and mentorship to junior team members.
- Stakeholder Collaboration: Work closely with business analysts, data scientists, and platform owners to gather requirements and deliver solutions that provide tangible business value.
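Identity resolution, mentioned above, is normally configured through Data Cloud's own match rules rather than hand-written code; purely to illustrate the underlying idea, here is a small Python sketch that links records sharing an email or phone into one profile using union-find. The records and match keys are hypothetical.

```python
from collections import defaultdict

# Hypothetical records from different source systems.
records = [
    {"id": "crm-1", "email": "thandi@example.com", "phone": "+27115550100"},
    {"id": "pos-7", "email": "thandi@example.com", "phone": None},
    {"id": "web-3", "email": None, "phone": "+27115550100"},
    {"id": "crm-9", "email": "sipho@example.com", "phone": "+27215550199"},
]

# Union-find: records sharing any match key collapse into one profile.
parent = {r["id"]: r["id"] for r in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Group record ids by each populated match key.
by_key = defaultdict(list)
for r in records:
    for key in ("email", "phone"):
        if r[key]:
            by_key[(key, r[key])].append(r["id"])

# Merge every group that shares a key.
for ids in by_key.values():
    for other in ids[1:]:
        union(ids[0], other)

# Collect unified profiles keyed by their root record.
profiles = defaultdict(list)
for r in records:
    profiles[find(r["id"])].append(r["id"])

print(dict(profiles))  # {'web-3': ['crm-1', 'pos-7', 'web-3'], 'crm-9': ['crm-9']}
```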
Required Skills & Experience
- 6+ years of professional experience in a data engineering, integration development, or data architecture role.
- Proven hands-on experience with MuleSoft: Demonstrable expertise in designing, building, and managing APIs using the Anypoint Platform (API-led connectivity, DataWeave, connectors).
- Strong proficiency in Databricks: Hands-on experience developing data pipelines using PySpark, SQL, Delta Lake, and job orchestration.
- Demonstrable experience with Salesforce Data Cloud: In-depth knowledge of its data model, ingestion methods (Connectors, Ingestion API), identity resolution, segmentation, and activation capabilities.
- Expert SQL & Python skills: Ability to write complex, efficient SQL queries and Python code for data manipulation and automation.
- Solid understanding of data modeling principles and experience designing and working with ETL/ELT processes.
- Experience working with major cloud platforms (AWS, Azure, or GCP).
Preferred Qualifications
Certifications:
- Salesforce Data Cloud Consultant
- MuleSoft Certified Developer / Architect
- Databricks Certified Data Engineer Professional
- Experience with other Salesforce clouds (e.g., Marketing Cloud, Sales Cloud).
- Knowledge of CI/CD and DevOps practices in a data context.
- Familiarity with streaming data technologies (e.g., Kafka).
Senior Data Integration/ Analyst
Posted 14 days ago
Data Integration / Analyst (Senior-Level)
Posted 8 days ago
Mid-Level Data Integration/Analyst
Posted 14 days ago