338 Data Engineer jobs in South Africa
Data Engineer
Posted today
Job Description
Requisition nr: 140983
Talent Acquisition Specialist: Tshego Semenya
Location: 135 Rivonia Road, Sandown
Closing date: 25 July 2025
Nedbank, Personal and Private Banking
Career Stream: IT Application Development
Leadership Pipeline: Manage Self: Technical
Position: Data Engineer
Why join our team!
Join a collaborative and technically driven team as a Data Engineer, where your skills in SQL and SSIS will be central to maintaining and enhancing critical data operations. This role offers the opportunity to work on a custom-built data system and contribute to innovative projects that introduce new technologies such as Ab Initio, Python, and C#. You’ll be part of a supportive environment that values teamwork, open communication, and continuous learning. With direct client engagement and the chance to influence technical decisions, this position is ideal for professionals who are passionate about solving complex data challenges and driving meaningful improvements.
Job Purpose
To support and maintain the data warehouse in line with the data model and metadata repository, and to provide business intelligence analysis through strategic and operational support.
Job Responsibilities
- Contribute to a culture conducive to the achievement of transformation goals by participating in Nedbank culture-building initiatives (e.g. staff surveys).
- Participate and support corporate social responsibility initiatives for the achievement of key business strategies.
- Identify and recommend opportunities to enhance processes, systems and policies, and support implementation of new processes, policies and systems.
- Deliver work according to customer expectations by prioritizing, planning and implementing requirements.
- Utilize resources by adhering to standards, policies and procedures.
- Align and continuously improve set processes by identifying innovation opportunities.
- Identify and mitigate risk by executing within governance.
- Resolve incidents by logging and tracking them through the correct channels.
- Keep abreast of legislation and other industry changes that impact the role by reading relevant newsletters and websites and attending sessions.
- Understand and embrace the Nedbank vision and demonstrate the values through interaction with team and stakeholders.
- Improve personal capability and stay abreast of developments in your field of expertise by identifying training courses and career progression opportunities, guided by input and feedback from managers.
- Ensure personal growth and enable effectiveness in performance of roles and responsibilities by ensuring all learning activities are completed; experience practiced and certifications obtained and/or maintained within specified time frames.
- Ensure information is provided correctly to stakeholders by maintaining and sharing knowledge with the team.
- Structure data into compliance standards by adhering to metadata governance procedures according to Nedbank documented standards and formats.
- Manage final transformed data content by complying with prescribed standards for reviewing and publishing.
- Assist with and govern population of the data mart and metadata repository by complying with standards, systems, processes and procedures.
- Support business units by providing consulting services that deliver data and information relevant to their business.
- Contribute to internal/external information sharing sessions by attending formal and informal meetings.
- Manage vendor relationship interactions by conforming to vendor management office guidelines and principles.
Qualifications and Experience
- Advanced Diplomas/National 1st Degrees
- Degree in Information Technology or Business Management, Mathematical/Statistics
- Data Management (DAMA) Certification, Certification/formal training in relevant technology
- 8 years' relevant experience, of which 3-5 years is in a data management/business role
Types of Exposure
- Built and maintained stakeholder relationships
- Client and Relationship Results
- Developed and Implemented Communications Strategy
- Improved Processes and Culture
- Manage internal process
- Managed Relationships
- Managed Self
- Supported Transformation, Change and continued Improvement
Technical / Professional Knowledge
- Cloud Data Engineering (Azure, AWS, Google)
- Data Warehousing
- Databases (PostgreSQL, MS SQL, IBM DB2, HBase, MongoDB)
- Programming (Python, Java, SQL)
- Data Analysis and Data Modelling
- Data Pipelines and ETL tools (Ab Initio, ADB, ADF, SAS ETL)
- Agile Delivery
- Problem solving skills
Preference will be given to candidates from underrepresented groups.
Please contact the Nedbank Recruiting Team at +27 860 555 566
Data Engineer
Posted today
Job Description
AstuteTech Data is looking for an experienced Databricks Engineer to join a one-year contract engagement with one of our enterprise clients. This is a remote opportunity, open to candidates based in South Africa.
You’ll be working independently on a high-impact event and behavioural data project (think Adobe Analytics, web logs, and large-scale user interaction data). If you're hands-on, pragmatic, and confident working solo in a fast-moving environment, we'd love to hear from you.
Key Skills
- Strong Databricks engineering experience (hands-on)
- Deep knowledge of Spark / PySpark
- Experience modelling event-based or behavioural data (e.g., Adobe Analytics, web logs)
- Comfort with unstructured and nested JSON data (see the sketch after this list)
- Strong performance tuning skills in Spark
- Proficiency in Python and SQL
- Familiarity with Unity Catalog (preferred)
- Basic working knowledge of the Azure ecosystem (e.g., ADO, storage, functions)
- Exposure to data modelling at scale in enterprise environments
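For context on the nested JSON and performance-related items above, here is a minimal PySpark sketch of flattening raw event logs into a partitioned, analysis-friendly table. It is illustrative only: the paths, the "hits" array and the field names are assumptions, not details of the client's Adobe Analytics feed.

```python
# A minimal PySpark sketch of flattening nested JSON event data into a curated
# table. Illustrative only: the input/output paths, the "hits" array and the
# field names are assumptions, not details of the client's actual feed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("flatten-events").getOrCreate()

# Read newline-delimited JSON event logs; schema inference kept for brevity.
raw = spark.read.json("/mnt/raw/events/")  # placeholder path

# Explode the nested array of hits and promote the fields we care about.
flat = (
    raw.withColumn("hit", F.explode_outer("hits"))  # assumes an array column "hits"
       .select(
           F.col("visitorId").alias("visitor_id"),
           F.col("hit.pageName").alias("page_name"),
           F.to_timestamp("hit.timestamp").alias("event_ts"),
       )
)

# Partition by event date so downstream queries can prune files.
(
    flat.withColumn("event_date", F.to_date("event_ts"))
        .write.mode("append")
        .partitionBy("event_date")
        .format("delta")
        .save("/mnt/curated/page_views/")  # placeholder path
)
```

On Databricks the Delta format is available out of the box; elsewhere, swapping format("delta") for Parquet keeps the sketch runnable.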
Start: ASAP (following client interview)
Duration: 12 months
Hours: Average 80 hours per month
Location: Remote (within South Africa)
Engagement: Independent contributor (you’ll be the primary engineer on the client side)
Send your CV to
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Engineering and Information Technology
Data Engineer
Posted 1 day ago
Job Description
Job Summary:
A Data Engineer will be responsible for designing, building, and maintaining the infrastructure and systems that enable our organisation to collect, store, and analyse large volumes of data efficiently and securely. The candidate will develop data pipelines, integrate data from various sources, and ensure data is clean, reliable, and accessible for analysis.
Working closely with data analysts and the team in our Efficiency department, the data engineer will focus on the architecture and performance of databases and data flows, helping ensure that data is well-organized and readily available for business intelligence and analytics purposes.
Key Responsibilities:
- Data Architecture & Design: Design, develop, and optimize scalable and efficient data architectures and data pipelines for ingesting, transforming, and loading data from diverse internal and external sources.
- ETL/ELT Development: Build and maintain Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) processes to consolidate data from various departmental systems and databases (e.g., spreadsheets, legacy systems, operational databases).
- Data Centralization: Implement solutions to centralize all available data, ensuring it is readily accessible and properly structured for analysis and reporting.
- Data Quality & Integrity: Implement robust data quality checks and validation procedures within the data pipelines to ensure the accuracy, completeness, and consistency of consolidated data. Develop monitoring tools to identify and address data integrity issues proactively (see the sketch after this list).
- Database Management: Manage and optimize data storage solutions (e.g., relational databases, data warehouses, data lakes) to support analytical needs and efficient data retrieval.
- Integration: Work closely with the implementation team to ensure seamless data flow and integration between existing data sources and the new Zoho CRM system. This includes designing and implementing data migration strategies.
- Performance Optimization: Optimize data pipelines and database queries for performance, scalability, and cost-efficiency.
- Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and data infrastructure.
- Collaboration: Collaborate with Data Analysts, business stakeholders, and IT teams to understand data requirements and ensure data solutions meet business needs.
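To make the Data Quality & Integrity bullet concrete, below is a minimal sketch of rule-based validation that a pipeline could run before publishing data. The order_id, amount and order_date columns are hypothetical; real checks would follow whatever rules the Efficiency department defines.

```python
# A minimal sketch of pre-load data-quality checks; column names are hypothetical.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> dict:
    """Return rule name -> number of violating rows (only rules that failed)."""
    issues = {
        "missing_order_id": int(df["order_id"].isna().sum()),
        "duplicate_order_id": int(df["order_id"].duplicated().sum()),
        "negative_amount": int((df["amount"] < 0).sum()),
        "future_order_date": int((pd.to_datetime(df["order_date"]) > pd.Timestamp.now()).sum()),
    }
    return {rule: count for rule, count in issues.items() if count > 0}

def assert_quality(df: pd.DataFrame) -> None:
    """Fail the pipeline run loudly rather than loading bad data downstream."""
    issues = validate_orders(df)
    if issues:
        raise ValueError(f"Data quality checks failed: {issues}")
```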
Requirements
Qualifications and Desired Skills:
Data Engineer
Posted 2 days ago
Job Description
The Data Engineer will be responsible for designing, building, and optimising data pipelines and ensuring the efficient movement and transformation of data across various systems.
The ideal candidate will have expertise in SQL, data modelling, and experience with cloud-based or on-premises data solutions.
Additionally, knowledge of ML and statistical methodologies is beneficial for supporting advanced analytics.
Key Responsibilities:
- Develop, maintain, and optimise data pipelines for structured and unstructured data.
- Design and implement scalable and efficient data models to support analytical and operational reporting.
- Ensure data integrity, accuracy, and performance through best practices in database management.
- Work closely with data analysts, data scientists, and business stakeholders to understand data requirements.
- Implement ETL/ELT processes for data ingestion, transformation, and storage.
- Optimise database performance, including query tuning and indexing strategies (see the sketch after this list).
- Monitor and troubleshoot data pipeline issues to ensure reliability and efficiency.
- Stay up to date with industry best practices and emerging data technologies.
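As a concrete illustration of the query-tuning and indexing responsibility referenced above, this self-contained sketch uses SQLite purely as a stand-in relational engine to compare query plans before and after adding an index; the sales table and its columns are invented for the example.

```python
# A minimal, self-contained index-tuning sketch; SQLite stands in for any RDBMS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL, sold_at TEXT)"
)
conn.executemany(
    "INSERT INTO sales (region, amount, sold_at) VALUES (?, ?, ?)",
    [("ZA", i * 1.5, f"2025-01-{(i % 28) + 1:02d}") for i in range(10_000)],
)

query = "SELECT region, SUM(amount) FROM sales WHERE sold_at >= '2025-01-15' GROUP BY region"

# Before indexing: the planner reports a full table scan of "sales".
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Add an index on the filter column, then confirm the plan now uses it.
conn.execute("CREATE INDEX idx_sales_sold_at ON sales (sold_at)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

The same habit of reading the plan, adjusting an index and re-reading the plan carries over to SQL Server, PostgreSQL or whichever engine is in use.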
Requirements:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data engineering, database management, or a related role.
- Strong proficiency in SQL, including complex queries, indexing, and performance tuning.
- Expertise in data modelling and schema design for relational and NoSQL databases.
- Experience with data integration tools and ETL/ELT processes.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong understanding of the principles of data governance and security.
Preferred Skills & Experience:
- Exposure to big data technologies such as Spark, Hadoop, or Snowflake.
- Knowledge of Python or other scripting languages for data processing.
- Background in ML and statistics to support data science initiatives.
- Experience with CI/CD practices for data pipeline deployment.
- Proficiency in working with APIs for data extraction and integration.
Key Competencies:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to manage multiple projects in a fast-paced environment.
- Attention to detail and a focus on data quality.
Data Engineer
Posted 2 days ago
Job Description
CHEP helps move more goods to more people, in more places than any other organization on earth via our 347 million pallets, crates and containers. We employ approximately 13,000 people and operate in 60 countries. Through our pioneering and sustainable share-and-reuse business model, the world’s biggest brands trust us to help them transport their goods more efficiently, safely and with less environmental impact.
What does that mean for you? You’ll join an international organization big enough to take you anywhere, and small enough to get you there sooner. You’ll help change how goods get to market and contribute to global sustainability. You’ll be empowered to bring your authentic self to work and be surrounded by diverse and driven professionals. And you can maximize your work-life balance and flexibility through our programs.
Job Description
As detailed in the attached Job Description:
- Global, ASX-listed market leader in Logistics & Supply Chain | One of the World’s most sustainable companies
- Dynamic, collaborative team that’s shaping the future of digital supply chains
As part of the innovative Brambles Data & Analytics team, reporting directly to the Team Lead Data Engineering, you will take the lead in creating and optimizing complex data models, ensuring data quality, security, and compliance. Your expertise will support business decisions across the organization.
This position is based in South Africa.
What You’ll Do:
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets to meet business requirements
- Identify, design, and implement process improvements, automate manual processes, optimize data delivery, and redesign infrastructure for scalability
- Build infrastructure for data extraction, transformation, and loading using SQL and AWS technologies (see the sketch after this list)
- Develop analytics tools to provide insights into operational efficiency and performance metrics
- Support stakeholder teams with data infrastructure needs
- Maintain data security and documentation through a centralized analytics data lake for the APAC region
- Create data tools for analytics and data science teams
- Collaborate with data experts to enhance system functionality
- Build dashboards and reports to aid senior management decision-making
- Design visual reports and dashboards using visualization tools
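As a rough sketch of the SQL-and-AWS bullet referenced above, the snippet below pulls a raw CSV extract from S3, applies a small pandas transformation, and stages the curated result for a warehouse COPY. The bucket, keys and column names are placeholders rather than actual Brambles systems.

```python
# A minimal extract-transform-stage sketch using boto3 and pandas;
# the bucket, keys and column names are placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

def extract_movements(bucket: str, key: str) -> pd.DataFrame:
    """Pull a raw CSV extract of pallet movements from the landing bucket."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return pd.read_csv(io.BytesIO(body))

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise column names and aggregate daily movement counts per site."""
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["movement_date"] = pd.to_datetime(df["movement_date"]).dt.date  # assumed column
    return df.groupby(["site_id", "movement_date"], as_index=False).size()  # assumed column

def load_to_curated(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Stage the curated CSV where the warehouse (e.g. a Redshift COPY) can pick it up."""
    s3.put_object(Bucket=bucket, Key=key, Body=df.to_csv(index=False).encode())
```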
What will ensure your success in the Role:
- Relevant tertiary qualifications in Computer Science, Statistics, or related fields
- 7+ years experience in similar roles within complex, multinational environments
- Advanced SQL skills and experience with relational databases
- Experience with data modeling, Power BI, and other visualization tools
- Experience with big data pipelines and architectures, preferably Apache Spark
- Proficiency in cloud solutions like Azure/AWS
- Knowledge of SCRUM/Agile methodologies
- Understanding of machine learning techniques and their applications
- Experience performing root cause analysis and process improvements
- Familiarity with AWS services and data integration tools
- Knowledge of data mining and modeling tools such as SSAS
- Programming skills in Python, Java, Scala, etc.
Remote Type: Hybrid Remote
Skills to succeed in the role:
Active Learning, Adaptability, AWS, Apache Airflow, Cross-Functional Work, Curiosity, Data Architecture, Data Engineering, Data Integration, Data Security, Digital Literacy, Emotional Intelligence, Empathy, Initiative, Java, Problem Solving, Python, Scala
We are an Equal Opportunity Employer committed to diversity and fair treatment of all employees and applicants.
Data Engineer
Posted 2 days ago
Job Description
We currently have numerous vacancies for Data Engineers skilled in tools such as the Microsoft Stack (SSIS, SSRS, SSAS), Power BI, AWS, Azure, DataStage, SAS etc.
Duties:
- Design, develop, test and deploy ETL for ODS and data mart projects, as well as application and management reports.
- Provide technical support, troubleshooting and upgrade setup or support on ETL and database related issues.
- Research and evaluate alternative IT solutions to make appropriate recommendations to meet the business needs for management information.
- Perform root cause analysis and performance monitoring on application-related issues.
- Review IT work products from the team members for completeness and quality.
- Accurately translate business requirements into technical documentation and test cases or results.
- Analyse and map data from source systems to target operational data stores and data marts.
- Maintain and provide application support in production.
- Migrate code or folders from one environment to another as part of release management.
- Participation in all aspects of quality assurance.
- Active participation in systems integration and user acceptance testing.
Required Skills:
- Conventional and data warehouse modeling skills are required, in order to understand the various data models and to define the mappings between them.
- System analysis and design skills are necessary to design and document the data extractions and transformations.
- Expert knowledge of the ETL tools being used, including their capabilities and shortcomings, in order to exploit or avoid those aspects in ETL program designs.
- Good organisation, planning and basic management skills.
- Good interpersonal and communication skills.
- Decision making and problem solving skills.
Required Qualifications / Training:
- Relevant data warehouse and BI solution training is essential.
- B.Sc. or related degree is advantageous.
- 5+ years programming experience.
- In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.
Data Engineer • Johannesburg, South Africa
Data Engineer
Posted 4 days ago
Job Description
Location: Gauteng / Western Cape (Work remotely with regular in-person meetings).
Reports to: The appointee will report to the Support Manager.
Main purpose of role
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and architectures that support real-time and batch processing of data. This role ensures high data quality, availability, and reliability for internal teams and clients, enabling efficient analytics, reporting, and decision-making across supply chain software implementations.
Key Outcomes
- Build and maintain scalable data pipelines to support client and internal analytics needs.
- Ensure reliable analytics as well as integration between our in-house systems (Warehouse Management Systems, Transport Management Systems, Digitisation systems, etc.) and external platforms.
- Collaborate with the Implementation, Support, and Technical teams to deliver data-driven insights.
- Implement data quality, governance, and security best practices.
- Support client reporting and dashboards.
Key Responsibilities include, but are not limited to:
- Design and build data pipelines to extract, transform, and load (ETL/ELT) data from diverse sources including WMS, TMS, ERPs, and APIs (see the sketch after this list).
- Work with SQL/NoSQL databases to manage and query structured and unstructured data.
- Integrate cloud services with on-premises systems to enable hybrid data solutions.
- Develop scripts and automation to support data validation, transformation, and migration tasks during implementations.
- Collaborate with Business Intelligence and Analytics teams to ensure seamless data flow into reporting systems.
- Monitor, debug, and optimise performance of data workflows.
- Maintain data documentation, schemas, and lineage.
- Ensure data compliance with POPIA and other relevant regulations.
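To illustrate the ETL/ELT-from-APIs responsibility referenced above, here is a minimal sketch of paginated ingestion from a hypothetical WMS REST endpoint; the URL, authentication scheme and response shape are assumptions, not an actual vendor API.

```python
# A minimal sketch of paginated REST ingestion from a hypothetical WMS endpoint;
# the URL, bearer-token auth and "results" response key are assumptions.
import requests

BASE_URL = "https://wms.example.com/api/v1/stock-movements"  # placeholder

def fetch_all(api_token: str, page_size: int = 500) -> list[dict]:
    """Walk a page-numbered endpoint until it returns an empty page."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {api_token}"})
    records: list[dict] = []
    page = 1
    while True:
        resp = session.get(BASE_URL, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records
```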
Education
- Bachelor’s or related degree in Computer Science, Information Systems, Engineering, or a related field.
Experience
- 3+ years in data engineering or similar role.
- Experience in the logistics, supply chain, or ERP domain preferred.
Technical Skills
- Strong SQL development and optimization skills.
- Proficiency in Python for data manipulation and integration.
- Experience in HTML, CSS & JS
- Experience with ETL tools
- Familiarity with cloud platforms (Azure, AWS, or GCP) is advantageous.
- Knowledge of data warehousing concepts.
- API integration experience (REST/SOAP).
- Experience with tools like Power BI or Tableau is advantageous.
Soft Skills
- Strong problem-solving and analytical thinking.
- Excellent communication and stakeholder collaboration skills.
- Detail-oriented and highly organized.
- Ability to work independently and within cross-functional teams.
- Determination to master new software & technologies
Working Conditions
- Flexibility to travel between provinces in South Africa as well as across borders (Africa).
- Must be available to work irregular hours, especially when travelling for client visits.
- Initial and on-the-job training to be provided.
- Competitive salary and benefits package.
- Opportunity to work on a variety of challenging and rewarding projects.
- Collaborative, caring and supportive work environment.
- Salary is based on experience and will be discussed during the interview.
- Gross Package includes a laptop, cell phone and internet router.
- Gross package does not include medical aid & pension fund contributions.
- Travel subsistence is paid when travelling, or use of the company’s fleet car is provided.
To Apply:
Please submit your 2-3 page CV and cover letter to
We are an equal-opportunity employer and value diversity at our company.
Data Engineer
Posted 4 days ago
Job Description
This role is responsible for designing, developing and maintaining data-based solutions, and for ensuring that operationalised data pipelines and data stores are high-performing, efficient, organized, and reliable, given a set of business requirements and constraints.
The Data Engineer will build and maintain secure and compliant data processing pipelines by using different tools and techniques, and maintain various data services and frameworks to store and produce cleansed and enhanced datasets for analysis. This includes data store design using different architecture patterns based on business requirements.
The incumbent will help identify and troubleshoot operational and data quality issues, and design, implement, monitor, and optimize data platforms to meet the needs of the data pipelines. They will collaborate with cross-functional teams (internal and external) to ensure effective implementation, set-up, utilization and governance of the enterprise data platform across the organization.
This role contributes to the complete data function for data analytics, business intelligence, and advanced analytics. It requires a strong business focus: the individual understands the strategy and direction of the business but focuses on how to underpin that with data.
Data Engineer
Posted 4 days ago
Job Description
Reference: NWA003386-Ren-1
A well-known marketing organization with branches in multiple overseas countries is looking for a Senior Data Engineer to join their organization at their Johannesburg branch. The Senior Data Engineer will be required to work on a hybrid model. Working within the Data Science Team, the Data Engineer will be responsible for the organization's data structure.
Requirements:
- Active working experience in building enterprise standard ETL/ELT processes as well as data lakes with a focus on code-based techniques and collaborative environments.
- Experience in Agile development processes, using Jira, Git, etc.
- Working experience using multiple software development languages.
- Active working experience in Python, Bash/Shell scripting, and SQL (PostgreSQL) (see the sketch after this list).
- Knowledge of, or willingness to learn, Snowflake and DBT.
- Knowledge of Dagster and Superset would be an advantage.
- Experience in AWS Cloud infrastructure.
- Active working experience in Big Data is ideal.
- Experience in market research, marketing industries, and survey & sampling data would be beneficial.
Responsibilities:
- Collaborate with multiple teams to deliver objectives on time and consistently maintain high quality.
- Ensure that projects are completed efficiently and effectively.
- Work iteratively delivering working software and solutions regularly.
- Sustain and ensure that the company's data infrastructure is always up-to-date and running efficiently.
- Participate in product and technology strategy discussions to provide valuable insights and assist with shaping the direction of the organization.
- Mentor, coach, and share your expertise and knowledge with other team members.
- Help to develop and retain top talent for the organization.
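As a small illustration of the code-based ELT and PostgreSQL skills referenced above, here is a minimal sketch of an idempotent staging load; the DSN, the raw_events table and its columns are placeholders only.

```python
# A minimal sketch of an idempotent CSV-to-PostgreSQL staging load;
# the DSN, table and column names are placeholders.
import csv
import psycopg2

DSN = "postgresql://user:password@localhost:5432/analytics"  # placeholder

def load_csv_to_staging(csv_path: str) -> int:
    """Append rows into a raw staging table, skipping rows already loaded."""
    with open(csv_path, newline="") as fh, psycopg2.connect(DSN) as conn:
        rows = [
            (r["event_id"], r["event_type"], r["occurred_at"])
            for r in csv.DictReader(fh)
        ]
        with conn.cursor() as cur:
            cur.executemany(
                # Assumes a unique constraint on event_id for the conflict target.
                "INSERT INTO raw_events (event_id, event_type, occurred_at) "
                "VALUES (%s, %s, %s) ON CONFLICT (event_id) DO NOTHING",
                rows,
            )
    return len(rows)
```

Downstream transformations would then typically live in SQL, or in dbt models once Snowflake and DBT are in play, rather than in Python.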
Apply now!
If you have not had any response in two weeks, please consider the vacancy application unsuccessful. Your profile will be kept on our database for any other suitable roles/positions.
For more information contact:
Rendani Ndou
Researcher
Data Engineer
Posted 4 days ago
Job Description
Responsible for maintaining the data warehouse through design and implementation of ETL/ELT methodologies and technologies, as well as providing maintenance and support to our ETL and ML environments. To ensure optimal performance, the candidate will conduct root cause analysis on production issues and provide technical leadership throughout the entire information management process of both structured and unstructured data.
Duties & Responsibilities
Duties:
- Responsible for solution design and development of various functionalities in AWS for the project flow.
- Develop and maintain automated ETL pipelines (with monitoring) using scripting languages such as Python and SQL, and AWS services such as S3, Glue, Lambda, SNS, Redshift, SQS and KMS (see the sketch after this list).
- Develop Glue Jobs for batch data processing and create Glue Catalog for metadata synchronization.
- Develop data pipelines using AWS Lambda and Step Functions for data processing.
- Manage data coming from different sources.
- Experience with AWS services to manage applications in the cloud and create or modify instances.
- Implement solutions using the Scaled Agile Framework (SAFe).
- Be involved in the performance and optimization of existing algorithms in Hadoop using Spark Context.
- Create Hive Tables, load data, and write Hive queries.
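To give a flavour of the Glue-centred duties referenced above, below is a minimal AWS Glue (PySpark) job skeleton that reads a catalogued raw table, deduplicates it, and writes Parquet to S3. It only runs inside a Glue job environment, and the catalog database, table name and output bucket are placeholders.

```python
# A minimal AWS Glue (PySpark) job skeleton: read a catalogued raw table,
# deduplicate, and write Parquet to S3. Runs only inside a Glue job environment;
# the catalog database, table name and output bucket are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog (e.g. by a crawler).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="transactions"  # placeholder catalog objects
)

# Transform in plain Spark, then hand back to Glue for the write.
df = raw.toDF().filter("amount IS NOT NULL").dropDuplicates(["transaction_id"])
curated = DynamicFrame.fromDF(df, glue_context, "curated")

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/transactions/"},  # placeholder
    format="parquet",
)
job.commit()
```

A Step Functions state machine or a Glue trigger would typically schedule a job like this, with SNS/SQS carrying success and failure notifications.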
Requirements:
- Degree in Information Technology.
- 6-7 years of experience in a similar role.
- Experience with Talend.
- Knowledge of SSIS, SSAS, and SSRS.
- Experience with Clover ETL.
- Strong SQL background.
- Experience with AWS.
- Familiarity with Jupyter Notebook.
- Experience in Data Warehousing.
- Experience in Cloud Warehousing.
CVs should be submitted directly to or
If you do not receive communication within 2 weeks of your application, kindly consider your application unsuccessful.