307 AWS Data Jobs in South Africa
AWS Data Engineer
Posted today
Job Description
Contract
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 03 September 2025
Job Reference No.
Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.
What You'll Do:
- Architect modern data analytics frameworks.
- Translate complex requirements into scalable, secure, high-performance pipelines.
- Build & optimize batch/real-time data solutions using AWS & Big Data tools.
- Lead engineering efforts across multiple agile projects.
What You Bring:
- 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
- 2+ yrs in Big Data & AWS (EMR, EC2, S3).
- ETL expert – especially Talend, cloud migration, & data pipeline support.
- Strong in Python, PySpark, SQL, and data modeling.
- Agile mindset (Scrum, Kanban).
- Familiar with Hadoop ecosystem, production support, and DevOps for BI.
Nice to Have:
- Experience with Spark, streaming data tools, & scalable system design.
- BI data modeling background (3+ yrs).
- Talend & AWS hands-on (1+ yr).
Qualifications:
- Bachelor's in Computer Science/Engineering or equivalent experience.
- AWS Certified (Associate+ level preferred).
Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.
- In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent to do so.
AWS, Data Engineering, Extract Transform Load (ETL), SQL
Industries: Financial Services, Information Technology (IT), Insurance
AWS Data Engineer
Posted today
Job Description
Full Time
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 08 October 2025
Job Reference No.
Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.
What You'll Do:
- Architect modern data analytics frameworks.
- Translate complex requirements into scalable, secure, high-performance pipelines.
- Build & optimize batch/real-time data solutions using AWS & Big Data tools.
- Lead engineering efforts across multiple agile projects.
What You Bring:
- 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
- 2+ yrs in Big Data & AWS (EMR, EC2, S3).
- ETL expert – especially Talend, cloud migration, & data pipeline support.
- Strong in Python, PySpark, SQL, and data modeling.
- Agile mindset (Scrum, Kanban).
- Familiar with Hadoop ecosystem, production support, and DevOps for BI.
Nice to Have:
- Experience with Spark, streaming data tools, & scalable system design.
- BI data modeling background (3+ yrs).
- Talend & AWS hands-on (1+ yr).
Qualifications:
- Bachelor's in Computer Science/Engineering or equivalent experience.
- AWS Certified (Associate+ level preferred).
Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.
- In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent to do so.
AWS, Data Engineering, Extract Transform Load (ETL), SQL
Industries: Information Technology (IT), Retail
AWS Data Engineer
Posted today
Job Description
Responsible for creating and managing the technological side of the data infrastructure at every step of the data flow. From configuring data sources to integrating analytical tools, these systems are architected, built, and managed by a general-role data engineer.
Data Architecture and Management 20%
- Design and maintain scalable data architectures using AWS services such as, but not limited to, AWS S3, AWS Glue and AWS Athena.
- Implement data partitioning and cataloging strategies to enhance data organization and accessibility.
- Work with schema evolution and versioning to ensure data consistency.
- Develop and manage metadata repositories and data dictionaries.
- Assist with the definition, setup and maintenance of data access roles and privileges.
Pipeline Development and ETL 30%
- Design, develop and optimize scalable ETL pipelines using batch and real-time processing frameworks such as AWS Glue and PySpark (a minimal job sketch follows this list).
- Implement data extraction, transformation and loading processes from various structured and unstructured sources.
- Optimize ETL jobs for performance, cost efficiency and scalability.
- Develop and integrate APIs to ingest and export data between various source and target systems, ensuring seamless ETL workflows.
- Enable scalable deployment of ML models by integrating data pipelines with ML workflows.
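To make the Glue/PySpark pipeline work described above more concrete, here is a minimal, hypothetical sketch (not part of the formal job description): a Glue PySpark job that reads a catalogued source, applies a simple cleanup, and writes partitioned Parquet back to S3. The catalog database, table, columns, and bucket names are illustrative placeholders only.

```python
# Illustrative sketch only (not part of the advert): a minimal AWS Glue PySpark
# job. The catalog database/table, column names, and S3 bucket are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read the raw events table registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone",    # placeholder catalog database
    table_name="events",    # placeholder catalog table
)

# Transform: drop malformed rows, keep a stable column set, derive a partition key.
events = (
    raw.toDF()
    .dropna(subset=["event_id", "event_ts"])
    .select("event_id", "event_ts", "user_id", "payload")
    .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet to the curated zone in S3.
(
    events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")  # placeholder bucket
)

job.commit()
```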
Automation, Monitoring and Optimization 30%
- Automate data workflows and ensure they are fault tolerant and optimized.
- Implement logging, monitoring and alerting for data pipelines (an illustrative alarm sketch follows this list).
- Optimize ETL job performance by tuning configurations and analyzing resource usage.
- Optimize data storage solutions for performance, cost and scalability.
- Ensure AWS resources are optimised for scalable data ingestion and output.
- Deploy machine learning models into production using cloud-based services such as AWS SageMaker.
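As an illustration of the logging, monitoring and alerting bullet above (a sketch under assumed names, not the employer's actual setup), the snippet below uses boto3 to create a CloudWatch alarm on a Glue job's failed-task metric and route notifications to an SNS topic. In practice this would more likely be defined in infrastructure-as-code than in an ad-hoc script.

```python
# Illustrative sketch only: alert when a Glue job reports failed tasks.
# The job name, region, account ID, and SNS topic ARN are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="af-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="events-etl-failed-tasks",
    Namespace="Glue",
    MetricName="glue.driver.aggregate.numFailedTasks",
    Dimensions=[
        {"Name": "JobName", "Value": "events-etl"},  # placeholder Glue job name
        {"Name": "JobRunId", "Value": "ALL"},
        {"Name": "Type", "Value": "count"},
    ],
    Statistic="Sum",
    Period=300,                 # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:af-south-1:123456789012:data-alerts"],  # placeholder SNS topic
)
```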
Security, Compliance and Best Practices 10%
- Ensure API security, authentication and access control best practices.
- Implement data encryption, access control and compliance with GDPR, HIPAA, SOC2 etc.
- Establish data governance policies, including access control and security best practices.
Development Team Mentorship and Collaboration 5%
- Work closely with data scientists, analysts and business teams to understand data needs.
- Collaborate with backend teams to integrate data pipelines into CI/CD.
- Provide developmental leadership to the team through coaching, code reviews and mentorship.
- Ensure technological alignment with the B2C division strategy, supporting the overarching hearX strategy and vision.
- Identify and encourage areas for growth and improvement within the team.
QMS and Compliance 5%
- Document data processes, transformations and architectural decisions.
- Maintain high standards of software quality within the team by adhering to good processes, practices and habits, including compliance with the QMS and with data and system security requirements.
- Ensure compliance with the established processes and standards for the development lifecycle, including but not limited to data archival.
- Drive compliance with the hearX Quality Management System in line with the Quality Objectives, Quality Manual, and all processes related to the design, development and implementation of software related to medical devices.
- Comply with ISO, CE, FDA (and other) standards and requirements as applicable to assigned products.
- Safeguard confidential information and data.
Role Requirements
Minimum education (essential):
Bachelor's degree in Computer Science or Engineering (or similar)
Minimum education (desirable):
- Honors degree in Computer Science or Engineering (or similar)
- AWS Certified Data Engineer or
- AWS Certified Solutions Architect or
- AWS Certified Data Analyst
Minimum applicable experience (years):
5+ years working experience
Required nature of experience:
- Data Engineering development
- Experience with AWS services used for data warehousing, computing and transformations, i.e. AWS Glue (crawlers, jobs, triggers, and catalog), AWS S3, AWS Lambda, AWS Step Functions, AWS Athena, and AWS CloudWatch
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, DynamoDB)
- Experience with SQL for querying and transformation of data
Skills and Knowledge
(essential):
- Strong skills in Python (especially PySpark for AWS Glue)
- Strong knowledge of data modeling, schema design and database optimization
- Proficiency with AWS and infrastructure as code
Skills and Knowledge
(desirable):
- Knowledge of SQL, Python, and AWS serverless microservices
- Deploying and managing ML models in production
- Version control (Git), unit testing and agile methodologies
This job description is not a definitive or exhaustive list of responsibilities and is subject to change depending on changing business requirements. Employees will be consulted on any changes. Employee performance will be reviewed based on the agreed-upon objectives. If you do not hear from us within 30 days, please consider your application unsuccessful.
AWS Data Engineer
Posted today
Job Description
Data Engineer - GCH - Cape Town
Must Have
- Proficiency with Matillion ETL: Using the Matillion ETL platform for data integration.
- Cloud Data Warehouses: Familiarity with cloud data warehouses like Snowflake, AWS Redshift, or Google BigQuery.
Key Responsibilities
- Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
- Cloud Data Architecture: Design, implement, and support in maintaining data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lake houses, data lakes, data warehouses, and distributed computing.
- DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
- Data Modelling: Build efficient data models to support required analytics/reporting.
- Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
- Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems (a minimal validation sketch follows this list). Troubleshoot data issues and create solutions to ensure data reliability.
- Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
- Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.
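The Data Quality & Monitoring responsibility above can be pictured with a small, hypothetical PySpark validation step. Dataset paths and column names are placeholders, and a real implementation might use a dedicated framework such as Great Expectations instead.

```python
# Illustrative sketch only: lightweight data-quality checks on a curated dataset.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-quality-checks").getOrCreate()

orders = spark.read.parquet("s3://example-curated-bucket/orders/")  # placeholder path

checks = {
    "table_not_empty": orders.count() > 0,
    "order_id_not_null": orders.filter(F.col("order_id").isNull()).count() == 0,
    "amount_not_negative": orders.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Raising here fails the pipeline run so monitoring/alerting can pick it up.
    raise ValueError(f"Data quality checks failed: {failed}")

print("All data quality checks passed.")
```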
Required
Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience as a Data Engineer or in a similar role.
- Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
- Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
- Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
- Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
- Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
- Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
- Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
- Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
- Experience working in an Agile project environment.
Preferred
- Experience with Apache Spark and distributed data processing in Databricks.
- Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).
AWS Data Engineer
Posted today
Job Description
Key Responsibilities:
- Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
- DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
- Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
- Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
- Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
- Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.
Skills & Qualifications:
Required:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer or in a similar role.
- Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
- Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
- Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
- Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
- Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
- Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
- Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
- Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
- Experience working in an Agile project environment.
Preferred:
- Exposure to ingestion tools such as Matillion, Fivetran, etc.
- Experience with Apache Spark and distributed data processing in Databricks.
- Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).
Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong analytical and problem-solving skills, capable of troubleshooting complex data pipeline issues.
AWS Data Engineer
Posted today
Job Description
Must have:
- Proficiency with Matillion ETL: Using the Matillion ETL platform for data integration.
- Cloud Data Warehouses: Familiarity with cloud data warehouses like Snowflake, AWS Redshift, or Google BigQuery.
Key Responsibilities:
- Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
- Cloud Data Architecture: Design, implement, and support in maintaining data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lake houses, data lakes, data warehouses, and distributed computing.
- DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
- Data Modelling: Build efficient data models to support required analytics/reporting.
- Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
- Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
- Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
- Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.
Skills & Qualifications:
Required:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience as a Data Engineer or in a similar role.
- Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
- Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
- Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
- Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
- Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
- Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
- Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
- Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
- Experience working in an Agile project environment.
Preferred:
- Experience with Apache Spark and distributed data processing in Databricks.
- Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).
Senior AWS Data Engineer
Posted today
Job Description
THE OPPORTUNITY THAT AWAITS YOU:
We are seeking an immediately available (or short-notice), experienced AWS Data Engineer (Intermediate to Senior) to support an international client in managing and optimising their data infrastructure. The role focuses on building and maintaining scalable data pipelines, optimising cloud-based data solutions, and ensuring high performance and reliability across systems. You will play a key role in supporting the data operations roadmap, leveraging AWS technologies to deliver robust, efficient, and secure solutions.
YOUR KEY RESPONSIBILITIES:
- Design, build, and maintain scalable data pipelines and ETL processes.
- Optimise data storage, transformation, and retrieval for performance and cost efficiency.
- Implement best practices in data modelling and architecture.
- Develop and manage data solutions using AWS services such as S3, Glue, Redshift, DBT, Spark, and Terraform (an illustrative Redshift loading sketch follows this list).
- Collaborate with cloud architects to ensure smooth integrations and deployments.
- Lead or contribute to migrations and modernisation projects within AWS environments.
- Conduct performance tuning and implement monitoring solutions to ensure system stability.
- Troubleshoot data pipeline failures, ensuring rapid resolution and minimal downtime.
- Build dashboards and reporting tools to monitor data flows and usage.
- Apply role-based access controls and enforce data governance policies.
- Ensure compliance with international data protection and security standards.
- Support audit and compliance initiatives as required.
- Work closely with cross-functional teams (data analysts, product managers, application teams).
- Document processes, pipelines, and architectures for knowledge transfer.
- Mentor junior engineers and contribute to continuous improvement initiatives.
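As a rough, illustrative sketch of the S3-to-Redshift loading work implied by the responsibilities above (all identifiers are hypothetical, not the client's actual environment), the snippet below runs a COPY from S3 into a staging table via the Redshift Data API. The call is asynchronous, so a production pipeline would poll or subscribe for completion before kicking off downstream transformations.

```python
# Illustrative sketch only: load partitioned Parquet from S3 into Redshift
# using the Redshift Data API. All identifiers below are placeholders.
import boto3

client = boto3.client("redshift-data", region_name="eu-west-1")

copy_sql = """
    COPY analytics.stg_events
    FROM 's3://example-curated-bucket/events/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",   # placeholder cluster
    Database="analytics",                  # placeholder database
    DbUser="etl_user",                     # placeholder user
    Sql=copy_sql,
)

# The Data API is asynchronous; check progress with describe_statement.
status = client.describe_statement(Id=response["Id"])
print(status["Status"])
```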
OUR REQUIRED EXPERTISE:
- Proven experience as a Data Engineer (5+ years, with Intermediate to Senior-level capability).
- Strong proficiency with AWS data services and related tooling (S3, Glue, Redshift, DBT, Spark, Terraform).
- Hands-on experience in building and managing ETL/ELT pipelines.
- Strong knowledge of SQL, data modelling, and performance tuning.
- Familiarity with CI/CD, version control (Git), and infrastructure-as-code.
- Excellent problem-solving skills and ability to work in fast-paced environments.
- Strong communication skills for collaboration with international teams.
- Experience with multi-region or global data deployments. (Nice to have)
- Knowledge of Python or other scripting languages for automation. (Nice to have)
- Exposure to data governance frameworks and observability tools. (Nice to have)
YOUR REWARD:
- Competitive contract compensation.
- Exposure to cutting-edge AWS technologies and data practices.
- Collaborative environment with global teams.
Senior AWS Data Engineer
Posted today
Job Description
Contract
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 08 October 2025
Job Reference No.
Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.
What You'll Do:
- Architect modern data analytics frameworks.
- Translate complex requirements into scalable, secure, high-performance pipelines.
- Build & optimize batch/real-time data solutions using AWS & Big Data tools.
- Lead engineering efforts across multiple agile projects.
What You Bring:
- 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
- 2+ yrs in Big Data & AWS (EMR, EC2, S3).
- ETL expert – especially Talend, cloud migration, & data pipeline support.
- Strong in Python, PySpark, SQL, and data modeling.
- Agile mindset (Scrum, Kanban).
- Familiar with Hadoop ecosystem, production support, and DevOps for BI.
Nice to Have:
- Experience with Spark, streaming data tools, & scalable system design.
- BI data modeling background (3+ yrs).
- Talend & AWS hands-on (1+ yr).
Qualifications:
- Bachelor's in Computer Science/Engineering or equivalent experience.
- AWS Certified (Associate+ level preferred).
Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.
- In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent to do so.
AWS, Data Engineering, Extract Transform Load (ETL), SQL
Industries: Financial Services, Information Technology (IT), Insurance
Senior AWS Data Engineer
Posted today
Job Description
Company Description
IT Ridge Technologies is a global IT consulting and product engineering services provider, specializing in e-commerce, e-learning, application development, business intelligence, and infrastructure solutions. Our company partners with global organizations to address complex business challenges and drive strategic growth through innovative technology solutions. With expertise across various industries, including retail, manufacturing, construction, and education, IT Ridge Technologies offers a unique balance of human perspective and strategic thinking to help clients lead in today's challenging business environment.
Role Description
This is a full-time on-site role located in Cape Town for a Senior AWS Data Engineer. In this role, you will design, develop, and maintain data pipelines and architectures on AWS. Your daily tasks will include building ETL processes, data warehousing solutions, and utilizing data modeling techniques to support analytics and business intelligence needs. You will work closely with cross-functional teams to ensure efficient data flow, data quality, and effective data solutions that drive business outcomes.
Qualifications
- Data Engineering and Extract Transform Load (ETL) skills
- Proficiency in Data Modeling and Data Warehousing concepts
- Experience in Data Analytics and Business Intelligence
- Strong understanding of AWS services and architecture
- Excellent problem-solving skills and attention to detail
- Ability to work independently and as part of a team
- Experience in the IT consulting industry is a plus
- Bachelor's degree in Computer Science, Engineering, or related field
AWS Data Engineer (6-Month Contract)
Posted 5 days ago
Job Description
Location: Remote (supporting an international client)
Contract Duration: 6 Months (Immediate Start)
Compensation: R95,000 – R110,000 per month
We’re looking for an experienced AWS Data Engineer (Intermediate to Senior) to join our client’s global team. You’ll be building and optimising scalable data pipelines, ensuring performance, reliability, and security across cloud-based systems.
What you’ll do:
- Design & optimise data pipelines and ETL processes
- Work with AWS services: S3, Glue, Redshift, DBT, Spark, Terraform
- Support cloud integration and modernisation projects
- Ensure system performance, monitoring & reliability
- Enforce data security, governance, and compliance standards
- Collaborate with global, cross-functional teams
What we’re looking for:
- 5+ years’ experience as a Data Engineer (Intermediate–Senior)
- Hands-on expertise in AWS data services
- Strong SQL, data modelling, and pipeline management skills
- Familiarity with CI/CD, Git, and infrastructure-as-code
- Excellent collaboration and problem-solving skills
Why join?
- Competitive contract compensation (R95k – R110k/month)
- Work with cutting-edge AWS technologies
- Collaborate with international teams on high-impact projects
If you’re ready to make an impact as an AWS Data Engineer, apply today — or share this opportunity with someone in your network!