2,222 Kafka Developer jobs in South Africa

Big Data Developer

R500000 - R1200000 | Dariel

Posted today

Job Description

Data Engineer

The Data Engineer's role entails building and supporting data pipelines that must be scalable, repeatable, and secure. Working as a core member of an agile team, the Data Engineer is responsible for the infrastructure that turns raw data into insights, integrating diverse data sources seamlessly. They enable solutions by handling large volumes of data in batch and real time, leveraging emerging technologies from both the big data and cloud spaces. Additional responsibilities include developing proofs of concept and implementing complex big data solutions focused on collecting, parsing, managing, analysing, and visualising large datasets, applying the right technologies to work with large volumes of data in diverse formats and deliver innovative solutions. Data engineering is a technical job that requires substantial expertise across a broad range of software development and programming fields. These professionals combine knowledge of data analysis with end-user and business requirements analysis to develop a clear understanding of the business need and incorporate it into a technical solution, and they have a solid understanding of physical database design and the systems development lifecycle.

Responsibilities

  • Architects the data analytics framework
  • Translates complex functional and technical requirements into detailed architecture, design, and high-performing software
  • Leads data and batch/real-time analytical solutions that leverage transformational technologies
  • Works on multiple projects as a technical lead, driving user story analysis and elaboration, design and development of software applications, testing, and building automation tools
  • Development and Operations
  • Database Development and Operations
  • Policies, Standards, and Procedures
  • Business Continuity & Disaster Recovery
  • Research and Evaluation
  • Creating data feeds from on-premise to AWS Cloud
  • Support data feeds in production on a break-fix basis
  • Creating data marts using Talend or a similar ETL development tool
  • Manipulating data using Python
  • Processing data using the Hadoop paradigm, particularly EMR, AWS's managed Hadoop distribution (a brief PySpark sketch follows this list)
  • Develop for Big Data and Business Intelligence, including automated testing and deployment
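
Several of the responsibilities above (on-premises feeds into AWS, EMR processing, data marts) can be illustrated with a minimal PySpark sketch of the kind of batch step an EMR job might run: read a raw feed that has landed in S3 and write a partitioned Parquet data mart. The bucket names, paths, and customer_id column are hypothetical placeholders, not part of the advertised role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("onprem-feed-to-mart").getOrCreate()

# Raw feed files landed in S3 from the on-premises extract (hypothetical path).
raw = spark.read.json("s3://example-landing-bucket/feeds/customers/")

# Light transformation: deduplicate on the business key and stamp the load date.
mart = (
    raw.dropDuplicates(["customer_id"])
       .withColumn("load_date", F.current_date())
)

# Write a partitioned Parquet data mart that downstream tools can query.
(mart.write
     .mode("overwrite")
     .partitionBy("load_date")
     .parquet("s3://example-mart-bucket/customer_mart/"))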

Requisite Experience, Education, Knowledge, and/or Skills

  • Bachelor's Degree in Computer Science, Computer Engineering, or equivalent
  • AWS Certification
  • Extensive knowledge in different programming or scripting languages
  • Expert knowledge of data modelling and understanding of different data structures and their benefits and limitations under particular use cases
  • Capability to architect highly scalable distributed systems, using different open source tools
  • 5+ years of Data engineering or software engineering experience
  • 2+ years of Big Data experience
  • 2+ years' experience with Extract, Transform, and Load (ETL) processes
  • 2+ years of AWS experience
  • 5 years of demonstrated experience with object-oriented design, coding, and testing patterns, as well as experience in engineering (commercial or open source) software platforms and large-scale data infrastructures
  • Big Data batch and streaming tools
  • Talend
  • AWS: EMR, EC2, S3
  • Python
  • PySpark or Spark

Big Data Developer

R900000 - R1200000 | Remote Recruitment

Posted today

Job Description

Job Overview

We are looking for an experienced Big Data Developer to join an international banking technology team in Málaga, Spain. In this role, you will contribute to the development of business applications within the Regulatory & Compliance domain, covering the full software lifecycle from problem analysis to deployment.

You'll work with modern big data technologies, collaborate with users to understand business needs, and provide innovative solutions that meet regulatory and compliance requirements. This is a fantastic opportunity to advance your career in a global environment while enjoying the lifestyle benefits of living in Spain.

Key Responsibilities
  • Participate in the end-to-end software lifecycle, including analysis, design, development, testing, and deployment.
  • Collaborate with business users to identify requirements and deliver strategic technology solutions.
  • Optimise and analyse code, applying best practices such as threat modelling and SAST.
  • Manage tools and processes for documentation and Application Lifecycle Management (ALM).
  • Plan and deliver projects using Agile methodology.
  • Support incident resolution, including planned interventions.
  • Execute unit, integration, and regression testing.
  • Manage release processes and deployment tools.
Requirements

Qualifications and Experience

Required:

  • 3+ years of experience as a Big Data Developer.
  • Bachelor's degree in Computer Science, Telecommunications, Mathematics, or a related field.
  • Proficiency with GitHub.
  • Strong knowledge of databases (Oracle PL/SQL, PostgreSQL).
  • Experience with Java and JavaScript.
  • Hands-on ETL experience.
  • Fluency in English (Spanish is advantageous).

Preferred:

  • Familiarity with microservices frameworks (Spring Boot), OpenShift.
  • Knowledge of Flink, Drools, Kafka, DevOps tools.
  • Agile methodology experience with tools such as Jira and Confluence.
  • Exposure to S3, Elastic, and Angular.
  • Experience in Transactional Regulatory Reporting.
  • Innovative mindset and ability to generate strategic ideas.

Other Requirements:

  • Availability to travel.
  • Willingness to relocate to Málaga, Spain.

Big Data Data Engineer

Johannesburg, Gauteng PBT Group

Posted 1 day ago

Job Description

Big Data Data Engineer job vacancy in Johannesburg.

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform.

The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL (see the sketch after this list).
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
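
The Raw-to-Published movement described above can be sketched in a few lines of PySpark: read raw JSON from HDFS, apply basic validation and deduplication, and write partitioned Parquet. The paths and column names (event_id, event_ts) are hypothetical, not the actual platform layout.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-to-published").getOrCreate()

# Unstructured JSON landed in the Raw Layer (hypothetical HDFS path).
raw = spark.read.json("hdfs:///data/raw/events/")

# Basic validation, deduplication, and a derived partition column.
published = (
    raw.filter(F.col("event_id").isNotNull())
       .dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Structured, partitioned Parquet in the Published Layer.
(published.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("hdfs:///data/published/events/"))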
Technical Skills & Experience
  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.
Nice to Have
  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).
Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.


Big Data Data Engineer

R600000 - R1200000 | PBT Group

Posted today

Job Description

Employment Type: Contract
Experience: 4 to 25 years
Salary: Negotiable
Job Published: 03 September 2025
Job Reference No.

Job Description

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform. The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL.
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process (see the sketch after this list).
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
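
The validation, schema-enforcement, and deduplication duties above can be illustrated with a small PySpark sketch: read raw JSON against an explicit schema, separate valid records from rejects, and deduplicate before publishing. The schema, field names, and paths here are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("schema-enforcement").getOrCreate()

# Declare the expected shape of the raw JSON (hypothetical fields);
# records that fail to parse against it surface with null fields.
schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account", StringType()),
    StructField("txn_ts", TimestampType()),
])

df = spark.read.schema(schema).json("hdfs:///data/raw/transactions/")

# Split valid records from rejects, deduplicate, then publish.
valid = df.filter("txn_id IS NOT NULL").dropDuplicates(["txn_id"])
rejects = df.filter("txn_id IS NULL")

valid.write.mode("append").parquet("hdfs:///data/published/transactions/")
rejects.write.mode("append").json("hdfs:///data/quarantine/transactions/")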

Technical Skills & Experience:

  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.

Nice to Have:

  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.

  • To comply with the POPI Act, we require your permission to keep your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.

  • If you have not received any feedback after 2 weeks, please consider your application unsuccessful.

Skills

Big Data, Apache Hadoop, Apache Hive, PySpark, SQL, JSON, Data Engineering

Industries

Banking, Financial Services


Cloudera Big Data Administrator/Engineer

Johannesburg, Gauteng IOCO

Posted 10 days ago

Job Description

iOCO is seeking a skilled Big Data Administrator/Engineer with strong hands-on experience in Cloudera's ecosystem (Hive, Impala, HDFS, Ozone, Hue, NiFi) and proven expertise in Informatica BDM/DEI. The role involves administering and configuring big data platforms, deploying/supporting clusters, and building optimized pipelines to move and transform large-scale datasets. Experience with alternate platforms such as Hortonworks, MapR, AWS EMR, Azure HDInsight, or Google Dataproc will be advantageous.

What you'll do:

  • Platform Administration: Install, configure, upgrade, and monitor Cloudera/CDP clusters, manage HDFS/Ozone storage, and ensure security (Kerberos, Ranger, Sentry).
  • Data Pipelines: Build and optimize ingestion and processing pipelines using NiFi and Informatica BDM/DEI, supporting both real-time and batch flows.
  • ETL Integration: Develop Informatica mappings and workflows, leveraging pushdown execution to Hive/Impala/Spark; integrate diverse on-prem and cloud data sources.
  • Performance & Governance: Optimize queries, orchestrate jobs (Airflow, Oozie, Control-M), and ensure compliance with governance/security standards (a minimal Airflow sketch follows this list).
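
As a rough illustration of the orchestration side mentioned above, here is a minimal Airflow DAG: a daily ingest step followed by a table refresh. The DAG id and task commands are hypothetical placeholders (echo stands in for the real NiFi, Informatica, or impala-shell calls); Oozie or Control-M would express the same dependency differently.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_cdp_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Placeholder for the ingest step (NiFi flow or Informatica BDM/DEI mapping).
    ingest = BashOperator(
        task_id="trigger_ingest",
        bash_command="echo 'trigger NiFi / Informatica ingest here'",
    )
    # Placeholder for the downstream refresh (e.g. a Hive/Impala metadata refresh).
    refresh = BashOperator(
        task_id="refresh_tables",
        bash_command="echo 'refresh Hive/Impala tables here'",
    )
    ingest >> refresh  # refresh only runs after a successful ingest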

Your Expertise:

  • Strong hands-on expertise in Cloudera tools: Hive, Impala, HDFS, Ozone, Hue, NiFi.
  • Proficiency with Informatica BDM/DEI (ETL/ELT, pushdown optimization, data quality).
  • Solid SQL, Linux administration, and scripting (Bash, Python).
  • Familiarity with cloud data platforms (AWS, Azure, GCP) and orchestration tools.
  • 4+ years in big data administration/engineering, including 2+ years in Informatica BDM/DEI.

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • Experience in hybrid or cloud-based big data environments.

Soft Skills:

  • Strong troubleshooting and problem-solving mindset.
  • Ability to work independently and within cross-functional teams.
  • Clear communication and documentation skills.

Other information applicable to the opportunity:

  • Contract position
  • Location: Johannesburg

Why work for us?

Want to work for an organization that solves complex real-world problems with innovative software solutions? At iOCO, we believe anything is possible with modern technology, software, and development expertise. We are continuously pushing the boundaries of innovative solutions across multiple industries using an array of technologies.

You will be part of a consultancy, working with some of the most knowledgeable minds in the industry on interesting solutions across different business domains.

Our culture of continuous learning will ensure that you will have all the opportunities, tools, and support to hone and grow your craft.

By joining iOCO you will have an open invitation to inspiring developer forums, a place where you will be able to connect and learn from and with your peers by sharing ideas, experiences, practices, and solutions.

iOCO is an equal opportunity employer with an obligation to achieve its own unique EE objectives in the context of Employment Equity targets. Therefore, our employment strategy gives primary preference to previously disadvantaged individuals or groups.


Research Assistant (Administrative tax data | Big Data)

Pretoria, Gauteng United Nations University

Posted 26 days ago

Job Viewed

Tap Again To Close

Job Description

Research Assistant (Administrative tax data | Big Data)

UNU-WIDER is seeking exceptional candidates for the position of Research Assistant, based in Pretoria, South Africa, to support the SA-TIED programme. This role involves managing and enhancing tax datasets, assisting researchers, and ensuring high standards of data confidentiality.

For the full job description and application details, please click here.

UNU offers three types of contracts: fixed-term staff positions (General Service, National Officer and Professional), Personnel Service Agreement positions (PSA), and consultant positions (CTC). For more information, see the Contract Types page.

Big Data Developer - Regulatory & Compliance (Relocation to Spain)

Remote Recruitment

Posted 22 days ago

Job Description

Big Data Developer - Regulatory & Compliance (Relocation to Spain)

We are looking for an experienced Big Data Developer to join an international banking technology team in Málaga, Spain. In this role, you will contribute to the development of business applications within the Regulatory & Compliance domain, covering the full software lifecycle from problem analysis to deployment.

You’ll work with modern big data technologies, collaborate with users to understand business needs, and provide innovative solutions that meet regulatory and compliance requirements.

Responsibilities
  • Participate in the end-to-end software lifecycle, including analysis, design, development, testing, and deployment.
  • Collaborate with business users to identify requirements and deliver strategic technology solutions.
  • Optimise and analyse code, applying best practices such as threat modelling and SAST.
  • Manage tools and processes for documentation and Application Lifecycle Management (ALM).
  • Plan and deliver projects using Agile methodology.
  • Support incident resolution, including planned interventions.
  • Execute unit, integration, and regression testing.
  • Manage release processes and deployment tools.
Qualifications and Experience

Required:

  • 3+ years of experience as a Big Data Developer.
  • Bachelor’s degree in Computer Science, Telecommunications, Mathematics, or a related field.
  • Proficiency with GitHub.
  • Strong knowledge of databases (Oracle PL/SQL, PostgreSQL).
  • Hands-on ETL experience.
  • Fluency in English (Spanish is advantageous).

Preferred:

  • Familiarity with microservices frameworks (Spring Boot), OpenShift.
  • Knowledge of Flink, Drools, Kafka, and DevOps tools.
  • Agile methodology experience with tools such as Jira and Confluence.
  • Exposure to S3, Elastic, and Angular.
  • Experience in Transactional Regulatory Reporting.
  • Innovative mindset and ability to generate strategic ideas.
Other Requirements
  • Availability to travel.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Staffing and Recruiting


Intermediate C# Developer with KAFKA

Pretoria, Gauteng Optim-G Sourcing

Posted 4 days ago

Job Description

We are seeking a medium-level C# Developer with strong experience in modern cloud-native application development. The ideal candidate will have proven skills in microservices architecture, containerised deployments, and Azure/Kubernetes orchestration, as well as hands-on production experience with Apache Kafka.

The role involves building and maintaining scalable, resilient services that integrate into a distributed financial-technology ecosystem.

Required Skills & Experience
  • 3-5 years of C#/.NET Core development experience in production systems.
  • Strong understanding of microservices principles (domain-driven design, bounded contexts, service-to-service communication).
  • Proficiency in containerisation (best practices, image optimisation, debugging containerised apps).
  • Hands-on deployment experience with AKS or Kubernetes (RBAC, ConfigMaps, Secrets, Ingress, scaling strategies).
  • Apache Kafka (production experience):
    • Administering Kafka clusters
    • Designing event-driven applications and event schemas
    • Monitoring (Prometheus/Grafana, Confluent Control Center, or similar)
    • Handling data consistency and exactly-once/at-least-once semantics (see the Python sketch after this list)
  • Experience with Azure cloud services:
    • Azure DevOps (pipelines, repos, artifacts)
    • Azure Monitor / Application Insights
    • Networking basics (VNETs, load balancers, firewalls)
    • Azure Storage and Messaging (Event Hubs, Service Bus a plus)
  • PostgreSQL experience (schema design, queries, performance tuning).
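
Although this role is C#-based, the exactly-once/at-least-once concern in the list above is language-agnostic, so here is a minimal at-least-once sketch in Python using confluent-kafka: auto-commit is disabled, each record is processed first, and only then is its offset committed, so a crash replays the record rather than dropping it. The broker address, topic, group id, and handle() function are hypothetical.

from confluent_kafka import Consumer

def handle(payload: bytes) -> None:
    # Hypothetical business logic; in this role it would live in a C# service.
    print(payload)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",     # hypothetical broker
    "group.id": "example-payments-consumer",   # hypothetical group id
    "enable.auto.commit": False,               # commit manually, after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["example.payments"])       # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        handle(msg.value())                                # process first...
        consumer.commit(message=msg, asynchronous=False)   # ...then commit the offset
finally:
    consumer.close()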
 
