496 Hadoop jobs in South Africa
RESEARCHER: WATER QUALITY (CHEMISTRY & DATA ANALYSIS)
Posted today
Job Description
• Conduct advanced research in water chemistry with a focus on agricultural and environmental applications.
• Develop and validate analytical methods for water quality assessment.
• Lead and contribute to national and international research projects.
• Publish research findings in peer-reviewed journals and present at scientific conferences.
• Supervise postgraduate students and mentor junior researchers.
• Collaborate with stakeholders, including government departments, industry partners, and academic institutions.
• Apply water quality expertise to irrigation systems, including the assessment of water suitability for agricultural use and the impact of irrigation practices on soil and crop health.
• PhD in Chemistry, Environmental Science, Water Science, Water Resource Management or related field with a strong focus on water chemistry, biogeochemistry, freshwater and aquatic science.
• Proven experience in research, scientific inquiry and investigation, data analysis and management, and modelling.
• Scientific publication and knowledge dissemination record.
• Experience in analytical instrumentation (e.g., ICP-MS, GC-MS, HPLC).
• Knowledge of water quality standards, regulations, and monitoring frameworks.
• Record of research collaboration and partnerships.
• Proven ability to function successfully in cross-disciplinary teams.
• Excellent communication skills, written and verbal.
• Proven track record of mentoring, supervising and training students.
• Experience with field work and acquiring samples.
• Experience in irrigation-related research or water quality monitoring for agricultural applications.
• Valid code B & EB Driver’s licence and driving experience.
Enquiries: Dr AT Grundling, Tel:
CLOSING DATE FOR APPLICATIONS: 17 SEPTEMBER 2025
A competitive remuneration package will be congruent with the scope, responsibilities and the stature of the position. The appointment will be subject to a positive security clearance (as well as competency and leadership assessments).
Preference will be given to designated groups in terms of the ARC Employment Equity Plan. The Agricultural Research Council is an equal opportunity employer and is committed to the principles and processes of the Employment Equity Act.
Applications accompanied by a covering letter, a detailed CV with at least three (3) recent contactable referees, certified copies of certificates, supporting documents and a copy of the driver’s licence must be submitted with the application form.
A SAQA evaluation report must accompany foreign qualifications. Incomplete applications will not be considered. Applicants who do not receive any response four (4) weeks after the closing date must regard their applications as unsuccessful. Permanent appointments are subject to six (6) months’ probation period. The organisation reserves the right not to appoint.
Data Engineer
Posted today
Job Description
Location: Johannesburg, Gauteng, South Africa. Job Type: Full-time (100%).
We are seeking a Data Engineer to join our growing engineering team. This is a key role for a motivated and technically skilled individual with a solid foundation in software engineering and data systems. You will work on building scalable data infrastructure, implementing robust data integrations, and collaborating with cross-functional teams to solve real-world data challenges.
About Scytale
Scytale is a fast-growing B2B SaaS startup transforming cybersecurity compliance for businesses worldwide. Our Compliance-as-a-Service platform simplifies frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and PCI DSS for startups, scale-ups, and enterprises. Recognized as a leader in Governance, Risk & Compliance on G2, our customers rave about our platform and service. Headquartered in Tel Aviv, we offer a collaborative, growth-oriented environment with a hybrid work model, competitive compensation, and benefits that prioritize your professional and personal well-being.
Role & Responsibilities
- Design, develop, test, and maintain reliable data pipelines and ETL processes using Python and SQL
- Build and manage API-based data ingestion workflows and real-time data integrations
- Apply software engineering best practices: modular design, testing, version control, and documentation
- Own and optimize data workflows and automation, ensuring efficiency and scalability
- Collaborate closely with senior engineers, data scientists, and stakeholders to translate business needs into technical solutions
- Maintain and enhance data reliability, observability, and error handling in production systems
- Develop and support internal data-driven tools
- Implement data operations best practices, including automated monitoring, alerting, and incident response for pipeline health
- Work with DataOps principles: CI/CD for data workflows, infrastructure-as-code, and containerized ETL deployments
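The ETL responsibilities above can be sketched in miniature. The following is an illustrative pure-Python example, not taken from the posting (field names such as `user_id` and `amount` are hypothetical), showing the kind of modular extract/transform step with validation and error handling the role describes:

```python
import json
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    event: str
    amount: float

def extract(raw_lines):
    """Parse raw JSON lines; collect failures instead of crashing the run."""
    parsed, errors = [], []
    for line in raw_lines:
        try:
            parsed.append(json.loads(line))
        except json.JSONDecodeError as exc:
            errors.append((line, str(exc)))
    return parsed, errors

def transform(rows):
    """Validate and normalise rows into typed records."""
    out = []
    for row in rows:
        if not row.get("user_id") or "amount" not in row:
            continue  # a real pipeline would route these to a dead-letter store
        out.append(Record(str(row["user_id"]),
                          row.get("event", "unknown"),
                          float(row["amount"])))
    return out

raw = ['{"user_id": "u1", "event": "pay", "amount": "9.5"}',
       'not json',
       '{"amount": 1}']
rows, errors = extract(raw)
records = transform(rows)
print(len(records), len(errors))  # 1 valid record, 1 parse error
```

Keeping extract and transform as separate, individually testable functions is the "modular design, testing" practice the bullets call for; in production the same functions would be wrapped in an orchestrator task rather than called inline.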
Requirements
- 5+ years of professional experience as a Data Engineer or in a similar role developing ETL data pipelines
- Advanced proficiency in Python for backend development and scripting
- Strong SQL skills with hands-on experience in querying and modeling relational databases
- Experience with cloud platforms such as AWS, GCP, or Azure
- Hands-on with containerization technologies like Docker or Kubernetes
- Solid understanding of RESTful APIs
- Experience with version control systems (GitHub, GitLab, Bitbucket) and CI/CD workflows
- Strong grasp of software development lifecycle (SDLC) and principles of clean, maintainable code
- Demonstrated ability to work independently, own projects end-to-end, and mentor junior engineers
- Familiarity with AI concepts and prompt engineering is a plus
- Experience with data security, privacy compliance, and access controls
- Knowledge of infrastructure-as-code tools (e.g., Terraform, Helm)
- Background in event-driven architecture or stream processing
- Innovative Work: Be part of a cutting-edge product shaping the future of security and compliance.
- Learning & Growth: Access courses, conferences, and mentorship to grow your career.
- Hybrid Work Model: Enjoy the flexibility of hybrid working.
- Collaborative Culture: Work with inspiring colleagues in a supportive environment.
- Relaxation & Fun: Take breaks in our relaxation room or join our team events, happy hours, and holiday celebrations.
- Family First: Personal and family priorities always come first.
Data Engineer
Posted today
Job Description
Join to apply for the Data Engineer role at Elixirr Digital.
Do you like building data systems and pipelines? Do you enjoy interpreting trends and patterns? Are you able to recognize the deeper meaning of data?
Join Elixirr Digital as a Data Engineer and help us analyze and organize raw data to provide valuable business insights to our clients and stakeholders!
As a Data Engineer, you will be responsible for ensuring the availability and quality of the data so that it becomes usable by target data users. You will work on a set of operations aimed at creating processes and mechanisms for the flow of and access to data, in accordance with the project scope and deadlines!
Discover the opportunity to join our Data & Analytics department and work closely with a group of like-minded individuals using cutting-edge technologies!
What you will be doing as a Data Engineer at Elixirr Digital?
- Working closely with Data Architects on AWS, Azure, or IBM architecture designs.
- Maintaining and building data ecosystems by working on the implementation of data ingestions, often in collaboration with other data engineers, analysts, DevOps, and data scientists.
- Ensuring the security of cloud infrastructure and processes by implementing best practices.
- Applying modern principles and methodologies to advance business initiatives and capabilities.
- Identifying and consulting on ways to improve data processing, reliability, efficiency, and quality, as well as solution cost and performance.
- Preparing test cases and strategies for unit testing, system, and integration testing.
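The unit-testing bullet above lends itself to small, runnable data-quality checks. Here is a hedged sketch, not Elixirr's actual test suite (the `staging` rows and check names are hypothetical), of assertions that can be wired into a test runner or used as a pipeline quality gate:

```python
def check_no_nulls(rows, column):
    """Every row must have a non-null value in the given column."""
    return all(row.get(column) is not None for row in rows)

def check_unique(rows, column):
    """No duplicate values allowed in the given column."""
    values = [row[column] for row in rows]
    return len(values) == len(set(values))

def check_range(rows, column, low, high):
    """All values in the column must fall within [low, high]."""
    return all(low <= row[column] <= high for row in rows)

staging = [
    {"order_id": 1, "total": 120.0},
    {"order_id": 2, "total": 35.5},
]

# In a test framework these become individual test cases; in a pipeline
# they gate promotion of the staging data to production tables.
assert check_no_nulls(staging, "order_id")
assert check_unique(staging, "order_id")
assert check_range(staging, "total", 0, 10_000)
print("all data-quality checks passed")
```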
Requirements
- Proficient in Python with extensive experience in data processing and analysis.
- Strong SQL expertise, adept at writing efficient queries and optimizing database performance.
- Previous working experience with the Azure/AWS data stack.
- Experienced in software development lifecycle methodologies, with a focus on Agile practices.
- Passionate about technology. You anticipate, recognize, and resolve technical problems using a variety of specialized tools for application development and support.
- Independent. You are a self-motivated and ambitious individual, capable of managing multiple responsibilities effectively.
- Problem-solver. You think creatively and find solutions to complex challenges.
- Creative and outside-the-box thinker. You look beyond blog posts and whitepapers, competitions, and even state-of-the-art benchmarks to solve real-world problems.
- Communicator. Strong verbal and written communication skills are essential to ensure effective collaboration and timely delivery of results within the team.
- Proficient in English. We work across continents in a global environment, so fluent English, both written and spoken is a must.
From working with cutting-edge technologies to solving complex challenges for global clients, we make sure your work matters. And while you’re building great things, we’re here to support you.
Compensation & Equity:
- Performance bonus
- Employee Stock Options Grant
- Employee Share Purchase Plan (ESPP)
- Competitive compensation
- Pension plan
- Big clients and interesting projects
- Cutting-edge technologies
- Growth and development opportunities
- Internal LMS & knowledge hubs
We don’t just offer a job - we create space for you to grow, thrive, and be recognized.
Intrigued? Apply now!
Data Engineer
Posted today
Job Description
The Kandua Company helps small service businesses grow. We connect them to new customers and simplify business management with easy-to-use tech tools. Kandua.com is South Africa’s #1 online marketplace for home services. Every month, over 40,000 vetted home service professionals access around R50 million worth of work opportunities from individual customers, along with business customers through Kandua’s partnerships with leaders in insurance and retail.
The Kandua for Pros app provides a mobile platform for professionals to send quotes and invoices, accept payments, track customer communication, and view business performance, all securely stored in the cloud. Our mission is to use technology to bridge the gap between skills and livelihood, supporting those who serve us daily.
What does this role involve?
We are seeking a pragmatic and forward-thinking Data Engineer to define and scale our data capabilities. This role involves designing, building, and maintaining data pipelines, models, and infrastructure that support our analytics, operations, and product personalization. You will have the chance to shape our modern data stack and collaborate closely with engineering, product, operations, and growth teams to convert raw data into actionable insights and automated workflows.
Key Responsibilities
- Design, build, and manage reliable, scalable data pipelines using batch and streaming techniques (e.g., ELT/ETL, Kafka, Airflow).
- Own and evolve the structure and architecture of our Data Lakehouse and medallion architecture.
- Develop robust processes for data ingestion, validation, transformation, and delivery across multiple systems and sources.
- Implement tools and frameworks for monitoring, testing, and ensuring data quality, lineage, and observability.
- Collaborate with analytics and product teams to develop well-documented, version-controlled data models that support reporting, dashboards, and experiments.
- Leverage geospatial and behavioral data to create features for search, matching, and personalization algorithms.
- Partner with the ML team to support deployment of machine learning workflows and real-time inference pipelines.
- Research and prototype emerging tools and technologies to enhance Kandua’s data stack and developer experience.
- Monitor and support production workloads to ensure performance, availability, and cost-efficiency.
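The medallion architecture mentioned above layers data as bronze (raw, as received), silver (parsed, typed, cleaned), and gold (aggregated, analysis-ready). A minimal illustrative sketch in plain Python follows; the marketplace fields are hypothetical, and a real implementation would run these transforms with Spark or dbt over a lakehouse rather than over in-memory lists:

```python
import json

# Bronze: raw events exactly as received, plus ingestion metadata.
bronze = [
    {"raw": '{"pro_id": "p7", "suburb": "soweto ", "rating": "4.6"}',
     "ingested_at": "2024-05-01T09:00:00Z"},
]

def to_silver(bronze_rows):
    """Silver: parse, type, and clean each raw record."""
    silver = []
    for row in bronze_rows:
        rec = json.loads(row["raw"])
        silver.append({
            "pro_id": rec["pro_id"],
            "suburb": rec["suburb"].strip().title(),  # normalise text fields
            "rating": float(rec["rating"]),           # enforce types
        })
    return silver

def to_gold(silver_rows):
    """Gold: analysis-ready aggregate (average rating per suburb)."""
    totals = {}
    for rec in silver_rows:
        s, n = totals.get(rec["suburb"], (0.0, 0))
        totals[rec["suburb"]] = (s + rec["rating"], n + 1)
    return {suburb: round(s / n, 2) for suburb, (s, n) in totals.items()}

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Soweto': 4.6}
```

Keeping bronze immutable means silver and gold can always be rebuilt when cleaning rules change, which is the main operational benefit of the layering.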
Requirements
- 6+ years of experience in a data engineering or software engineering role, focusing on data infrastructure and pipelines.
- Strong SQL skills.
- Proficiency with modern data stack tools (e.g., dbt, Airflow, Spark, Kafka, Delta Lake, Snowflake/BigQuery).
- Solid experience with cloud platforms and infrastructure-as-code tools.
- Strong programming skills.
- Deep understanding of relational databases, data warehousing, and data modeling best practices.
- Passion for data quality, testing, documentation, and building sustainable systems.
- Familiarity with data modeling, OLAP cubes, and multidimensional databases.
- Experience with data pipelines and ETL/ELT processes.
- Solutions-oriented mindset and strong problem-solving skills.
- Ownership and accountability for the quality and accuracy of insights.
- Experience with BigQuery and Dataform.
- Familiarity with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Domain-Driven Design (DDD).
- Experience working in a startup or fast-paced environment.
- Hands-on experience with cloud-based data warehouses (preferably Google BigQuery).
- Be part of a fast-growing startup solving real problems in South Africa.
- Work remotely with a talented team.
- Opportunity to shape the DevOps culture and practices.
- Flexible work arrangements.
- Work with cutting-edge cloud technologies and best practices.
Data Engineer
Posted today
Job Description
About Us
At KingMakers, we’re not here to follow the industry; we’re here to shape it. We’re redefining sports and gaming entertainment across Africa, creating platforms that bring millions of users closer to the games and experiences they love.
Since 2018, we’ve grown rapidly to become one of the key players in the region, recognized for redefining the industry and setting the stage for what’s next.
Our mission is bold: we aim to inspire, empower, and create meaningful value for the people and communities we serve. Collaboration and innovation are at the heart of our culture, driving every interaction to be memorable, impactful, and forward-looking.
We think big, move fast, and challenge convention. From revolutionizing user experiences to solving some of the most complex engineering challenges, we set the gold standard. If you’re ready to make your mark, there’s no better time to join KingMakers.
Join our vibrant team as a Data Engineer and play a key role in enhancing our Data Platform!
As we grow and venture into exciting new products and markets, we'll rely on your unique blend of technical expertise, engineering skills, and business insight. You'll thrive in our dynamic environment, delivering quick and meaningful value with agility.
Key Responsibilities:
- Design and oversee a highly robust and scalable Data Lake/ETL infrastructure along with a data pipeline capable of supporting both streaming and batch processing.
- Maintain systems and processes that are fault-tolerant, complete with automated data quality monitoring and alert systems.
- Continuously enhance systems by addressing recurring issues, delivering minor features, and optimizing for both performance and scalability.
- Prioritize privacy and data security in all initiatives.
- Create and uphold top-notch documentation for the entire Data Platform stack.
- Collaborate closely with business stakeholders and product engineering to produce high-value data products.
- Work alongside key partners and stakeholders to gather insights, shape requirements, and drive the roadmap for the Data Platform.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent education/experience/skills)
- 4 years of experience in data engineering or a similar role
- Strong programming skills in Python and Spark (PySpark / Databricks)
- At least 2 years of experience with Databricks and Azure data services
- Familiarity with other cloud-based data management environments (like AWS, Google Cloud, Hadoop, Snowflake, Spark, Storm, and Kafka) is a plus
- Experience with Customer Data Platforms is a bonus
- Proficient in managing data quality actively, including monitoring and alerting
- A good grasp of the application and database development lifecycle from development to staging/testing, and into production
- Remote working experience is essential
- Experience in a hyper-growth start-up environment is a significant advantage
- Excellent communication skills for effective collaboration with stakeholders
- Adaptability and a knack for thriving in a fast-paced, dynamic environment
- Strong problem-solving abilities and keen attention to detail
- A mindset of continuous learning and a willingness to embrace new technologies and processes
At KingMakers, your work will have a real impact. Here’s what makes us different:
- Work Globally : Collaborate with exceptional talent from across the world in an inclusive and dynamic environment.
- Opportunities for Growth : Develop your skills and advance your career in a team that values learning, innovation, and personal development.
- Impactful Work : Be part of a company redefining sports and gaming entertainment, making a real impact across Africa.
- Dynamic Culture : Join a fast-paced, supportive environment where collaboration and creativity are at the core of everything we do.
- Embrace Challenges : Take on exciting projects that push boundaries and allow you to grow in your field.
- Innovate and Thrive : Be part of a culture that celebrates bold ideas and prioritizes personal and professional growth.
Perks
Our benefits are tailored to each location, ensuring they meet the needs of our global team. Here are some examples:
- Private Health Insurance: Comprehensive plans for you.
- Extra Time Off : Additional leave days, including your birthday off, to relax and recharge.
- Hybrid Work : A flexible arrangement with 2 days in the office and 3 days remote each week.
- Team Activities : Regular events to build connections and foster collaboration.
- Office Perks : Free snacks, coffee, and a welcoming environment to keep you energized.
- Performance Bonuses : Discretionary rewards recognizing your contributions.
Data Engineer
Posted 1 day ago
Job Description
Join to apply for the Data Engineer role at Blue Pearl.
Responsibilities
- Implementation of the CreditLens system in the cloud and on-premises, including the successful migration of data to the new system and optimized end-to-end reporting
- Design and develop custom Power BI Reports and Dashboards to meet specific client requirements
- Transform raw data into actionable insights through effective Power BI data visualization and dashboard development
- Establish connections and extract data from diverse sources
- Implement data visualization best practices to craft insightful and user-friendly dashboards, including intuitive homepages and navigation
- Ensure accuracy, security and privacy of information
- Fully document all dashboards and processes.
- Provide training and support to end-users on the usage of Power BI
Requirements
- 5+ years’ experience
- Strong proficiency in SQL and DAX
- Solid understanding of data analytics and visualization techniques
- Experience with data modeling, data warehousing and business intelligence solutions
- Experience with Power BI dataflows, Power Query and Power Apps
- Strong Data Storytelling skills
Data Engineer
Posted 1 day ago
Job Description
Ikue is a tech start-up with a clear purpose and vision - to provide telecommunications operators with a superior product to deliver superior customer experiences. We know that Customer data is at the heart of hyper personalisation and are looking for the brightest, most inspiring engineers to deliver our product which enables data to drive every decision, every communication, and every customer interaction. We are building a diverse team, all unified by a desire to unleash the data needed by marketeers. Creativity is at the core of Ikue and is something we are looking to further strengthen in 2022! There are no typical profiles, each and every team member shares our vision and wants to be part of its success.
Responsibilities
- Design and build data processing pipelines for batch and streaming use cases
- Build reusable microservices to perform ETL processing
- Experiment with new cloud services and incorporate them into existing and new use cases
- Collaborate with others to find innovative solutions and assist with project delivery
- Provide expert data and application support for Ikue production customers
- You will become part of an international environment that embraces diversity and professionalism
- A dynamic and motivated team, with a good sense of humour
- Freedom to take responsibility and grow within the team
- Working in a fast-moving company
- Remote work model
Requirements
- BSc Computer Science or Engineering
- 3+ years working experience as a Data Engineer
- Advanced skills developing in Python, SQL, Spark
- Good understanding of cloud services and architecture for batch and streaming use cases
- AWS Associate Developer or Data Engineering certification
- Excellent problem solving and analytical skills
- Strong communication and collaboration abilities
- Mid-Senior level
- Full-time
- Information Technology
- Software Development
Data Engineer
Posted 3 days ago
Job Description
Job Purpose
As a Data Engineer at Pepkor Lifestyle, you will play a critical role in the development and maintenance of our data infrastructure. You will work closely with cross-functional teams to ensure data availability, quality, and accessibility for analysis. The ideal candidate will use their passion for big data and analytics to provide insights to the business covering a range of topics, and will be responsible for conducting both recurring and ad hoc analysis for business users.
Position outputs/competencies
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Design, develop, and maintain data pipelines and ETL processes.
- Implement and maintain data warehousing and data storage solutions.
- Optimize data pipelines for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data infrastructure issues.
- Stay current with emerging technologies and best practices in data engineering.
- Systematic solution design of the ETL and data pipeline in line with business user specifications
- Develop and implement ETL pipelines aligned to the approved solution design
- Ensure data governance and data quality assurance standards are upheld
- Deal with customers in a customer-centric manner
- Effective self-management and teamwork
Minimum qualification and Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer in a professional setting.
- Proficiency in data engineering technologies and programming languages (e.g., SQL, Python, Scala, Java).
- Strong knowledge of data storage, database design, and data modelling concepts
- Experience with ETL tools, data integration, and data pipeline orchestration.
- Familiarity with data warehousing solutions (e.g., Snowflake, Redshift).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- 5-10 years’ experience in designing and developing data warehouses according to the Kimball methodology.
- Adept at design and development of ETL processes.
- SQL development experience, preferably with SAS Data Studio and AWS.
- The ability to ingest/output CSV, JSON and other flat file types and any related data sources.
- Proficient in Python or R, or willingness to learn.
- Experience within Retail, Financial Services and Logistics environments.
- Experience with Redshift technologies.
- Understanding of data security and compliance best practices.
- Relevant certifications (e.g., AWS Certified Data Analytics, Google Cloud Professional Data Engineer).
Data Engineer
Posted 3 days ago
Job Viewed
Job Description
Unifi is redefining credit in Africa with simple, fast personal loans delivered through online, mobile and branch channels. We make life easy for thousands of clients across Zambia, Kenya, Uganda and South Africa. Unifi has conviction in the African continent and its people, and our products enable our clients to achieve even more. As one of the fastest-growing lenders in East Africa, we combine exceptional client service with the very best tech and data analytics.
Learn More About Unifi At
The Role
Unifi is on the lookout for a talented Data Engineer with strong expertise in Google Cloud Platform (GCP) to join our fast-growing team. In this role, you’ll design, build, and maintain scalable data pipelines and architectures that power our business. You’ll collaborate closely with data scientists and analysts to ensure seamless data flow across the organisation, enabling smarter decisions and impactful solutions.
We’re looking for someone who is analytically sharp, self-motivated, and thrives in an unstructured environment. A genuine passion for African business is a must—along with a healthy sense of adventure and a good sense of humour to match our dynamic culture.
Responsibilities
- Design and build scalable data pipelines and architectures using GCP technologies such as Dataflow, BigQuery, Pub/Sub, and Cloud Storage.
- Develop and manage ETL processes to transform diverse data sources into clean, structured formats for analysis and reporting.
- Partner with data scientists and analysts to understand their needs and deliver solutions that enable insights and decision-making.
- Create and maintain documentation for data pipelines, architecture, and data models to ensure clarity and consistency.
- Troubleshoot and resolve data-related issues quickly to minimise disruption.
- Continuously optimise data pipelines for performance, scalability, and cost efficiency.
- Automate workflows and processes through scripts and tools that streamline operations.
- Safeguard data quality and integrity across all sources, pipelines, and platforms.
- Stay ahead of the curve by keeping up with new GCP tools, best practices, and data engineering trends.
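One common pattern behind the reliable, rerunnable daily pipelines described above is idempotent partition replacement: reloading a given day replaces that day's partition rather than appending duplicate rows, so a failed or corrected run can simply be retried. An illustrative pure-Python sketch follows (BigQuery achieves the same effect with partitioned tables written per partition in truncate mode; the `loans` field is hypothetical):

```python
def load_partition(table, partition_date, new_rows):
    """Idempotently replace one day's partition: rerunning the same day
    cannot duplicate rows, because the old partition is dropped first."""
    kept = [r for r in table if r["date"] != partition_date]
    return kept + [dict(r, date=partition_date) for r in new_rows]

table = [{"date": "2024-05-01", "loans": 10}]
table = load_partition(table, "2024-05-02", [{"loans": 7}])
table = load_partition(table, "2024-05-02", [{"loans": 8}])  # rerun replaces, no duplicates
print(table)
```

Designing loads this way keeps pipeline retries safe by construction, which matters more than raw throughput when minimising disruption, as the responsibilities above emphasise.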
Requirements
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
- 5+ years’ experience as a Data Engineer or in a similar role.
- Strong programming skills in Python and SQL, with hands-on BigQuery and GCP experience.
- Proven expertise in ETL development and data modeling.
- Familiarity with data lakehouse concepts and techniques.
- Excellent problem-solving, analytical, and critical-thinking skills.
- Strong communication and collaboration abilities.
- Experience with Google Cloud Platform (GCP) technologies—especially BigQuery, with additional exposure to Dataflow, Pub/Sub, and Cloud Storage—considered highly beneficial.
- Background in financial services would be an added advantage.
Data Engineer
Posted 5 days ago