127 Data Engineer AWS jobs in Johannesburg
Data Engineer
Posted 10 days ago
Job Description
Requisition nr: 140983
Closing date: 25 July 2025
Nedbank, Personal and Private Banking
Career Stream: IT Application Development
Leadership Pipeline: Manage Self: Technical
Position: Data Engineer
Why join our team!
Join a collaborative and technically driven team as a Data Engineer, where your skills in SQL and SSIS will be central to maintaining and enhancing critical data operations. This role offers the opportunity to work on a custom-built data system and contribute to innovative projects that introduce new technologies such as Ab Initio, Python, and C#. You’ll be part of a supportive environment that values teamwork, open communication, and continuous learning. With direct client engagement and the chance to influence technical decisions, this position is ideal for professionals who are passionate about solving complex data challenges and driving meaningful improvements.
Job Purpose
To support and maintain the data warehouse in line with the data model and metadata repository, and to provide business intelligence analysis through strategic and operational support.
Job Responsibilities
- Contribute to a culture conducive to the achievement of transformation goals by participating in Nedbank culture-building initiatives (e.g. staff surveys).
- Participate in and support corporate social responsibility initiatives for the achievement of key business strategies.
- Identify and recommend opportunities to enhance processes, systems, and policies, and support implementation of new processes, policies, and systems.
- Deliver work according to customer expectations by prioritizing, planning, and implementing requirements.
- Utilize resources by adhering to standards, policies, and procedures.
- Align and continuously improve set processes by identifying innovation opportunities.
- Identify and mitigate risk by executing within governance.
- Resolve incidents by logging and tracking them through the correct channels.
- Keep abreast of legislation and other industry changes that impact the role by reading relevant newsletters and websites and attending sessions.
- Understand and embrace the Nedbank vision and demonstrate the values through interaction with the team and stakeholders.
- Improve personal capability and stay abreast of developments in the field of expertise by identifying training courses and career progression for self, using input and feedback from managers.
- Ensure personal growth and effectiveness in performing roles and responsibilities by ensuring all learning activities are completed, experience is gained, and certifications are obtained and/or maintained within specified time frames.
- Ensure information is provided correctly to stakeholders by maintaining and sharing knowledge with the team.
- Structure data to compliance standards by adhering to metadata governance procedures according to Nedbank's documented standards and formats.
- Manage final transformed data content by complying with prescribed standards for reviewing and publishing.
- Assist with and govern the population of the datamart and metadata repository by complying with standards, systems, processes, and procedures.
- Support business units by providing consulting services that deliver data and information relevant to their business.
- Contribute to internal/external information-sharing sessions by attending formal and informal meetings.
- Manage vendor relationship interactions by conforming to vendor management office guidelines and principles.
Essential Qualifications - NQF Level
- Advanced Diplomas/National 1st Degrees
Preferred Qualification
- Degree in Information Technology, Business Management, Mathematics, or Statistics
Essential Certifications
- Data Management (DAMA) Certification, Certification/formal training in relevant technology
Minimum Experience Level
- 8 years of relevant experience, of which 3-5 years is in a data management/business role
Type of Exposure
- Built and maintained stakeholder relationships
- Client and Relationship Results
- Developed and Implemented Communications Strategy
- Improved Processes and Culture
- Manage internal process
- Managed Relationships
- Managed Self
- Supported Transformation, Change and continued Improvement
Technical / Professional Knowledge
- Data Warehousing
- Programming (Python, Java, SQL)
- Data Analysis and Data Modelling
- Data Pipelines and ETL tools (Ab Initio, ADB, ADF, SAS ETL)
- Agile Delivery
Disclaimer
Preference will be given to candidates from underrepresented groups.
Please contact the Nedbank Recruiting Team at +27 860 555 566
- Seniority level: Associate
- Employment type: Full-time
- Job function: Information Technology
- Industries: Financial Services and Banking
Data Engineer
Posted 10 days ago
Job Description
InfyStrat is looking for a proficient Data Engineer to strengthen our data team. In this role, you will be integral in designing and implementing data pipelines that facilitate the efficient extraction, transformation, and loading (ETL) of data across various platforms. You will collaborate with data analysts, scientists, and business units to provide reliable and accurate datasets to drive decision-making processes. The ideal candidate should possess a combination of technical proficiency and creativity to solve complex data challenges. We foster a culture of innovation at InfyStrat, where the contributions of our team members are essential to transforming our data into valuable insights. Join us to help build data solutions that empower our business growth.
Key Responsibilities
- Design, implement, and manage ETL processes to collect and transform data from diverse sources (a brief sketch follows this list)
- Develop and maintain data models, ensuring they meet business needs and performance requirements
- Optimize database performance and troubleshoot data-related issues
- Collaborate with stakeholders to identify data needs and develop solutions accordingly
- Implement data quality monitoring and validation to maintain data integrity
- Keep up with industry trends and emerging technologies to continually enhance data engineering practices
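To make the ETL responsibility above concrete, here is a minimal, hedged sketch of a batch extract-transform-load step in Python; the file name, table, and columns are illustrative assumptions, not details from the posting.

```python
# Minimal ETL sketch: extract a CSV, clean it, and load it into a local SQLite warehouse.
# "orders.csv", "fact_orders", and the column names are hypothetical examples.
import sqlite3

import pandas as pd


def extract(path: str) -> pd.DataFrame:
    """Extract: read raw records from a source file."""
    return pd.read_csv(path)


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Transform: drop duplicates, standardise types, derive a line total."""
    cleaned = raw.drop_duplicates(subset=["order_id"]).copy()
    cleaned["order_date"] = pd.to_datetime(cleaned["order_date"], errors="coerce")
    cleaned = cleaned.dropna(subset=["order_date"])
    cleaned["line_total"] = cleaned["quantity"] * cleaned["unit_price"]
    return cleaned


def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    """Load: write the curated table, replacing the previous batch."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("fact_orders", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```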
Requirements
- Bachelor's degree in Computer Science, Data Science, or a related field
- 5-12 years of experience in data engineering or a related role
- Strong knowledge of Snowflake and Matillion
- Strong proficiency in SQL and experience with relational databases
- Experience with data integration and ETL tools (such as Talend, Apache NiFi, or similar)
- Familiarity with big data frameworks (like Hadoop, Spark) and cloud computing platforms (AWS, Azure)
- Proficient in programming languages for data processing (Python, Scala, or Java)
- Problem-solving skills with a keen attention to detail
- Ability to work independently and collaborate effectively within teams
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Data Engineer
Posted 17 days ago
Job Description
Business Segment: Personal & Private Banking
Location: ZA, GP, Johannesburg, Baker Street 30
Support in providing infrastructure, tools, and frameworks used to deliver end-to-end solutions to business problems. Build scalable infrastructure for supporting the delivery of business insights from raw data sources with a focus on collecting, managing, analysing, visualising data, and developing analytical solutions. Responsible for expanding and optimising the organisation's data and data pipeline architecture, whilst optimising data flow and collection to ultimately support data initiatives.
Qualifications
Degree: Information Technology
Experience
7-10 years of experience in building databases, warehouses, reporting solutions, and data integration solutions.
Data Engineer
Posted 17 days ago
Job Description
The Kandua Company helps small service businesses grow. We connect them to new customers and simplify business management with easy-to-use tech tools. Kandua.com is South Africa’s #1 online marketplace for home services. Every month, over 40,000 vetted home service professionals access around R50 million worth of work opportunities from individual customers, along with business customers through Kandua’s partnerships with leaders in insurance and retail.
The Kandua for Pros app provides a mobile platform for professionals to send quotes and invoices, accept payments, track customer communication, and view business performance, all securely stored in the cloud. Our mission is to use technology to bridge the gap between skills and livelihood, supporting those who serve us daily.
What does this role involve?
We are seeking a pragmatic and forward-thinking Data Engineer to define and scale our data capabilities. This role involves designing, building, and maintaining data pipelines, models, and infrastructure that support our analytics, operations, and product personalization. You will have the chance to shape our modern data stack and collaborate closely with engineering, product, operations, and growth teams to convert raw data into actionable insights and automated workflows.
Key Responsibilities
- Design, build, and manage reliable, scalable data pipelines using batch and streaming techniques (e.g., ELT/ETL, Kafka, Airflow).
- Own and evolve the structure and architecture of our Data Lakehouse and medallion architecture (see the sketch after this list).
- Develop robust processes for data ingestion, validation, transformation, and delivery across multiple systems and sources.
- Implement tools and frameworks for monitoring, testing, and ensuring data quality, lineage, and observability.
- Collaborate with analytics and product teams to develop well-documented, version-controlled data models that support reporting, dashboards, and experiments.
- Leverage geospatial and behavioral data to create features for search, matching, and personalization algorithms.
- Partner with the ML team to support deployment of machine learning workflows and real-time inference pipelines.
- Research and prototype emerging tools and technologies to enhance Kandua’s data stack and developer experience.
- Monitor and support production workloads to ensure performance, availability, and cost-efficiency.
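As a rough illustration of the medallion-style lakehouse work referenced above, the PySpark sketch below promotes raw bronze records to a cleaned silver table; the lake paths, column names, and Delta Lake setup are assumptions for the example, not Kandua's actual stack.

```python
# Bronze-to-silver step in a medallion-style lakehouse, assuming PySpark with Delta Lake
# configured; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_job_leads").getOrCreate()

# Bronze: raw lead events landed as-is from the source systems.
bronze = spark.read.format("delta").load("/lake/bronze/job_leads")

# Silver: deduplicated, typed, lightly conformed records ready for modelling.
silver = (
    bronze
    .dropDuplicates(["lead_id"])
    .withColumn("created_at", F.to_timestamp("created_at"))
    .filter(F.col("lead_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/job_leads")
```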
Requirements
- 6+ years of experience in a data engineering or software engineering role, focusing on data infrastructure and pipelines.
- Strong SQL skills.
- Proficiency with modern data stack tools (e.g., dbt, Airflow, Spark, Kafka, Delta Lake, Snowflake/BigQuery).
- Solid experience with cloud platforms and infrastructure-as-code tools.
- Strong programming skills.
- Deep understanding of relational databases, data warehousing, and data modeling best practices.
- Passion for data quality, testing, documentation, and building sustainable systems.
- Familiarity with data modeling, OLAP cubes, and multidimensional databases.
- Experience with data pipelines and ETL/ELT processes.
- Solutions-oriented mindset and strong problem-solving skills.
- Ownership and accountability for the quality and accuracy of insights.
- Experience with BigQuery and Dataform.
- Familiarity with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Domain-Driven Design (DDD).
- Experience working in a startup or fast-paced environment.
- Hands-on experience with cloud-based data warehouses (preferably Google BigQuery).
- Be part of a fast-growing startup solving real problems in South Africa.
- Work remotely with a talented team.
- Opportunity to shape the DevOps culture and practices.
- Flexible work arrangements.
- Work with cutting-edge cloud technologies and best practices.
Data Engineer
Posted 17 days ago
Job Description
Requisition nr: 140983
Talent Acquisition Specialist: Tshego Semenya
Location: 135 Rivonia Road, Sandown
Closing date: 25 July 2025
Nedbank, Personal and Private Banking
Career Stream: IT Application Development
Leadership Pipeline: Manage Self: Technical
Position: Data Engineer
Why join our team!
Join a collaborative and technically driven team as a Data Engineer, where your skills in SQL and SSIS will be central to maintaining and enhancing critical data operations. This role offers the opportunity to work on a custom-built data system and contribute to innovative projects that introduce new technologies such as Ab Initio, Python, and C#. You’ll be part of a supportive environment that values teamwork, open communication, and continuous learning. With direct client engagement and the chance to influence technical decisions, this position is ideal for professionals who are passionate about solving complex data challenges and driving meaningful improvements.
Job Purpose
To support and maintain the data warehouse in line with the data model and metadata repository, and to provide business intelligence analysis through strategic and operational support.
Job Responsibilities
- Contribute to a culture conducive to the achievement of transformation goals by participating in Nedbank culture-building initiatives (e.g. staff surveys).
- Participate in and support corporate social responsibility initiatives for the achievement of key business strategies.
- Identify and recommend opportunities to enhance processes, systems, and policies, and support implementation of new processes, policies, and systems.
- Deliver work according to customer expectations by prioritizing, planning, and implementing requirements.
- Utilize resources by adhering to standards, policies, and procedures.
- Align and continuously improve set processes by identifying innovation opportunities.
- Identify and mitigate risk by executing within governance.
- Resolve incidents by logging and tracking them through the correct channels.
- Keep abreast of legislation and other industry changes that impact the role by reading relevant newsletters and websites and attending sessions.
- Understand and embrace the Nedbank vision and demonstrate the values through interaction with the team and stakeholders.
- Improve personal capability and stay abreast of developments in the field of expertise by identifying training courses and career progression for self, using input and feedback from managers.
- Ensure personal growth and effectiveness in performing roles and responsibilities by ensuring all learning activities are completed, experience is gained, and certifications are obtained and/or maintained within specified time frames.
- Ensure information is provided correctly to stakeholders by maintaining and sharing knowledge with the team.
- Structure data to compliance standards by adhering to metadata governance procedures according to Nedbank's documented standards and formats.
- Manage final transformed data content by complying with prescribed standards for reviewing and publishing.
- Assist with and govern the population of the datamart and metadata repository by complying with standards, systems, processes, and procedures.
- Support business units by providing consulting services that deliver data and information relevant to their business.
- Contribute to internal/external information-sharing sessions by attending formal and informal meetings.
- Manage vendor relationship interactions by conforming to vendor management office guidelines and principles.
Essential Qualifications - NQF Level
- Advanced Diplomas/National 1st Degrees
Preferred Qualification
- Degree in Information Technology, Business Management, Mathematics, or Statistics
Essential Certifications
- Data Management (DAMA) Certification, certification/formal training in relevant technology
Minimum Experience Level
- 8 years of relevant experience, of which 3-5 years is in a data management/business role
Type of Exposure
- Built and maintained stakeholder relationships
- Client and Relationship Results
- Developed and Implemented Communications Strategy
- Improved Processes and Culture
- Manage internal process
- Managed Relationships
- Managed Self
- Supported Transformation, Change and continued Improvement
Technical / Professional Knowledge
- Cloud Data Engineering (Azure, AWS, Google)
- Data Warehousing
- Databases (PostgreSQL, MS SQL, IBM DB2, HBase, MongoDB)
- Programming (Python, Java, SQL)
- Data Analysis and Data Modelling
- Data Pipelines and ETL tools (Ab Initio, ADB, ADF, SAS ETL)
- Agile Delivery
- Problem solving skills
Disclaimer
Preference will be given to candidates from underrepresented groups.
Please contact the Nedbank Recruiting Team at +27 860 555 566
Data Engineer
Posted 17 days ago
Job Description
Data Engineer role at Puma Energy
Main Purpose:
Collaborate with data scientists and business stakeholders to design, develop, and maintain efficient data pipelines feeding into the organization's data lake.
Ensure the data lake contains accurate, up-to-date, and high-quality data, enabling data scientists to develop insightful analytics and business stakeholders to make well-informed decisions.
Utilize expertise in data engineering and cloud technologies to contribute to the overall success of the organization by providing the necessary data infrastructure and fostering a data-driven culture.
Demonstrate a strong architectural sense in defining data models, leveraging the PolyBase concept to optimize data storage and access.
Facilitate seamless data integration and management across the organization, ensuring a robust and scalable data architecture.
Take responsibility for defining and designing the data catalogue, effectively modelling all data within the organization, to enable efficient data discovery, access, and management for various stakeholders.
Knowledge, Skills and Abilities
Key Responsibilities:
- Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.
- Develop complex queries and solutions using Scala, .NET, and Python/PySpark (see the sketch after this list).
- Implement and maintain data solutions on Azure Data Factory, Azure Data Lake, and Databricks.
- Create data products for analytics and data scientist team members to improve their productivity.
- Advise, consult, mentor, and coach other data and analytic professionals on data standards and practices.
- Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.
- Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering in order to improve our productivity as a team.
- Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
- Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
- Collaborate with other team members and effectively influence, direct, and monitor project work.
- Develop strong understanding of the business and support decision making.
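As an illustration of the PySpark query work referenced above, here is a hedged sketch of a windowed aggregation of the kind a Databricks notebook might run; the table and column names are hypothetical, not Puma Energy's.

```python
# Windowed aggregation in PySpark: rank sites by daily volume within each region and
# add a 7-day moving average per site. Table and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Reuses the active session in a Databricks notebook, or starts one locally.
spark = SparkSession.builder.appName("sales_window_report").getOrCreate()

sales = spark.read.table("analytics.fuel_sales")  # hypothetical curated table

w_rank = Window.partitionBy("region", "sale_date").orderBy(F.desc("volume_litres"))
w_trend = Window.partitionBy("site_id").orderBy("sale_date").rowsBetween(-6, 0)

report = (
    sales
    .withColumn("rank_in_region", F.dense_rank().over(w_rank))
    .withColumn("volume_7d_avg", F.avg("volume_litres").over(w_trend))
    .filter(F.col("rank_in_region") <= 10)
)

# Persist the result as a managed table for BI tools to query.
report.write.mode("overwrite").saveAsTable("analytics.top_sites_daily")
```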
Experience:
- 10 years of overall experience & at least 5 years of relevant experience
- 5 years of experience working with Azure data factory & Databricks in a retail environment
- Bachelor's degree required in Computer Science or Engineering.
- 5+ years of experience working in data engineering or architecture role.
- Expertise in SQL and data analysis and experience with at least one programming language (Scala and .NET preferred).
- Experience developing and maintaining data warehouses in big data solutions.
- Experience with Azure Data Lake, Azure Data Factory, and Databricks in the data and analytics space is a must.
- Database development experience using Hadoop or Big Query and experience with a variety of relational, NoSQL, and cloud data lake technologies.
- Worked with BI tools such as Tableau, Power BI, Looker, Shiny.
- Conceptual knowledge of data and analytics, such as dimensional modelling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data.
- Big Data Development experience using Hive, Impala, Spark, and familiarity with Kafka.
- Exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
- Fluency in verbal and written English mandatory.
- Fluency in Spanish & French is useful.
Key Relationships:
- Internal: CEO & COO of Africa, Managers across various departments, Senior Management, Heads of Departments in other regional hubs of Puma Energy
- External: External Consultants
- Seniority level: Not Applicable
- Employment type: Full-time
- Job function: Information Technology
- Industries: Oil and Gas, Financial Services, and Banking
Data Engineer
Posted 17 days ago
Job Description
We are seeking a skilled Data Engineer to design, develop, optimize, and manage robust, highly available data analytics infrastructure, reports, and data models. This role drives the delivery of high-quality data analytics solutions for data ingestion, storage, consumption, and management to generate actionable insights. Reporting to the Technical Team Lead and Software Development Manager, you will play a critical role in advancing our data capabilities.
Key Responsibilities
- Data Pipeline Development: Build and maintain scalable data pipelines for efficient data ingestion, processing, and storage.
- ETL Processes: Develop and automate ETL workflows to integrate data from diverse sources seamlessly.
- Database Management: Manage SQL and NoSQL databases to handle structured and unstructured data effectively.
- Data Quality Assurance: Ensure data integrity, consistency, and security through proactive monitoring (see the sketch after this list).
- Collaboration: Partner with data scientists, analysts, and software engineers to enable data-driven decision-making.
- Documentation: Create clear, detailed documentation for data workflows, architectures, and processes.
- Performance Optimization: Enhance database performance through tuning, query optimization, and indexing.
- Cloud & Big Data Technologies: Support and expand expertise in cloud-based platforms like AWS, Azure, or Google Cloud.
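To illustrate the data quality assurance responsibility referenced above, here is a minimal pandas sketch of automated validation checks; the dataset, columns, and rules are assumptions for the example only.

```python
# Automated data quality checks on a batch extract; file, columns, and thresholds
# are hypothetical examples of the kind of rules a pipeline might enforce.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a dict mapping named checks to pass/fail booleans."""
    return {
        "no_duplicate_keys": df["event_id"].is_unique,
        "no_null_timestamps": df["event_time"].notna().all(),
        "amounts_non_negative": (df["amount"] >= 0).all(),
        "recent_data_present": df["event_time"].max() >= pd.Timestamp.now() - pd.Timedelta(days=1),
    }


if __name__ == "__main__":
    events = pd.read_parquet("events.parquet")  # hypothetical extract
    results = run_quality_checks(events)
    failed = [name for name, ok in results.items() if not ok]
    if failed:
        raise SystemExit(f"Data quality checks failed: {failed}")
    print("All data quality checks passed.")
```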
Key Performance Indicators
- Pipeline Efficiency: Percentage of data pipelines executed on schedule.
- Data Accuracy: Reduction in data errors and inconsistencies.
- ETL Performance: Improved speed and efficiency of ETL processes.
- Query Performance: Reduced database query response times.
- Collaboration Success: Number of effective collaborations with data scientists and analysts.
- Code & Documentation Quality: Adherence to coding standards and comprehensive documentation.
Requirements
- Bachelor's or Honours Degree in Engineering, Data Science, Computer Science, or Information Systems.
- 3-5 years in data engineering, analytics, or data management roles.
- 3+ years working with sales, channel, or business development teams.
- 3+ years delivering customer-facing projects.
- 3+ years in the cybersecurity industry.
- 3+ years using agile methodologies for project delivery.
- Certification in AWS, Microsoft, or other Business Intelligence technologies.
- Expertise in Databricks and Python Notebooks.
- Advanced skills in Power BI, DAX, Advanced Excel, Python, SQL Server, and SQL Scripting.
- Proficiency in creating technical architectures, entity relationship diagrams, and process flow diagrams.
- Strong command of Microsoft Office.
- Ability to manage multiple programs, balancing strategic planning with fast-paced execution.
- Strong communication, negotiation, and consensus-building skills with stakeholders and teams.
- Exceptional presentation skills, comfortable presenting to executive leadership.
- Superior analytical, organizational, and time-management abilities.
- Thrives in high-pressure environments and adapts quickly to new skills.
- Trusted to handle confidential information with discretion.
- Self-motivated, team-oriented, with a structured approach and proactive ownership of tasks.
Data Engineer
Posted today
Job Description
- Build real-time ETL pipelines integrating Asana, SharePoint, AcceleratorApp, Fluxx, and Jibble.
- Design and implement scalable data lake and warehouse architectures supporting over 300,000 beneficiaries.
- Develop robust API connections with error handling and retry mechanisms (see the sketch after this list).
- Optimize data synchronization to occur within 5 minutes, processing 100,000+ records per hour.
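As a hedged sketch of the API error handling and retry behaviour mentioned above, the snippet below builds a requests session with exponential backoff; the endpoint and credentials are placeholders, not the actual Asana, SharePoint, or Fluxx integrations.

```python
# API client with retries and exponential backoff using requests + urllib3 Retry.
# The URL and bearer token are placeholders for illustration only.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session() -> requests.Session:
    """Session that retries transient failures with exponential backoff."""
    retry = Retry(
        total=5,                       # up to 5 attempts
        backoff_factor=1,              # 1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "POST"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session


if __name__ == "__main__":
    session = build_session()
    resp = session.get(
        "https://api.example.com/v1/records",          # placeholder endpoint
        headers={"Authorization": "Bearer <token>"},   # placeholder credential
        timeout=30,
    )
    resp.raise_for_status()
    print(len(resp.json().get("data", [])), "records fetched")
```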
Requirements
- 5+ years of experience with Python, Java, or Scala for data engineering
- Proficiency with Apache Spark, Kafka, or similar streaming technologies
- Experience with cloud platforms (Azure preferred, AWS acceptable)
- Expertise in API integration and RESTful services
- Skilled in SQL and NoSQL databases
- Knowledge of data lake and warehouse architectures (e.g., Delta Lake, Databricks)
- ETL/ELT pipeline development and real-time data processing
- Microsoft Azure Data Factory
- Experience with CRM data models and impact measurement platforms
- SharePoint API / Microsoft Graph API experience
- Predictive analytics pipeline setup
- 5 to 8 years in data engineering roles
- Experience in non-profit or social impact sectors is advantageous
- Proven success with multi-system integrations and large data volumes
Deliverables
- Fully functional, integrated data pipeline
- Data lake architecture documentation
- Automated data quality monitoring
- Achievement of performance benchmarks (sync within 5 minutes)
Apply now!