1,933 Data Engineer jobs in South Africa

Data Engineers (Denodo)

Johannesburg, Gauteng InfyStrat

Posted 5 days ago

Job Description

InfyStrat is on the lookout for skilled and driven Data Engineers with expertise in Denodo to join our innovative data team. As a Data Engineer, you will be responsible for designing, building, and maintaining data integration solutions that leverage Denodo’s data virtualization platform. Your role will be pivotal in transforming complex data into actionable insights, thereby empowering our stakeholders to make data-informed decisions. We are seeking candidates who are not only technically proficient but also enthusiastic about working with diverse datasets and developing efficient data pipelines. At InfyStrat, we value creativity, collaboration, and continuous learning. You will be part of a vibrant team that thrives on tackling challenges and driving the future of our data capabilities. If you are passionate about data engineering and are well-versed in Denodo, we invite you to apply and help us shape the data landscape of InfyStrat.

Responsibilities
  • Design and implement data integration solutions using Denodo to ensure seamless access to diverse data sources.
  • Develop and maintain data models and metadata repositories.
  • Optimize data virtualization processes for improved performance and scalability.
  • Collaborate with data analysts, business stakeholders, and IT teams to gather requirements and deliver solutions.
  • Monitor and troubleshoot data pipeline issues to ensure data quality and integrity.
  • Stay updated with the latest trends and technologies in data engineering and virtualization.
Requirements
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in data engineering or a similar role, with a strong focus on Denodo.
  • Proficiency in SQL and experience with data modeling techniques.
  • Familiarity with ETL processes and data warehousing concepts.
  • Experience working with cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
  • Strong problem-solving skills and the ability to work independently.
  • Excellent communication skills and the ability to work collaboratively in a team environment.
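For orientation, Denodo typically exposes its virtual views through standard ODBC/JDBC interfaces, so much of the day-to-day work resembles ordinary SQL against a virtual database. The following minimal Python sketch is purely illustrative; the DSN, view, and column names are hypothetical and not taken from this role:

    # Illustrative only: query a Denodo virtual view over ODBC.
    # The DSN ("denodo_vdp"), view, and columns are hypothetical examples.
    import pyodbc

    def fetch_completed_orders():
        # Denodo publishes virtual views that can be queried like regular SQL tables.
        conn = pyodbc.connect("DSN=denodo_vdp", autocommit=True)
        try:
            cursor = conn.cursor()
            cursor.execute(
                "SELECT customer_id, order_id, order_total "
                "FROM dv_customer_orders WHERE order_status = ?",
                "COMPLETED",
            )
            return cursor.fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        for row in fetch_completed_orders():
            print(row)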

Big Data Developer

R900000 - R1200000 per year, Remote Recruitment

Posted today

Job Description

Job Overview

We are looking for an experienced Big Data Developer to join an international banking technology team in Málaga, Spain. In this role, you will contribute to the development of business applications within the Regulatory & Compliance domain, covering the full software lifecycle from problem analysis to deployment.

You'll work with modern big data technologies, collaborate with users to understand business needs, and provide innovative solutions that meet regulatory and compliance requirements. This is a fantastic opportunity to advance your career in a global environment while enjoying the lifestyle benefits of living in Spain.

Key Responsibilities
  • Participate in the end-to-end software lifecycle, including analysis, design, development, testing, and deployment.
  • Collaborate with business users to identify requirements and deliver strategic technology solutions.
  • Optimise and analyse code, applying best practices such as threat modelling and SAST.
  • Manage tools and processes for documentation and Application Lifecycle Management (ALM).
  • Plan and deliver projects using Agile methodology.
  • Support incident resolution, including planned interventions.
  • Execute unit, integration, and regression testing.
  • Manage release processes and deployment tools.
Requirements
Qualifications and Experience

Required:

  • 3+ years of experience as a Big Data Developer.
  • Bachelor's degree in Computer Science, Telecommunications, Mathematics, or a related field.
  • Proficiency with GitHub.
  • Strong knowledge of databases (Oracle PL/SQL, PostgreSQL).
  • Experience with Java and JavaScript.
  • Hands-on ETL experience.
  • Fluency in English (Spanish is advantageous).
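For orientation only, the database and ETL items above often translate into small extract-transform-load steps like the sketch below; the connection string, table, and columns are hypothetical and not drawn from this role:

    # Illustrative extract step against PostgreSQL (all names hypothetical).
    import psycopg2

    def extract_trades(dsn="dbname=regreporting user=etl host=localhost"):
        # Pull a small batch of rows to be transformed and loaded downstream.
        with psycopg2.connect(dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT trade_id, trade_date, notional FROM trades LIMIT 100;")
                return cur.fetchall()

    if __name__ == "__main__":
        for row in extract_trades():
            print(row)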

Preferred:

  • Familiarity with microservices frameworks (e.g., Spring Boot) and OpenShift.
  • Knowledge of Flink, Drools, Kafka, DevOps tools.
  • Agile methodology experience with tools such as Jira and Confluence.
  • Exposure to S3, Elastic, and Angular.
  • Experience in Transactional Regulatory Reporting.
  • Innovative mindset and ability to generate strategic ideas.

Other Requirements:

  • Availability to travel.
  • Willingness to relocate to Málaga, Spain.

Big Data Data Engineer

Johannesburg, Gauteng PBT Group

Posted 24 days ago

Job Description

Big Data Data Engineer job vacancy in Johannesburg.

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform.

The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL.
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
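As a rough illustration of the Raw-to-Published step described in the responsibilities above, a minimal PySpark job might look like the sketch below; the HDFS paths and column names are hypothetical examples, not details of this environment:

    # Illustrative PySpark job: raw JSON -> cleaned, columnar Parquet (Published Layer).
    # All paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("raw_to_published").getOrCreate()

    # Ingest raw, semi-structured JSON from the Raw Layer on HDFS.
    raw = spark.read.json("hdfs:///data/raw/events/2025/09/")

    published = (
        raw
        .filter(F.col("event_id").isNotNull())           # basic validation
        .dropDuplicates(["event_id"])                     # deduplication
        .withColumn("event_date", F.to_date("event_ts"))  # light enrichment
    )

    # Write structured output for the Published Layer, partitioned for efficient queries.
    (published.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("hdfs:///data/published/events/"))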
Technical Skills & Experience
  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.
Nice to Have
  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).
Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.

Big Data Data Engineer

Johannesburg, Gauteng PBT Group

Posted 25 days ago

Job Description

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform. The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL.
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
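The orchestration side of these responsibilities is typically handled by a scheduler such as Airflow, one of the tools named in the skills list below. The following DAG is a purely illustrative sketch; the DAG id, schedule, and commands are hypothetical:

    # Illustrative Airflow DAG wiring ingest -> transform -> validate (names hypothetical).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="raw_to_published_daily",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_raw_json",
            bash_command="hdfs dfs -ls /data/raw/events/",  # placeholder ingest check
        )
        transform = BashOperator(
            task_id="transform_to_parquet",
            bash_command="spark-submit raw_to_published.py",
        )
        validate = BashOperator(
            task_id="validate_published",
            bash_command="spark-submit validate_published.py",
        )

        ingest >> transform >> validate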

Technical Skills & Experience:

  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.

Nice to Have:

  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Qualifications:

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.

* In order to comply with the POPI Act, for future career opportunities, we require your permission to maintain your personal details on our database. By completing and returning this form you give PBT your consent.

* If you have not received any feedback after 2 weeks, please consider your application as unsuccessful.

Big Data Data Engineer

R600000 - R1200000 per year, PBT Group

Posted today

Job Description

Employment Type: Contract
Experience: 4 to 25 years
Salary: Negotiable
Job Published: 03 September 2025
Job Reference No.:

Job Description

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform. The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL.
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
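To make the schema-enforcement and Functional Layer points above concrete, one common pattern is to apply an explicit schema on read and publish an aggregated, partitioned table via Spark SQL. The sketch below is illustrative only; the database, table, and column names are hypothetical:

    # Illustrative Published-to-Functional step with schema enforcement (names hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = (SparkSession.builder
             .appName("published_to_functional")
             .enableHiveSupport()
             .getOrCreate())

    # Enforce an explicit schema instead of relying on inference.
    schema = StructType([
        StructField("txn_id", StringType(), nullable=False),
        StructField("account_id", StringType(), nullable=False),
        StructField("amount", DoubleType(), nullable=True),
        StructField("txn_date", StringType(), nullable=True),
    ])

    published = spark.read.schema(schema).parquet("hdfs:///data/published/transactions/")
    published.createOrReplaceTempView("published_txns")

    # Aggregate into a reporting-friendly Functional Layer table, partitioned by date.
    functional = spark.sql("""
        SELECT txn_date, account_id, SUM(amount) AS total_amount, COUNT(*) AS txn_count
        FROM published_txns
        GROUP BY txn_date, account_id
    """)

    (functional.write                      # assumes a 'functional' Hive database exists
        .mode("overwrite")
        .partitionBy("txn_date")
        .format("parquet")
        .saveAsTable("functional.daily_account_totals"))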

Technical Skills & Experience:

  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.

Nice to Have:

  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.

  • In order to comply with the POPI Act, for future career opportunities, we require your permission to maintain your personal details on our database. By completing and returning this form you give PBT your consent.

  • If you have not received any feedback after 2 weeks, please consider your application as unsuccessful.

Skills

Big Data, Apache Hadoop, Apache Hive, PySpark, SQL, JSON, Data Engineering

Industries

Banking, Financial Services

Cloudera Big Data Administrator/Engineer

Johannesburg, Gauteng IOCO

Posted 4 days ago

Job Description

iOCO is seeking a skilled Big Data Administrator/Engineer with strong hands-on experience in Cloudera’s ecosystem (Hive, Impala, HDFS, Ozone, Hue, NiFi) and proven expertise in Informatica BDM/DEI. The role involves administering and configuring big data platforms, deploying/supporting clusters, and building optimized pipelines to move and transform large-scale datasets. Experience with alternate platforms such as Hortonworks, MapR, AWS EMR, Azure HDInsight, or Google Dataproc will be advantageous.

What you'll do:

  • Platform Administration: Install, configure, upgrade, and monitor Cloudera/CDP clusters, manage HDFS/Ozone storage, and ensure security (Kerberos, Ranger, Sentry).
  • Data Pipelines: Build and optimize ingestion and processing pipelines using NiFi and Informatica BDM/DEI, supporting both real-time and batch flows.
  • ETL Integration: Develop Informatica mappings and workflows, leveraging pushdown execution to Hive/Impala/Spark; integrate diverse on-prem and cloud data sources.
  • Performance & Governance: Optimize queries, orchestrate jobs (Airflow, Oozie, Control-M), and ensure compliance with governance/security standards.
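As a rough illustration of the routine administration side of the role (the exact tooling and paths in this environment may differ), basic cluster capacity checks can be scripted against the Hadoop CLI as sketched below; the /data path is hypothetical:

    # Illustrative HDFS capacity/health check via the Hadoop CLI (paths hypothetical).
    import subprocess

    def run(cmd):
        # Run a Hadoop CLI command and return its standard output.
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        # Cluster-wide capacity and DataNode status (requires HDFS admin rights).
        print(run(["hdfs", "dfsadmin", "-report"]))
        # Per-directory usage for a hypothetical data lake root.
        print(run(["hdfs", "dfs", "-du", "-h", "/data"]))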

Your Expertise:

  • Strong hands-on expertise in Cloudera tools: Hive, Impala, HDFS, Ozone, Hue, NiFi.
  • Proficiency with Informatica BDM/DEI (ETL/ELT, pushdown optimization, data quality).
  • Solid SQL, Linux administration, and scripting (Bash, Python).
  • Familiarity with cloud data platforms (AWS, Azure, GCP) and orchestration tools.
  • 4+ years in big data administration/engineering, including 2+ years in Informatica BDM/DEI.

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • Experience in hybrid or cloud-based big data environments.

Soft Skills:

  • Strong troubleshooting and problem-solving mindset.
  • Ability to work independently and within cross-functional teams.
  • Clear communication and documentation skills.

Other information applicable to the opportunity:

  • Contract position
  • Location: Johannesburg

Why work for us?

Want to work for an organization that solves complex real-world problems with innovative software solutions? At iOCO, we believe anything is possible with modern technology, software, and development expertise. We are continuously pushing the boundaries of innovative solutions across multiple industries using an array of technologies.

You will be part of a consultancy, working with some of the most knowledgeable minds in the industry on interesting solutions across different business domains.

Our culture of continuous learning will ensure that you will have all the opportunities, tools, and support to hone and grow your craft.

By joining iOCO you will have an open invitation to inspiring developer forums: a place where you will be able to connect and learn from and with your peers by sharing ideas, experiences, practices, and solutions.

iOCO is an equal opportunity employer with an obligation to achieve its own unique EE objectives in the context of Employment Equity targets. Therefore, our employment strategy gives primary preference to previously disadvantaged individuals or groups.

Big Data Specialist (Contract Role)

Sandton, Gauteng The Focus Group

Posted 11 days ago

Job Description

Job Purpose

We require a Big Data Specialist to assist with harmonising data from diverse sources; to analyse, problem-solve, reconcile, develop solutions, and build reporting in various toolsets; and to enable business understanding and decision-making.

Job Responsibilities
  • Data Integration, harmonisation and reporting:
    • Collaborate with cross-functional teams to understand data requirements.
    • Design and implement efficient data pipelines using Ab Initio and Denodo.
    • Extract, transform, analyze and or load data from various sources.
    • Leverage SAP connectors to seamlessly integrate SAP data.
  • Data Consolidation and Harmonization:
    • Pull data from multiple sources (including SAP, legacy systems, APIs, and external databases).
    • Develop strategies to ensure data consistency, accuracy, and reliability.
    • Create unified views of data for reporting, reconciliation, and analytics purposes.
  • Performance Optimization:
    • Identify bottlenecks and optimize data processing workflows.
    • Monitor and fine-tune production jobs to ensure optimal performance and reconciliation of various data sources.
  • Data Modeling and Architecture
    • Design and maintain data models that facilitate efficient querying and reporting.
    • Optimize data structures for scalability and responsiveness.
    • Enhance data flows and provide specifications for IT architecture builds.
  • Governance
    • Develop governance frameworks for data flows.
    • Design appropriate controls to monitor master data and financial reconciliation.
  • Collaboration and Documentation
    • Work closely with data engineers, data scientists, and business.
    • Document data integration processes, best practices, and troubleshooting guidelines for Business.
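As a simple illustration of the reconciliation theme running through these responsibilities, a row-level comparison between two extracts could be sketched as below; the column names and sample data are hypothetical and not taken from this role:

    # Illustrative reconciliation between two source extracts (columns hypothetical).
    import pandas as pd

    def reconcile(sap_df, legacy_df, key="document_id"):
        # Outer-join the two extracts on the business key and keep rows that
        # are missing from one side or whose amounts disagree.
        merged = sap_df.merge(legacy_df, on=key, how="outer",
                              suffixes=("_sap", "_legacy"), indicator=True)
        merged["amount_match"] = merged["amount_sap"].eq(merged["amount_legacy"])
        return merged[(merged["_merge"] != "both") | (~merged["amount_match"])]

    if __name__ == "__main__":
        sap = pd.DataFrame({"document_id": [1, 2, 3], "amount": [100.0, 250.0, 80.0]})
        legacy = pd.DataFrame({"document_id": [2, 3, 4], "amount": [250.0, 75.0, 60.0]})
        print(reconcile(sap, legacy))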
Qualifications
  • Bachelor's degree in Computer Science, Information Systems, or a related field.
  • Minimum of 3 years of experience in big data technologies, including Ab Initio and Denodo.
  • Proficiency in SAP connectors and hands-on experience integrating SAP data.
  • Strong understanding of data modeling, ETL processes, and data warehousing concepts.
  • Familiarity with cloud-based big data platforms (e.g., Azure, GCP) is advantageous.
  • Excellent problem-solving skills and the ability to work independently.
  • Deep understanding of SAP on-premise, S/4HANA, BW/4HANA, and Denodo.
Technical/ Professional Knowledge
  • Governance, Risk and Controls
  • Organisational behaviour theory
  • Principles of project management
  • Relevant regulatory knowledge
  • Stakeholder management
  • Strategic planning
  • Talent management
  • Business writing skills
  • Management information and reporting principles, tools and mechanisms
  • Client Service Management
Behavioural Competencies
  • Building Partnerships
  • Customer Focus
  • Decision Making
  • Facilitating Change
  • Inspiring others
  • Business Acumen
  • Building Organizational Talent
  • Compelling Communication
