664 Big Data jobs in South Africa

Big Data Developer

R500,000 - R1,200,000 | Dariel

Posted today


Job Description

Data Engineer

The Data Engineer's role entails building and supporting data pipelines that are scalable, repeatable, and secure. As a core member of an agile team, the Data Engineer is responsible for the infrastructure that turns raw data into insights, handling and integrating diverse data sources seamlessly. They enable solutions by processing large volumes of data in batch and real time, leveraging emerging technologies from both the big data and cloud spaces.

Additional responsibilities include developing proofs of concept and implementing complex big data solutions, with a focus on collecting, parsing, managing, analysing, and visualising large datasets. They know how to apply technologies to solve the problems of working with large volumes of data in diverse formats and to deliver innovative solutions.

Data engineering is a technical job that requires substantial expertise across a broad range of software development and programming fields. These professionals combine knowledge of data analysis with end-user and business requirements analysis to develop a clear understanding of the business need and to incorporate it into a technical solution. They also have a solid understanding of physical database design and the systems development life cycle.

Responsibilities

  • Architects the Data Analytics Framework
  • Translates complex functional and technical requirements into detailed architecture, design, and high-performing software
  • Leads Data and batch/real-time analytical solutions leveraging transformational technologies
  • Works on multiple projects as a technical lead, driving user story analysis and elaboration, design and development of software applications, testing, and building automation tools
  • Development and Operations
  • Database Development and Operations
  • Policies, Standards, and Procedures
  • Business Continuity & Disaster Recovery
  • Research and Evaluation
  • Creating data feeds from on-premises systems to the AWS Cloud
  • Support data feeds in production on a break-fix basis
  • Creating data marts using Talend or a similar ETL development tool
  • Manipulating data using Python
  • Processing data using the Hadoop paradigm, particularly on EMR, AWS's managed Hadoop distribution (see the sketch after this list)
  • Develop for Big Data and Business Intelligence, including automated testing and deployment
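
A rough, hypothetical illustration of the Python and EMR items above: a minimal PySpark batch job that reads raw JSON from S3, applies light cleanup, and writes partitioned Parquet back to S3. The bucket names and columns are invented for this sketch, and on EMR such a script would typically be submitted with spark-submit or as an EMR step.

```python
# Minimal PySpark batch job of the kind typically run on AWS EMR.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-events-to-parquet").getOrCreate()

# Read raw JSON events from S3 (EMR ships with the s3:// connector).
raw = spark.read.json("s3://example-raw-bucket/events/2025/09/")

# Light cleanup: drop rows missing a key, normalise the timestamp, deduplicate.
clean = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write partitioned Parquet back to S3 for downstream data marts.
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/events/"))

spark.stop()
```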

Requisite Experience, Education, Knowledge, and/or Skills

  • Bachelor's Degree in Computer Science, Computer Engineering, or equivalent
  • AWS Certification
  • Extensive knowledge in different programming or scripting languages
  • Expert knowledge of data modelling and understanding of different data structures and their benefits and limitations under particular use cases
  • Capability to architect highly scalable distributed systems, using different open source tools
  • 5+ years of Data engineering or software engineering experience
  • 2+ years of Big Data experience
  • 2+ years' experience with Extract, Transform, and Load (ETL) processes
  • 2+ years of AWS experience
  • 5 years of demonstrated experience with object-oriented design, coding, and testing patterns, as well as experience in engineering (commercial or open source) software platforms and large-scale data infrastructures
  • Big Data batch and streaming tools
  • Talend
  • AWS: EMR, EC2, S3
  • Python
  • PySpark or Spark

Big Data Developer

R900,000 - R1,200,000 | Remote Recruitment

Posted today


Job Description

Job Overview

We are looking for an experienced Big Data Developer to join an international banking technology team in Málaga, Spain. In this role, you will contribute to the development of business applications within the Regulatory & Compliance domain, covering the full software lifecycle from problem analysis to deployment.

You'll work with modern big data technologies, collaborate with users to understand business needs, and provide innovative solutions that meet regulatory and compliance requirements. This is a fantastic opportunity to advance your career in a global environment while enjoying the lifestyle benefits of living in Spain.

Key Responsibilities
  • Participate in the end-to-end software lifecycle, including analysis, design, development, testing, and deployment.
  • Collaborate with business users to identify requirements and deliver strategic technology solutions.
  • Optimise and analyse code, applying best practices such as threat modelling and SAST.
  • Manage tools and processes for documentation and Application Lifecycle Management (ALM).
  • Plan and deliver projects using Agile methodology.
  • Support incident resolution, including planned interventions.
  • Execute unit, integration, and regression testing.
  • Manage release processes and deployment tools.
Requirements
Qualifications and Experience

Required:

  • 3+ years of experience as a Big Data Developer.
  • Bachelor's degree in Computer Science, Telecommunications, Mathematics, or a related field.
  • Proficiency with GitHub.
  • Strong knowledge of databases (Oracle PL/SQL, PostgreSQL).
  • Experience with Java and JavaScript.
  • Hands-on ETL experience.
  • Fluency in English (Spanish is advantageous).

Preferred:

  • Familiarity with microservices frameworks (Spring Boot), OpenShift.
  • Knowledge of Flink, Drools, Kafka, DevOps tools.
  • Agile methodology experience with tools such as Jira and Confluence.
  • Exposure to S3, Elastic, and Angular.
  • Experience in Transactional Regulatory Reporting.
  • Innovative mindset and ability to generate strategic ideas.

Other Requirements:

  • Availability to travel.
  • Willingness to relocate to Málaga, Spain.

Big Data Data Engineer

Johannesburg, Gauteng | PBT Group

Posted 1 day ago


Job Description

Big Data Data Engineer job vacancy in Johannesburg.

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform.

The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.


Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL (a sketch follows this list).
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms.
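
To make the Raw-to-Published flow above concrete, here is a minimal, hypothetical PySpark sketch of that transformation step: enforcing a schema on raw JSON read from HDFS, validating and deduplicating it, and writing partitioned Parquet to a published path. The paths and fields are illustrative assumptions, not the actual platform layout.

```python
# Hypothetical Raw Layer -> Published Layer step on a Hadoop platform.
# HDFS paths and the schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("raw-to-published").getOrCreate()

# Schema enforcement: read the raw JSON against an agreed contract.
schema = StructType([
    StructField("record_id", StringType(), nullable=False),
    StructField("source_system", StringType(), nullable=True),
    StructField("payload", StringType(), nullable=True),
    StructField("ingested_at", TimestampType(), nullable=True),
])

raw = spark.read.schema(schema).json("hdfs:///data/raw/source_x/")

published = (
    raw.filter(F.col("record_id").isNotNull())   # basic validation
       .dropDuplicates(["record_id"])            # deduplication
       .withColumn("ingest_date", F.to_date("ingested_at"))
)

# Structured, partitioned Parquet for the Published Layer.
(published.write.mode("overwrite")
          .partitionBy("ingest_date")
          .parquet("hdfs:///data/published/source_x/"))

spark.stop()
```
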
Technical Skills & Experience
  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.
Nice to Have
  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).
Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.


Big Data Data Engineer

R600,000 - R1,200,000 | PBT Group

Posted today


Job Description

Employment Type

Contract

Experience

4 to 25 years

Salary

Negotiable

Job Published

03 September 2025

Job Reference No.

Job Description

We are seeking a skilled Data Engineer to design and develop scalable data pipelines that ingest raw, unstructured JSON data from source systems and transform it into clean, structured datasets within our Hadoop-based data platform. The ideal candidate will play a critical role in enabling data availability, quality, and usability by engineering the movement of data from the Raw Layer to the Published and Functional Layers.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines to ingest raw JSON data from source systems into the Hadoop Distributed File System (HDFS).
  • Transform and enrich unstructured data into structured formats (e.g., Parquet, ORC) for the Published Layer using tools like PySpark, Hive, or Spark SQL.
  • Develop workflows to further process and organize data into Functional Layers optimized for business reporting and analytics.
  • Implement data validation, cleansing, schema enforcement, and deduplication as part of the transformation process.
  • Collaborate with Data Analysts, BI Developers, and Business Users to understand data requirements and ensure datasets are production-ready.
  • Optimize ETL/ELT processes for performance and reliability in a large-scale distributed environment.
  • Maintain metadata, lineage, and documentation for transparency and governance.
  • Monitor pipeline performance and implement error handling and alerting mechanisms (see the orchestration sketch after this list).
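
The skills list for this role names Airflow, Oozie, and NiFi; as a hedged sketch of the orchestration and alerting work above, here is a minimal hypothetical Airflow DAG that schedules the ingest and publish steps daily, with retries and email alerts on failure. The task commands, schedule, and alert address are placeholder assumptions.

```python
# Hypothetical Airflow DAG: daily ingest of raw JSON, then publish to Parquet,
# with retries and email alerting as a basic error-handling mechanism.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
    "email": ["data-team@example.com"],  # placeholder address
    "email_on_failure": True,
}

with DAG(
    dag_id="raw_to_published_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_json",
        bash_command="hdfs dfs -put -f /landing/source_x/*.json /data/raw/source_x/",
    )
    publish = BashOperator(
        task_id="publish_parquet",
        bash_command="spark-submit raw_to_published.py",
    )
    ingest >> publish
```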

Technical Skills & Experience:

  • 3+ years of experience in data engineering or ETL development within a big data environment.
  • Strong experience with Hadoop ecosystem tools: HDFS, Hive, Spark, YARN, and Sqoop.
  • Proficiency in PySpark, Spark SQL, and HQL (Hive Query Language).
  • Experience working with unstructured JSON data and transforming it into structured formats.
  • Solid understanding of data lake architectures: Raw, Published, and Functional layers.
  • Familiarity with workflow orchestration tools like Airflow, Oozie, or NiFi.
  • Experience with schema design, data modeling, and partitioning strategies.
  • Comfortable with version control tools (e.g., Git) and CI/CD processes.

Nice to Have:

  • Experience with data cataloging and governance tools (e.g., Apache Atlas, Alation).
  • Exposure to cloud-based Hadoop platforms like AWS EMR, Azure HDInsight, or GCP Dataproc.
  • Experience with containerization (e.g., Docker) and/or Kubernetes for pipeline deployment.
  • Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • Relevant certifications (e.g., Cloudera, Databricks, AWS Big Data) are a plus.

  • In order to comply with the POPI Act, for future career opportunities, we require your permission to maintain your personal details on our database. By completing and returning this form, you give PBT your consent.

  • If you have not received any feedback after 2 weeks, please consider your application as unsuccessful.

Skills

Big Data, Apache Hadoop, Apache Hive, PySpark, SQL, JSON, Data Engineering

Industries

Banking, Financial Services


Cloudera Big Data Administrator/Engineer

Johannesburg, Gauteng | iOCO

Posted 10 days ago


Job Description

iOCO is seeking a skilled Big Data Administrator/Engineer with strong hands-on experience in Cloudera's ecosystem (Hive, Impala, HDFS, Ozone, Hue, NiFi) and proven expertise in Informatica BDM/DEI. The role involves administering and configuring big data platforms, deploying/supporting clusters, and building optimized pipelines to move and transform large-scale datasets. Experience with alternate platforms such as Hortonworks, MapR, AWS EMR, Azure HDInsight, or Google Dataproc will be advantageous.

What you'll do:

  • Platform Administration: Install, configure, upgrade, and monitor Cloudera/CDP clusters, manage HDFS/Ozone storage, and ensure security (Kerberos, Ranger, Sentry).
  • Data Pipelines: Build and optimize ingestion and processing pipelines using NiFi and Informatica BDM/DEI, supporting both real-time and batch flows.
  • ETL Integration: Develop Informatica mappings and workflows, leveraging pushdown execution to Hive/Impala/Spark; integrate diverse on-prem and cloud data sources.
  • Performance Governance: Optimize queries, orchestrate jobs (Airflow, Oozie, Control-M), and ensure compliance with governance/security standards (a query-tuning sketch follows this list).
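
As a hypothetical illustration of the query-optimization work above, the sketch below uses PySpark's Hive support to create a day-partitioned table and run a date-filtered query, the standard way to get partition pruning in Hive/Impala-style engines. The database, table, and column names are assumptions for the sketch.

```python
# Hypothetical example: partitioning a Hive table so that date-filtered
# queries scan only the relevant partitions (partition pruning).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("partition-pruning-demo")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

# A day-partitioned table: each txn_date lands in its own HDFS directory.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.transactions (
        txn_id STRING,
        amount DOUBLE
    )
    PARTITIONED BY (txn_date DATE)
    STORED AS PARQUET
""")

# The txn_date predicate is resolved against partition metadata, so only
# one partition's files are read instead of the whole table.
daily_totals = spark.sql("""
    SELECT txn_date, SUM(amount) AS total_amount
    FROM analytics.transactions
    WHERE txn_date = DATE '2025-09-01'
    GROUP BY txn_date
""")
daily_totals.show()
```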

Your Expertise:

  • Strong hands-on expertise in Cloudera tools: Hive, Impala, HDFS, Ozone, Hue, NiFi.
  • Proficiency with Informatica BDM/DEI (ETL/ELT, pushdown optimization, data quality).
  • Solid SQL, Linux administration, and scripting (Bash, Python).
  • Familiarity with cloud data platforms (AWS, Azure, GCP) and orchestration tools.
  • 4+ years in big data administration/engineering, including 2+ years in Informatica BDM/DEI.

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • Experience in hybrid or cloud-based big data environments.

Soft Skills:

  • Strong troubleshooting and problem-solving mindset.
  • Ability to work independently and within cross-functional teams.
  • Clear communication and documentation skills.

Other information applicable to the opportunity:

  • Contract position
  • Location: Johannesburg

Why work for us?

Want to work for an organization that solves complex real-world problems with innovative software solutions? At iOCO, we believe anything is possible with modern technology, software, and development expertise. We are continuously pushing the boundaries of innovative solutions across multiple industries using an array of technologies.

You will be part of a consultancy, working with some of the most knowledgeable minds in the industry on interesting solutions across different business domains.

Our culture of continuous learning will ensure that you have all the opportunities, tools, and support to hone and grow your craft.

By joining iOCO you will have an open invitation to developer-inspiring forums: a place where you can connect with and learn from your peers by sharing ideas, experiences, practices, and solutions.

iOCO is an equal opportunity employer with an obligation to achieve its own unique EE objectives in the context of Employment Equity targets. Therefore, our employment strategy gives primary preference to previously disadvantaged individuals or groups.



Research Assistant (Administrative tax data | Big Data)

Pretoria, Gauteng | United Nations University

Posted 26 days ago

Job Viewed

Tap Again To Close

Job Description

Research Assistant (Administrative tax data | Big Data)

UNU-WIDER is seeking exceptional candidates for the position of Research Assistant, based in Pretoria, South Africa, to support the SA-TIED programme. This role involves managing and enhancing tax datasets, assisting researchers, and ensuring high standards of data confidentiality.

For the full job description and application details, please click here.

UNU offers three types of contracts: fixed-term staff positions (General Service, National Officer and Professional), Personnel Service Agreement positions (PSA), and consultant positions (CTC). For more information, see the Contract Types page.


Big Data Developer - Regulatory & Compliance (Relocation to Spain)

Remote Recruitment

Posted 22 days ago


Job Description

Big Data Developer - Regulatory & Compliance (Relocation to Spain)

We are looking for an experienced Big Data Developer to join an international banking technology team in Málaga, Spain. In this role, you will contribute to the development of business applications within the Regulatory & Compliance domain, covering the full software lifecycle from problem analysis to deployment.

You’ll work with modern big data technologies, collaborate with users to understand business needs, and provide innovative solutions that meet regulatory and compliance requirements.

Responsibilities
  • Participate in the end-to-end software lifecycle, including analysis, design, development, testing, and deployment.
  • Collaborate with business users to identify requirements and deliver strategic technology solutions.
  • Optimise and analyse code, applying best practices such as threat modelling and SAST.
  • Manage tools and processes for documentation and Application Lifecycle Management (ALM).
  • Plan and deliver projects using Agile methodology.
  • Support incident resolution, including planned interventions.
  • Execute unit, integration, and regression testing.
  • Manage release processes and deployment tools.
Qualifications and Experience

Required:

  • 3+ years of experience as a Big Data Developer.
  • Bachelor’s degree in Computer Science, Telecommunications, Mathematics, or a related field.
  • Proficiency with GitHub.
  • Strong knowledge of databases (Oracle PL/SQL, PostgreSQL).
  • Hands-on ETL experience.
  • Fluency in English (Spanish is advantageous).

Preferred:

  • Familiarity with microservices frameworks (Spring Boot), OpenShift.
  • Knowledge of Flink, Drools, Kafka, and DevOps tools.
  • Agile methodology experience with tools such as Jira and Confluence.
  • Exposure to S3, Elastic, and Angular.
  • Experience in Transactional Regulatory Reporting.
  • Innovative mindset and ability to generate strategic ideas.
Other Requirements
  • Availability to travel.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Staffing and Recruiting


Product Manager, Big Data & AI Engineer (Cisco & Mobile Technologies)

Johannesburg, Gauteng | Jelocorp

Posted 3 days ago


Job Description

Introduction

We are seeking an experienced Big Data & AI Specialist to drive the design, development, and deployment of intelligent data solutions. The ideal candidate will combine deep technical expertise in Big Data platforms, Artificial Intelligence, and Machine Learning with practical knowledge of Cisco networking technologies and mobile communication systems. You will work across teams to build data-driven architectures, ensure secure and scalable infrastructure, and enable actionable insights through advanced analytics.

Duties & Responsibilities
  • Design and implement robust Big Data architectures and AI-driven solutions for advanced data processing, analytics, and automation.
  • Develop and deploy machine learning models, predictive analytics, and scalable data pipelines (a small model-training sketch follows this list).
  • Collaborate closely with network engineering teams to seamlessly integrate AI solutions within Cisco networking environments.
  • Optimize mobile technology platforms for real-time data collection and transmission, enabling responsive AI-driven applications.
  • Manage and process large-scale datasets from diverse sources (structured, semi-structured, and unstructured), ensuring the highest standards of data quality, security, and governance.
  • Deploy and maintain scalable big data platforms on cloud or on-premises infrastructure leveraging technologies such as Hadoop, Spark, Kafka and others.
  • Build and deploy APIs and microservices to enable seamless delivery of AI models across mobile and network environments.
  • Perform system performance tuning, troubleshooting and proactive monitoring of big data and AI platforms.
  • Continuously research and adopt emerging technologies and best practices in AI, Big Data, Cisco networking solutions and mobile networks.
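
Since the competencies below call out Scikit-learn, here is a minimal, hypothetical sketch of the model-development item flagged above: training a small predictive model on synthetic telemetry and persisting it so it could be served behind an API or microservice. The features, labels, and file name are invented for illustration.

```python
# Hypothetical sketch: train and persist a small predictive model
# (the kind that might sit behind an AI microservice API).
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "network telemetry": three invented features per sample.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy fault label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the model so an API/microservice could load and serve it.
joblib.dump(model, "fault_predictor.joblib")
```
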
Key Skills & Competencies

Big Data & AI Technologies
  • Proficiency in big data ecosystems: Hadoop, Spark, Hive, Kafka, Flink or equivalent technologies.
  • Expertise in machine learning frameworks such as TensorFlow, PyTorch and Scikit-learn.
  • Strong experience with data science tools: Python, R, SQL, Scala.
  • Knowledge of ETL processes and workflow orchestration tools: Airflow, NiFi.
Cisco Networking
  • Cisco Certified Network Professional (CCNP) certification or equivalent practical experience.
  • Hands-on knowledge of Cisco SD-WAN, ACI, ISE and advanced security solutions.
  • Experience with network automation and monitoring using Cisco DNA Center, NetFlow and SNMP protocols.
Mobile Technologies
  • Solid understanding of 3G, 4G LTE and 5G mobile network technologies.
  • Experience with mobile device management (MDM), edge computing and IoT platforms.
  • Familiarity with mobile application ecosystems and their integration with AI platforms.
Cloud Platforms (Advantageous)
  • Experience with cloud providers such as AWS, Azure or Google Cloud Platform, specifically in Big Data and AI services.
  • Proficiency with Kubernetes, Docker and container orchestration for scalable deployments.
Other Competencies
  • Strong problem-solving and analytical skills.
  • Excellent communication, collaboration and stakeholder management abilities.
  • Proven ability to thrive in cross-functional agile teams.
Qualifications & Experience
  • Bachelor's or Master's degree in Computer Science, Data Science, Telecommunications, or a related field.
  • A minimum of 5 years hands-on experience in the development and deployment of Big Data and AI / ML solutions.
  • At least 3 years of proven experience working with Cisco network infrastructure.
  • Prior experience in mobile technology environments or the telecommunications industry is highly advantageous.
  • Relevant professional certifications are preferred, including: AI/Big Data certifications (e.g., TensorFlow, Azure AI Engineer, Google Professional Data Engineer).
  • Cisco certifications (CCNA, CCNP or higher).
  • Mobile technology certifications (MDM, 5G, IoT platforms).
  • Experience with Huawei Mobile Cloud is a distinct advantage.
Package & Remuneration

Please send your CV to or contact

Required Experience

IC


Employment Type: Full-Time

Experience: years

Vacancy: 1


Data Engineer - Market Risk & Big Data Automation (Relocation to Spain)

Remote Recruitment

Posted 5 days ago


Job Description

Data Engineer - Market Risk & Big Data Automation (Relocation to Spain)


Location: Málaga, Spain (relocation required). This role is in the Markets Data Hub division of a UK-based financial technology team, focusing on automating data processes within a Market Risk environment.

Overview

We are seeking a data-driven Data Engineer to define and automate data processes using advanced Big Data technologies. The role emphasizes process definition and automation over pure development, with a strong SQL/ETL foundation and a focus on data operations in a fast-paced, international setting.

Key Responsibilities

  • Define and automate data processing workflows using tools such as Databricks, Spark Scala, and Airflow within a market risk context.
  • Design and document ETL processes and ensure alignment with department standards.
  • Prepare and transform data for processing and reporting, ensuring accuracy and integrity (see the sketch after this list).
  • Use Control-M and Job as Code frameworks to automate and schedule data processes.
  • Create and manage requests in line with team procedures for software deployment and process configuration.
  • Conduct and validate integrated test executions as part of automation workflows.
  • Collaborate with cross-functional teams, including developers, analysts, and risk teams, to streamline data operations.
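
As a hedged illustration of the data-preparation item flagged above, this minimal PySpark sketch aggregates hypothetical position-level data into a reporting-ready exposure-by-desk dataset, with a simple integrity check before publishing. The paths, schema, and exposure measure are placeholder assumptions, not the team's actual process.

```python
# Hypothetical market-risk data prep: aggregate position-level exposure
# per trading desk and date into a reporting dataset, with a basic
# integrity check before publishing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("exposure-by-desk").getOrCreate()

positions = spark.read.parquet("/data/marketrisk/positions/")  # assumed path

exposures = (
    positions.groupBy("as_of_date", "desk")
             .agg(F.sum("market_value").alias("total_exposure"),
                  F.count("*").alias("position_count"))
)

# Simple validation before handing off to reporting.
if exposures.filter(F.col("total_exposure").isNull()).count() > 0:
    raise ValueError("Null exposures found; aborting publish")

(exposures.write.mode("overwrite")
          .parquet("/data/marketrisk/reporting/exposure_by_desk/"))
```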

Qualifications and Experience

  • Solid experience on technology-focused projects in data engineering or data operations.
  • Strong proficiency in SQL and ability to write and optimise complex queries.
  • Proven experience in ETL design, data modelling, and data pipeline orchestration.
  • Hands-on knowledge of Control-M and understanding of Job as Code for automation.
  • Experience with testing methodologies and data validation in complex environments.
  • Professional competence in English, both written and verbal.
  • Willingness to relocate to Málaga, Spain and work in an international, fast-paced setting.
  • Familiarity with Scala and Big Data tools such as Spark Scala, Databricks, and Airflow.
  • Prior experience in the banking sector or understanding of market risk metrics and financial products.

Employment details

  • Seniority level: Mid-Senior level
  • Employment type: Full-time
  • Job function: Information Technology
  • Industries: Banking

