307 AWS Data jobs in South Africa

AWS Data Engineer

R450,000 – R900,000 per year · PBT Group

Posted today


Employment Type: Contract
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 03 September 2025

Job Description

Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.

What You'll Do:

  • Architect modern data analytics frameworks.
  • Translate complex requirements into scalable, secure, high-performance pipelines.
  • Build & optimize batch/real-time data solutions using AWS & Big Data tools.
  • Lead engineering efforts across multiple agile projects.

What You Bring:

  • 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
  • 2+ yrs in Big Data & AWS (EMR, EC2, S3).
  • ETL expert – especially Talend, cloud migration, & data pipeline support.
  • Strong in Python, PySpark, SQL, and data modeling.
  • Agile mindset (Scrum, Kanban).
  • Familiar with Hadoop ecosystem, production support, and DevOps for BI.

Nice to Have:

  • Experience with Spark, streaming data tools, & scalable system design.
  • BI data modeling background (3+ yrs).
  • Talend & AWS hands-on (1+ yr).

Qualifications:

  • Bachelor's in Computer Science/Engineering or equivalent experience.
  • AWS Certified (Associate+ level preferred).

Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.

  • In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.

Skills

AWS · Data Engineering · Extract Transform Load (ETL) · SQL

Industries

Financial Services · Information Technology (IT) · Insurance


AWS Data Engineer

R450,000 – R900,000 per year · PBT Group

Posted today


Employment Type: Full Time
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 08 October 2025

Job Description

Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.

What You'll Do:

  • Architect modern data analytics frameworks.
  • Translate complex requirements into scalable, secure, high-performance pipelines.
  • Build & optimize batch/real-time data solutions using AWS & Big Data tools.
  • Lead engineering efforts across multiple agile projects.

What You Bring:

  • 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
  • 2+ yrs in Big Data & AWS (EMR, EC2, S3).
  • ETL expert – especially Talend, cloud migration, & data pipeline support.
  • Strong in Python, PySpark, SQL, and data modeling.
  • Agile mindset (Scrum, Kanban).
  • Familiar with Hadoop ecosystem, production support, and DevOps for BI.

Nice to Have:

  • Experience with Spark, streaming data tools, & scalable system design.
  • BI data modeling background (3+ yrs).
  • Talend & AWS hands-on (1+ yr).

Qualifications:

  • Bachelor's in Computer Science/Engineering or equivalent experience.
  • AWS Certified (Associate+ level preferred).

Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.

  • In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.

Skills

AWS · Data Engineering · Extract Transform Load (ETL) · SQL

Industries

Information Technology (IT) · Retail


AWS Data Engineer

R90,000 – R120,000 per year · hearX

Posted today


Job Description

Responsible for creating and managing the technological side of the data infrastructure at every step of the data flow. From configuring data sources to integrating analytical tools, all of these systems are architected, built, and managed by this general-role data engineer.

Data Architecture and Management (20%)

  • Design and maintain scalable data architectures using AWS services such as (but not limited to) AWS S3, AWS Glue and AWS Athena.
  • Implement data partitioning and cataloging strategies to enhance data organization and accessibility (see the partitioning sketch after this list).
  • Work with schema evolution and versioning to ensure data consistency.
  • Develop and manage metadata repositories and data dictionaries.
  • Assist with defining, setting up and maintaining data access roles and privileges.
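
Illustration only, not part of the advert: one way to set up the kind of partitioned, cataloged table this role describes is Athena DDL issued through boto3. All names below (database, buckets, region) are hypothetical.

```python
import boto3

# Hypothetical names, for illustration only.
DATABASE = "analytics_db"
RESULTS = "s3://example-athena-results/"

athena = boto3.client("athena", region_name="af-south-1")

# An external table partitioned by ingest date: Athena then scans only
# the partitions a query actually touches, which keeps costs down.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS events (
    user_id    string,
    event_type string,
    payload    string
)
PARTITIONED BY (ingest_date string)
STORED AS PARQUET
LOCATION 's3://example-data-lake/events/'
"""

resp = athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": RESULTS},
)
print("Query execution id:", resp["QueryExecutionId"])
```

New partitions would still need to be registered afterwards, for example with a Glue crawler or an `ALTER TABLE ... ADD PARTITION` statement.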

Pipeline Development and ETL (30%)

  • Design, develop and optimize scalable ETL pipelines using batch and real-time processing frameworks such as AWS Glue and PySpark (a minimal job skeleton follows this list).
  • Implement data extraction, transformation and loading processes from various structured and unstructured sources.
  • Optimize ETL jobs for performance, cost efficiency and scalability.
  • Develop and integrate APIs to ingest and export data between various source and target systems, ensuring seamless ETL workflows.
  • Enable scalable deployment of ML models by integrating data pipelines with ML workflows.
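
As a hedged sketch of the Glue/PySpark pipeline work listed above (the catalog and S3 names are hypothetical), a minimal batch ETL job skeleton might look like this:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard AWS Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a cataloged source table (hypothetical names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_events"
)

# Transform: drop malformed rows and stamp a processing date.
df = dyf.toDF().dropna(subset=["user_id"])
df = df.withColumn("processed_date", F.current_date())

# Load: write back to the lake, partitioned for cheap downstream scans.
df.write.mode("append").partitionBy("processed_date").parquet(
    "s3://example-data-lake/curated/events/"
)

job.commit()
```

A real-time variant would swap the catalog read for a streaming source, but the job lifecycle (init, transform, commit) stays the same.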

Automation, Monitoring and Optimization (30%)

  • Automate data workflows and ensure they are fault-tolerant and optimized.
  • Implement logging, monitoring and alerting for data pipelines (a custom-metric sketch follows this list).
  • Optimize ETL job performance by tuning configurations and analyzing resource usage.
  • Optimize data storage solutions for performance, cost and scalability.
  • Ensure AWS resources are optimised for scalable data ingestion and output.
  • Deploy machine learning models into production using cloud-based services like AWS SageMaker.
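
A small monitoring sketch under stated assumptions (the namespace, metric name, and SNS topic ARN are invented for illustration): the pipeline publishes a custom failure count, and a CloudWatch alarm notifies when any run reports failures.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="af-south-1")

# The pipeline reports how many records failed validation in this run.
cloudwatch.put_metric_data(
    Namespace="ExamplePipelines",  # hypothetical namespace
    MetricData=[{
        "MetricName": "FailedRecords",
        "Dimensions": [{"Name": "Pipeline", "Value": "events_etl"}],
        "Value": 0,
        "Unit": "Count",
    }],
)

# Alert whenever any run reports failed records.
cloudwatch.put_metric_alarm(
    AlarmName="events-etl-failed-records",
    Namespace="ExamplePipelines",
    MetricName="FailedRecords",
    Dimensions=[{"Name": "Pipeline", "Value": "events_etl"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:af-south-1:123456789012:example-alerts"],
)
```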

Security, Compliance and Best Practices (10%)

  • Ensure API security, authentication and access control best practices.
  • Implement data encryption, access control and compliance with GDPR, HIPAA, SOC 2, etc.
  • Establish data governance policies, including access control and security best practices.

Team Mentorship and Collaboration (5%)

  • Work closely with data scientists, analysts and business teams to understand data needs.
  • Collaborate with backend teams to integrate data pipelines into CI/CD.
  • Provide developmental leadership to the team through coaching, code reviews and mentorship.
  • Ensure technological alignment with B2C division strategy supporting overarching hearX strategy and vision.
  • Identify and encourage areas for growth and improvement within the team.

QMS and Compliance (5%)

  • Document data processes, transformations and architectural decisions.
  • Maintain high standards of software quality within the team by adhering to good processes, practices and habits, including compliance with the QMS system and with data and system security requirements.
  • Ensure compliance with the established processes and standards for the development lifecycle, including but not limited to data archival.
  • Drive compliance with the hearX Quality Management System in line with the Quality Objectives, Quality Manual, and all processes related to the design, development and implementation of software related to medical devices.
  • Comply with ISO, CE, FDA (and other) standards and requirements as applicable to assigned products.
  • Safeguard confidential information and data.

Role Requirements

Minimum education (essential):

Bachelor's degree in Computer Science or Engineering (or similar)

Minimum education (desirable):

  • Honours degree in Computer Science or Engineering (or similar)
  • AWS Certified Data Engineer, or
  • AWS Certified Solutions Architect, or
  • AWS Certified Data Analyst

Minimum applicable experience (years):

5+ years working experience

Required nature of experience:

  • Data Engineering development
  • Experience with AWS services used for data warehousing, computing and transformations, i.e. AWS Glue (crawlers, jobs, triggers, and catalog), AWS S3, AWS Lambda, AWS Step Functions, AWS Athena and AWS CloudWatch
  • Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, DynamoDB)
  • Experience with SQL for querying and transformation of data

Skills and Knowledge (essential):

  • Strong skills in Python (especially PySpark for AWS Glue)
  • Strong knowledge of data modeling, schema design and database optimization
  • Proficiency with AWS and infrastructure as code

Skills and Knowledge (desirable):

  • Knowledge of SQL, Python, and AWS serverless microservices
  • Deploying and managing ML models in production
  • Version control (Git), unit testing and agile methodologies

This job description is not a definitive or exhaustive list of responsibilities and is subject to change with changing business requirements. Employees will be consulted on any changes. Employee performance will be reviewed against agreed-upon objectives. If you do not hear from us within 30 days, please consider your application unsuccessful.


AWS Data Engineer

R900,000 – R1,200,000 per year · Zensar Technologies

Posted today


Job Description

Data Engineer - GCH - Cape Town

Must Have

  • Proficiency with Matillion ETL: Using the Matillion ETL platform for data integration.
  • Cloud Data Warehouses: Familiarity with cloud data warehouses like Snowflake, AWS Redshift, or Google BigQuery.

Key Responsibilities

  • Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
  • Cloud Data Architecture: Design, implement, and help maintain data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lakehouses, data lakes, data warehouses, and distributed computing.
  • DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes (a minimal invocation sketch follows this list).
  • Data Modelling: Build efficient data models to support required analytics/reporting.
  • Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
  • Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
  • Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
  • Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.
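
Not part of the advert, but to make the DBT Core bullet concrete: a sketch of driving dbt programmatically from Python, assuming dbt-core 1.5+ (which added the programmatic runner) and a hypothetical model name. Running `dbt run` and `dbt test` from a shell or CI step is the more common route.

```python
# Requires dbt-core >= 1.5, which exposes a programmatic runner.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Build one hypothetical model, then test it: a transform-then-validate
# step of the kind an ELT pipeline would run.
for command in (["run", "--select", "stg_events"],
                ["test", "--select", "stg_events"]):
    res: dbtRunnerResult = dbt.invoke(command)
    if not res.success:
        raise RuntimeError(f"dbt {command[0]} failed")
```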

Required Skills & Qualifications:

  • Bachelor's / Master's degree in Computer Science, Engineering or a related field.
  • 4+ years of experience as a Data Engineer or in a similar role.
  • Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
  • Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
  • Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
  • Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
  • Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
  • Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
  • Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
  • Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
  • Experience working in an Agile project environment.

Preferred

  • Experience with Apache Spark and distributed data processing in Databricks.
  • Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).

AWS Data Engineer

R600,000 – R1,800,000 per year · ThirdEye IT Consulting Services (Pty) Ltd

Posted today


Job Description

Key Responsibilities:

  • Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
  • DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
  • Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
  • Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
  • Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks (see the unit-test sketch after this list).
  • Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.
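
As an illustrative sketch of the CI/CD bullet above (the function and fields are hypothetical): a unit test for a small transformation, the kind of check a pipeline would run on every commit before deployment.

```python
# test_transforms.py -- run with `pytest`.
import pytest


def normalise_amount(record: dict) -> dict:
    """Convert a cents field to a decimal amount; reject bad input."""
    if record.get("amount_cents") is None:
        raise ValueError("amount_cents is required")
    return {**record, "amount": record["amount_cents"] / 100}


def test_normalise_amount_converts_cents():
    assert normalise_amount({"amount_cents": 1250})["amount"] == 12.5


def test_normalise_amount_rejects_missing_field():
    with pytest.raises(ValueError):
        normalise_amount({})
```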

Skills & Qualifications:

Required:

  • Bachelor's / Master's degree in Computer Science, Engineering or a related field.
  • 5+ years of experience as a Data Engineer or in a similar role.
  • Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
  • Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
  • Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
  • Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
  • Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
  • Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
  • Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
  • Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
  • Experience working in an Agile project environment.

Preferred:

  • Exposure to ingestion tools such as Matillion, Fivetran, etc.
  • Experience with Apache Spark and distributed data processing in Databricks.
  • Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).

Soft Skills:

  • Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
  • Strong analytical and problem-solving skills, capable of troubleshooting complex data pipeline issues.

AWS Data Engineer

New
Western Cape · Zensar Technologies

Posted today


Job Description

Employment Type: Full-time
Job Title: AWS Data Engineer
Job Location: Western Cape, Cape Town
Deadline: October 30, 2025

Must have:

  • Proficiency with Matillion ETL: Using the Matillion ETL platform for data integration.
  • Cloud Data Warehouses: Familiarity with cloud data warehouses like Snowflake, AWS Redshift, or Google BigQuery.

Key Responsibilities:

  • Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
  • Cloud Data Architecture: Design, implement, and help maintain data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lakehouses, data lakes, data warehouses, and distributed computing.
  • DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
  • Data Modelling: Build efficient data models to support required analytics/reporting.
  • Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
  • Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
  • Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
  • Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.

Skills & Qualifications: 

Required:

  • Bachelor's / Master's degree in Computer Science, Engineering or a related field.
  • 4+ years of experience as a Data Engineer or in a similar role.
  • Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
  • Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
  • Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
  • Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
  • Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
  • Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
  • Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
  • Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
  • Experience working in an Agile project environment.

Preferred:

  • Experience with Apache Spark and distributed data processing in Databricks.
  • Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).



Senior AWS Data Engineer

R900,000 – R1,200,000 per year · 60 Degrees

Posted today


Job Description

THE OPPORTUNITY THAT AWAITS YOU:

We are seeking an immediately available (or short-notice), experienced AWS Data Engineer (Intermediate to Senior) to support an international client in managing and optimising their data infrastructure. The role focuses on building and maintaining scalable data pipelines, optimising cloud-based data solutions, and ensuring high performance and reliability across systems. You will play a key role in supporting the data operations roadmap, leveraging AWS technologies to deliver robust, efficient, and secure solutions.

YOUR KEY RESPONSIBILITIES:

  • Design, build, and maintain scalable data pipelines and ETL processes.
  • Optimise data storage, transformation, and retrieval for performance and cost efficiency.
  • Implement best practices in data modelling and architecture.
  • Develop and manage data solutions using AWS services and supporting tooling such as S3, Glue, Redshift, DBT, Spark, and Terraform (a minimal Redshift load sketch follows this list).
  • Collaborate with cloud architects to ensure smooth integrations and deployments.
  • Lead or contribute to migrations and modernisation projects within AWS environments.
  • Conduct performance tuning and implement monitoring solutions to ensure system stability.
  • Troubleshoot data pipeline failures, ensuring rapid resolution and minimal downtime.
  • Build dashboards and reporting tools to monitor data flows and usage.
  • Apply role-based access controls and enforce data governance policies.
  • Ensure compliance with international data protection and security standards.
  • Support audit and compliance initiatives as required.
  • Work closely with cross-functional teams (data analysts, product managers, application teams).
  • Document processes, pipelines, and architectures for knowledge transfer.
  • Mentor junior engineers and contribute to continuous improvement initiatives.
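
To ground the toolchain above, a hedged sketch of one common pattern: bulk-loading curated S3 data into Redshift through the Redshift Data API. Cluster, schema, bucket, and IAM role names are all hypothetical.

```python
import time

import boto3

client = boto3.client("redshift-data", region_name="af-south-1")

# COPY is Redshift's bulk-load path; the IAM role must allow S3 reads.
copy_sql = """
COPY analytics.events
FROM 's3://example-data-lake/curated/events/processed_date=2025-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)

# The Data API is asynchronous, so poll until the statement finishes.
while True:
    status = client.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)
print("COPY status:", status)
```

In production this would typically sit behind Step Functions or another orchestrator rather than a sleep loop.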

OUR REQUIRED EXPERTISE:

  • Proven experience as a Data Engineer (5+ years, with Intermediate to Senior-level capability).
  • Strong proficiency with AWS data services (S3, Glue, Redshift, DBT, Spark, Terraform).
  • Hands-on experience in building and managing ETL/ELT pipelines.
  • Strong knowledge of SQL, data modelling, and performance tuning.
  • Familiarity with CI/CD, version control (Git), and infrastructure-as-code.
  • Excellent problem-solving skills and ability to work in fast-paced environments.
  • Strong communication skills for collaboration with international teams.
  • Experience with multi-region or global data deployments. (Nice to have)
  • Knowledge of Python or other scripting languages for automation. (Nice to have)
  • Exposure to data governance frameworks and observability tools. (Nice to have)

YOUR REWARD:

  • Competitive contract compensation.
  • Exposure to cutting-edge AWS technologies and data practices.
  • Collaborative environment with global teams.

Senior AWS Data Engineer

R1,800,000 – R2,500,000 per year · PBT Group

Posted today


Employment Type: Contract
Experience: 4 to 24 years
Salary: Negotiable
Job Published: 08 October 2025

Job Description

Ready to take your data engineering career to new heights? PBT Group is looking for a Senior AWS Data Engineer to design, build, and lead cutting-edge data solutions in a dynamic, agile environment.

What You'll Do:

  • Architect modern data analytics frameworks.
  • Translate complex requirements into scalable, secure, high-performance pipelines.
  • Build & optimize batch/real-time data solutions using AWS & Big Data tools.
  • Lead engineering efforts across multiple agile projects.

What You Bring:

  • 5+ yrs in Data/Software Engineering with team leadership experience (3–5 yrs).
  • 2+ yrs in Big Data & AWS (EMR, EC2, S3).
  • ETL expert – especially Talend, cloud migration, & data pipeline support.
  • Strong in Python, PySpark, SQL, and data modeling.
  • Agile mindset (Scrum, Kanban).
  • Familiar with Hadoop ecosystem, production support, and DevOps for BI.

Nice to Have:

  • Experience with Spark, streaming data tools, & scalable system design.
  • BI data modeling background (3+ yrs).
  • Talend & AWS hands-on (1+ yr).

Qualifications:

  • Bachelor's in Computer Science/Engineering or equivalent experience.
  • AWS Certified (Associate+ level preferred).

Be part of a team that thrives on innovation, collaboration, and cloud-first data transformation, and help shape the future of data.

  • In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.

Skills

AWS · Data Engineering · Extract Transform Load (ETL) · SQL

Industries

Financial Services · Information Technology (IT) · Insurance


Senior AWS Data Engineer

R900,000 – R1,200,000 per year · IT Ridge Technologies

Posted today


Job Description

Company Description

IT Ridge Technologies is a global IT consulting and product engineering services provider, specializing in e-commerce, e-learning, application development, business intelligence, and infrastructure solutions. Our company partners with global organizations to address complex business challenges and drive strategic growth through innovative technology solutions. With expertise across various industries, including retail, manufacturing, construction, and education, IT Ridge Technologies offers a unique balance of human perspective and strategic thinking to help clients lead in today's challenging business environment.

Role Description

This is a full-time on-site role located in Cape Town for a Senior AWS Data Engineer. In this role, you will design, develop, and maintain data pipelines and architectures on AWS. Your daily tasks will include building ETL processes, data warehousing solutions, and utilizing data modeling techniques to support analytics and business intelligence needs. You will work closely with cross-functional teams to ensure efficient data flow, data quality, and effective data solutions that drive business outcomes.

Qualifications

  • Data Engineering and Extract Transform Load (ETL) skills
  • Proficiency in Data Modeling and Data Warehousing concepts
  • Experience in Data Analytics and Business Intelligence
  • Strong understanding of AWS services and architecture
  • Excellent problem-solving skills and attention to detail
  • Ability to work independently and as part of a team
  • Experience in the IT consulting industry is a plus
  • Bachelor's degree in Computer Science, Engineering, or related field

AWS Data Engineer (6-Month Contract)

Johannesburg, Gauteng · Visi Select

Posted 5 days ago


Job Description

We're Hiring: AWS Data Engineer (6-Month Contract)

Location: Remote (supporting an international client)
Contract Duration: 6 Months (Immediate Start)
Compensation: R95,000 – R110,000 per month

We're looking for an experienced AWS Data Engineer (Intermediate to Senior) to join our client's global team. You'll be building and optimising scalable data pipelines, ensuring performance, reliability, and security across cloud-based systems.

What you'll do:

  • Design & optimise data pipelines and ETL processes
  • Work with AWS services: S3, Glue, Redshift, DBT, Spark, Terraform
  • Support cloud integration and modernisation projects
  • Ensure system performance, monitoring & reliability
  • Enforce data security, governance, and compliance standards
  • Collaborate with global, cross-functional teams

What we're looking for:

  • 5+ years' experience as a Data Engineer (Intermediate–Senior)
  • Hands-on expertise in AWS data services
  • Strong SQL, data modelling, and pipeline management skills
  • Familiarity with CI/CD, Git, and infrastructure-as-code
  • Excellent collaboration and problem-solving skills

Why join?

  • Competitive contract compensation (R95k – R110k/month)
  • Work with cutting-edge AWS technologies
  • Collaborate with international teams on high-impact projects

If you're ready to make an impact as an AWS Data Engineer, apply today, or share this opportunity with someone in your network!
 
