612 AWS Data jobs in South Africa
AWS Data Engineer
Posted 1 day ago
Job Description
cloudandthings.io is a software engineering company specialising in Cloud, Software, Data, and FinOps. Our aim is to enable great engineers and organisations to do more, by building world-class cloud platforms, innovative software, and data environments that deliver true business value. We take great pride in our culture, which values great engineering skills, teamwork, getting stuff done, and ongoing learning.
We’re on the lookout for motivated and experienced Data Engineers to be part of our growing team.
Overview
As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure, pipelines, and applications for our clients. You will collaborate with cross-functional teams to ensure efficient data processing, integration, and analysis.
We aim to build one of the strongest engineering capabilities in Africa and abroad, and the Data Engineer will help us achieve this.
Key Responsibilities
While the list below is long, an ideal candidate should have working knowledge and experience covering many of the tools and services. The requirements for each project differ over time, and these skills provide an overview of what may typically be required of a Data Engineer.
Software Engineering – Fundamentals
- A fundamental software engineering skill set underpins all engineering work at cloudandthings.io.
- Experience with modern operating systems, particularly Linux.
- Experience with version control software, particularly Git.
- Software fundamentals, such as: Problem solving, data structures and algorithms, software development methodologies, common design patterns and best practices.
- Experience with at least one relevant language, preferably more. For example, Python, SQL, NodeJS, Terraform.
Cloud
- Ability to identify serverless, managed, and roll-your-own options, along with their strengths and weaknesses.
- Development experience working with Terraform IaC to provision and maintain data infrastructure.
- Familiarity with AWS Well-Architected principles and experience implementing them.
- Working knowledge of Big Data – Volume, Variety, Velocity, etc.
- Good experience collecting data in hybrid environments (on-premise to cloud, and cloud to on-premise).
- Batch: AWS DataSync, Storage Gateway, Transfer Family (FTP / SFTP / MFT), Snowball.
- Databases: ODBC/JDBC, database replicas and replication tools, migration tools such as Database Migration Service (DMS) and SCT.
- Basic experience working with on-premise storage solutions (NFS / SMB, NAS / DAS, etc.).
- Cloud Storage: Amazon S3.
- Relational Databases: AWS RDS or similar, MySQL / PostgreSQL, Aurora.
- Search Databases: AWS Elasticsearch / OpenSearch.
- Caching: Redis / Memcached.
- Strong experience developing ETL processes, and integrating with source and destination systems.
- Strong experience developing with Python, Spark (e.g. PySpark), and SQL to work with data (see the sketch after this list).
- Basic experience with Lakehouse technologies such as Apache Hudi, Apache Iceberg or Databricks Delta Lake.
- AWS Lambda for file/stream/event processing, ETL, and triggers.
- General ETL and cataloging of data, access control: AWS Glue ETL, Glue catalog, Lake Formation.
- Hadoop-style processing: mainly Spark and Hive; instance types, cluster and job sizing; AWS Elastic MapReduce (EMR).
- Basic understanding of cloud data warehouse architecture and data integration: AWS Redshift and Redshift Spectrum.
- Data modelling skills, normalization, facts and dimensions.
- On-object-store querying: AWS Athena, Glue Crawlers.
- Basic experience with authentication and identity federation, authorisation and RBAC pertaining to data.
- Basic knowledge of cloud network security: AWS VPC, VPC endpoints, Subnets, DirectConnect.
- Identity and Access Management: AWS IAM, STS and cross-account access.
- Encryption for data at rest, and data in motion for all services used: AWS KMS / SSE, TLS, etc.
Data Engineering – Operations
- Orchestration of data pipelines: AWS Step Functions, Managed Apache Airflow, Glue, etc.
- Basic knowledge of the Well-Architected pillars and how to apply them: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability.
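To make the PySpark requirement above concrete, here is a minimal, hypothetical sketch of the kind of ETL step this work involves. The bucket names, paths, and columns are illustrative placeholders only.

```python
# Minimal PySpark ETL sketch: read raw CSV from S3, clean it, and write
# partitioned Parquet to a curated zone. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in a hypothetical ingest bucket
raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# Transform: deduplicate, filter bad rows, derive a partition column
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Load: partitioned Parquet, queryable via the Glue catalog and Athena
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```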
Requirements
- Proven track record of designing and implementing data solutions.
- Knowledge of and experience with AWS Cloud infrastructure and services.
- Bachelor’s degree in Engineering, Computer Science, or related field.
- Certifications, such as:
- AWS Solutions Architect Associate
- AWS Data Analytics Specialty
- AWS Database Specialty.
- Any other data-related experience, e.g. working with Hadoop, databases, analytics software, etc.
- Experience with a second cloud vendor (e.g. both AWS and Azure).
- Experience with Docker/Containers/Kubernetes/CICD pipelines for data.
- Knowledge of data security and compliance standards.
- Understanding of Cloud Security best practices.
- Willingness to learn and expand knowledge related to Cloud and Data Technologies.
- Strong problem-solving and analytical skills.
- Self-organizing with the ability to prioritize and manage multiple tasks simultaneously.
- Excellent verbal and written communication skills.
- Ability to work collaboratively with clients and team members.
- Willingness to travel to clients as and when required.
What We Offer
- A culture of engineering and an environment where ideas are heard, and builders can build.
- Competitive compensation and bonus structure.
- A flexible and supportive work environment that values diversity, work-life balance, and personal growth.
- Opportunities for career advancement and ongoing professional development.
- Ongoing learning and development opportunities to enhance your skills.
- Engaging with cutting-edge technologies and awesome client projects.
- Access to a talented team of professionals and mentors.
*If you have not heard back from us within 30 days, please consider your application unsuccessful. However, we'd love for you to keep an eye out for future opportunities and please continue to apply.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting
AWS Data Engineer
Posted 13 days ago
Job Description
PBT Group is currently offering an opportunity for an AWS Data Engineer with 2 to 5 years of relevant experience.
The role of a Data Engineer involves constructing and maintaining data pipelines and datamarts, emphasizing scalability, repeatability, and security. Data Engineers play a pivotal role in facilitating the acquisition of data from diverse sources, ensuring its conformity to data quality standards, and enabling downstream users to access data promptly. This position is an integral part of an agile team.
These professionals are entrusted with the responsibility of establishing the infrastructure required to derive insights from raw data, integrating data from various sources seamlessly. They empower solutions by efficiently managing substantial volumes of data, both in batch and real-time, utilizing cutting-edge technologies from the realms of big data and cloud computing. Additional responsibilities encompass the development of proof-of-concepts and the implementation of intricate big data solutions, with a primary focus on collecting, parsing, managing, analyzing, and visualizing extensive datasets. They are adept at employing technologies to resolve challenges associated with handling vast amounts of data in diverse formats, thereby delivering innovative solutions.
Data Engineering is a technically demanding role that necessitates a broad spectrum of expertise in software development and programming. These professionals possess knowledge in data analysis, understanding end-user and business requirements, and have the ability to translate these needs into technical solutions. They exhibit a strong grasp of physical database design and the systems development lifecycle. Collaboration within a team environment is essential for success in this role.
Key Responsibilities:
- Architecting Data analytics framework.
- Translating complex functional and technical requirements into detailed architecture, design, and high-performance software.
- Leading the development of data and batch / real-time analytical solutions by leveraging transformative technologies.
- Engaging in multiple projects as a technical lead, overseeing user story analysis, design, software development, testing, and automation tool creation.
Duties: Primary Job Objectives:
- Development and Operations
- Database Development and Operations
- Establishment and Adherence to Policies, Standards, and Procedures
- Communication
- Business Continuity and Disaster Recovery Planning
- Research and Evaluation
- Coaching and Mentoring
Required Skills, Knowledge, and Experience:
- A minimum of 5 years of experience in Data Engineering or Software Engineering.
- Demonstrated leadership experience, managing teams of engineers for 3-5 years.
- A minimum of 2 years of experience in Big Data.
- At least 5 years of experience with Extract, Transform, and Load (ETL) processes.
- A minimum of 2 years of experience with AWS (Amazon Web Services).
- Demonstrated experience with agile or other rapid application development methodologies for at least 2 years (e.g., Agile, Kanban, Scrum).
- 5 years of proven expertise in object-oriented design, coding, testing patterns, and working with commercial or open-source software platforms and large-scale data infrastructures.
- Proficiency in creating data feeds from on-premise to AWS Cloud (2 years).
- Support experience for data feeds in production on a break-fix basis (2 years).
- A minimum of 4 years of experience in creating data marts using Talend or similar ETL development tools.
- Proficiency in data manipulation using Python and PySpark (2 years).
- Experience in processing data using the Hadoop paradigm, particularly with EMR, AWS's distribution of Hadoop (2 years); an EMR sketch follows the skills lists below.
- DevOps experience in Big Data and Business Intelligence, including automated testing and deployment (2 years).
- Extensive knowledge of various programming or scripting languages.
- Expertise in data modeling and an understanding of different data structures and their suitability for specific use cases.
Additional Technical Skills Required:
- The ability to design highly scalable distributed systems using various open-source tools.
- Proficiency in both batch and streaming Big Data tools.
- Experience with Talend for at least 1 year.
- Familiarity with AWS services such as EMR, EC2, and S3 for at least 1 year.
- Proficiency in Python for at least 1 year.
- Familiarity with PySpark or Spark (desirable for at least 1 year).
- Experience in Business Intelligence data modeling for 3 years.
- Proficiency in SQL for 3 years.
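As a concrete illustration of the EMR experience listed above, the sketch below submits a Spark step to an already-running EMR cluster with boto3. The region, cluster ID, and script location are hypothetical placeholders.

```python
# Sketch: submit a Spark step to an existing EMR cluster via boto3.
# Cluster ID and S3 script path are hypothetical placeholders.
import boto3

emr = boto3.client("emr", region_name="af-south-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTERID",
    Steps=[{
        "Name": "nightly-datamart-build",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",  # standard EMR step runner
            "Args": [
                "spark-submit", "--deploy-mode", "cluster",
                "s3://example-bucket/jobs/build_datamart.py",
            ],
        },
    }],
)
print(response["StepIds"])  # step IDs to poll for completion
```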
Qualifications / Certifications:
- A Bachelor's degree in computer science, computer engineering, or equivalent work experience for a minimum of 4 years.
- AWS Certification, at least at the associate level.
AWS Data Engineer
Posted 25 days ago
Job Description
Job Purpose:
Responsible for creating and managing the technological side of the data infrastructure at every step of the data flow. From configuring data sources to integrating analytical tools, all of these systems are architected, built, and managed by a general-role Data Engineer.
Minimum education (essential):
Bachelor's degree in Computer Science or Engineering (or similar)
Minimum education (desirable):
- Honors degree in Computer Science or Engineering (or similar)
- AWS Certified Data Engineer; or
- AWS Certified Solutions Architect; or
- AWS Certified Data Analyst
Minimum applicable experience (years):
5+ years working experience
Required nature of experience:
- Data Engineering development
- Experience with AWS services used for data warehousing, computing, and transformations, e.g. AWS Glue (crawlers, jobs, triggers, and catalog), AWS S3, AWS Lambda, AWS Step Functions, AWS Athena, and AWS CloudWatch
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, DynamoDB)
- Experience with SQL for querying and transformation of data
Skills and Knowledge (essential):
- Strong skills in Python (especially PySpark for AWS Glue)
- Strong knowledge of data modelling, schema design and database optimization
- Proficiency with AWS and infrastructure as code
Skills and Knowledge (desirable):
- Knowledge of SQL, Python, and AWS serverless microservices
- Deploying and managing ML models in production
- Version control (Git), unit testing and agile methodologies
Data Architecture and Management 20%
- Design and maintain scalable data architectures using AWS services for example, but not limited to, AWS S3, AWS Glue and AWS Athena.
- Implement data partitioning and cataloging strategies to enhance data organization and accessibility.
- Work with schema evolution and versioning to ensure data consistency.
- Develop and manage metadata repositories and data dictionaries.
- Assist and support with defining the setup and maintenance of data access roles and privileges.
- Design, develop, and optimize scalable ETL pipelines using batch and real-time processing frameworks (using AWS Glue and PySpark); a Glue job skeleton follows this section.
- Implement data extraction, transformation and loading processes from various structured and unstructured sources.
- Optimize ETL jobs for performance, cost efficiency and scalability.
- Develop and integrate APIs to ingest and export data between various source and target systems, ensuring seamless ETL workflows.
- Enable scalable deployment of ML models by integrating data pipelines with ML workflows.
- Automate data workflows and ensure they are fault tolerant and optimized.
- Implement logging, monitoring and alerting for data pipelines.
- Optimize ETL job performance by tuning configurations and analyzing resource usage.
- Optimize data storage solutions for performance, cost and scalability.
- Ensure the optimisation of AWS resources for scalability for data ingestion and outputs.
- Deploy machine learning models into production using cloud-based services like AWS SageMaker.
- Ensure API security, authentication and access control best practices.
- Implement data encryption, access control and compliance with GDPR, HIPAA, SOC2 etc.
- Establish data governance policies, including access control and security best practices.
- Work closely with data scientists, analysts and business teams to understand data needs.
- Collaborate with backend teams to integrate data pipelines into CI/CD.
- Assist with developmental leadership to the team through coaching, code reviews and mentorship.
- Ensure technological alignment with B2C division strategy supporting overarching strategy and vision.
- Identify and encourage areas for growth and improvement within the team.
- Document data processes, transformations, and architectural decisions.
- Maintain high standards of software quality within the team by adhering to good processes, practices, and habits, including compliance with the QMS system and with data and system security requirements.
- Ensure compliance with the established processes and standards for the development lifecycle, including but not limited to data archival.
- Drive compliance with the Quality Management System in line with the Quality Objectives, Quality Manual, and all processes related to the design, development, and implementation of software related to medical devices.
- Comply with ISO, CE, FDA (and other) standards and requirements as applicable to assigned products.
- Safeguard confidential information and data.
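For context, a skeleton of the kind of AWS Glue PySpark job these duties describe is sketched below. The catalog database, table, and output path are hypothetical placeholders, not part of the actual role.

```python
# Skeleton of an AWS Glue PySpark job; all names are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Transform with plain Spark semantics, then write curated Parquet
df = source.toDF().filter("event_type IS NOT NULL")
df.write.mode("append").parquet("s3://example-curated-bucket/events/")

job.commit()
```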
AWS Data Engineer
Posted 25 days ago
Job Description
Overview
An exciting opportunity has become available for an experienced AWS Data Engineer. In this role, you'll be responsible for building and managing scalable data systems, from setting up data sources to integrating analytical tools, using AWS services.
Key Responsibilities:
- Data Architecture & Management: Design and maintain data systems using AWS services (e.g., AWS S3, AWS Glue, Athena). Organize data effectively and ensure easy access through data partitioning and cataloging strategies.
- ETL Pipeline Development: Develop and optimize ETL (Extract, Transform, Load) pipelines using AWS Glue and PySpark. Focus on improving performance, scalability, and cost-efficiency in batch and real-time data processing.
- Automation & Monitoring: Automate workflows and ensure they run efficiently. Set up monitoring and alerts for data pipelines, and optimize AWS resources for scalability and performance (see the sketch after this list).
- Security & Compliance: Follow security best practices, including API authentication, data encryption, and compliance with GDPR, HIPAA, and SOC2.
- Collaboration & Mentorship: Work closely with data scientists, analysts, and backend teams to integrate data pipelines. Provide mentorship to junior team members and encourage growth and collaboration.
- Quality Management: Ensure high-quality software development standards by adhering to best practices and complying with industry regulations like ISO, CE, and FDA.
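As an illustration of the monitoring responsibility above, the hedged boto3 sketch below publishes a custom pipeline metric and creates an alarm on it. The namespace, metric name, threshold, and SNS topic ARN are hypothetical.

```python
# Sketch: custom CloudWatch metric plus an alarm; names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Emit a failure count from a pipeline run
cloudwatch.put_metric_data(
    Namespace="ExamplePipelines",
    MetricData=[{"MetricName": "FailedRecords", "Value": 42, "Unit": "Count"}],
)

# Alert when failures exceed the threshold over one 5-minute period
cloudwatch.put_metric_alarm(
    AlarmName="etl-failed-records",
    Namespace="ExamplePipelines",
    MetricName="FailedRecords",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:af-south-1:123456789012:example-alerts"],
)
```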
Key Skills and Experience (5+ Years Required):
- AWS Services: Experience with AWS Glue, S3, Lambda, Athena, and CloudWatch.
- Data Engineering: Proven experience developing and optimizing ETL pipelines. Familiarity with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, DynamoDB).
- Programming: Strong skills in Python, particularly using PySpark with AWS Glue.
- Data Modeling & Optimization: Experience in data modeling, schema design, and database optimization.
- Machine Learning: Experience integrating data pipelines with machine learning workflows and deploying models with AWS SageMaker.
- Compliance & Security: Knowledge of data governance, API security, and compliance with industry standards and regulations.
Education & Certification Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- AWS certifications, such as AWS Certified Data Engineer, Solutions Architect, or Data Analyst, are highly desirable.
AWS Data Engineer
Posted 25 days ago
Job Description
PBT Group is currently offering an opportunity for a Senior AWS Data Engineer.
The role of a Data Engineer involves constructing and maintaining data pipelines and datamarts, emphasizing scalability, repeatability, and security. Data Engineers play a pivotal role in facilitating the acquisition of data from diverse sources, ensuring its conformity to data quality standards, and enabling downstream users to access data promptly. This position is an integral part of an agile team.
These professionals are entrusted with the responsibility of establishing the infrastructure required to derive insights from raw data, integrating data from various sources seamlessly. They empower solutions by efficiently managing substantial volumes of data, both in batch and real-time, utilizing cutting-edge technologies from the realms of big data and cloud computing. Additional responsibilities encompass the development of proof-of-concepts and the implementation of intricate big data solutions, with a primary focus on collecting, parsing, managing, analyzing, and visualizing extensive datasets. They are adept at employing technologies to resolve challenges associated with handling vast amounts of data in diverse formats, thereby delivering innovative solutions.
Data Engineering is a technically demanding role that necessitates a broad spectrum of expertise in software development and programming. These professionals possess knowledge in data analysis, understanding end-user and business requirements, and have the ability to translate these needs into technical solutions. They exhibit a strong grasp of physical database design and the systems development lifecycle. Collaboration within a team environment is essential for success in this role.
Key Responsibilities:
- Architecting Data analytics framework.
- Translating complex functional and technical requirements into detailed architecture, design, and high-performance software.
- Leading the development of data and batch/real-time analytical solutions by leveraging transformative technologies.
- Engaging in multiple projects as a technical lead, overseeing user story analysis, design, software development, testing, and automation tool creation.
Duties: Primary Job Objectives:
- Development and Operations
- Database Development and Operations
- Establishment and Adherence to Policies, Standards, and Procedures
- Communication
- Business Continuity and Disaster Recovery Planning
- Research and Evaluation
- Coaching and Mentoring
Required Skills, Knowledge, and Experience:
- A minimum of 5 years of experience in Data Engineering or Software Engineering.
- Demonstrated leadership experience, managing teams of engineers for 3-5 years.
- A minimum of 2 years of experience in Big Data.
- At least 5 years of experience with Extract, Transform, and Load (ETL) processes.
- A minimum of 2 years of experience with AWS (Amazon Web Services).
- Demonstrated experience with agile or other rapid application development methodologies for at least 2 years (e.g., Agile, Kanban, Scrum).
- 5 years of proven expertise in object-oriented design, coding, testing patterns, and working with commercial or open-source software platforms and large-scale data infrastructures.
- Proficiency in creating data feeds from on-premise to AWS Cloud (2 years).
- Support experience for data feeds in production on a break-fix basis (2 years).
- A minimum of 4 years of experience in creating data marts using Talend or similar ETL development tools.
- Proficiency in data manipulation using Python and PySpark (2 years).
- Experience in processing data using the Hadoop paradigm, particularly with EMR, AWS's distribution of Hadoop (2 years).
- DevOps experience in Big Data and Business Intelligence, including automated testing and deployment (2 years).
- Extensive knowledge of various programming or scripting languages.
- Expertise in data modeling and an understanding of different data structures and their suitability for specific use cases.
Additional Technical Skills Required:
- The ability to design highly scalable distributed systems using various open-source tools.
- Proficiency in both batch and streaming Big Data tools.
- Experience with Talend for at least 1 year.
- Familiarity with AWS services such as EMR, EC2, and S3 for at least 1 year.
- Proficiency in Python for at least 1 year.
- Familiarity with PySpark or Spark (desirable for at least 1 year).
- Experience in Business Intelligence data modeling for 3 years.
- Proficiency in SQL for 3 years.
Qualifications/Certifications:
- A Bachelor's degree in computer science, computer engineering, or equivalent work experience for a minimum of 4 years.
- AWS Certification, at least at the associate level.
* In order to comply with the POPI Act, for future career opportunities, we require your permission to maintain your personal details on our database. By completing and returning this form, you give PBT your consent.
AWS Data Engineer
Posted 14 days ago
Job Description
We’re a team of curious minds and caffeine-fueled builders on a mission to turn raw data into real-world impact. We believe in pipelines that don’t leak, schemas that actually make sense, and dashboards that don’t make your eyes bleed.
Our company is scaling fast, and guess what? So is our data. That’s where you come in.
We are currently in search of a Data Engineer.
Requirements:
- Preferably a Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
- Seven to 10 years of experience in data engineering
- Three+ years of hands-on experience with AWS, including: S3, Glue, Spark, Athena, Redshift, RDS, Lambda, Lake Formation
- Strong SQL skills and experience with relational databases (e.g., PostgreSQL, Oracle, RDS)
- Proficiency in Python or Scala for data processing
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation)
- Understanding of data governance, security, and compliance in cloud environments
- Exposure to AI/ML platforms (e.g., AWS AI, SageMaker, OpenAI) is an advantage
Responsibilities:
- Collaborate with analysts, developers, architects, and business stakeholders to understand data needs and deliver technical solutions
- Design, build, and maintain data pipelines and integrations using AWS services such as S3, Glue, Lambda, and Redshift (see the sketch after this list)
- Develop and manage data lakes and data warehouses on AWS
- Support and maintain production and non-production data environments
- Optimize data storage and query performance through schema design and efficient data processing
- Implement CI/CD practices for data infrastructure, including monitoring, logging, and alerting
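To illustrate the pipeline work above, here is a hypothetical sketch of an S3-triggered Lambda handler that starts a Glue job for each newly created object; the job name and argument keys are placeholders.

```python
# Sketch: S3-event Lambda that triggers a Glue ETL job per new object.
# The Glue job name and argument names are hypothetical.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Each record describes one object created in the watched bucket
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="example-etl-job",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"processed": len(event["Records"])}
```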
The reference number for this position is GZ60695. This is a permanent position based in Centurion, offering a cost-to-company salary of R1.2 per annum, negotiable on experience and ability. Contact Garth to discuss this and other opportunities.
Are you ready for a change of scenery? E-Merge IT Recruitment is a specialist niche recruitment agency. We offer our candidates options so that we can successfully place the right developers with the right companies in the right roles. Check out the E-Merge website for more great positions.
Do you have a friend who is a developer or technology specialist? We pay cash for successful referrals!
AWS Data Engineer (Cape Town)
Posted 9 days ago
Job Description
About You and the Key Skills We’re Looking For
Your proven record as an AWS Cloud Data Engineer highlights your expertise in creating scalable and secure data solutions. Proficiency in AWS and Big Data technologies such as AWS EC2, AWS S3, AWS Lambda, and Spark/SparkSQL enables you to handle massive amounts of data efficiently. Your adeptness with programming languages like Java, C#, Node.js, Python, and SQL ensures robust and innovative solutions. Your comprehension of cloud-based infrastructure and DevOps tools underscores your holistic approach. Your familiarity with data integration, enterprise data warehousing, and data visualisation enhances your technical edge for this role.
Key Responsibilities
As an AWS Cloud Data Engineer, you will be responsible for building high-quality, innovative, and fully performing data solutions. You will collaborate closely with cross-functional teams, including business stakeholders, data scientists, and IT professionals, to ensure accurate and actionable insights are provided.
Some of your duties include:
- Develop data ingestion, data processing, data engineering, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions.
- Utilise public cloud architectures, considering pros/cons and migration considerations.
- Implement AWS and Big Data technologies such as AWS EC2, AWS S3, AWS Lambda, Spark/SparkSQL, and streaming technologies like Apache Kafka, AWS Kinesis, and Apache NiFi (see the sketch after this list).
- Use programming languages such as Java, C#, Node.js, Python, PySpark, SQL, and Unix shell/Perl scripting.
- Work with DevOps tools like GitHub, GitLab, Jenkins, CodeBuild, CodePipeline, and CodeDeploy.
- Follow Agile software development and DevOps best practices and principles.
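As a small illustration of the streaming technologies named above, the hedged sketch below publishes events to a Kinesis stream with boto3; the stream name and payload shape are hypothetical.

```python
# Sketch: publish JSON events onto a Kinesis stream; names are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="example-clickstream",
        Data=json.dumps(event).encode("utf-8"),
        # Partition key controls shard routing; a user ID keeps one
        # user's events ordered within a single shard
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

publish_event({"user_id": 123, "action": "page_view", "page": "/home"})
```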
Who we are
Our purpose is to grow people, grow business, and grow Africa.
At iqbusiness, we are creative, analytical people who never concede defeat. Beyond IT and technology, we apply innovative solutions to complex problems. We make sure that our clients grow, whether that means using research from IQbusiness and our partners, deploying teams to build and implement solutions, or providing targeted expertise to address a skills shortage. Whatever the challenges, we support our clients in overcoming them.
What’s in it for you?
At iqbusiness, we prioritise work-life balance and offer attractive salary packages with generous employee benefits. Our offices are conveniently located in both Johannesburg and Cape Town, ensuring ease even during challenging times like load shedding. We foster a workplace culture that values flexibility and supports your personal needs. Moreover, our diverse talent pool provides many opportunities for growth and development through meaningful interactions with your colleagues. Join us and experience the best of both worlds: a fulfilling career and a fulfilling life outside of work.
How do we recruit?
At IQbusiness, we take a refreshingly straightforward approach to recruitment. We firmly believe that feedback is the backbone of improvement, so we avoid dragging out the process unnecessarily.
Here's a sneak peek at the steps involved once you've sent us your resume:
- First, we'll dive into your CV, delving into your background, interests, passions, and tech prowess. If you're a shining star that aligns with our needs, congratulations! You'll swiftly move on to step two.
- This stage involves meeting one of our charismatic hiring managers, who will assess your skills and, just as importantly, your compatibility with our vibrant culture.
- If you emerge victorious from this encounter, a thrilling challenge awaits you—an online assessment to prove your mettle. But wait, there's more! You'll also mingle with more of our extraordinary team members.
- Once these delightful encounters conclude, we get down to business with employment checks—references, credit history, criminal records, and even fraud checks.
- Once you've successfully navigated these hurdles, voilà! We eagerly extend to you an offer of employment, with formalities dealt with in a timely manner.
So, if you're ready to embark on a whirlwind recruitment adventure, buckle up and send us that resume!
Note: As all iqbusiness roles require honesty in the handling of or access to cash, finances, financial systems, or confidential information, our recruitment process requires that the following background checks be completed: credit, criminal, ID, and qualification verification.
IQbusiness is committed to sustainable growth and transformation; we embrace diversity and employ previously disadvantaged individuals.
AWS Data Engineer – Senior Consultant
Posted 13 days ago
Job Description
At Deloitte, our Purpose is to make an impact that matters for our clients, our people, and society. This is the lens through which our global strategy is set. It unites Deloitte professionals across geographies, businesses, and skills. It makes us better at what we do and how we do it. It enables us to deliver on our promises to stakeholders, while creating the lasting impact we seek.
Harnessing the talent of 450,000+ people located across more than 150 countries and territories, our size and scale put us in a unique position to help change the world for the better—by bringing together the services we provide, the societal investments we make, and the collaborations we advance through our ecosystems.
Deloitte offers career opportunities across Audit & Assurance (A&A), Tax & Legal (T&L), and our Consulting services business, which includes Strategy, Risk & Transactions Advisory (SR&T), and Technology & Transformation (T&T).
Job Description
We are seeking an AWS Data Engineer to join our AI and Data practice. The ideal candidate is passionate about data and technology solutions, with strong problem-solving and analytical skills, and is tech-savvy with a solid understanding of software development. You should be driven to learn more and to keep up with market evolution and industry trends.
You will work throughout the entire engagement cycle, specializing in modern data solutions such as data ingestion/data pipeline frameworks, data warehouse & data lake architectures, cognitive computing, and cloud services.
Technical Requirements for the role:
- Support AI & Data teams and implement end-to-end modern data platforms supporting analytics and AI use cases.
- Collaborate with enterprise architects, data architects, ETL developers & engineers, data scientists, and information designers to identify and define data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Address data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations.
- Estimate effort and mentor junior colleagues.
- Participate in technical meetings with clients and advise on technical options based on leading practices.
- Work as a data engineer on AWS and other technologies.
- Apply deep technology knowledge to drive continuous improvement.
- Maintain good communication skills, both written and verbal.
- Build interpersonal relationships and foster collaboration.
- Focus on client delivery and quality.
- Be adaptable, a problem-solver, and an analytical thinker.
Bachelor’s Degree (or higher) in fields like Computer Science, Information Management, Big Data & Analytics, or related areas is preferred.
Having one or more AWS certifications is preferred, but experience with cloud platform solutions is mandatory, such as:
- AWS Solutions Architect – Associate/Professional
Experience required:
At least 3 years in implementing innovative data solutions leveraging Big Data frameworks, supporting on-premise or AWS cloud environments for analytics and AI use cases.
At least 3 years in extracting, transforming, and loading data from various sources including structured, unstructured, and semi-structured data using SQL, NoSQL, and data pipelines for real-time, streaming, batch, and on-demand workloads.
Experience with data warehousing or data lakes.
Ability to communicate complex technical concepts clearly to non-technical stakeholders and work within Agile development environments.
Must demonstrate experience in services and technologies like:
- Database development: Views, functions, stored procedures, query optimization, indexes, OLAP / MDX
- Cloud platforms: AWS (preferred), Azure, GCP, Snowflake
- ETL tools: AWS Glue, Athena, SSIS, IBM DataStage / SAP Data Services, AWS DMS, AppFlow (see the Athena sketch after this list)
- Data acquisition: pipeline creation, automation, data delivery, CDC, streaming
- Designing structured problem-solving approaches, data models, and architecture standards.
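To ground the Athena entry in the list above, here is a hedged boto3 sketch that runs an ad-hoc query and polls for completion. The database, table, and results bucket are hypothetical placeholders.

```python
# Sketch: run an Athena query from Python and wait for the result.
# Database, table, and output bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM raw_events GROUP BY event_type",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```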
At Deloitte, we foster an inclusive environment where everyone can be themselves and thrive. We are committed to fairness and respect, including reasonable accommodations for persons with disabilities. We leverage our diverse workforce to build a more inclusive environment across Africa.
Note: The list of tasks and responsibilities is not exhaustive. Deloitte may assign additional duties as required by operational needs.
Beware of Recruitment Scams: Fraudulent schemes may pose as legitimate recruiters. Be cautious of requests for upfront payments, personal information, or unprofessional communication. Contact Deloitte directly through official channels for verification.
AWS Data Engineer (Market related)
Posted 19 days ago
Job Description
The AWS Data Engineer will be responsible for building and maintaining Big Data Pipelines using Data Platforms and must ensure that data is shared in line with the information classification requirements on a need-to-know basis.
Duties & Responsibilities- Exceptional analytical skills analyzing large and complex data sets.
- Perform thorough testing and data validation to ensure the accuracy of data transformations.
- Strong written and verbal communication skills, with precise documentation.
- Self-driven team player with ability to work independently and multi-task.
- Experience building data pipelines using AWS Glue, AWS Data Pipeline, or similar platforms.
- Familiar with data stores such as AWS S3, and AWS RDS or DynamoDB.
- Experience and solid understanding of various software design patterns.
- Experience preparing specifications from which programs will be written, designed, coded, tested, and debugged.
- Strong organizational skills. Experience developing and working with REST APIs is a bonus.
- Basic experience in Networking and troubleshooting network issues.
- Degree
- Certifications such as AWS Certified Cloud Practitioner, AWS Certified SysOps Administrator – Associate, AWS Certified Developer – Associate, AWS Certified Solutions Architect – Associate, AWS Certified Solutions Architect – Professional, HashiCorp Certified: Terraform Associate
- Automotive industry experience
- Terraform
- Python 3.x
- PySpark
- Boto3
- ETL
- Docker
- Powershell / Bash
- Must have 3–5 years' working experience
- 3–5 years' experience working with enterprise collaboration tools such as Confluence and JIRA
- 3–5 years' experience developing technical documentation and artifacts
- Knowledge of data formats such as Parquet, AVRO, JSON, XML, and CSV (see the sketch after this list)
- Experience working with Data Quality Tools such as Great Expectations
- Knowledge of the Agile Working Model
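As a small illustration of the data formats listed above, here is a hypothetical pandas sketch converting a CSV extract to partitioned Parquet; the file paths and columns are placeholders.

```python
# Sketch: CSV to partitioned Parquet; paths and columns are hypothetical.
import pandas as pd

df = pd.read_csv("daily_extract.csv", parse_dates=["created_at"])
df["load_date"] = df["created_at"].dt.date

# Parquet is columnar, typed, and compresses far better than CSV
df.to_parquet("curated/", partition_cols=["load_date"], engine="pyarrow")
```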
Market related
AWS Data Engineer (Cape Town)
Posted today