725 NoSQL Database jobs in South Africa
Data Engineer
Posted 4 days ago
Job Description
Do you live and breathe all things data-related?
We are looking for a data engineer to join our growing team. Are you passionate about the technical aspects of OUTsurance's data landscape? Do you see it as a challenge to build processes that are optimized to run as quickly as possible, delivering quality datasets that can be reused by multiple teams? As a data engineer within our team, you will be responsible for developing best practices and patterns, maintaining our current workloads, optimizing our data warehouse, and building data pipelines and data products. The data engineer will support multiple stakeholders, including software development, actuarial, payroll, and finance analytical teams.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- At least 3 years of experience in data engineering or related fields.
- Proficiency in at least one programming language (Python, C#, etc.) and SQL.
- Proven experience with data modeling, ETL processes, and data warehousing.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Understanding of data processing technologies and data management principles.
- Familiarity with data technologies such as Synapse, Databricks, Delta Lake, and Airflow.
Additional Information:
An ideal candidate will align their personal work values with OUTsurance values of Awesome Service, Dynamic, Honest, Human, Passionate, and Recognition.
In accordance with OUTsurance Insurance Company Ltd's Employment Equity goals, preference will be given to individuals who meet the job requirements and are from the various designated groups.
Remote Work:
Employment Type: Full-time
Key Skills:
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Experience: years
Vacancy: 1
Data Engineer
Posted 4 days ago
Job Description
Our client is looking for a Data Engineer in Centurion, Gauteng.
The Data Engineer helps stakeholders understand data through exploration and builds secure and compliant data processing pipelines using various tools and techniques. You will maintain data services and frameworks to store and produce cleansed and enhanced datasets for analysis, including designing data store architectures based on business needs.
The responsibilities include designing, developing, and maintaining data solutions, ensuring high performance, efficiency, organization, and reliability of data pipelines and stores according to business requirements and constraints.
You will identify and troubleshoot operational and data quality issues, as well as design, implement, monitor, and optimize data platforms to meet pipeline needs.
Collaborate with cross-functional teams (internal and external) to ensure effective implementation, setup, utilization, and governance of the enterprise data platform across the organization.
This role supports data analytics, business intelligence, and advanced analytics functions.
The ideal candidate should have a strong business focus, understanding the company's strategy and how data underpins it.
This position reports to the Team Lead: Business Information.
Required Minimum Education / Training
- BCom in Information Technology, or a bachelor's degree in Computer Science, Engineering (IT), or similar.
- Data Engineering tools (Azure Data Engineer) certification is an advantage.
- ITIL or COBIT certification.
- 2-4 years of experience in an IT environment.
- Experience with IT enterprise Azure-based data solutions.
- Proficiency in Python and SQL.
- Experience gathering requirements and translating them into designs and outcomes.
- Experience working in the agriculture sector is a plus.
- Strategic Direction
- Information Technology Operations
- IT Project Execution
- People Management
- Financial Management
- Governance and Compliance
- Microsoft Azure Data suite
- Data and Business intelligence enablement
- Business intelligence rendering tools
- Experience with Project Management Tools
- Excellent verbal and written communication skills
- Report writing skills
- Knowledge of IT Security (advantageous)
- People change management skills
- Excellent analytical and problem-solving skills
- Attention to detail
- Ability to integrate technologies and architecture to meet requirements
- Planning and organizing skills
- Strong communication skills
- Teamwork and collaboration
- Change management capability
- Ability to work under pressure
- Assertiveness
- Interpersonal skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Vacancy: 1
#J-18808-LjbffrData Engineer
Posted 4 days ago
Job Description
Description
This role's responsibility is to design, develop, and maintain data-based solutions, ensuring that the operationalization of data pipelines and data stores is high-performing, efficient, organized, and reliable given a set of business requirements and constraints.
The Data Engineer will build and maintain secure and compliant data processing pipelines using different tools and techniques, and will maintain various data services and frameworks to store and produce cleansed and enhanced datasets for analysis. This includes data store design using different architecture patterns based on business requirements.
The incumbent will help identify and troubleshoot operational and data quality issues, and will design, implement, monitor, and optimize data platforms to meet the needs of the data pipelines. Collaborate with cross-functional teams (internal and external) to ensure effective implementation, set-up, utilization, and governance of the enterprise data platform across the organization.
This role contributes to the complete data function for data analytics, business intelligence, and advanced analytics. It requires a strong business focus: the individual understands the strategy and direction of the business and focuses on how to underpin that with data.
Requirements
REQUIRED MINIMUM EDUCATION / TRAINING
- BCom or Bachelor's degree in Information Technology / Computer Science / Engineering (IT), or similar
- Data Engineering tools (Azure Data Engineer) certification will be an added advantage
- ITIL or COBIT certification will be an added advantage
REQUIRED MINIMUM WORK EXPERIENCE
KEY PERFORMANCE AREAS
TECHNICAL KNOWLEDGE / COMPETENCIES
BEHAVIOURAL COMPETENCIES
Closing date: 15 August 2025
Please note that correspondence will be limited to shortlisted candidates only. Applicants who have not heard from us within 30 days of the closing date may assume that their applications have been unsuccessful and are hereby thanked for their interest.
The filling of this position will be aligned with AFGRI's Employment Equity Policy.
Applicants are informed that in order to consider any application for employment, we will have to process their personal information. A law known as the Protection of Personal Information Act 4 of 2013 (POPIA) provides that when one processes another's personal information, such collection, retention, dissemination, and use of that person's personal information must be done in a lawful and transparent manner.
In order to give effect to this right, we are under a duty to provide you with a number of details pertaining to the processing of your personal information. These details are housed in the HR Processing Notice, which can be accessed and viewed on the AFGRI Group website; we kindly request that you download and read it.
Work Level
Junior Management
Job Type
Permanent
Salary
Market Related
EE Position
Location
Centurion
Required Experience: Junior IC
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full-Time
Experience: years
Vacancy: 1
Data Engineer
Posted 4 days ago
Job Description
Reference: BIT004534-Cha L-1
Our client is seeking a skilled Data Engineer to join our growing team. If you are passionate about the technical aspects of the data landscape and thrive on building optimized processes that deliver high-quality reusable datasets, this could be the perfect opportunity for you.
As a Data Engineer, you will play a key role in developing best practices, maintaining current workloads, and optimizing our data warehouse, pipelines, and data products. You will support multiple stakeholders, including software development, actuarial, payroll, and finance analytical teams.
Duties & Responsibilities
- Design, build, and maintain scalable and reliable data pipelines to process large volumes of data from various sources.
- Develop and maintain data models, databases, and data warehouses to support analytical reporting needs.
- Implement data validation and monitoring processes to ensure data quality and accuracy.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver effective solutions.
- Demonstrate excellent written and verbal communication skills with the ability to explain complex problems clearly.
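As a rough illustration of the data-validation bullet above, here is a minimal sketch in Python (standard library only; the column names, rules, and thresholds are hypothetical and not part of the posting):

```python
# Minimal data-validation sketch: check required fields and value ranges
# before rows are loaded downstream. Column names are invented examples.

def validate_rows(rows, required=("policy_id", "premium")):
    """Split rows into (valid, rejected) based on simple quality rules."""
    valid, rejected = [], []
    for row in rows:
        # Rule 1: required fields must be present and non-empty
        missing = [c for c in required if row.get(c) in (None, "")]
        # Rule 2: premium must be a non-negative number
        value = row.get("premium")
        bad_value = not isinstance(value, (int, float)) or value < 0
        if missing or bad_value:
            rejected.append(row)
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"policy_id": "P1", "premium": 120.0},
    {"policy_id": "", "premium": 80.0},   # missing policy_id -> rejected
    {"policy_id": "P3", "premium": -5},   # negative premium -> rejected
]
valid, rejected = validate_rows(rows)
print(len(valid), len(rejected))  # 1 2
```

In a real pipeline the rejected rows would typically be written to a quarantine table and surfaced through monitoring, rather than silently dropped.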
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- At least 3 years of experience in data engineering or related fields.
- Proficiency in at least one programming language (Python, C#, etc.) and SQL.
- Proven experience with data modeling, ETL processes, and data warehousing.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Strong understanding of data processing technologies and data management principles.
- Familiarity with Synapse, Databricks, Delta Lake, and Airflow is advantageous.
Apply Now!
If you meet the qualifications and are excited about this opportunity, we encourage you to apply directly.
For more IT job opportunities, visit our website.
To submit your CV via email, send it to (email address) and include the reference number in the subject line.
Note : If you do not receive a response within two weeks, please consider your application unsuccessful. Your profile will remain on our database for future opportunities.
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type : Full-Time
Experience : 3+ years
Vacancy : 1
Data Engineer
Posted 5 days ago
Job Description
Job Title: Data Engineer
Location: Remote (Kenya, South Africa, Philippines)
Type: Full-Time, Permanent
Salary: Market-Related
About our Client
Our client is a high-growth global technology company focused on delivering scalable solutions through innovation and digital excellence. With a strong commitment to remote collaboration and engineering excellence, they operate across multiple geographies, fostering a high-performance culture driven by data and cloud-native technologies.
About the Role
We are looking for an experienced Data Warehouse Engineer to architect and optimise cloud-based analytics infrastructure. This role is ideal for someone with deep expertise in AWS, a passion for building robust data platforms, and a strong understanding of data modelling, ETL/ELT workflows, and data visualisation tools.
What they offer:
Work alongside & learn from best-in-class talent
Excellent career development opportunities
Great work environment with a remote-first culture
Responsibilities:
Design and maintain a cloud-native data warehouse on AWS
Implement scalable data models in Amazon Redshift
Build and manage ETL/ELT workflows using Apache Airflow
Develop clear, actionable dashboards using Amazon QuickSight
Integrate with DynamoDB, EventBridge, and S3 to enable real-time and batch data flows
Collaborate with cross-functional teams to understand data requirements
Monitor data platform performance and costs, ensuring ongoing optimisation
Maintain high standards for data quality, security, and reliability
Requirements:
4+ years’ experience in cloud-based data warehousing (AWS preferred)
Advanced proficiency in SQL and Python
Hands-on expertise with Amazon Redshift, Airflow, QuickSight, DynamoDB, S3, and EventBridge
Strong grasp of data modelling, warehousing principles, and performance tuning
Familiarity with CI/CD pipelines and infrastructure as code
Highly proactive with excellent attention to data integrity, resilience, and cost-efficiency
Strong communication skills and collaborative mindset
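As a rough sketch of the warehouse-side work this posting describes, the following uses Python's built-in sqlite3 as a local stand-in for Amazon Redshift (the schema, table, and column names are invented for illustration, not taken from the posting):

```python
import sqlite3

# Sketch of an ELT "load, then transform in the warehouse" step, using
# sqlite3 as a local stand-in for a warehouse such as Amazon Redshift.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE raw_events (event_id TEXT PRIMARY KEY, amount REAL, country TEXT)"
)

# Load: idempotent upsert, so re-running a batch does not duplicate rows
batch = [("e1", 10.0, "ZA"), ("e2", 25.0, "KE"), ("e1", 10.0, "ZA")]
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?) "
    "ON CONFLICT(event_id) DO UPDATE SET amount = excluded.amount",
    batch,
)

# Transform: derive a small aggregate table inside the warehouse itself
conn.execute(
    "CREATE TABLE mart_country AS "
    "SELECT country, SUM(amount) AS total FROM raw_events GROUP BY country"
)
totals = dict(conn.execute("SELECT country, total FROM mart_country ORDER BY country"))
print(totals)  # {'KE': 25.0, 'ZA': 10.0}
```

The upsert keyed on `event_id` is what makes the load idempotent, which matters when an orchestrator such as Airflow retries a failed batch.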
Apply now to join a cutting-edge remote engineering team building the future of cloud data infrastructure.
Data Engineer
Posted 7 days ago
Job Description
Sand Technologies is a fast-growing enterprise AI company that solves real-world problems for large blue-chip companies and governments worldwide.
We’re pioneers of meaningful AI: our solutions go far beyond chatbots. We are using data and AI to solve the world’s biggest issues in telecommunications, sustainable water management, energy, healthcare, climate change, smart cities, and other areas that have a real impact on the world. For example, our AI systems help to manage the water supply for the entire city of London. We created the AI algorithms that enabled the 7th largest telecommunications company in the world to plan its network in 300 cities in record time. And we built a digital healthcare system that enables 30m people in a country to get world-class healthcare despite a shortage of doctors.
We’ve grown our revenues by over 500% in the last 12 months while winning prestigious scientific and industry awards for our cutting-edge technology. We’re underpinned by over 300 engineers and scientists working across Africa, Europe, the UK and the US.
ABOUT THE ROLE
Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures such as data mesh, lakehouse, data vault, and data warehouses. Our data engineers create pipelines that support our data scientists and power our front-end applications. This means we do data-intensive work for both OLTP and OLAP use cases. Our environments are primarily cloud-native, spanning AWS, Azure, and GCP, but we also work on systems running exclusively self-hosted open-source services. We strive towards a strong code-first, data-as-a-product mindset at all times, where testing and reliability, with a keen eye on performance, are non-negotiable.
JOB SUMMARY
A Data Engineer has the primary role of designing, building, and maintaining scalable data pipelines and infrastructure to support data-intensive applications and analytics solutions. They collaborate closely with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions.
RESPONSIBILITIES
- Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources, using tools such as Databricks, Python, and PySpark.
- Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
- ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses.
- Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics.
- Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services for data storage, processing, and analytics.
- Data Quality and Governance: Implement data quality checks, validation processes, and data governance policies to ensure accuracy, consistency, and compliance with regulations.
- Monitoring, Optimization and Troubleshooting: Monitor data pipelines and infrastructure performance, identify bottlenecks and optimize for scalability, reliability, and cost-efficiency. Troubleshoot and fix data-related issues.
- DevOps: Build and maintain basic CI/CD pipelines, commit code to version control and deploy data solutions.
- Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions.
- Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions.
- Best Practices: Continuously learn and apply best practices in data engineering and cloud computing.
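One pattern behind the pipeline-development responsibility above is watermark-based incremental ingestion, sketched here in plain Python (the source shape and field names are invented for illustration):

```python
from datetime import datetime, timedelta

# Watermark-based incremental extraction: each run picks up only rows
# newer than the last stored watermark, so reruns do not reprocess data.

def incremental_extract(source_rows, last_watermark):
    """Return rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

t0 = datetime(2025, 1, 1)
source = [
    {"id": 1, "updated_at": t0},
    {"id": 2, "updated_at": t0 + timedelta(hours=1)},
    {"id": 3, "updated_at": t0 + timedelta(hours=2)},
]

# First run: everything strictly after the initial watermark is picked up
rows, wm = incremental_extract(source, t0)
print([r["id"] for r in rows])  # [2, 3]

# Second run with the advanced watermark: nothing new, no reprocessing
rows, wm = incremental_extract(source, wm)
print([r["id"] for r in rows])  # []
```

In production the watermark would be persisted (e.g. in a metadata table) between runs; here it is simply carried between the two calls.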
QUALIFICATIONS
- Proven experience as a Data Engineer, or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure.
- Proven experience working with Big Data and tools used to process Big Data.
- Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues.
- Solid understanding of data engineering principles and practices.
- Excellent communication and collaboration skills to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
- Ability to adapt to new technologies, tools, and methodologies in a dynamic and fast-paced environment.
- Ability to write clean, scalable, robust code using python or similar programming languages. Background in software engineering a plus.
DESIRABLE LANGUAGES/TOOLS
- Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
- Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
- Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
- Experience in using modern data architectures, such as lakehouse.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
- Knowledge of data governance and best practices in data management.
- Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
- SQL (for database management and querying)
- Apache Spark (for distributed data processing)
- Apache Spark Streaming, Kafka or similar (for real-time data streaming)
- Experience using data tools in at least one cloud service - AWS, Azure, or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc).
Would you like to join us as we work hard, have fun and make history?
Data Engineer
Posted 7 days ago
Job Description
Do you like building data systems and pipelines? Do you enjoy interpreting trends and patterns? Are you able to recognize the deeper meaning of data?
Join Elixirr Digital as a Data Engineer and help us analyze and organize raw data to provide valuable business insights to our clients and stakeholders!
As a Data Engineer, you will be responsible for ensuring the availability and quality of data so that it becomes usable by target data users. You will work on operations aimed at creating processes and mechanisms for data flow and access in accordance with project scope and deadlines!
Discover the opportunity to join our Data & Analytics department and work closely with a group of like-minded individuals using cutting-edge technologies!
What you will be doing as a Data Engineer at Elixirr Digital?
- Working closely with Data Architects on AWS, Azure, or IBM architecture designs
- Maintaining and building data ecosystems by implementing data ingestion processes, often collaborating with data engineers, analysts, DevOps, and data scientists
- Ensuring the security of cloud infrastructure and processes by implementing best practices
- Applying modern principles and methodologies to advance business initiatives and capabilities
- Identifying and consulting on ways to improve data processing, reliability, efficiency, and quality, as well as solution cost and performance
- Preparing test cases and strategies for unit testing, system, and integration testing
Competencies and skillset we expect you to have to successfully perform your job:
- Proficient in Python with extensive experience in data processing and analysis
- Strong SQL expertise, adept at writing efficient queries and optimizing database performance
- Previous experience with Azure / AWS data stack
- Experienced in software development lifecycle methodologies, with a focus on Agile practices
We could be a perfect fit if you are:
- Passionate about technology, capable of recognizing and resolving technical problems using specialized tools
- Self-motivated and ambitious, capable of managing multiple responsibilities effectively
- Creative problem-solver, thinking outside the box to find solutions to complex challenges
- Effective communicator with strong verbal and written skills in English
Why is Elixirr Digital the right next step for you?
We work with cutting-edge technologies to solve complex challenges for global clients, making sure your work matters. We support you as you build great things.
Compensation & Equity:
- Performance bonus
- Employee Stock Options Grant
- Employee Share Purchase Plan (ESPP)
- Competitive compensation
Health & Wellbeing:
- Health benefits plan
- Flexible working hours
- Pension plan
Projects & Tools:
- Modern equipment
- Big clients and interesting projects
- Cutting-edge technologies
Learning & Growth:
- Opportunities for growth and development
- Internal LMS & knowledge hubs
We don’t just offer a job — we create space for you to grow, thrive, and be recognized.
Intrigued? Apply now!
Data Engineer
Posted 11 days ago
Job Description
Date Posted: 08/12/2025
Req ID: 44744
Faculty/Division: Faculty of Arts & Science
Department: Acceleration Consortium
Campus: St. George (Downtown Toronto)
Position Number: 00057089
Description:
About us:
The Faculty of Arts & Science is the heart of Canada’s leading university and one of the most comprehensive and diverse academic divisions in the world. The strength of Arts & Science derives from our combined teaching and research excellence in the humanities, sciences and social sciences across 29 departments, seven colleges and 46 interdisciplinary centres, institutes and programs.
We can only realize our mission with the dedication and excellence of engaged staff and faculty. The diversity of opportunities and perspectives within the Faculty reflects the local and global landscape and the need for curiosity, innovative thinking and collaboration. At Arts & Science, we take pride in our legacy of innovation and discovery that has changed the way we think about the world.
The Acceleration Consortium (AC) at the University of Toronto (U of T) is leading a transformative shift in scientific discovery that will accelerate technology development and commercialization. The AC is a global community of academia, industry, and government that leverages the power of artificial intelligence (AI), robotics, materials sciences, and high-throughput chemistry to create self-driving laboratories (SDLs), also called materials acceleration platforms (MAPs). These autonomous labs rapidly design materials and molecules needed for a sustainable, healthy, and resilient future, with applications ranging from renewable energy and consumer electronics to drugs. AC Staff Scientists will advance the field of AI-driven autonomous discovery and develop the materials and molecules required to address society’s largest challenges, such as climate change, water pollution, and future pandemics.
The Acceleration Consortium received a $200M Canada First Research Excellence Fund grant for seven years to develop self-driving labs for chemistry and materials, the largest such grant ever awarded to a Canadian university.
Your opportunity:
Reporting to the Executive Director, Acceleration Consortium and working closely with the (Senior) Research Associates, the Data Engineer plays a pivotal role in managing and optimizing our data infrastructure to support these high-impact research projects. The Data Engineer will be responsible for designing, implementing, and maintaining robust ETL/ELT pipelines that ensure the efficient flow of data from various sources to data warehouses and research databases.
Your responsibilities will include:
- Reconciling business requirements with information architecture needs for highly complex system integration
- Analyzing and optimizing database software
- Developing and maintaining quality control procedures
- Analyzing, recommending, and designing highly complex software architecture
- Designing, testing, and modifying programming code
- Leading and planning IT projects
- Analyzing, recommending and designing technical solutions for highly complex IT problems
- Serving as a resource to others by providing (non-supervisory) job-related guidance
Essential Qualifications:
- Bachelor's Degree (Master's Degree preferred) in Computer Science, Information Technology, Data Engineering, or a related field or acceptable combination of equivalent experience.
- Minimum five years' recent and relevant Data Engineer experience, with a strong background in ETL/ELT processes, in the materials, chemicals, research, and technology industry or related industries with significant research and development.
- Experience with data pipeline tools and platforms (e.g., Apache Airflow, AWS Glue, Talend, etc.).
- Proficiency in SQL, Python, and/or other programming languages commonly used in data engineering as well as data transformation tools (e.g. DBT).
- Solid understanding of database management systems (RDBMS, NoSQL, etc.) and data warehousing solutions (e.g., AWS Redshift, Google BigQuery, Snowflake, Databricks).
- Familiarity with cloud computing platforms (e.g. AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
- Strong problem-solving skills and the ability to work in a fast-paced, research-driven environment.
- Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
Assets (Nonessential):
- Experience in a research or academic environment, particularly in handling large and complex scientific datasets.
- Knowledge of data modeling, schema design, and data architecture best practices.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus.
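The pipeline tools named in the qualifications (e.g. Apache Airflow) all revolve around one idea: tasks declare dependencies and run in dependency order. A toy sketch of that idea in pure Python, with invented task names:

```python
from graphlib import TopologicalSorter

# Toy illustration of DAG-style orchestration, the idea behind tools
# like Apache Airflow: tasks run in topological (dependency) order.

results = {}

def extract():
    results["raw"] = [3, 1, 2]

def transform():
    results["clean"] = sorted(results["raw"])

def load():
    results["loaded"] = len(results["clean"])

tasks = {"extract": extract, "transform": transform, "load": load}
# Each task maps to the set of tasks it depends on
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(results["loaded"], results["clean"])  # 3 [1, 2, 3]
```

Real orchestrators add scheduling, retries, and backfills on top of this ordering, but the dependency graph is the core abstraction.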
To be successful in this role, you will be:
- Accountable
- Insightful
- Team player
Closing Date: 08/29/2025, 11:59PM ET
Employee Group: USW
Appointment Type: Grant - Continuing
Schedule: Full-Time
Pay Scale Group & Hiring Zone:
USW Pay Band 16 -- $103,367 with an annual step progression to a maximum of $132,188. Pay scale and job class assignment is subject to determination pursuant to the Job Evaluation/Pay Equity Maintenance Protocol.
Job Category: Information Technology (IT)
Lived Experience Statement
Candidates who are members of Indigenous, Black, racialized and 2SLGBTQ+ communities, persons with disabilities, and other equity deserving groups are encouraged to apply, and their lived experience shall be taken into consideration as applicable to the posted position.
Diversity Statement
The University of Toronto embraces Diversity and is building a culture of belonging that increases our capacity to effectively address and serve the interests of our global community. We strongly encourage applications from Indigenous Peoples, Black and racialized persons, women, persons with disabilities, and people of diverse sexual and gender identities. We value applicants who have demonstrated a commitment to equity, diversity and inclusion and recognize that diverse perspectives, experiences, and expertise are essential to strengthening our academic mission.
As part of your application, you will be asked to complete a brief Diversity Survey. This survey is voluntary. Any information directly related to you is confidential and cannot be accessed by search committees or human resources staff. Results will be aggregated for institutional planning purposes. For more information, please see .
Accessibility Statement
The University strives to be an equitable and inclusive community, and proactively seeks to increase diversity among its community members. Our values regarding equity and diversity are linked with our unwavering commitment to excellence in the pursuit of our academic mission.
The University is committed to the principles of the Accessibility for Ontarians with Disabilities Act (AODA). As such, we strive to make our recruitment, assessment and selection processes as accessible as possible and provide accommodations as required for applicants with disabilities.
If you require any accommodations at any point during the application and hiring process, please .