Faculty/School

Faculty of Science

School of Information Systems

Topic status

We're looking for students to study this topic.

Supervisors

Associate Professor Chun Ouyang, Faculty of Science
Adjunct Associate Professor Catarina Pinto Moreira, Faculty of Science

Overview

Existing machine learning-based intelligent systems are autonomous and opaque (often described as “black-box” systems). This opacity has led to a lack of trust in AI adoption and, consequently, a widening gap between machines and humans.

The European Union's General Data Protection Regulation (GDPR), which came into effect in 2018, introduces a right to explanation for all individuals to obtain “meaningful explanations of the logic involved” when a decision is made by an automated system. To comply, an intelligent system needs to be transparent and is expected to provide human-understandable explanations.

In 2019, the Australian Government released Australia’s Artificial Intelligence (AI) Ethics Framework to guide businesses and governments in responsibly designing, developing and implementing AI systems. According to the AI ethics principles set out in the Framework, an intelligent system is expected to uphold human-centred values and support fairness, reliability, transparency and explainability.

This research project aims to build explainable and trustworthy intelligent systems by (i) devising new algorithms and techniques to design robust, reliable, and trustworthy systems underpinned by machine intelligence, (ii) developing new theories and methods to support transparency and explainability, and (iii) incorporating human behaviour and input in the design and development of intelligent systems.

Research engagement

The research activities can be scoped to cater for students with different backgrounds and interests. Examples of research activities include literature review, algorithm design, evaluation and analysis, benchmark experiments, and code development.

Research activities

Students will be part of the "eXplainable Analytics for Machine Learning" (XAMI) research team (xami-lab.org) when working on the project. The project will be conducted mainly at QUT with possible collaboration with researchers from our external academic partners.

Outcomes

  • New algorithms and techniques to design robust, reliable, and trustworthy systems underpinned by machine intelligence.
  • New theories and methods to support transparency and explainability.
  • New approaches to incorporating human behaviour and input in the design and development of intelligent systems.

Skills and experience

  • Knowledge of data mining and machine/deep learning.
  • Knowledge of human-computer interaction (optional).
  • Problem-solving and logical thinking capabilities.
  • Programming skills in Python.
  • Academic writing skills.

Start date

18 November, 2024

End date

16 January, 2025

Location

Y Block, QUT Gardens Point campus

Contact

Chun Ouyang <c.ouyang@qut.edu.au>