Zoe Engeman, 21 January, 2025
Since graduataing from QUT in 1987, Chris Hockings built an impressive career in the technology industry, holding various executive roles. Now the APAC Chief Technology Officer (Cyber Security) of IBM, Chris leverages his extensive experience to provide valuable insights into how organisations can effectively secure and govern artificial intelligence (AI) to ensure it earns public trust.
In today’s rapidly evolving technological landscape, the importance of securing and governing AI cannot be overstated. As AI systems become increasingly integrated into various aspects of our lives, ensuring their security and proper governance is crucial to protect sensitive data, maintain trust, and comply with regulatory standards.
Securing AI
Secure the data
Protecting the data used in AI begins with detecting sensitive information used in training or fine-tuning through data discovery and classification techniques. Implementing data security controls, such as encryption, access management, and compliance monitoring, is essential. Additionally, raising awareness of security risks at every step of the AI pipeline and fostering collaboration between security teams and data science or research teams, ensures proper safeguards are in place.
Secure the model
Securing AI models requires continuous scanning for vulnerabilities, malware and corruption across the AI/machine learning (ML) pipeline. Beyond simple scanning, it is critical to discover and strengthen the application programming interface (API) and plugin integrations with third-party models. Organisations should configure and enforce policies, controls, and role-based access controls (RBAC) around ML models, artifacts, and data sets to further enhance protection.
Secure the usage
There are several strategies to secure AI usage. First, organisations must monitor for malicious inputs like prompt injections, and outputs containing sensitive data or inappropriate content. Implementing AI-specific security solutions that detect and respond to attacks (e.g., data poisoning, model evasion, model extraction) is also vital. Furthermore, organisations should develop response playbooks to deny access, quarantine, and/or disconnect compromised models.
Secure the infrastructure
Infrastructure security controls serve as a strong first-line defence against adversarial access to AI. However, additional protections are necessary. Leveraging existing expertise to optimise security, privacy, and compliance standards across distributed environments. Organisations should harden network security, access control, and data encryption, as well as implement intrusion detection and prevention measures around AI environments. Combining these efforts with investments in new AI-specific security defences ensures comprehensive maintenance of infrastructure security.
Governing AI
Manage risk and reputation
- Enable responsible, explainable, high-quality AI models, and automatically document model lineage and metadata
- Monitor for fairness, bias, and drift to detect the need for model retraining
Support regulatory compliance
- Use protections and validation to help enable models that are fair, transparent, and compliant
- Automatically document model facts in support of audits
Operationalise AI governance
- Accelerate model building at scale
- Automate and consolidate multiple tools, applications and platforms while documenting the origin of datasets, models, associated metadata and pipelines
By implementing these strategies, organisations can effectively secure AI systems and establish robust governance frameworks that promote trust, transparency, and regulatory compliance in an increasingly AI-driven world.
Chris Hockings
QUT degree - Bachelor of Engineering (Electronics) / Bachelor of Information Technology (1997)
Have a question for Chris? Connect with him on LinkedIn.