Building a Trusted AI-based System: Bringing it All Together

January 25 @ 12:00 pm - 2:00 pm

In this third and final webinar we discuss how to build a trusted AI-based system. This includes demonstrating that the integration of cloud processing with the edge devices and communication infrastructure discussed in the previous webinars is secure as well as functional.

The webinar should help attendees to better understand what is required to demonstrate that an AI-based distributed system can be trusted.


12:05 Dan Wilkinson – Graphcore
12:25 Peter Davies – Thales
12:45 Chris Longstaff – Mindtech Global
13:05 Dr William Jones – Embecosm

NMI Electronic Systems Design Group Presents a Series of 3 Webinars: Building a Trusted AI-based System


Artificial Intelligence (AI) based systems are becoming prevalent across a wide range of industries, differentiating innovative new products through inference capabilities. However, their introduction brings additional challenges regarding reliability, safety and security.

This series of 3 free-to-attend webinars will outline those challenges and how they can be demonstrably overcome or mitigated.

The 3 foundation webinars will first consider how AI-based systems are constructed from building blocks (e.g. AI-enabled sensors and actuators) and the infrastructure that connects them, and how these are integrated into an intelligent system. Subject area experts will first outline the specific safety and security challenges (e.g. sufficiency of edge-AI training sets, a wider security attack surface), and then consider potential solutions and their verification. Ultimately, the objective is to demonstrate that such systems can be reliably used in the target application.

Attendees, whether from a hardware, software or system background, will be able to learn from subject matter experts to better understand the challenges and solutions in building AI-based products and systems that are demonstrably reliable, safe and secure. Sponsors will have the opportunity to showcase their solutions for overcoming these challenges.

Biographies & Abstracts

Daniel Wilkinson is currently Chief Architect for Silicon Systems at Graphcore, was Graphcore’s second ever employee, and has been leading the architecture of Graphcore’s Intelligence Processing Unit™ for six years. Prior to Graphcore he held leadership positions at XMOS Semiconductor in applications and silicon verification for seven years, and 10 years prior to that at Broadcom, Elixent, Clearspeed and ST Microelectronics in Bristol, working in diverse areas of chip design.

Presentation: Zero Trust Cloud for Edge and IoT Machine Intelligence

Architects and engineers developing edge- or IoT-based Machine Intelligence applications often rely on the cloud for services such as accelerated machine intelligence inference and training, storage, analytics and monitoring. These kinds of hybrid edge/cloud applications exist across diverse application areas and will all have different threat models, security and resilience requirements. This talk asks to what extent you can trust the cloud environment, and summarises the options available to you if you cannot. Alternative threat models are outlined, along with the hardware and software requirements that must be implemented in the edge/IoT application and the cloud environment to meet them.

Dr Jones has a research background in computational neuroscience, focused on the areas of consciousness, cognition, and meta-cognition. In industry, before his current role, William had a short but highly successful stint as the co-founder and CTO of the cryptocurrency Arweave.

In his current role, Dr Jones heads AI and Machine Learning efforts at Embecosm, a consultancy that solves open source problems in many challenging areas, with a focus on compilers, tool chains, embedded systems, and operating systems. His current work is focused on Bayesian methods and statistical computing.

Presentation: The Challenge of Trusting AI

One of the prominent challenges in AI is not just to build systems that are effective problem solvers, but to build systems that humans can understand and trust to be explainable, fair, robust, transparent, and private – Trusted AI. This is an enormously challenging problem, because even over and above the bare minimum necessity of a robust and secure architecture for collecting and processing data at the edge, disseminating it through data networks, and processing it in the cloud, AI suffers from several unique vulnerabilities that bad actors can exploit. In this talk, we examine some of these less-traditional attack vectors, and discuss what needs to be considered (and done) to make an AI system truly Trusted.
