Accuracy boost for AI Brain Computer Interface model brings life-changing technology a step closer
An artificial intelligence model, which could allow the human brain to interact with computers, prosthetic limbs and even appliances in the home, has achieved a nearly 90% accuracy rate, according to research from the University of Hertfordshire and Teesside University – outperforming any previously tested model.
Published recently in the science and technology journal Sensors, a research paper by Dr Yar Muhammad, Dr Radia Rayan Chowdhury and Dr Usman Adeel has proposed a new algorithm to detect and classify signals from the brain, which can then be used in a Brain Computer Interface (BCI).
The model achieved an accuracy rate higher than its nearest competitors, representing a significant advancement towards potential applications of the technology, which could be life-changing for disabled people in particular.
Dr Yar Muhammad, Principal Lecturer in Computer Science at the University of Hertfordshire, said:
“This algorithm will be the backbone of all kinds of BCI applications. For example, it could allow people to use brain signals to drive a wheelchair or use advanced prosthetic limbs. People could also use a BCI to switch on or off appliances in a smart home, such as the lights or heating.
“More research, funding and support is needed to make this a reality, but this is a considerable step forward – and there are plenty more exciting applications of the technology which could change the way we live.”
Brain Computer Interfaces (BCIs) allow for communication between the brain and computers. They work using brain signals, which are recorded in an electroencephalogram (EEG). In this procedure, which is similar to an ECG but for the brain, small sensors are attached to the head to record electrical activity.
However, an obstacle to developing BCIs is detecting and translating this brain activity into computer signals, because this type of data is highly individualised, meaning different people produce different results. The research seeks to address this problem by using machine learning to increase the accuracy of models used to detect and classify motor imagery from an EEG.
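To make the detect-and-classify step concrete, the sketch below trains a toy motor-imagery classifier on synthetic stand-in "EEG" trials using simple band-power features. It is illustrative only: the channel count, sampling rate, frequency band and classifier are assumptions, not the method published in the paper.

# Illustrative only: a toy motor-imagery classifier on synthetic "EEG" trials.
# Shapes, band limits and the simple band-power features are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, fs = 200, 8, 512, 256   # assumed recording setup

X_raw = rng.standard_normal((n_trials, n_channels, n_samples))  # stand-in for EEG epochs
y = rng.integers(0, 2, size=n_trials)                           # e.g. left- vs right-hand imagery

def band_power(trial, lo=8.0, hi=30.0):
    """Mean mu/beta-band power per channel, a classic motor-imagery feature."""
    freqs, psd = welch(trial, fs=fs, axis=-1)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=-1)

X = np.array([band_power(t) for t in X_raw])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on random data

Because real EEG varies so much from person to person, simple hand-crafted features like these often fall short, which is why the researchers turned to a learned, multi-branch approach.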
Dr Usman Adeel, Associate Professor of Enterprise, Knowledge and Exchange at Teesside University, said:
“Artificial intelligence has the capability to make a significant and positive impact on the way we live our lives. Health is just one of the many areas that can benefit from this technology, and we are delighted to be able to record significant advancements through this collaborative research project.”
Dr Radia Rayan Chowdhury, Data Scientist at Teesside University, said:
"This AI model is designed to enhance the accuracy of brain signal classification, making it a remarkable addition to the world of BCI. This model can be used to control prosthetic limbs or assistive devices for people with limb loss or disabilities. To apply this in the real world, more research and funding are essential. We are pleased to be a part of the advancements in this field."
Herts and Tees researchers proposed a multi-branch convolutional neural network*, called EEGNet Fusion V2. Tested on three public datasets, it outperformed other neural networks such as EEGNet, ShallowConvNet and DeepConvNet, achieving 89.6% and 87.8% accuracy rates for actual and imagined motor activity, respectively.
*A convolutional neural network is a type of machine learning algorithm that teaches itself to extract certain characteristics from raw data by filtering out unwanted characteristics.
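As a rough illustration of the multi-branch idea only (not the authors' published EEGNet Fusion V2 architecture), the sketch below applies three convolutional branches with different temporal kernel lengths to the same EEG input and fuses their features before classification; all layer sizes and input dimensions are assumed for the example.

# A minimal, hypothetical multi-branch CNN sketch for EEG classification.
# Not the authors' exact EEGNet Fusion V2; sizes and kernels are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 64      # EEG electrodes (assumed)
N_SAMPLES = 480      # time samples per trial (assumed)
N_CLASSES = 4        # motor-imagery classes (assumed)

def conv_branch(inputs, temporal_kernel):
    """One branch: temporal convolution, then a depthwise spatial filter."""
    x = layers.Conv2D(8, (1, temporal_kernel), padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((N_CHANNELS, 1), use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(0.5)(x)
    return layers.Flatten()(x)

inputs = layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1))
# Branches with short, medium and long temporal kernels capture different brain rhythms.
branches = [conv_branch(inputs, k) for k in (32, 64, 96)]
fused = layers.Concatenate()(branches)
outputs = layers.Dense(N_CLASSES, activation="softmax")(fused)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

The appeal of fusing several branches is that each one can pick up features at a different time scale, which helps the model cope with the person-to-person variability in EEG signals described above.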