The goal of our project was to develop adaptive, computer-based cognitive training for post-stroke rehabilitation. We focused on prospective memory (PM), as PM failure can interfere with independent living: it can result in forgetting to take medication, to switch off the stove, or to attend doctor’s appointments. Recent evidence in the literature suggests that using visual mnemonics may help improve the skills underlying prospective memory. Starting from those findings, we developed a comprehensive treatment based on visual imagery, and a Virtual Reality (VR) environment in which stroke survivors can practise and improve their prospective memory.
We have evaluated our approach with three groups of people: healthy young people, healthy older people, and stroke survivors. Running several studies gave us a cross-sectional view of how PM works across these age ranges and, furthermore, of whether the treatment is a successful strategy for improving PM skills. The first two studies were short in duration and focused on the effect of our visual imagery training alone on the two groups of healthy participants. The third study involved a longer, more sophisticated visual imagery training, followed by practice in the VR environment.
The first study showed that young healthy people generally do not have problems with prospective memory. When asked to use visual mnemonics, high scorers in the treatment group seemed to do better, recalling more items than their counterparts in the control group. Low scorers, however, either did not benefit from visual mnemonics or chose not to use them.
The second study demonstrated beneficial effects of our treatment on older healthy people, showing a significant increase in their ability to recall PM tasks after a short period of training. In the study, the participants were taught how to use visual imagery by interacting with a computer-based tutorial (10 minutes), and were later evaluated on memorizing and performing PM tasks by interacting with pre-recorded videos. Even though the session was short (2 hours), there was a statistically significant difference in the participants’ ability to recall PM tasks. Using visual mnemonics and making the scenario personal and concrete in one’s mind significantly improved the participants’ chances of recall.
The third study involved recruiting stroke survivors. Each participant attended ten sessions over a period of 10 weeks. We faced considerable difficulty recruiting participants who met the inclusion/exclusion criteria, and the study was completed only recently (the last session was held on October 15, 2014). We collected a large amount of data, which we are currently analysing. The most important finding so far is that our intervention improved participants’ PM skills significantly, and that the effect is stable (as measured four weeks after the end of treatment).
When thinking about memory we often think about remembering past events. What did I do for my last birthday? What did I have for dinner yesterday? Where did I go for my last holiday? Remembering things from the past is called retrospective memory. Retrospective memory only covers one aspect of memory. The other is called prospective memory. Prospective memory is remembering future events. We use prospective memory very often in our daily lives. "I need to remember to go to my doctor next Tuesday" is an example of prospective memory. Anything that involves thinking about future events or planning requires prospective memory. Prospective memory is essential to live independently and safely. Remembering the steps that need to be taken before that deadline at work or remembering to turn the stove off after a set amount of time requires prospective memory.
Interestingly, for prospective memory to work well, a person must also have relatively good retrospective memory: one must not only remember that something needs to be done in the future, but also what it is that needs to be done.
The tasks that require prospective memory are often classified into two (or sometimes three) groups. Time-based tasks need to be done at a certain time. For example: my appointment with my client is tomorrow at 2pm; at 6pm, I want to watch the news on TV. Event-based tasks occur when a certain event happens. For example, after dinner I need to take my medication (dinner might occur at approximately the same time each day, but the medication needs to be taken after the event "dinner"); when I go past the supermarket, I need to pop in and buy some milk. Sometimes we also talk about a third type: activity-based tasks. An activity-based task is very similar to an event-based task, in that the task occurs after an event, but here the triggering events are so closely related that they form a single activity, and we often view them as sub-tasks. Going out to play tennis might require a number of sub-tasks, each of which triggers the next: putting on your tennis shoes might prompt you to grab your tennis racquet. Each of the tasks required to get you to the tennis court might be classified as an activity-based task.
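The distinction between the first two task types can be sketched in code: the two differ only in what triggers them (a clock time versus an external event). The classes and names below are illustrative, not taken from the project's software.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TimeBasedTask:
    description: str
    due: datetime  # triggered when the clock reaches this time

    def is_triggered(self, now, last_event=None):
        return now >= self.due


@dataclass
class EventBasedTask:
    description: str
    trigger_event: str  # triggered when this event occurs, regardless of time

    def is_triggered(self, now, last_event=None):
        return last_event == self.trigger_event


# "Take medication after dinner" fires on the event, not the clock;
# "watch the news at 6pm" fires on the clock, regardless of events.
meds = EventBasedTask("take medication", trigger_event="dinner")
news = TimeBasedTask("watch the news", due=datetime(2014, 10, 15, 18, 0))

assert meds.is_triggered(datetime(2014, 10, 15, 17, 30), last_event="dinner")
assert not news.is_triggered(datetime(2014, 10, 15, 17, 30))
```

An activity-based task could be modelled the same way, with each sub-task's completion acting as the trigger event for the next.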
People who have suffered a brain injury often also have memory problems. Depending on the type and location of the injury, a person might have reduced performance in both retrospective and prospective memory. Because a person's independence and safety often depend on prospective memory, its impairment is one of the main cognitive reasons for a loss of independence, and sometimes even for the need for long-term (and full-time) carers. It might prevent the person from being able to work, or require carers to ensure that their safety is not compromised (e.g. that the stove gets turned off after cooking, and that the correct medication is taken at the right time).
Stroke is one of the leading causes of death and disability in New Zealand. With our ageing population and the incidence of stroke at younger ages, the need to provide cognitive support and rehabilitation is increasing. The rehabilitation provided needs to be cost-effective and ideally (eventually) customised to the individual's needs. Customising it might simply mean scheduling the training at the best times for the individual (particularly important for those who have suffered a brain injury), or altering the levels of difficulty and providing additional support.
During the course of this project, a few sub-projects were conducted either as Honours or Summer Projects.
Two Honours projects investigated brain-computer interfaces. Electroencephalography (EEG) provides a means of accessing neural activity, allowing a computer to analyse information from the brainwave patterns produced by thought. The Emotiv EPOC is a commercially available 14-channel wireless EEG device. Its manufacturer claims that it can easily be trained to, for example, control robots or wheelchairs, or play games. There have been reports in the literature of using the EPOC to control software through facial expressions and thoughts. The device is inexpensive, which was our motivation to investigate its potential use in the VR environment.
The first EPOC Honours project was conducted by Matthew Lang in 2012, who investigated the usability of the EPOC as an input device. He conducted two studies, both with healthy people. The first study investigated whether it was possible to train the EPOC to perform two different actions (moving a virtual cube left or right) in a short period of time (11 minutes). The results of this study with 10 participants were disappointing: the participants achieved an average success rate of only 36%. However, the actions performed were artificial, so in the second study Matthew developed a small software system in which participants were asked to select an answer to a given question from three options. This study was conducted with 21 (healthy) participants, who trained the EPOC for an average of 15 minutes and achieved an average success rate of 47%. Some participants also reported discomfort after about half an hour of wearing the device. The conclusion was that the EPOC required too much training time, and therefore would not be a good solution for stroke survivors.
The second EPOC project was conducted by Tegan Harrison in her 2013 Honours project, in which she focused on tracking the user's emotions using the Emotiv EPOC. The affective state of the user is important, as negative emotions (such as stress and frustration) may significantly reduce the effect of training. Tegan performed a study comparing the affective states identified by the EPOC to those induced by a validated set of photographs. We found no significant relationships between the participants' self-report scores and the emotional states reported by the EPOC.
During summer 2012/2013, Sam Dopping-Hepenstal was awarded a UC summer scholarship, partly funded from our Marsden grant. Sam investigated whether our VR environment could be extended into a tool for testing a person's prospective memory, and developed a modified version of the environment. Initially, the tasks are presented to the user to memorize, and the user is then tested to determine whether he/she can remember them. After that, the user can practise in the VR environment, and finally perform the test within the environment.
During summer 2013/2014, Anthony Bracegirdle was awarded a UC summer scholarship, partly funded from our Marsden grant. In the summer project, Anthony experimented with two devices: the Razer Hydra and the Oculus Rift. The Razer Hydra is a set of two hand-held motion-sensing controllers that can be used to navigate around the environment and interact with it. The Oculus Rift is a VR headset that gives the user a stereoscopic view and a sense of actually being in the environment, providing full 3D immersion. Anthony conducted a study with 24 participants, each of whom tested the system a number of times, completing a set of household tasks within the environment. Each participant trialled the system six times: with three different interaction devices (keyboard, joystick, and Razer Hydra) without the Oculus Rift, and with the same devices with the Oculus Rift. The participants completed several tasks in a specific order, such as taking items from the pantry or turning on the radio, then completed a short survey and rated their experiences with the devices. Users preferred the joystick for interaction, and the Oculus Rift induced motion sickness in an alarming number of participants: 18 experienced motion sickness, 5 of them so severely that they had to stop and finish the experiment early.
In 2014, Anthony completed his Honours project, in which he investigated another input device, the Leap Motion controller. This inexpensive gesture-based device was released commercially in 2013. The user places the device in front of him/her and gestures above it.
Anthony integrated the Leap Motion device into the VR environment and designed three different gesture modes, two unimanual and one bimanual. The first mode uses the airplane metaphor: the user's dominant hand controls forward/backward movement with its pitch and rotation with its roll, while the degree of inclination controls the speed of movement. The bimanual mode is also based on the airplane metaphor, but uses the dominant hand for rotation and the other hand for forward/backward movement. The third mode is positional: moving the hand forward/backward and side to side is mapped to corresponding movements in the environment. A study with 30 participants was conducted to investigate the viability of the Leap Motion controller as an interaction device, and to determine whether physical fatigue would be an obstacle to its use. Each participant used all three modes, in randomized order to reduce practice effects. The participants strongly preferred the positional mode and strongly disliked the bimanual mode; they were also significantly slower using the bimanual mode. The two unimanual modes were competitive with the joystick. The results therefore show that the Leap Motion controller is a viable device for use in the VR environment.
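The unimanual airplane mapping can be sketched as a small per-frame update function. This is a simplified illustration under assumed parameters (dead zone, saturation angle, speed limits); the project's actual mapping and tuning are not reproduced here.

```python
import math


def airplane_mode_update(pitch, roll, dt, max_speed=2.0, max_turn=90.0):
    """Airplane-metaphor mapping for the dominant hand: pitch drives
    forward/backward movement, roll drives rotation, and the degree of
    inclination scales the speed. Angles in radians, dt in seconds."""
    dead_zone = math.radians(5)  # ignore small tremors near level

    def scaled(angle, limit):
        if abs(angle) < dead_zone:
            return 0.0
        # Saturate at 45 degrees of inclination (an assumed constant).
        return max(-1.0, min(1.0, angle / math.radians(45))) * limit

    forward_delta = scaled(pitch, max_speed) * dt  # metres moved this frame
    heading_delta = scaled(roll, max_turn) * dt    # degrees turned this frame
    return forward_delta, heading_delta
```

The bimanual variant would read pitch from one hand and roll from the other; the positional mode would instead map the hand's displacement from a rest position directly to movement.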
Over the 2013/14 summer, Scott Ogden worked on a project: "Creating and evaluating a model for a user in a rehabilitative virtual-reality environment" as part of his COSC486 Research Project course. In this project, a constraint-based model was developed for the VR environment. Users could now be given customised feedback and be modelled according to their behaviour within the environment.
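A constraint-based model pairs a relevance condition (when does the constraint apply?) with a satisfaction condition (what must hold if it does?), and attaches feedback to violations. The following is a minimal sketch of the idea; the class names and the example constraint are illustrative, not from Scott's project.

```python
class Constraint:
    """One constraint in a constraint-based user model."""

    def __init__(self, relevance, satisfaction, feedback):
        self.relevance = relevance        # when does this constraint apply?
        self.satisfaction = satisfaction  # what must hold if it applies?
        self.feedback = feedback          # message shown on violation


def violated_constraints(action, constraints):
    """Return the feedback for every constraint the action violates."""
    return [c.feedback for c in constraints
            if c.relevance(action) and not c.satisfaction(action)]


# Hypothetical example: in the VR household, taking medication is only
# correct once dinner has been completed.
meds_after_dinner = Constraint(
    relevance=lambda a: a["task"] == "take_medication",
    satisfaction=lambda a: "dinner" in a["completed"],
    feedback="Remember: the medication should be taken after dinner.")

messages = violated_constraints(
    {"task": "take_medication", "completed": []}, [meds_after_dinner])
```

Feedback collected this way can be shown immediately, and the history of satisfied versus violated constraints forms the user model that drives customisation.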
In 2012, we began investigating whether the VR environment could be controlled using eye movements. In that project, Jon Rutherford used the Tobii eye tracker as the input device. The Tobii provides sufficiently precise information about the user's eye gaze, and is robust to head movements. A version of the VR environment controlled by eye gaze was developed, which allowed the user to move around by looking to the left or right of the viewport; to select or interact with objects, the user could blink. This version has not yet been evaluated in a study.
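The gaze-control scheme described above reduces to a simple mapping from gaze position and blink state to a command. The function and the 20% margin below are an illustrative sketch, not the project's actual implementation.

```python
def gaze_to_command(gaze_x, viewport_width, blink, margin=0.2):
    """Map a gaze sample to a navigation command: looking into the left or
    right margin of the viewport turns the camera, and a blink selects the
    object currently under the gaze point."""
    if blink:
        return "select"
    if gaze_x < viewport_width * margin:
        return "turn_left"
    if gaze_x > viewport_width * (1 - margin):
        return "turn_right"
    return "none"  # gaze in the central region: no movement
```

In practice such a scheme also needs debouncing, since natural blinks and brief glances to the side would otherwise trigger unintended commands.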
The funding for this project was provided through a Marsden Grant (ending November 2014). The Royal Society of New Zealand manages the Marsden Fund, which supports excellence in science, engineering, maths, social sciences and the humanities in New Zealand by providing grants for investigator-initiated research. Some of the sub-projects were also funded by the UC Summer Scholarship programme.
If you are a researcher, clinician, or prospective thesis student interested in continuing this research with us, please contact us.