Last week, I bought my first laptop and spent the entire week setting it up (i.e. getting the wallpapers and MP3s, transferring all those journal articles, etc.). Anyway, while procrastinating on deciding which psychology study on video games I would write about, I thought of writing about how video games can be used to improve skills. Given that I only make a few posts per month, I felt kind of guilty, but then I do have a busy life and this blog doesn’t have many readers.
Expert video game players often outperform non-players on measures of basic attention and performance. Such differences might result from exposure to video games or they might reflect other group differences between those people who do or do not play video games. Recent research has suggested a causal relationship between playing action video games and improvements in a variety of visual and attentional skills (e.g., [Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423, 534-537]). The current research sought to replicate and extend these results by examining both expert/non-gamer differences and the effects of video game playing on tasks tapping a wider range of cognitive abilities, including attention, memory, and executive control. Non-gamers played 20+ h of an action video game, a puzzle game, or a real-time strategy game. Expert gamers and non-gamers differed on a number of basic cognitive skills: experts could track objects moving at greater speeds, better detected changes to objects stored in visual short-term memory, switched more quickly from one task to another, and mentally rotated objects more efficiently. Strikingly, extensive video game practice did not substantially enhance performance for non-gamers on most cognitive tasks, although they did improve somewhat in mental rotation performance. Our results suggest that at least some differences between video game experts and non-gamers in basic cognitive performance result either from far more extensive video game experience or from pre-existing group differences in abilities that result in a self-selection effect.
I must say that I’m not well versed in cognitive psychology. If there’s another blog that covers this article, I’ll give the link right away.
I assume you know what memory and reasoning are, but maybe not executive functioning (or executive control, if you followed the abstract); I’ll let Wikipedia handle that one. The article is a replication of Green and Bavelier’s (2003) Nature study, and the authors made additions and improvements to examine other effects.
The authors used two samples: a cross-sectional sample to look at differences between expert gamers and non-gamers, and a longitudinal sample to test whether playing video games over a period of time improves certain cognitive abilities.
Cross-sectional sample: 11 expert gamers, defined as playing at least 7 hours per week for the past two years. They were selected for high skill in FPS games but also played other genres (there are no pure FPS gamers, much as there are few pure cases of mental disorders like depression). The authors also noted that many had played since childhood, so they’re clearly gamers. The 10 non-gamers were selected for playing less than 1 hour per week over the past two years. Interestingly, this sample is composed only of males, because of the obvious difficulty of finding female expert gamers.
For the purposes of this study they’re experts, but some might feel that in a more general context they are average gamers, not hardcore. It is possible that they don’t play long enough to reap benefits from video games. But the authors did not provide any additional information, so I can’t say more about this sample. This could just be a semantic problem of what an expert really means; perhaps another study on hardcore or pro gamers might reveal differences between them and non-gamers.
The sample size is very small compared to other video game studies, but if I recall correctly that’s the norm for cognitive, neuropsychological, or perception studies. IMO, with the right criteria (i.e. no head injuries or neurological disorders), it is possible to get by with a small sample. There are other reasons, but that’s all I can think of.
Longitudinal sample: 82 non-gamer participants, who played less than one hour per week over the past 2 years. Participants were mainly female; again, a notable demographic characteristic.
The authors used many cognitive measures, so I did my best to describe them in the simplest and clearest way possible, along with links that do a better job than I do. The measures are grouped into three sections, each investigating a cognitive domain. Again, I’m not versed enough in cognitive psychology to say why they chose these particular measures.
Visual and attentional skills
Functional field of view test: Search for a white triangle in a circle among square distracters. Participants have 24 practice trials, and 120 test trials. Previously used by Green & Bavelier (2003).
Attentional blink test: Watch a rapid series of letters; participants must identify the white letter and report whether an “x” was presented sometime after it. 15 practice trials and 144 test trials. Previously used by Green & Bavelier (2003).
Enumeration task: Dots are briefly presented and participants are asked how many there were. 32 practice trials and 160 test trials. Previously used in Green and Bavelier (2003).
Multiple object tracking: Participants track three target circles, shown in red, which turn green once all the circles start moving among 7 other green circles. Participants adjust the circles’ speed to the fastest level they can manage without losing track of the targets. If they do lose track of one, they can press a key to reveal the targets’ locations and continue tracking at the fastest speed possible. Three trials are averaged to calculate tracking speed.
Visual short-term memory: Sets of 2, 4, or 6 coloured lines with different orientations (horizontal, vertical, diagonal, etc.) are shown, then shown again either with one change (in colour or orientation) or no change. Participants report whether anything changed. There were 24 practice trials and 144 test trials.
Spatial processing and spatial memory
Spatial 2-back: Letters appear in different locations on a screen, and participants must indicate whether the current letter is in the same location as the letter from two letters back. For example: “a” appears at top right, “b” at bottom left, then “c” at top right; the participant presses a key to indicate whether “c” is in the same location as the letter from two letters ago (that would be “a”’s position, so yes). Then “d” appears at the left, and the participant answers by comparing it to “b”’s position, and so on. Or see Wikipedia’s article on the n-back task. There were 24 practice trials and 144 test trials.
Corsi block-tapping task: The closest lay description is the Simon game, except there’s no sound, all the blocks are grey, and there are nine of them. The article doesn’t say how many trials there were.
Mental rotation: Wikipedia has an article on it. Simply look at a 3D Tetris-like shape, look at another one, and say as quickly as possible whether it’s the same object. The objects are simply rotated by specific degrees. There were 30 practice trials and 128 test trials.
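The 2-back example above is easy to garble in prose, so here’s a minimal sketch of the logic (my own illustration, not the authors’ code): for each letter from the third one on, the correct answer is “yes” exactly when its location matches the location from two steps earlier.

```python
def two_back_answers(locations):
    """Correct yes/no answers for a spatial 2-back sequence: from the
    third stimulus onward, answer True if the current location matches
    the location shown two stimuli earlier."""
    return [locations[i] == locations[i - 2] for i in range(2, len(locations))]

# The example from the description: "a" top right, "b" bottom left,
# "c" top right (same as "a" -> yes), "d" left (not "b"'s spot -> no).
answers = two_back_answers(["top-right", "bottom-left", "top-right", "left"])
# answers == [True, False]
```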
Executive control and reasoning
Task switching: A single word in the article confused me: they wrote “letter” where they meant “number”. Participants had to switch between two tasks: press a key to indicate whether a number is even or odd, or whether it is higher or lower than five. Which task to perform depends on the background colour: blue for the high/low task, pink for the odd/even task. They also had to use one hand for one task and the other hand for the other. Tell me if I lost you. There were 120 single-task trials (just one task at a time), then a practice block of dual-task trials (switching between tasks), then 160 dual-task trials.
Tower of London: It looks more like the Tower of Hanoi game. The differences: participants are not limited by time, must finish within a set number of moves, and have three chances to complete each problem. There are nine problems of increasing difficulty.
Working memory operation span: Do math problems while trying to remember 3-6 words. The article doesn’t say how many trials there were.
Raven’s matrices: Wikipedia has an article on it. “In each test item, a candidate is asked to identify the missing segment required to complete a larger pattern. Many items are presented in the form of a 3×3 or 2×2 matrix, giving the test its name.”
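To make the task-switching rules concrete, here is a sketch of the stimulus-response mapping as I read it (my own reconstruction, not the authors’ materials; I’m assuming the digit 5 itself never appears, since it has no correct high/low answer):

```python
def correct_response(number, background):
    """Which response is correct on a trial: the background colour cues
    the task (blue -> high/low, pink -> odd/even)."""
    if background == "blue":       # high/low task
        return "high" if number > 5 else "low"
    if background == "pink":       # odd/even task
        return "odd" if number % 2 else "even"
    raise ValueError("unknown background colour: " + background)

# Same digit, different task depending on the background:
# correct_response(3, "blue") == "low"
# correct_response(3, "pink") == "odd"
```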
Games used: Medal of Honor: Allied Assault was used and expected to improve visual and attentional skills, spatial processing and memory, and executive functioning. A modified Tetris, with the preview screen removed, was expected to improve spatial processing. Rise of Nations was chosen for its complexity and its demands on planning and multi-tasking. The authors also argued that spatial memory, task switching, and working memory are useful in RTS games, and therefore predicted possible improvements in visual and attentional skills, spatial processing and memory, and executive functioning.
For the longitudinal participants: They were randomly assigned to play one of the games for the study’s duration or to a no-game control group. They played over a period of four to five weeks: 13 sessions of 1.5 hours, plus two 1-hour sessions (the first and last, which were also devoted to cognitive testing, giving a pre-test and a post-test). Total playtime comes to 21.5 hours. Green and Bavelier’s (2003) playtime was 10 hours.
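As a sanity check on the stated total (assuming my reading of the schedule, 13 long sessions plus the two shorter ones, is right):

```python
# 13 sessions at 1.5 h each, plus the 1-hour first and last sessions
# (my reading of the schedule described above).
total_hours = 13 * 1.5 + 2 * 1.0
# total_hours == 21.5, matching the reported total playtime
```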
Game performance was recorded via game scores to check that participants were improving over time. Of course, they were. But you never know; you might get an incompetent participant.
Cognitive testing occurred on three occasions: a pre-test, midway, and a post-test. The authors don’t report the order in which the cognitive measures were given on each occasion, but I don’t think the order really matters.
For the cross-sectional participants: no playing games, just the cognitive tests, taken once, minus the Tower of London and Raven’s matrices tasks.
First I’ll write about the differences between expert and non-expert gamers. Afterwards, the longitudinal study groups’ results.
A note on the statistical non-significance in this section: many results have p-values in the range of .15 to .20. My psych professors called a near-significant result (say, p = .06) a statistical trend, where the p-value is near the significance cutoff of .05. As I understand it, a non-significant result means the observed difference could plausibly be due to chance; it doesn’t mean there is definitely no difference, just that the data aren’t clear enough to say. I know I’m not entirely clear on the statistics either. I will discuss some reasons for the non-significance later on.
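To illustrate what those p-values mean (this is my own toy example with made-up numbers, not the authors’ analysis), here is a simple permutation test: it asks how often randomly re-shuffling the group labels produces a difference at least as large as the one observed. That frequency is the p-value; .05 is the conventional cutoff.

```python
import random

def permutation_p_value(group_a, group_b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of group means:
    the p-value is the fraction of random relabelings whose mean
    difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_iter

# Made-up reaction times (ms) for two small groups, sized like the
# cross-sectional sample (11 experts, 10 non-gamers):
experts = [510, 480, 530, 495, 505, 470, 520, 500, 515, 490, 485]
non_gamers = [530, 500, 545, 515, 525, 555, 510, 540, 520, 535]
p = permutation_p_value(experts, non_gamers)
# a p above .05 would be "not significant": the observed difference
# could plausibly arise from shuffling the labels alone
```

With samples of around 10 per group, even a real difference can easily land in the .15-.20 range, which is one reason small cognitive samples often produce “trends” rather than significant results.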
Visual and attentional tasks
- Functional field of view test: Experts seemed to do better than non-experts at finding the target triangle, but the difference was not statistically significant. No group differences found in the longitudinal sample.
- Attentional blink test: Experts seemed to do better, but no statistical significance found. No group differences found in the longitudinal sample.
- Enumeration task: Experts seemed to do better in reporting the number of dots, but no statistical significance found. No group differences found in the longitudinal sample.
- Multiple object tracking: Expert gamers were found to have faster tracking, but they’re not different in terms of accuracy. No group differences found in the longitudinal sample.
- Visual short-term memory: Experts performed better in detecting changes, especially for the large sets. No group differences found in the longitudinal sample.
Spatial processing and spatial memory
- Spatial 2-back task: Experts seemed to respond faster, but this was only a statistical trend (p = .06), and their accuracy did not differ from non-gamers’. No group differences in speed or accuracy in the longitudinal sample.
- Corsi block-tapping task: No statistical difference in the cross-sectional sample and in the longitudinal sample.
- Mental Rotation: A statistical trend (p = .13) in that experts were faster and more accurate. As for the longitudinal sample, the Tetris group was statistically faster in response time, but were not more accurate. The authors noted that this may be due to similarities between Tetris and the mental rotation task (i.e. rotating geometric figures).
Executive control and reasoning
- Task switching: Experts were found to switch tasks faster than non-experts (the authors called this a smaller switch cost) for the high/low task. No group differences were found in the longitudinal sample, although I noted a trend (p = .12) toward group differences; however, there were no interaction effects between group and time.
- Tower of London: The cross-sectional sample was not tested on that task. There were no group differences in performance in the longitudinal sample.
- Working memory operation span: There was no statistical difference found in the cross-sectional sample and the longitudinal sample in the number of words correctly recalled.
- Ravens matrices: The cross-sectional sample was not tested on that task. There were no group differences in performance in the longitudinal sample.
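For reference, the “switch cost” in the task-switching result is just the reaction-time penalty for changing tasks between consecutive trials. A minimal sketch with made-up reaction times (my own illustration, not the authors’ scoring code):

```python
def switch_cost(rts, tasks):
    """Mean reaction time on switch trials (task differs from the
    previous trial) minus mean RT on repeat trials, in ms."""
    switch, repeat = [], []
    for i in range(1, len(tasks)):
        (switch if tasks[i] != tasks[i - 1] else repeat).append(rts[i])
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

# Made-up data: responses slow down on trials where the task changes.
tasks = ["odd/even", "odd/even", "high/low", "high/low", "odd/even"]
rts   = [500, 520, 640, 540, 660]
cost = switch_cost(rts, tasks)
# cost == 120.0 here; a smaller cost means faster switching
```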
Overall, the authors were not able to replicate previous results, statistically speaking. However, there are several possible reasons why.
IMO, they should redefine expert gamers, as I find the criteria a bit low. Reading randomly through the literature, the average playtime is around 14 hours per week. I would pick participants who play more than 16 hours per week: now those are expert gamers! It is possible that Boot et al.’s sample played at the average rate or above, but they did not report such data, which means we don’t know how expert each participant really is. The same could be said of Green and Bavelier (2003). Additionally, participants were selected primarily on their FPS skills; although they may have experience with other gaming genres, it is not known how proficient they are with them. Based on the results, it is possible that these players have not played RTS games long enough to reap their benefits, but were able to reap benefits from FPS games on visual and attentional abilities. Conversely, it is also arguable that certain gaming genres have specific effects on certain cognitive abilities but not others.
It also means that the games we have now are entertaining but not useful as a general mental exercise tool. IMO, many games in America are dumbed down (looking at you, Medal of Honor and EA) to appeal to a bigger audience and bigger profit margins. If we looked for participants who play much harder (or hardcore) games with a real learning curve, I think we might see some differences. It would also be interesting to examine some crazy Japanese games that push players’ abilities to the limit (I’m thinking of the R-Type games, or the OCD-inducing sidequests in certain RPGs).
The authors wrote several paragraphs explaining why they didn’t find the expected results. Slight differences between their versions of the cognitive measures and those used in other studies might be one factor: for example, a stimulus is presented for a shorter time, or is shown in colour as opposed to greyscale, etc. This could explain why they found effects in the expected direction but failed to reach statistical significance. Furthermore, practice effects from repeated cognitive testing might obscure video game effects, despite having a control group and an expert vs. non-gamer comparison.
Another possible explanation provided by the authors is the gaming schedule the longitudinal sample went through. In Green and Bavelier (2003), participants played for one hour a day for ten consecutive days, whereas Boot et al.’s sample played 1.5-hour sessions spread over several weeks. However, the authors believed (and I agree) that such a difference is unlikely to change the results. Still, they suggested it would be interesting to see how a gaming schedule might affect individuals’ cognitive abilities.
Finally, there’s the question of the samples’ gender. The cross-sectional sample consisted only of men, whereas the longitudinal sample consisted mainly of women. The authors did not believe that gender would be a barrier to the results, since previous studies found that women’s cognitive skills also improve from playing video games. Again, it is more likely that the cognitive measures they used are the culprit.
The authors concluded that more research is needed; people should play video games for fun, not for delaying Alzheimer’s or whatever some quack might claim. They suggested deconstructing current games into simpler games to verify which gaming aspects have an effect, then adding aspects back in to look for interaction effects and such. They also wrote about how video games make players adopt flexible strategies, which may explain the improved attentional skills.
Boot, W. R., Kramer, A. F., Simons, D. J., Fabiani, M., & Gratton, G. (2008). The effects of video game playing on attention, memory, and executive control. Acta Psychologica, 129, 387-398.