Much ado about details: violent video game effects in Eastern and Western nations and the back-and-forth comments

Kana Minami (Minami-ke) is having a hard time with three pots of curry.

Anderson and company published a meta-analytic study in the March 2010 issue of Psychological Bulletin. Naturally, people and experts started throwing comments and outrage here and there. What I didn’t realize is that this single study has three accompanying academic commentaries, and I was struggling with what to do with them. Gamepolitics picked up the news from a post on the Washington Post. The university press release is here.

Abstract (Anderson)

Abstract (Ferguson)

Abstract (Bushman)

I had known of the studies’ existence for about a month, but I was hampered by daytime naps and coma-inducing reading.

Media violence critic Dr. Christopher Ferguson and John Kilburn wrote a criticism of this study, and Dr. Brad Bushman and colleagues (who are co-authors of the original study) wrote a counter-criticism to theirs. There’s a fourth commentary by Huesmann, but I haven’t received that paper yet and I’m probably not going to read it for this post. I’ll narrate the criticisms, since they’re more entertaining than the coma-inducing meta-analysis. My paraphrasing may stray from the authors’ actual tone and attitude.

Earlier, in 2007 and 2009, Ferguson and Kilburn published meta-analyses whose results showed media violence having little effect on aggression or violent behaviour. Their verdict on the earlier meta-analyses by Anderson and company could be summed up in one sentence: they were “doing it wrong” and their results were inflated. So Anderson et al. took this as a challenge and did a bigger and more elaborate meta-analysis, including studies from Eastern nations, which in reality come mostly from Japan. Ferguson and Kilburn say they’re still doing it wrong, while Bushman says they haven’t done it wrong and it’s Ferguson and Kilburn who got it wrong. But what details were they arguing over?

Point one: What to include or exclude in the meta-analysis?

Ferguson (2009) said that there is publication bias in the literature, so they used the “trim and fill” method to get a more accurate effect size estimate. Lo and behold, the numbers they found were pretty small.

Anderson replied that Ferguson can’t use those numbers, because the filled-in data points don’t really exist, and the creators of the “trim and fill” method designed it to test the impact of missing studies, not to produce effect size estimates. So instead, Anderson et al. included unpublished studies (i.e. studies not in peer-reviewed journals, such as books or posters, e.g. Grand Theft Childhood) in their meta-analysis.
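
To make the disagreement a bit more concrete, here’s a toy sketch (in Python, with made-up correlations) of the general idea behind trim and fill: if the funnel plot looks asymmetric because small studies with weak effects seem to be missing, the method trims the most extreme published effects, re-estimates the centre, imputes mirror-image “missing” studies around it, and re-pools. The numbers and the crude mirroring below are mine for illustration; this is not the actual Duval and Tweedie estimator, and none of the values come from the papers.

```python
import numpy as np

# Made-up published correlations for illustration only (a real analysis works on
# Fisher-z transformed, sample-size-weighted effects; this toy skips all of that).
observed_r = np.array([0.25, 0.22, 0.20, 0.18, 0.15, 0.12, 0.10])
naive = observed_r.mean()
print(f"Naive pooled effect from 'published' studies: r = {naive:.3f}")

# Trim-and-fill intuition:
# 1) trim the k most extreme effects that make the funnel plot asymmetric,
# 2) re-estimate the centre from the remaining studies,
# 3) "fill" in mirror-image missing studies around that centre and re-pool.
k = 3
trimmed_out = np.sort(observed_r)[-k:]   # the k largest effects
kept = np.sort(observed_r)[:-k]
centre = kept.mean()
imputed = 2 * centre - trimmed_out       # hypothetical unpublished counterparts
adjusted = np.concatenate([observed_r, imputed]).mean()

print(f"Imputed 'missing' effects: {np.round(imputed, 3)}")
print(f"Adjusted pooled effect:    r = {adjusted:.3f}")
```

The fight is over step 3: Ferguson reads the drop in the adjusted number as evidence that the published literature overstates the effect, while Anderson et al. object that the imputed studies don’t exist and that the method was meant as a sensitivity check for missing studies, not as a way to get a better effect size estimate.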

Ferguson replied that Anderson failed to mention that Ferguson (2009) used a variety of publication bias analyses besides trim and fill, and that these gave concordant numbers, so the estimate seemed acceptable. Besides, is including unpublished studies in a meta-analysis, though common, really a widely accepted practice? Where did they get their unpublished studies, and why weren’t this or that researcher contacted for theirs? Some in-press or under-review studies were missing. And why do most of the unpublished studies in the meta-analysis come from Anderson et al.’s own work? Ferguson and Kilburn see publication bias here.

Bushman replied that a study being published and refereed doesn’t mean it’s of higher quality, so publication status shouldn’t be used as a criterion for including or excluding studies. Sure, unpublished studies can introduce bias under some conditions, but limiting a meta-analysis to published studies only isn’t recommended either. Furthermore, they didn’t miss any studies: the ones Ferguson mentions simply weren’t available when they were doing the analysis. They could redo the analysis and include the new studies Ferguson wants included, but those few studies wouldn’t change the numbers significantly.

Ferguson said that new or “hot” research areas tend to acquire bias through politicization of the field. They pointed out that many published studies are supportive of media violence effects, whereas unpublished ones tend to go against the theory.

Bushman admitted this could happen, but noted it can go both ways: studies with non-significant results can be suppressed, but so can studies with huge effects. So what’s the problem, given that Anderson’s results showed no evidence of publication bias?

Point two: Which studies are methodologically classed as “best practice”?

Anderson et al. used six criteria to classify a study as “best practice” in their meta-analysis:

1- The compared levels of the independent variable were appropriate for testing the hypothesis. (e.g. testing clearly violent vs. clearly non-violent games)

2- The independent variable was properly operationalized. (e.g. testing the amount of time playing violent video games, not total game play time)

3- The study had sufficient internal validity in all other respects (e.g. participants were randomly assigned to conditions, with no obvious confounds)

4- The outcome measure used was appropriate for testing the hypothesis (e.g. did the participants push some confederate because they were called a cheater?)

5- The outcome variable could reasonably be expected to be influenced by the independent variable if the hypothesis was true. (e.g. participants were more likely to push the confederate because they played a violent video game, not merely because of personality differences)

6- The outcome variable was properly computed. (e.g. scores of aggression before and after)

Seems reasonable, said Ferguson, but Anderson hasn’t addressed standardized measurement of aggression. One study uses a measure in one way, while another study uses the same measure in a different way. All these different ways of scoring the same measure can inflate effect sizes, especially when a scoring is chosen because it fits a priori theories. Why can’t we have a standard measure that is applied the same way across all studies? At the very least, it would prevent researchers from picking anywhere along the range from “worst” outcome to “best” outcome: you get what you get and live with it.
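
To see why Ferguson worries about flexible scoring, here’s a small, entirely hypothetical sketch: the same invented raw data from a noise-blast-style aggression task, scored three different ways, yields three different effect sizes. The task, the numbers, and the scoring rules are all made up for illustration and don’t come from any of the studies in the meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # participants per (hypothetical) condition

# 0 = played a nonviolent game, 1 = played a violent game
condition = np.repeat([0, 1], n)

# Invented raw scores from a noise-blast style task: intensity (1-10 scale)
# and duration (ms). Only intensity is given a small built-in group difference.
intensity = rng.normal(5.0 + 0.5 * condition, 2.0)
duration = rng.normal(500, 150, size=2 * n)

def point_biserial(group, outcome):
    """Correlation between a 0/1 group variable and a continuous outcome."""
    return np.corrcoef(group, outcome)[0, 1]

scorings = {
    "intensity only": intensity,
    "duration only": duration,
    "intensity x duration": intensity * duration,
}
for name, outcome in scorings.items():
    print(f"{name:22s} r = {point_biserial(condition, outcome):+.3f}")
```

A researcher who is free to decide after the fact which scoring to report can, consciously or not, pick whichever gives the biggest number; that flexibility is exactly the inflation Ferguson is worried about, and a single standardized scoring would take the choice away.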

Bushman said that if there were systematic bias in how measures were chosen, then we should have seen some measures reporting larger effect sizes than others, but that was not the case. Wait a minute, did Bushman misread Ferguson’s point, or did I read it wrong?

Point three: How big is the effect size and does it really matter?

I have no clue which number in the Anderson paper I’m supposed to be looking at… One passage that’s repeated is that small effect sizes warrant serious concern: when the effects accumulate over time, when a large portion of the population is exposed to the risk factor, or when the consequences are severe, statistically small effects become important.

Ferguson said that the r = .15 effect size was too liberal because it did not control for other risk factors; had it done so, the number would have dropped to essentially zero, a non-significant effect. Ferguson also pointed out that the effect size is for nonserious aggression. Even if r = .15 is accurate, it’s still small compared with other risk factors with bigger effect sizes, and they produced a table of known risk factors for aggression, such as genetic influences on antisocial behaviour (r = .75).
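
For readers who, like me, lose track of what these r values actually mean, here’s a quick back-of-the-envelope conversion using standard formulas (the only inputs are the two r values quoted above; everything else is just arithmetic): r = .15 corresponds to roughly 2% of variance explained, versus about 56% for the r = .75 genetic estimate in Ferguson’s comparison table.

```python
import math

def describe(label, r):
    variance_explained = r ** 2             # proportion of shared variance
    d = 2 * r / math.sqrt(1 - r ** 2)       # standard r -> Cohen's d conversion
    print(f"{label:32s} r = {r:.2f}  r^2 = {variance_explained:.3f}  d = {d:.2f}")

describe("violent games -> aggression", 0.15)    # Anderson et al.'s pooled estimate
describe("genes -> antisocial behaviour", 0.75)  # from Ferguson's comparison table
```

Whether those roughly 2% of variance are trivial or a legitimate public-health concern (because the exposure is so widespread) is precisely what the two camps keep arguing about.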

Bushman replied that the numbers are actually larger and that Ferguson was looking at the wrong effect size, or something to that effect. Bushman then compared their effect size against effect sizes from other social psychological research (drawn from 322 meta-analyses) and found the average to be about .2. The number is that size because human behaviour is very complex and has multiple causes. They noted that random assignment of participants in experiments helps control confounding variables, although (IMO) it would’ve been nice to study the juvenile prison population. They stand firm that the violent video game effect is comparable to other risk factors, and appealed to authoritative organizations, such as the American Academy of Pediatrics, which have issued statements that media violence makes children more aggressive. Honestly, I think those organizations should look at the newer studies and decide whether to reassert their previous statements.

Ferguson brings up the correlation between video game sales and violent crime statistics and shows that there’s a strong negative correlation: as game sales increase, violent crime decreases. Ferguson argues that the focus on violent video games distracts society from more important causes of aggression such as poverty, peer influence, family violence, genetics, and so on.
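
The sales-versus-crime argument is just a bivariate correlation computed over yearly figures. Here’s what that computation looks like on placeholder numbers; the values below are invented to mimic the shape of the real series (rising sales, falling crime) and are not actual sales or crime statistics.

```python
import numpy as np

# Placeholder yearly figures -- NOT real sales or crime data.
years = np.arange(1996, 2008)
game_sales = np.array([3.2, 3.7, 4.4, 5.1, 5.6, 6.3,
                       6.9, 7.0, 7.6, 8.2, 9.1, 9.5])        # e.g. billions of USD
violent_crime = np.array([630, 610, 565, 525, 505, 500,
                          495, 475, 465, 470, 465, 455])     # e.g. offences per 100,000

r = np.corrcoef(game_sales, violent_crime)[0, 1]
print(f"Correlation between game sales and violent crime: r = {r:.2f}")
```

On series shaped like these, the correlation comes out strongly negative, which is the pattern Ferguson points to; Bushman’s counter, below, is that so many other things changed over the same years that the number can’t be read as evidence either way.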

Bushman replied that they never used violent crime statistics to demonstrate media violence effects on society, since crime is influenced by oh so many risk factors; any change in violent crime statistics would be difficult to interpret. No media violence researcher (unless you include the “experts” from Fox News) has asserted that media violence is the most important factor for aggressive behaviour. But media violence is the easiest risk factor for parents to control.

Well, I’ll stop here. I think I’ve covered the most important details from these back-and-forth commentaries, and I’ll just go rest my head on the floor.

Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., Rothstein, H. R., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, 136, 151-173. doi: 10.1037/a0018251

Ferguson, C. J., & Kilburn, J. (2010). Much ado about nothing: The misestimation and overinterpretation of violent video game effects in Eastern and Western nations: Comment on Anderson et al. (2010). Psychological Bulletin, 136, 174-178. doi: 10.1037/a0018566

Bushman, B. J., Rothstein, H. R., & Anderson, C. A. (2010). Much ado about something: Violent video game effects and a school of red herring: Reply to Ferguson and Kilburn (2010). Psychological Bulletin, 136, 182-187. doi: 10.1037/a0018718
