In February 2016, together with my colleague Alyssa Alcorn, I held a couple of workshops at the University of Edinburgh as part of the annual Innovative Learning Week. The theme of ILW2016 was ‘Ideas in Play’. So we decided to gather a bunch of people interested in technology and autism, give them some iPads and apps, and get them to play with them. We asked the workshop attendees to write up reviews of the apps they tried out, and I’ve been publishing them on Twitter over the past few weeks using #asdtech. They’ll also all be added to my app review page in due course.
The other main output from the workshop, though, was a series of rich discussions about how to get the most out of technology – thinking about users with autism specifically. And these discussions echoed a series of chats I’ve had over the last 3 or 4 years, about autism & technology, but also about technology and child development more generally. In discussions with parents, therapists, teachers, doctors and autistic people, I find myself returning to the same themes and responding to the same questions, and so I thought it would be worth sharing some of these in a series of posts, of which this is Part One.
So, let’s go straight in with a biggie: Is screentime bad for kids?
In a nutshell, No.
Care to elaborate?
OK. First, there’s the problem with the word “screentime”. This is a spurious term used to describe any time a person spends looking at a screen – e.g. computer, TV, iPad, phone. I detest this word. It lumps together a series of discrete activities which are not in any meaningful way the same. Imagine if we added up “papertime” including reading novels & newspapers, completing worksheets in school, doing crossword puzzles and colouring in. Nonsense!
Second, there is published evidence relating screentime to poor outcomes in children – for example, linking early TV-watching with poor attention. BUT, crucially, these studies are rarely well controlled. They don’t take account of potential confounding variables. For example, what if a child is spending a lot of time watching TV because they live in an area where there is little else to do? What if there are no playgrounds nearby, or the neighbourhood isn’t safe? What if their parents are struggling, and that means this child doesn’t get much parental attention or support?* All of those things could also give rise to difficulties with attention, but that doesn’t mean the difficulties were caused by watching TV – rather, the same underlying problem caused both things.
Another flaw with many of these studies is that they don’t often demonstrate a causal direction. An association between poor attention and a lot of TV watching could reflect the fact that children who struggle to focus their attention end up watching more TV. Put yourself in the shoes of a parent of a child with attention problems, who perhaps manages only limited independent play, and it is easy to see that this might be very common. Finally, these studies are now mostly rather out of date. Most can only report on apparent links between TV and child behaviour and have very limited relevance to the much more complex and interactive types of ‘screentime’ which are now common.
Is there any decent, rigorous evidence on ‘screentime’ then?
Yes. One favourite paper of mine overcomes the flaws which are rife elsewhere in the literature. This 2011 paper by Parkes and colleagues used data on TV and video games collected from 11,000 children aged five years and related these to behavioural outcomes at seven years old. Their results, quoting from the abstract, were as follows:
Results: Watching TV for 3 h or more at 5 years predicted a 0.13 point increase (95% CI 0.03 to 0.24) in conduct problems by 7 years, compared with watching for under an hour, but playing electronic games was not associated with conduct problems. No associations were found between either type of screen time and emotional symptoms, hyperactivity/inattention, peer relationship problems or prosocial behaviour. There was no evidence of gender differences in the effect of screen time.
Let’s pick this apart. The group looked at five possible main outcomes and three possible predictors (TV-watching, video games, and combined TV & video games) for two exposure groups (mid-level exposure of 1–3 hours, and high exposure of more than 3 hours per day) compared against a ‘reference group’ (less than 1 hour per day). Of all of those thirty possible relationships, despite the enormous power from this huge sample, only one turned out to be statistically significant. The authors specifically attribute the lack of effects which have been reported elsewhere to “our more comprehensive set of potential confounders” – which is a modest way of saying that they designed their study better than other people have done. In other words, this study upholds the idea that some previously reported links between ‘screentime’ and problematic behaviour arose because those studies were not so well controlled.
But there’s still that link from TV watching to conduct problems. That sounds pretty bad, right?
Well, some details about this result are very striking to me. First, the study had enormous power because it has such a large sample size – 11,000 children. This means that if any genuine links were there to be found, we would expect this study to find them. The usual “sample size is too small” complaint certainly doesn’t apply here. So finding only one link out of a possible thirty is pretty telling already. Going further, we could even argue that this result could be a Type I error. This is when we think we have found a pattern which doesn’t truly exist – have a look at this image for an explanation… Certainly the effect size (another way of estimating how important a result is, in addition to the p-value or significance test) is very small^. So I think there is a strong possibility that this could be a Type I error.
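To get a feel for why a single significant result among thirty comparisons is unremarkable, here is a quick back-of-the-envelope sketch. It assumes, purely for simplicity, thirty independent tests at the conventional α = .05 threshold – in reality the study’s outcomes are correlated, so this is only illustrative:

```python
# Illustrative only: treats the study's 30 comparisons as independent
# tests at the conventional alpha = .05 threshold.
alpha = 0.05
n_tests = 5 * 3 * 2  # 5 outcomes x 3 predictors x 2 exposure groups = 30

# How many "significant" results we'd expect by chance alone, and the
# chance of seeing at least one, if no real effects existed at all.
expected_false_positives = n_tests * alpha      # 1.5
p_at_least_one = 1 - (1 - alpha) ** n_tests     # ~0.79

print(f"Expected false positives: {expected_false_positives:.1f}")
print(f"P(at least one false positive): {p_at_least_one:.2f}")
```

In other words, if there were no real effects at all, we would still expect around one or two “significant” findings to pop out of thirty tests – which is why the single surviving link deserves a sceptical eye.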
A second reason why I find this result striking relates to its clinical significance. Clinical significance is distinct from statistical significance in that its focus is not merely on what the numbers say, but on whether it matters. A result needs to be understood in terms of its impact in relevant clinical (for which read “real world”) settings. Take as an example the fact that women are, on average, shorter than men. This is a statistically significant finding. But does it matter in the real world? Well, not for a furniture-maker, for example. We don’t need to build special men’s chairs and tables of increased height. But it does matter in clothing production – as do a number of other physical differences between the sexes. If I asked you to make a pair of trousers, one of the key details you would surely like to know would be, “are these for a man or a woman?”.
What I’m driving at here is that whether a result is statistically significant – i.e. whether two numbers are rated as being “different” in a statistical test – is distinct from whether that difference really matters. Put yourself in the shoes of a parent whose child is currently watching more than 3 hours of TV per day. Does a 0.13-point increase in conduct problems at 7 years old provide adequate motivation to bring that TV watching down to less than 1 hour per day? I think it depends on your personal circumstances – your home environment, your family life, your work, and other important factors. Why is your child watching TV, and how does that impact your life? It might be pretty useful for you.
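To make the gap between statistical and clinical significance concrete, here is a toy calculation – my own illustration, not the study’s actual analysis – using figures loosely based on the footnoted numbers: a 0.13-point difference, a standard deviation of roughly 1.44, and the sample split evenly into two groups. With a sample this large, even a tiny difference comes out as overwhelmingly “significant”:

```python
import math

# Toy two-group z-test, illustrative only: a 0.13-point mean difference
# with SD ~1.44 (an effect of ~0.09 standard deviations), across two
# hypothetical groups of 5,500 children each.
n_per_group = 5500
mean_diff = 0.13
sd = 1.44

se = sd * math.sqrt(2 / n_per_group)       # standard error of the difference
z = mean_diff / se                         # z statistic
p_two_sided = math.erfc(z / math.sqrt(2))  # two-sided p-value
cohens_d = mean_diff / sd                  # standardized effect size

print(f"z = {z:.2f}, p = {p_two_sided:.1e}, d = {cohens_d:.2f}")
```

The p-value is minuscule, yet the effect is under a tenth of a standard deviation – a textbook case of a result that sails through the statistical test while leaving the “does it matter?” question wide open.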
I certainly think that there is not enough evidence here to support the current trend for dramatic, black and white pronouncements on ‘screentime’ for kids. In fact, this focus on the screen is missing the point entirely. It is likely that families would get more benefit if we focused recommendations, and support, in much more meaningful domains. The authors of this paper seem to agree. They conclude that:
Our findings do not demonstrate that interventions to reduce screen exposure will improve psychosocial adjustment. Indeed, they suggest that interventions in respect of family and child characteristics, rather than a narrow focus on screen exposure, are more likely to improve outcomes.
So our kids can watch as much TV as they want then?
Well… I wouldn’t say that exactly. What the data are showing us is that middle class, well-to-do parents are keeping a lid on TV watching and video games, and these kids are achieving better outcomes. This link is almost certainly driven not by a focus on limiting ‘screentime’ but instead by the usual advantages experienced by privileged socio-economic groups: good schools, play dates, sports clubs, trips out, having a garden & loads of toys at home. Kids get a lot of benefit from these sorts of things, and if they’re doing all that stuff, their time watching TV or playing video games will necessarily be limited. So limited screentime is not causal, but it is a side-effect of something which is important. This, to me, is very different from the “turn off the TV, you’re a bad parent” message we’re currently hearing across the media.
So let’s frame our messages to parents around this idea – that in an ideal world all children would have access to variety, choice, challenge and inspiration. That, as a society, we want to increase availability of such opportunities for all families. And that watching TV, playing video games, Skype-ing far-flung relations and going online to explore your interests can be a part of this, rather than an inferior alternative.
Yep. But I’ll be back with more posts on this topic in the future, thinking about technology addiction, technology in relation to motor development and obesity, educational technology, and added-value in technology. Watch this space!
* Please don’t read this to mean that any child watching “a lot” of TV is being neglected by their parents. There are loads of reasons why this might happen and I reject the notion that large amounts of ‘screentime’ are a sign of poor parenting. For example, in some cases, time to play video games or watch TV might be a sensitive response to your child’s needs.
^ For nerds, the p-value of this result is p=.01 precisely, and the 95% confidence interval for the size of the increase in conduct problems, as reported in the abstract, is 0.03 – 0.24. The reported estimated effect size (approximate as data are non-normal) is 0.09 of a standard deviation.