Is technology use impacting the brains of our kids?


I do wonder if the line that marks a distinct technology gap runs between those who see kids with phones and respond with a shake of the head, mumbling about it eating their brains, and those who don’t notice because they are busy surfing the web on their own mobiles.

There is a new study

Questions about technology use have been around for some time now, and the prevailing concern is that it causes deep harm. Just how true is that?

In a newly published study, researchers at the University of Oxford examined the association between technology use and well-being. The study drew on data from 300,000 adolescents and parents in both the UK and USA, so the sample is large enough for the results to be genuinely meaningful. There have of course been other studies, but this is so far the most comprehensive of its kind.

There is still a risk that they might have cherry-picked and, in effect, p-hacked their results to conform to a prevailing bias. So did they?

Let’s take a look.

The Study: The association between adolescent well-being and digital technology use

Their results clearly demonstrate that technology use is inherently evil because it will completely and totally rot the brains of your kids … oh wait, that’s not what they found; that’s me channeling somebody who starts out with a specific bias. Such thinking is perhaps exactly what the researchers had in mind when they wrote …

The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many variables and observations are often explored with an analytical flexibility that marks small effects as statistically significant, thereby leading to potential false positives and conflicting results.

In other words, for this study the researchers approached the topic with an acute awareness of the risks involved in dealing with social data.

What did they actually find?

0.4%

Here is what they really discovered …

The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.

In other words, if you are truly and sincerely worried about technology use by your kids, then you should also be equally concerned about them having potatoes in their diet.

Why?

Basically because, within the same dataset, the association with their overall well-being is about the same size … roughly 0.4%.

Potatoes!

This comparison came from the research team …

‘Our findings demonstrate that screen use itself has at most a tiny association with youth mental health,’ said lead researcher Professor Andrew Przybylski, Director of Research at the Oxford Internet Institute, University of Oxford. ‘The 0.4% contribution of screen use on young people’s mental health needs to be put in context for parents and policymakers. Within the same dataset, we were able to demonstrate that including potatoes in your diet showed a similar association with adolescent wellbeing. Wearing corrective lenses had an even worse association.’

There really are better things to worry about

Smoking marijuana and being bullied were found, on average, to have 2.7 times and 4.3 times more negative associations with adolescent mental health than screen use, respectively. Activities like getting enough sleep and eating breakfast, often overlooked in media coverage, had a much stronger association with wellbeing than technology use did.

Can we be confident in their conclusion?

The short answer is “yes”.

The longer answer involves understanding what they did to address the flaws that have led to some supposed “revelations” in previous studies.

(Stick with me here)

Up until now there has been no solid scientific consensus on screen use and mental health. This is demonstrated by the fact that different groups of researchers can use exactly the same dataset and reach fundamentally different conclusions. That happens because researchers inevitably bring their own biases to an analysis, so it becomes rather important to devise ways of making sure we don’t end up fooling ourselves.

Bias

Amy Orben, College Lecturer at The Queen’s College, University of Oxford, and an author of the study, explains this as follows …

‘Of the three datasets we analysed for this study, we found over 600 million possible ways to analyse the data. We calculated a large sample of these and found that – if you wanted – you could come up with a large range of positive or negative associations between technology and wellbeing, or no effect at all.’
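To get a feel for how the number of possible analyses explodes into the hundreds of millions, consider how quickly the analytical choices multiply. The counts below are purely illustrative (they are not the study’s actual specification grid), but the arithmetic is the point:

```python
# Purely illustrative counts: not the study's actual specification grid.
tech_measures = 6        # plausible ways to operationalise "technology use"
wellbeing_measures = 12  # plausible ways to operationalise "well-being"
controls = 20            # optional control variables; any subset may be included

# Every subset of the controls is a distinct specification: 2**20 subsets.
specifications = tech_measures * wellbeing_measures * 2 ** controls
print(f"{specifications:,} possible analyses")  # 75,497,472 from modest choices
```

A handful of perfectly defensible choices multiplies into tens of millions of analytical pathways, which is why any single reported result can be so misleading.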

Professor Przybylski adds …

‘We needed to take the topic beyond cherry-picked results, so we developed an approach that helped us harvest the whole orchard.’

A Bit of Detail

This next section might get a tad too detailed, so feel free to skip over this bit by jumping directly to the next heading. I’ve included it because it gives an interesting insight into how researchers can fool themselves.

There are at least three reasons why the inferences drawn by behavioural scientists from large-scale datasets might produce divergent findings.

First, these datasets are mostly collected in collaboration with multidisciplinary research councils and are characterized by a battery of items meant to be completed by postal survey, face-to-face or telephone interview. Though research councils engage in public consultations, the pre-tested or validated scales common in clinical, social or personality psychology are often abbreviated or altered to reduce participant burden. Scientists wishing to make inferences about the effects of digital technology using these data need to make numerous decisions about how to analyse, combine and interpret the measures. Taking advantage of these valuable datasets is therefore fraught with many subjective analytical decisions, which can lead to high numbers of researcher degrees of freedom. With nearly all decisions taken after the data are known, these are not apparent to those reading the published paper highlighting only the final analytical pathway.

The second possible explanation for conflicting patterns of effects found in large-scale datasets is rooted in the scale of the data analysed. Compared to the laboratory- and community-based samples typical of behavioural research (mostly <1,000), large-scale social datasets feature high numbers of participant observations (ranging from 5,000 to 5,000,000). This means that very small covariations (for example, r < 0.01) between self-report items will result in compelling evidence for rejecting the null hypothesis at alpha-levels typically interpreted as statistically significant by behavioural scientists (that is, P<0.05).

Thirdly, it is important to note that most datasets are cross-sectional and therefore provide only correlational evidence, making it difficult to pinpoint causes and effects. Thus, large-scale datasets are simultaneously attractive and problematic for researchers, peer reviewers and the public. They are a resource for testing behavioural theories at scale but are, at the same time, inherently susceptible to false positives and significant but minute effects using the alpha-levels traditionally employed in behavioural science.
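That second point, that huge samples make trivial effects “statistically significant”, is easy to demonstrate for yourself. Below is a minimal sketch using simulated data (not the study’s datasets): a true correlation of about r = 0.063 explains roughly 0.4% of the variance, yet with 300,000 observations the p-value is vanishingly small:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A correlation of r ~ 0.063 explains about 0.4% of variance (r**2 ~ 0.004),
# roughly the size of the association the study reports.
n = 300_000
r_true = 0.063

x = rng.standard_normal(n)  # stand-in for "screen time"
y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)  # "well-being"

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, variance explained = {r**2:.2%}, p = {p:.2e}")
# The p-value is astronomically small, yet the effect explains well under
# 1% of the variation in the outcome: statistically significant, practically tiny.
```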

What exactly did they do to overcome the data challenges?

They conducted Specification Curve Analysis.

They did what?

Rather than settling on a single way to analyse the data, they ran the analysis across every defensible combination of measures and statistical choices and then examined the whole distribution of results. They also used information from other survey questions (sleep, diet, bullying and so on) to put the technology-related answers into a bigger context.

If you measure mental health and look only for an association between that measurement and screen usage, you can end up with misleading results; you also need to consider what else is going on. In very basic terms, you need to move away from mere statistical significance and instead examine practical significance.
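For the curious, here is a minimal sketch of what a specification curve analysis looks like in code. This is not the study’s actual pipeline: the column names are hypothetical, and a real SCA adds bootstrapped inference across the whole curve. The idea is simply to fit every defensible specification and look at the resulting distribution of effects rather than any single estimate:

```python
import itertools
import pandas as pd
import statsmodels.api as sm

def specification_curve(df, tech_cols, wellbeing_cols, control_cols):
    """Fit every combination of outcome, predictor and control set, and
    return the estimated tech effect from each specification."""
    # Every subset of the controls (including the empty set) is a specification.
    control_sets = [list(c) for k in range(len(control_cols) + 1)
                    for c in itertools.combinations(control_cols, k)]
    rows = []
    for outcome, predictor, controls in itertools.product(
            wellbeing_cols, tech_cols, control_sets):
        X = sm.add_constant(df[[predictor] + controls])
        fit = sm.OLS(df[outcome], X).fit()
        rows.append({"outcome": outcome, "predictor": predictor,
                     "controls": tuple(controls),
                     "beta": fit.params[predictor],
                     "p": fit.pvalues[predictor]})
    # Sorting by effect size yields the "curve": the full distribution
    # of estimates instead of one cherry-picked result.
    return pd.DataFrame(rows).sort_values("beta").reset_index(drop=True)

# Hypothetical usage, assuming a survey DataFrame with these columns:
# curve = specification_curve(
#     df,
#     tech_cols=["weekday_screen_hours", "weekend_screen_hours"],
#     wellbeing_cols=["life_satisfaction", "mood_score"],
#     control_cols=["sleep_hours", "eats_breakfast", "is_bullied"],
# )
# print(curve["beta"].median())  # the headline number: the median effect
```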

Professor Przybylski explains it as follows …

‘Bias and selective reporting of results is endemic to social and biological research influencing the screen time debate. We need to put scientific findings in context for parents, policymakers and the general public. Our approach provides an excellent template for data scientists wanting to make the most of the excellent cohort data available in the UK and beyond.’

Bottom Line

You can trust their 0.4% conclusion. Beware of other, misleading results derived via p-hacking that paint technology use as inherently scary or evil; there is no robust scientific basis for that stance.
