Keywords

By way of contrast to the policy analysis in Chapter 2, this chapter presents the initial discussions with young people to better understand their concerns about digital technology and whether the current approaches in their education experiences were effective. Building on observations from Chapter 3, and based upon extensive focus group activity with over 100 children and young people, this chapter examines some fundamental questions, juxtaposing the policy perspective with young people’s own views on what actually causes upset online. The data demonstrate a breadth of issues that cannot easily be addressed through technical countermeasures.

Many of young people’s “online harms” centre on peer abuse, communication issues and a broad range of “upsetting” content, in contrast to the more adultist perspective that harms are done to young people rather than by them. Another key facet of these early explorations centred on the knowledge of those with caring responsibilities, whether parents, teachers, or other safeguarding professionals. It was clear from our discussions that most young people had little confidence in adults’ knowledge of the online world, or of how concerns and harms can be tackled. Most young people’s educational experiences involved little more than being shown videos in large class scenarios, with no opportunity to discuss them. Others stated that adults would exaggerate potential harms and dismiss young people’s concerns with prohibitive messages such as “you shouldn’t be doing that” or “you’ve only got yourselves to blame”. Clearly, when considering how we develop resilience among young people, a lack of support from adults is a significant concern.

In this chapter, we also explore the research methodology and development process for this resource. The tool was the culmination of three years’ research with young people and those working with them and, we hope, ultimately provides support for those working in the children’s workforce to make more nuanced and informed decisions and to provide individual support for young people who might disclose issues around their use of digital technology. Once the tool was defined in a form agreed by the project research team, it was validated in focus groups with young people and carers, to ensure behaviours were effectively defined and categorised; this is discussed in Chapter 5.

Digital Research in Headstart Kernow

Returning to the goals of the Headstart Kernow work, as one would anticipate with the formation of a project that considers practice around children’s mental health, there were many workshops and discussions with professional teams to bring in their observations and expertise in considering what the project goals might be and the outcomes they wanted from the project.

What was clear from these discussions, drawn mainly from a project steering board comprising senior education, social care and public health professionals, was that “online” was definitely seen as an issue, and often as a driver for wellbeing and mental health issues. The prevailing view seemed to be that issues such as sexting, cyberbullying and online screen time were all causal factors in wellbeing issues, and that preventative approaches were needed. What was less clear was the evidence to support these concerns. When pushed on evidence of these issues, it became apparent that most professionals had received no training on online safeguarding and were bringing views from the media into these professional discussions. It was also very clear, even at these early stages of discussion, that there was a gulf between professional opinion and young people’s experiences.

While professionals would generally adopt a preventative position (e.g. “how do we stop young people from sexting?”), young people would acknowledge that online issues play a part in their wellbeing, but would balance negatives with positives and look to adults to support them if something went wrong. They did not want a digital white knight preventing harm from occurring; they wanted advice and support. What was clear from early discussions with young people (discussed in this chapter) is that you could not prevent issues occurring online, just as you could not prevent verbal abuse or harassment in an offline setting. However, there was an expectation that adults should know how to support them in the event of harm or concern arising from an online issue.

Adverse Childhood Experiences

The project as a whole adopted a Trauma-Informed Schools approach to considering children’s mental health and wellbeing. This is now a well-established (e.g. Walkley & Cox, 2013) whole-school approach acknowledging that a child who has experienced trauma is more likely to experience behavioural and learning challenges in school. While trauma-informed approaches vary a great deal, at their heart is an acknowledgement that trauma has a long-term impact on mental health and wellbeing, and that early intervention is more effective than intervention once a young person is presenting serious mental health concerns. There is a view that schools are an ideal place for early intervention (see Chafouleas et al., 2016) due to the amount of time a young person spends in a school setting and the likelihood that early signs of mental health concern could be identified by the professionals who work there.

While this book is not a broad exploration of trauma-informed school approaches, it is worthwhile considering the underlying empirical evidence, as it informed what became a highly participative and ethnographic approach to exploring the role of digital technology in young people’s wellbeing and how they might best be supported in navigating growing up in a connected world.

At the foundation of trauma-informed approaches is the now well-established concept of Adverse Childhood Experiences (ACEs; Felitti et al., 1998). Considerable evidence now exists to demonstrate that early childhood experiences can have a significant impact on life course and future wellbeing. There is a wealth of research that shows clear causation (e.g. see Kalmakis & Chandler, 2015) for these effects, and the findings of studies are now well embedded into practice.

Broadly speaking, Adverse Childhood Experiences comprise both harms the young person is subjected to directly and environmental factors, and can be separated into three categories:

  • Abuse:

    • Sexual abuse

    • Verbal abuse

    • Physical abuse

  • Neglect:

    • Physical neglect

    • Emotional neglect

  • Household Dysfunction:

    • Incarceration

    • Substance abuse

    • Mental illness

    • Parental separation

    • Domestic abuse

While others have been identified in further studies (e.g. Pachter et al., 2017), the focus remains on either environmental factors or harms to the young person.

When we consider the impact of technologically facilitated behaviours on young people’s wellbeing, the definition of ACEs presents us with a fundamental issue, and one of which we were highly cognisant during the early stages of the project. While a trauma-informed approach, with a knowledge of ACEs, is a worthy foundation for a project that aimed to address early intervention and cross-sector mental health support for young people, the underlying knowledge base had given scant consideration to how online issues might impact upon children’s mental health.

While there are two ACEs defined that may manifest through online technology—sexual abuse (sometimes referred to as online non-contact abuse when occurring digitally) and verbal abuse—little of the literature has explored whether online issues might impact upon wellbeing, or the sort of causation explored in the original ACE study. This is not a surprise, given that the methodology for the identification of ACEs is to survey adults about their experiences as children. Clearly we are not sufficiently far into the digital era (consider that Facebook was only established in 2004, Instagram in 2010 and Snapchat in 2011) for adults to reflect back upon potential adverse impacts as a result of online harms.

That is not to say that there is no evidence related to children’s use of digital technology and its harms (see Livingstone et al., 2017 for a detailed review). However, the majority of this research focussed upon trying to quantify harm and explore behaviour, rather than considering the more complex trauma that might occur and impact upon mental health or future life, as we will discuss below. If we reflect on widely reported media stories about mental health concerns (e.g. GambleAware, 2019; Royal College of Psychiatrists, 2020; The World Health Organisation, 2019), the evidence base underpinning these views is scant. Indeed, an analysis across a large dataset on young people’s mental health by the Oxford Internet Institute (Orben & Przybylski, 2019) showed little to suggest causation, finding a greater impact on a young person’s mental health from missing a meal than from spending a long time online.

The Digital Resilience Workpackage in the Headstart Kernow Project

Therefore, from the early stages of the project, we took the view that we would not assume the impact of digital technology, or more correctly the use of digital technology, on children and young people’s mental health. We would, instead, speak to them about it and, adopting a fundamental principle of the Headstart Kernow project, be driven by the youth voice from the outset.

The digital workstream on the Headstart Kernow project was established to explore young people’s use of, and attitudes towards, digital technology from a youth perspective, from the ground up. We placed a condition at the start of the programme that we would not be led by policy agendas and would instead take a grounded theory approach, in that we would learn from data collection. We were clear there was very little credible literature to support the assumption that there must be a negative impact; history shows us the same assumptions have been applied to video games, television, radio and books (Mueller, 2019) and can be linked back to Cohen’s (2011) work on moral panics, which has previously been explored alongside digital harm narratives (Phippen & Bond, 2020). We therefore took the position that we did not actually know what the impact of digital technology on young people’s wellbeing was, and that the best people to explore this with were young people themselves.

In our observations, young people tend to be “early adopters” of emerging technology and will use technology in a manner most adults will not. They will explore, navigate and interact in a far more open and fearless manner than many older users. This sometimes creates a cultural tension where adults do not understand young people’s behaviour and therefore assume it must be bad. While young people are, in general, engaged with technology, their capabilities, appreciation of risk and approaches to addressing concerns vary greatly. Such assumptions come from adultist perspectives on childhood, where the needs of the individual are reduced in favour of uniform educative messages such as “don’t go online until you’re 13, it’s illegal” or “if you share something online and it goes further, you only have yourself to blame”.

We would not, however, wish to adopt the problematic discourse around “Digital Natives”. This is one term we have come across frequently in the evolution of online safeguarding discourse over the last fifteen years. Coined by Prensky (2001) as a phrase differentiating between children—digital natives—and adults—digital immigrants—the concept rapidly found its way into academic and educational discourse.

Prensky’s Digital Native idea comes from an article proposing that, because someone was born in an era where digital technology was ubiquitous, they have some inbuilt ability to engage with it, with capabilities missing from previous generations, generalised as digital immigrants. While this crude generalisation is now widely debunked (e.g. Helsper & Eynon, 2010), its use still persists in popular discourse.

We have certainly attended seminars and workshops around digital literacies and safeguarding where senior speakers from government and regulators have unhelpfully spoken of younger generations being natives capable of navigating the digital world without further support. To paraphrase one professional in a training session:

They know more than me because they’re a digital native, it comes naturally to them.

The term, when unchallenged, has become a taken-for-granted assumption and we frequently hear it from all manner of professionals, mainly used in one of two ways—firstly, as a way to imply blame:

They’re digital natives, they should know about this sort of thing

Or it is used to deflect responsibility:

I’m not a digital native like they are, they know more than me.

Brown and Czerniewicz (2010), among others, are also highly critical of the concept, as such terminology hides inequalities in digital experiences. Furthermore, given that the Digital Native ties in with the concept of Millennials (born between the mid-1980s and early 2000s) and Generation Z (late 1990s to approximately 2015), this is not a term that can simply be applied to children and young people now—it is both unproven and now obsolete when we are concerned with the online safeguarding of young people in 2021. If we were to engage with this concept, many adults would now be considered digital natives, including some who claim there are cultural challenges in online safeguarding because young people are now digital natives.

Returning to the young man who showed wisdom beyond his years in asking “what do you mean by safe anyway?”, this was something we were mindful of throughout this work. This view, shared by many other young people we have spoken to (and which we will explore in more detail below), formed the basis of the tool—we cannot make young people safe, but we can work at helping them become more resilient.

One does not become resilient by being excluded from something; one becomes resilient by understanding risk and where support is available. Moreover, and arguably more importantly, people with safeguarding responsibilities have a greater chance of being able to provide the support that young people are asking for if they are well informed on the nature of digital risk and its severity (or whether there should be concern at all). To paraphrase a 12-year-old young woman from a different session, when asked what she felt online safety should be:

That you know who you can talk to when you’re upset by something that has happened online, and that they can help you.

We did not wish to define the “definitive” resource for any aspect of youth online behaviour, as this would be impossible. We wished, instead, to develop a resource that would allow professionals to make more informed decisions about how to support young people, working alongside their existing safeguarding policies and training, and try to bridge the gap between the preventative perspectives of adults (driven by biases and a dearth of training) and the wish for help and support by young people (a wish unfulfilled as a result of not wishing to disclose and risk the wrath of an adult).

Drawing from Discussions with Young People

The research work with young people, which ultimately resulted in the development of the Online Resilience Tool, was built upon a great deal of interaction with young people during the first three years of the programme.

Prior to the commencement of the main phase of the research project, we ran a pilot study to gain some grounding in young people’s views. This took the form of an exploratory workshop with KS4 students from four schools, where they were initially encouraged to post up general opinions on the statement “digital technology has a negative impact on young people’s lives” on flip charts around the room, prior to engaging in smaller group discussions facilitated by Headstart staff, including ourselves.

While this was a short (two-hour) session with around 100 young people from year 10 (aged 14–15), it allowed us to lay the foundations of our understanding of young people’s views on online safeguarding education and how they are supported by adults. We saw frustration among young people with the nature of the online safety education they received, and a feeling that their views were dismissed because “adults think they know best”—often caveated by the young people with the view that adults really did not. There was a clear view among young people that while digital technology was essentially a positive aspect of their lives, there were some things that could cause upset and harm, for example receiving abuse or seeing upsetting content. However, they were unlikely to speak to adults about this because they generally anticipated a negative response.

However, perhaps more telling was the behaviour of professionals at the workshop. Young people were brought together from four different schools and, as one would expect, teaching staff came with them. Aside from one school, where the two teachers immediately sat on tables with the young people in the discussion group area, the other staff all placed their students on their respective tables and then departed to the back of the room to chat among themselves.

They did not view this discussion as something they needed to be part of and saw it as an opportunity to get on with work away from the students. Perhaps this is an unfair observation, and the teachers did not know they were invited to take part in the discussions. However, it is something we have experienced in other work activities—sometimes teachers will join in with discussions, sometimes they will sit at the back of the room doing marking, sometimes they will say “you’re alright here, aren’t you?” and disappear to the staff room. It is notable that the group whose teachers immediately sat with the students was the one where the young people were more open about their views on the online safety education they received in school, and were more likely to disclose upset and harm.

Nevertheless, those staff who did disappear to the back of the room were invited to the discussion. Some engaged positively, some less so. However, one member of staff made a point of telling us afterwards that they had “learned so much” from listening to their students about their online lives.

The pilot study changed our outlook significantly—while initially we were of the view that we might develop resources to help young people better appreciate the impact of online behaviours upon their wellbeing, we decided instead to focus any outcomes from the research on a tool for professionals. Even in the early stages of the research, it was clear that young people were overloaded with resources, whereas professionals made decisions without any support. A decision was made, therefore, to focus the goal of the workstream on developing a practical resource, underpinned by all of our discussions with young people, that would be of sound practical help for those in the children’s workforce who are making safeguarding judgements.

In this pilot group, it was clear that the issue was not the activities young people were doing online (which were mainly positive), but the lack of support, or overreaction by professionals, when harm was disclosed. If we could develop something to help professionals, we might create an environment where better support was provided to young people who were truly exhibiting problematic behaviours, and where overreactions to misunderstood activities were lessened.

Embarking on the research study proper, which took place over two years in the early stages of the Headstart Kernow project, we maintained an open, exploratory approach to the discussion with young people.

Dialogue with young people took place in a number of different ways, but always in school settings. Approaches to discussion included:

  • Large workshops drawing young people from different schools with facilitated discussion (attendance was around 60 students in each case).

  • Discussion groups in specific schools with large student groups (30–40 in each group).

  • Smaller discussion sessions in schools with 10–20 students in each group.

In total, we conducted 3 large workshops, 10 large discussion groups and 10 smaller discussion sessions. Across these, around 1,000 young people were spoken to in this phase of the work. The majority were drawn from secondary schools, with an approximate 70%/30% split between secondary and primary schools.

Data was collected in different ways. Large workshops were attended by teams of facilitators who each worked with a small group (approximately 10 young people per group) and made notes during their discussions, as well as providing young people with opportunities to post up their own thoughts and comments on post-it notes and flip charts. School-specific discussion groups were generally attended by two or three staff from the digital Headstart team, and a similar approach was used. For the smaller discussions, two researchers attended; discussions were recorded (with the students’ consent) and the audio tracks were analysed. Across the whole data set, a thematic analysis was conducted to draw out common themes and discuss “highlights”. It was both reassuring and encouraging to note a considerable amount of saturation of themes across the groups, although online activity, unsurprisingly, differed depending on the age of the students with whom we were speaking. For example, adult themes such as pornography did not occur in primary discussions, but there were plenty of discussions with those students about things like age-appropriate games and social media. We were also mindful to record the activities young people discussed at different ages, to start to map out what they viewed as “normal” within different age groups.

We generally kept questions very open-ended in discussions to allow these views to be drawn out. Our key foci were:

  • What causes upset online?

  • Do you worry about how much time you spend online?

  • Do you enjoy learning about online safety in school?

  • How do you ask for help?

  • What can adults do to help?

One theme that occurred often, unsurprisingly, was that adults did not seem to have a strong appreciation of young people’s online lives, and often overreacted or accused them of behaving in a manner which many of them did not recognise. For example, one young person from a year 10 group (aged 14–15) said that they felt there were lots of stereotypes about young people online that do not play out in reality. They acknowledged that while some of their peers would engage in risky behaviour online (such as sending images or meeting up with online “friends”), most were aware of the risks and did not do so. They also said that they felt a lot of adults exaggerated both the risks online and young people’s behaviours. Adults, they said, always had a story about a young person who ended up dead or seriously harmed as a result of something that happened online.

What Causes Upset Online?

This was generally the opening focus of discussions and generated a large amount of feedback drawn from lived experience, rather than from what they had been told in class. A great many different things came out of these discussions, but one common thread was that young people talked about upset online arising from people, rather than content. Young people of all ages talked about how upset and abuse might arise in all manner of online situations, but most upset occurred as a result of interaction with others (abuse in gaming, group chats where someone became argumentative, groups “ganging up” or “piling on” on someone else, comments on social media meant to upset, late-night messages intending to cause conflict, etc.).

While we might generally group this upset under the unhelpful term “cyberbullying” (see below), a more rounded and less emotive term is “peer-on-peer abuse”. We also came across some of the more adultist concerns, such as grooming. While some young people were somewhat naïve (e.g. saying they had a friend their age who lived somewhere else, without being able to provide any evidence of their age beyond what they had been told or a profile picture on social media), there was also a great deal of resilience: they knew that grooming happens online (they referred to “pervs” or “pedos”), this being a message delivered to them a great deal at school (particularly at primary school), and they would generally ignore or block people who made them uncomfortable.

One thing we were mindful of in our discussions was not to confront attendees about their own behaviour, so we would never say “do you do this?” or “have you ever done this?”. Instead, we would use scenarios, media stories or questions like “are you aware of anyone who has ever done this?” to avoid them becoming defensive or feeling challenged. This was generally an effective approach and resulted in much open dialogue about young people’s online lives and their thoughts on the impact upon wellbeing.

When it came to upsetting content, this was equally wide-ranging, and while some would talk about being shown “inappropriate content” (e.g. being shown pornography by a peer), there was also a great deal of discussion around content with a heavy media presence. Over the duration of this phase of the Headstart project, the Manchester Arena bombing took place, and many young people talked about how upsetting it was to see the news reporting about it, as it concerned people their age. Another form of upsetting content young people frequently talked about was climate change, which is unsurprising given its prevalence in the media. Again, much of the content causing upset about climate change came from mainstream media channels online, rather than specifically online-produced content.

We did explore gaming considerably, as young people playing “age-inappropriate games” is a frequent concern for adults. This concern was generally not shared by young people. There was, again, some evidence of third-person effects, with some gamers saying they were resilient to adult content but that they would not agree with someone younger than them playing such a game. However, when asked when they started playing “adult” games, they would generally say that they played them when they were younger too!

Most young people felt they, themselves, were resilient to seeing more mature content in games. When asked about adult concerns that playing adult games might cause them to become violent or engage in risky sexual behaviour, the young people we spoke to dismissed this. The biggest harm in games, in their view, was the abuse one might receive via group chats in multi-player games (again highlighting harm arising from the behaviour of peers, rather than the content of the game), or frustration with the game itself, which might result in “rage quitting”, particularly when they were beaten at the last minute in a football game. It was interesting to note that young people viewed sports games, in general, as having great potential for harm because of their competitive nature and the capability to abuse while playing. Some disclosed knowledge of fall-outs during these sorts of games resulting in physical altercations the following day.

Do You Worry About How Much Time You Spend Online?

The responses to this question were interesting, given the large amount of concern about young people’s screen time. Many young people were very open about the large amount of time they spent online, but were equally open that this was because a great deal of their lives happened online. They were quick to point out that a lot of school work is done online, and this is something they have to do as part of their education. A wide range of other activities also took place online, such as consuming media (Netflix, iPlayer, etc.), interacting with friends, interacting with family, playing games, browsing social media and so on. Most would point out there were very few aspects of their lives that did not have an online element to them. Even when considering something typically “offline” such as sport, they pointed out that arrangements for playing sport took place online, as did group chats about the sporting activities, chats about professional sport and similar.

Some did feel that they spent “too much” time online, but there was little agreement on what “too much” would look like. Some young people who disclosed they spent more than six hours a day online saw nothing wrong with it, given that every aspect of their lives required some form of online interaction, while others who spent less than an hour were concerned. It was interesting to observe that some young people whose online consumption did not seem that great, but who were concerned about it, had generally been told the time was excessive by adults in their lives (parents, teachers, etc.), rather than it being a belief they had developed independently. For example, one young person in year 6 (aged between 10 and 11) said she thought she spent too much time online, but also said she spent less than an hour a day, on average, online. When asked why she thought that was excessive, she said that was what her mother had told her.

However, even those who spent a lot of time online but were less concerned were happy to acknowledge that Fear Of Missing Out (FOMO) and concerns about online popularity were prevalent. No one wanted to be the first person to leave a group chat, so conversations would sometimes go on late into the night; there were concerns caused by seeing friends all together at a party (using things like SnapMaps); and “like anxiety” was also an issue, with jealousy arising if someone else’s post was getting more attention than theirs or someone was perceived to be more popular online because they had more friends, or more attention. Some would describe spending long periods of time passively looking at the Instagram pages of others without interacting, which they acknowledged was problematic. So perhaps the responses to this question helped us understand that the concern was less about the duration of time online and more about why they were online and whether they felt pressured to be.

Do You Enjoy Learning About Online Safety in School?

In general, there was a sympathetic, but negative, response to this question. While comments like “it’s boring”, “we do the same things all of the time” and “we just get shown videos” were common, there was equally a general view that a lot of their teachers were not particularly aware of the issues they were supposed to be teaching. One of two things frequently occurred: either students would lose interest quickly or, if the member of staff turned the lesson around and asked for their views on aspects of online safety, there was a more positive response.

In general, it was interesting to note that online safety was typically delivered as a short, dedicated session (e.g. a video shown in assembly) or on a “collapsed timetable” day with external speakers. There was little mention of online safety being discussed in a different subject (e.g. in an English class) or of consistent delivery across a prolonged period of weeks. An “online safety” session was more likely to be delivered as part of these off-timetable days, where regular lessons were suspended and the young people instead took part in classes delivered by, generally, external speakers on a range of social issues, such as drug awareness, sex and relationships, and online safety.

The use of external speakers was an interesting topic to get the young people to reflect upon. Many saw the benefits of having an “expert” to speak to, so they could ask more risqué questions without risk of a telling off. They also said, however, that they wanted people in their school they could ask questions of on a more ad hoc basis, rather than solely in a twice-yearly classroom session, and this was not possible if the “experts” were not available outside of these sessions.

How Do You Ask for Help?

It was fair to say that there was not a great deal of faith in the adults who have responsibilities for their safeguarding. Young people would say that perhaps there were one or two staff who could be trusted not to “lose it”, but the general view was that they would get into trouble if they disclosed anything about an online incident. As already discussed in Chapter 3, a lot of young people felt there was no point in disclosing upset or harm to an adult because it was not worth the hassle or the telling off they would receive.

Those they were more likely to disclose to were the staff with the closest pastoral relationship with the young people, such as teaching assistants and, to a lesser extent, a class teacher. Senior staff were viewed more as disciplinarians and as such were unlikely to be turned to for a pastoral issue; they were, after all, the staff more likely to give the “scary” assembly where they would point out all the scary and dangerous things that happen online. The likelihood of speaking to parents was highly variable: some young people were very happy to do so, some said they would be scared to in case they were told off, and a key finding was that, as they got older, the likelihood of disclosing to a parent reduced, particularly for a more mature issue such as pornography or sexting.

When asked about the tools available online to help with dealing with abuse or unwanted contact, again there was a mixed view. Some would actively use reporting mechanisms on games and social media platforms (sometimes to get people “banned” for mischievous or malicious reasons), and views varied on how useful this was. In a lot of games they could see responsive platforms where bans and blocks were used well. Few would block people on social media (sometimes this was acknowledged to be down to FOMO: even if someone was being abusive or argumentative, it was better to see what they were saying “to your face” rather than “behind your back”), and many believed there was no point in reporting people because nothing would be done. A few gave examples of when they had reported upsetting content (generally content related to animal abuse) and it was not taken down; therefore, they said, they knew there was no point in reporting. However, it was encouraging to note that many were aware of reporting and blocking routes on both platforms and devices, and used them in some circumstances.

What Can Adults Do to Help?

A common thread in responses over the whole project (and this has already been talked about in Chapter 2) is these three requests:

  • Listen

  • Understand

  • Don’t Judge

As we have discussed before, this is not something that has changed. The “don’t judge” call came loud and clear from many young people in our discussions. When a young person turns to an adult for help, as a result of concern or upset about something that might have happened online, or even if they are simply curious about something related to technology and have a question, it comes as no surprise that they wish to be listened to by someone who can appreciate what has happened and has clear advice on what to do, or who will simply answer their question without them fearing a telling off for asking it. As discussed above, some young people were confident they could do this with some adults, while others were less confident. There was also a clear feeling that, for some of the more complex issues older teens faced (such as sexting), adults would generally not respond in a calm and supportive manner. It should be noted that it was not just professionals who were viewed like this. When speaking to young people from older classes, they were equally concerned about disclosing to a parent. In the case of non-consensual sharing, one young person said they would never be able to disclose it to their parents if it happened to them, because their father would “kill” whoever did the sharing.

Particular Issues Arising

As well as key themes, a lot of issues arose that helped us shape the aspects that would go into the tool, and this is developed further in the following chapter. While some were expected, others were more of a surprise to us:

Cyberbullying—a term used a great deal for all manner of online abuse from peers and strangers. However, what was less clear was young people’s understanding of the term, or what differentiated someone simply being mean to someone else online from cyberbullying. An early decision we made in the development of the tool was to avoid the term, because it has become so opaque and broadly used that it is virtually meaningless. Cyberbullying was used to describe activities as diverse as a stranger calling someone a name on a game and persistent long-term online abuse by a peer. What was clear from these discussions was the unhelpfulness of the term and the need to be more specific in our descriptions of activities, such as online pile-ons, peer-on-peer abuse, sharing images, etc.

Deep/dark web—Probably one of the most interesting, and confusing, topics of debate related to the use of dark web/deep web technologies. This relates to areas of the internet that are not indexed, and cannot be searched or monitored, as a result of the encryption technologies used (e.g. browsing the web using a Tor browser). The most notorious aspects of deep web technologies (the dark web) relate to criminal online activities, such as drug dealing, buying illegal products or accessing illegal content such as child sexual abuse material. However, there are also other deep web activities, such as covert browsing, which are innocuous but might be used to circumvent censorious regimes or excessive internet access monitoring. Most “knowledge” of the dark web was somewhat folkloric: many talked about it, but no one used it, and there was a lot of unease in talking about it initially. When this was explored, it emerged that many young people had been told by staff that the dark web was full of paedophiles, gun sellers and drug dealers, and that if you go there you will be arrested.

Young people would sometimes mention that they knew someone who had been on the dark web, as if this were an edgy and rebellious thing to do. Yet no one we spoke to at this stage had experienced it themselves (this is similar to our broader online safeguarding work: many people have very clear views on the dangers of using deep web technologies, yet have never used them and do not know anyone who does), which led us to wonder where the opinions formed about these technologies came from; we discovered this was a mix of peer myths and questioning by concerned adults. Conversely, in our work with professionals during the Headstart project, including professionals who were part of the project, there was clear consensus that the dark web was illegal, harmful and an immediate safeguarding “red flag”.

Pornography—The perennial topic of anxiety for adults, yet young people seemed far more comfortable talking about it when they got the opportunity to! There was general agreement that from year 8 onwards pornography is part of young people’s experiences, and a very normal part by Key Stage 4 (aged between 14 and 16). While there was some gender difference (males were far more likely to access pornography than females), there was generally a view that this happens and we should be talking about it. There were more interesting discussions about people “excessively” using pornography, which generally related to watching it in break times or to consumption that impacted on other social aspects, such as interacting with friends; there was clearly a view that some of their peers watched too much. However, when asked what we might do to support young people accessing pornography, or to address concerns they might have about peers doing so, there were few calls to block it or to control access. In general, in the discussions we had around pornography, which were with those in Key Stage 4, there were strong calls for education around the topic because, in the view of most young people, it was impossible to ignore or avoid. Even if young people did not wish to access it themselves, they would receive videos and images in group chats, some from “mainstream” pornography, some intimate images from peers.

The Lure of Online Celebrity—for a lot of younger children the desire not just to be famous, but to be famous online, was discussed a great deal. In the era of the online influencer, many young people talked about their favourite online celebrities and a wish to have a lifestyle like theirs; in general it was viewed as a good way to make money, and having huge numbers of followers was seen as indicative of success. Deeper conversations (e.g., “how do you think they maintain their popularity?”, “what happens when they start to lose subscribers?”, “how often do they have to produce new content and interact with followers?”) allowed young people to think more critically about what being an internet celebrity might be like, and it was clear these were not conversations they had engaged with before.

The Law—Three very specific things came out of discussions on what is illegal: young people, in general, were of the view that accessing pornography, sending nudes/sexting, and using social media under the age of 13 were all illegal. They generally believed this because that is what they had been told by adults. It was clear that messages of illegality (alongside the subsequent “you could get arrested for doing that”) were frequently used in school settings and in discussions with other adults. However, it was equally clear that the way in which these messages were delivered was blunt and imprecise. For each of these, there are complexities that make legality less black and white than it might first seem, and this was something we were mindful to incorporate into the tool.

Fake accounts/catfishing—the use of fake accounts, whether created to look like someone else or to defraud (i.e. claiming to be someone else to befriend people online), was more common than we had expected, and knowledge of such accounts was prevalent.

The Mundane—One final issue frequently arose in our discussions which, on the face of it, might not seem as significant as other “named” issues, and which we will refer to as “the mundane”. These were not specific harms, but rather the nagging irritation of someone else getting more likes for a picture or a post, or the frustration of a friend maintaining a SnapChat streak with someone else. Issues of popularity, and of what makes a good online friend, arose again and again. There was a lot of discussion about how this was the sort of thing that troubled young people on a regular, even daily, basis. They wanted to know how to deal with it, but if they raised these issues with adults, they were told to “stop being so silly” or asked “what are you worried about that for?”. What was clear was that there was little opportunity to discuss these issues in school settings.

Conclusions

In this chapter, we have explored the research foundations that led to the development of the Online Resilience Tool. This was the first research phase of the tool’s development—drawing upon this body of knowledge we drafted a pilot version of the tool and then further engaged with young people for refinement. The following chapter describes that process.