This final research chapter explores the deployment of the tool and its impact. As we have discussed throughout this book, we did not want simply to present the development of a new online safeguarding resource as the end point of a research project. The tool was developed and deployed in June 2020 alongside complementary training, which allowed us once more to explore professionals’ concerns and to better understand knowledge barriers, in turn informing the knowledge base around online safeguarding.

As we will discuss throughout this chapter, a key emergent finding in this phase of the research was the lack of critical thinking among some professionals—a potentially serious barrier to young people engaging with them, which has been a key thread running through this book. Many professionals would voice concerns based upon conjecture and opinion, with no grounding in evidence.

A common example of this is incorrect knowledge of the law and its use to justify prohibitive approaches. To take a perennial favourite: the claim that it is illegal for young people under the age of 13 to be on social media. Rather than exploring the factual basis for this conjecture, professionals would use it as an excuse to shut down conversations with young people—they should not be using the platforms, and speaking to them about it will just encourage them. Interactions in training gave us the opportunity both to develop the knowledge of professionals and to proactively challenge these views, encouraging a more open and critical approach to dialogue with young people—supported by the tool as a starting point for judging whether concern needed to be expressed regarding a disclosed behaviour.

Professional Feedback

However, prior to exploring the training proper, it is worthwhile to reflect on an experience that occurred towards the end of the tool’s development and launch. We have attempted, throughout this text, to provide the narrative around the development of the Online Resilience Tool and to use this as a vehicle to observe the wider challenges in the online safeguarding world—in particular, the challenge of developing a more effective culture of trust between adults and young people, such that there is confidence that disclosure of harm will result in support and rectification, not punishment and judgement. Given the participative nature of the project, we are able to provide vignettes and anecdotes which highlight these challenges and illustrate the barriers we need to overcome.

As discussed in Chapter 5, once we had developed the tool and refined it in response to young people’s feedback (we have been keen throughout the project to be youth led and to provide authentic youth voice in the work), and as touched upon at the end of the previous chapter, we also sought validation from professionals. Clearly, this is an important part of engaging professionals with a new resource and approach that impacts upon their practice. We did not want to appear to have a tool that professionals should just use because we thought it was good; we wanted to work with professionals across the sector to gather feedback on the tool and gain buy-in for our approach. Given our wide-ranging experience of working across the children’s workforce, both regionally and nationally, we understood the importance of getting others to champion approaches across their networks.

Once we had made (or rejected) the changes suggested by the young people, we sent the tool to a range of professionals working with young people, including Designated Safeguarding Leads in youth work organisations and Children’s Social Care. These included:

  • The Chair of Ethics Committee for a leading child safeguarding charity

  • A Safeguarding lead at the DfE

  • An independent consultant in RSE

  • Headteachers from one primary and one secondary school in Cornwall

  • The Prevent Lead for Cornwall

  • A director from a leading online safety NGO

Some of the feedback highlighted issues of phrasing. For example, “withdrawal issues” was listed as Harmful for the 0–5 age group, but it soon became clear that this was not detailed enough for many professionals to be able to identify in a child. Any child may become distressed upon having something taken away from them with no reason given and no other activity to distract them, so on the one hand the phrase could seem too general; on the other hand, we did not want professionals to look only for the extreme withdrawal that may be associated with drug use. It was later amended to “Upset or aggressive response to withdrawal of device (beyond what is normal for the child)”.

Another colleague pointed out that the way we had suggested dealing with issues may have inadvertently implied that children should not be informed of the law at all. This was the phrase:

We should not tell young people that sending nudes is illegal, as we risk re-victimising those who are being abused as a result of taking and sending an image.

This was addressed by adding the word “simply” and it now reads:

We should not simply tell young people that sending nudes is illegal, as we risk re-victimising those who are being abused as a result of taking and sending an image.

Making these changes was important to show professionals that we had listened, and therefore to ensure buy-in from professionals across the county (and the country). We knew there was a risk that small issues could turn whole teams away from using the tool because the language did not sit comfortably with their safeguarding policies.

Professionals did not suggest adding any behaviours to the tool at this stage (nor have they at any point since its deployment). This gives us confidence that the tool has comprehensive coverage of online behaviours faced by young people and recognised by professionals.

We made it clear to all consulted that this was a tool with youth voice at its heart and that we were guided in the main by the validation from young people. However, it was a worthwhile exercise to consult with external stakeholders to evaluate both tone and value—we were looking for validation rather than another round of editing. Overall, beyond a few minor changes and refinements (particularly around screen time, where we elaborated on the types of screen time and how passivity was potentially more harmful than active engagement), there were no modifications to the tool as a result of this consultation, and the tool was well received. Those in front-line delivery could all see the value of the tool and were keen to engage with it, and others came back with offers of promotion across their networks once the tool was finalised.

However, we did face one challenge that illustrated very clearly how poor knowledge and digital value bias can result in exactly the sort of preventative messaging that, young people tell us, means they will not disclose harm to adults.

We were intending to work with another NGO on the development and release of the tool. The NGO is a national organisation which focusses in the main on sex and relationships education, and on sexual health in general. We saw this as a positive and complementary relationship, particularly given how frequently young people had told us about the role of digital technology in personal and sexual relationships. However, when we provided the organisation with a draft of the tool, after much delay their feedback presented a fundamental challenge to the partnership.

We were told that, in order to work with them on the project, all mentions of sending intimate images had to be listed as “harmful”, regardless of the age of the young person. When challenged on their rationale for this, we were told it was because it is “illegal”. Clearly, this was in conflict with what young people had told us throughout the project: that the way the legality of exchanging intimate images is expressed in school settings is exactly the reason they do not disclose to adults in the event of further non-consensual sharing. We were further told that there were “new laws” that made it clear “sexting was unacceptable”, that we had to use the term sexting “because that’s what young people say”, and that we would be giving out the wrong message if we said there were some activities that would not be harmful.

Clearly, there are no new laws on the exchange of intimate images among minors in the UK and, far from “sexting” being the term young people use, we have been told in many discussions with young people that “only people over 40 call it sexting”. However, when challenged on this, the organisation’s view remained that what young people were doing was illegal and should therefore be categorised as harmful. This was despite our pointing out that many young people had talked about how the exchange of intimate images within a relationship was typical, and that harm generally occurred when the images were non-consensually shared.

When asked whether they would say that any sexual activity engaged in by a minor was harmful, they said this would not be the case (which did not surprise us, given that this organisation does a lot of work with minors who become pregnant). Even though the premise was the same (illegal = harmful), they could not see the parallels with what they were calling for on digital issues. For some reason, ultimately undetermined, digital made it different, which highlighted to us once more that even those who claim a progressive voice will bring their own value biases to safeguarding discussions.

As a result of these discussions, we stopped working with the organisation and continued to list the majority of intimate image behaviours (except in the younger age categories) as “potentially harmful”, as we had been told by many young people.

Reflections on Observations from the Professional Training Sessions

Moving on to observations from training: as we have described elsewhere in this book, the intention of the project was not just to launch a resource that would be downloaded and used by professionals with no guidance; we intended to provide training for professionals alongside the launch of the tool. Given the launch time of the tool—June 2020—the intention to deliver face-to-face training was abandoned in favour of the online delivery methods that had become so prevalent during the first COVID-19 lockdown. In one way, this was a beneficial outcome—professionals had become used to using online technology for meetings, and we could make use of pre-sessional online resources and focus the live part of the training on questions, answers and discussion. However, we do acknowledge that some of the value of delivering training face to face lies in the greater opportunity for group discussion and learning from peers, which was a challenge in online delivery.

The training comprised a number of elements:

  • A recorded talk providing an overview of the tool, going through the different categories of behaviour and how to respond, and exploring the behaviours themselves.

  • Some “myth busting” online activities, which allowed professionals to explore their own knowledge of online safeguarding issues (e.g. “legal or illegal?” scenarios which looked at things like accessing the dark web and pornography). These were again delivered in an online package that professionals could complete on their own. This was a useful technique that exploited the asynchronous nature of some of the online training—it allowed professionals to test their knowledge without being put in a situation where they had to demonstrate it in front of others.

  • An hour of online “face-to-face” discussion, with an expectation that attendees would have completed the other online elements prior to attending.

To date, we have had approximately 200 professionals attend the training, which provides us with a useful evidence base to reflect upon its impact and also to observe professionals’ wider concerns around online safeguarding. While the training was intended in the first instance to be aimed at education professionals, we soon expanded (due to demand) to other sectors such as youth workers and social care. The majority of attendees did come from education settings; however, a significant minority also came from the wider children’s workforce and, as word of mouth spread awareness of the training, more professionals signed up to sessions.

What we present below are observations from the face-to-face aspects of the sessions. While there were few surprises from these sessions, the observations we did collect gave us confidence both in the youth-centric approach of the tool and in how the tool fits into a wider culture change around online safeguarding.

“But It’s Illegal”

One thing we were not surprised about, and had reinforced throughout the training, was that legal issues were frequently used within preventative narratives, and professionals were surprised by the pre-sessional material that busted some of these legal perceptions prior to the discussion sessions. While we have already talked, at length, about the legalities around the exchange of intimate images among minors—certainly a view strongly held, but poorly understood, by many professionals—we also observed some other key legal myths, including that it is illegal for a child to play an age-inappropriate game, that it is illegal for a child to access pornography, and that it is illegal for a child under the age of thirteen to be on social media.

This, obviously, reflected what we had been told by young people, which is no surprise given that it would have been professionals similar to those in the training sessions who had delivered this education to the young people. When we expanded upon these legal issues, and highlighted that they are all more complex than preventative “it’s illegal, don’t do it” messages, there was always a lot of surprise. If we take, for example, the perennial favourite of “don’t go on social media until you’re thirteen”, most professionals believed that this was due to safeguarding legislation, rather than the reality of data protection law, under which a minor under the age of 13 is unable to consent to their data being collected by a platform (as set out in the US Children’s Online Privacy Protection Act 1998 [Federal Trade Commission, 1998], the EU General Data Protection Regulation 2016 (GDPR) [European Union, 2016] and national implementations of the GDPR, such as the UK’s Data Protection Act 2018 [UK Government, 2018]).

Sometimes the challenges to legal assumptions were met with surprise and thanks; sometimes they were met with disappointment. We have observed this in our other work with professionals—sometimes these legal messages are far easier to deliver than more complex supportive messaging. If we say “don’t do it, it’s illegal”, we can immediately project blame onto the victim if they then engage in something, which, of course, harks back to one of the key reasons young people tell us they will not disclose: they do not want to “get judged”.

Nevertheless, within these training activities, what was clearly illustrated was the value of moving legal knowledge from preventative messaging to deeper understanding, as this helps break down the disclosure barrier with young people.

“Well, My Child/Grandchild Would Never Do This”

One thing that is constant across both the training delivered by Headstart and our wider practice is that many professionals will bring a parental perspective to online safeguarding training and decision making.

During the training, there have been many times when a professional would start to talk about a conversation with their children, or how they have observed their child behaving with digital technology, or how “my kids have left home now but I’ve seen my grandchildren on these devices all of the time”. From one perspective, this is to be expected—we bring our own experiences into professional practice all of the time; it is human nature. However, we did often unpick whether this was an appropriate thing to do in a safeguarding judgement, which needs to be evidence led. If we use “well, my child wouldn’t do this” as the foundation of our judgement, we are, of course, bringing our own value biases to bear in professional judgement. If one’s own child would not do something, yet a young person with whom one is working has disclosed it, we immediately bring judgement upon them—they have done wrong. While this was discussed at length, and it is frequently acknowledged as problematic by professionals, it happens in virtually every session. It would seem this is a difficult thing to remove from the discourse, but we would hope that when professionals do revert to parent in their judgements, they are at least cognisant of their actions.

“It’s the Parent’s Fault”

Developing the parental theme, another recurring discussion concerned what we might regard as deflection—the view of a professional that, regardless of what they do, it will be impossible to resolve an issue because “the parents just let them do it anyway”. There are a couple of key facets to this discussion. The first is the permissive parent, who will allow their child to have, for example, “a mobile phone far too young” in the (biased) view of the professional, or will “buy them inappropriate games”. We also heard professionals talking about parents’ lack of awareness of online safeguarding issues and how efforts to educate them (such as parents’ online safety sessions at school) are rarely well attended.

The second facet is the parental pile-on, generally through social media. Two children will fall out over a digital issue and make complaints to the school with calls for intervention; while the children resolve their disagreement, the parents remain fully engaged on social media, criticising both the children and the school for not dealing with the problem effectively.

We would certainly observe through our wider practice that parents, obviously, have a role to play in the safeguarding of their children and that they can be both supportive and problematic. We have spoken to many young people who say they would not disclose to parents for risk of punishment or judgement, and we have spoken to parents about how they “would not expect this behaviour” from their children. Equally, we have spoken to young people who are confident that they can disclose harms and gain support from their parents, and parents who reflect upon their own behaviour when they were younger, and how their own children are experiencing similar, just on a more public, or digital, stage.

When it comes to parents making use of social media to exacerbate issues, we have every sympathy with professionals, and often remind them of their employer’s duty of care towards them. While freedom of speech is a perennial claim by online trolls, libel and slander are both areas where a professional might expect the support of their employer. Again, there is no easy answer to this, but it does remind us that parents are, of course, stakeholders in their children’s safety, and there should be discourse between stakeholders in this regard.

We have, over the last year, developed a parental offer to complement the more complex professionals’ tool. Working with parent groups in the Headstart Kernow project, the general view was that parents are worried about online harms (and who can blame them, if their primary source of information on children’s use of online technologies is the media?) and that some sort of “panic reduction” tool would be valuable. As a result of these discussions, we have produced a reduced version of the tool for different age groups, listing key behaviours alongside some general guidance about supporting young people who disclose upset or harm. While it is too early to reflect upon the efficacy of these resources for parents, they have been well received by a lot of professionals, who see them as a valuable tool to better engage parents and to make sure parents and professionals are approaching online safeguarding from the same perspective.

“Safeguarding Alert – Panic!”

A number of attendees at training talked about the concern and, in some cases, panic that resulted from a safeguarding alert being distributed across the local authority. Clearly, again, raising alerts is done with the best of intentions, but they sometimes lack a level of critical thinking before release. Generally originating from law enforcement, these alerts spread quickly across a region and result in senior leaders cascading concerns to safeguarding leads, who are told to “do something!”. In our discussions around these alerts, the key thing we always return to is “apply some critical thinking to the alert before reacting too strongly”.

A perennial favourite is something along the lines of “We’ve been sent a list of the top ten most dangerous apps and our students use four of them!”. Police forces, it seems, are very keen on distributing lists of “dangerous apps” and urge professionals and parents to check whether their young people use them. These lists are generally produced as a result of investigations and national/international police initiatives to explore the sort of platforms used in cases that result in online harms.

The problem, however, is that all these lists do is reflect what is popular among young people. Apps where, for example, grooming occurs will be the apps used by children—and, sadly, predators with a sexual interest in children will follow. In 2018, a BBC news article raised concerns that Kik Messenger (a now defunct messaging platform similar to WhatsApp and Signal) was used in “over a thousand grooming cases”. On the face of it, this is cause for concern if young people disclose using this platform. However, a simple examination of Kik Messenger suggests that it had been downloaded over 300 million times. So, what the headline should have said is “popular messaging platform very rarely used for child abuse”. A similar statistic, but one less likely to attract much attention, might be “1000 predators wear trainers while grooming children”. Wearing trainers is not the causation; similarly, it is rarely the app that is dangerous but the behaviour upon it. This is why young people need to be confident that, if they see something upsetting on a platform, or they are asked to do something they are uncomfortable with, they can get support and help in removing the content or blocking and reporting an abuser. What they do not need to hear is “it’s on the dangerous app list, you’ve only got yourself to blame” or “if you hadn’t installed that dangerous app, you wouldn’t have been abused”.
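To make the base-rate reasoning concrete—taking both reported figures at face value, as the rough approximations they are—the proportion of downloads associated with a reported grooming case works out at:

$$\frac{1{,}000\ \text{reported cases}}{300{,}000{,}000\ \text{downloads}} \approx 0.0003\%$$

In other words, fewer than four downloads in every million were linked to a reported case; the platform’s appearance on a “dangerous apps” list reflects its popularity far more than any intrinsic danger.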

In one case dealt with within the Headstart Kernow project, we were contacted about a “dangerous game” that would “encourage children to engage with county lines”, an emergent form of drug dealing in which vulnerable young people are groomed into acting as distributors in their respective regions (Robinson et al., 2019). Of course, this triggered concerns for us, given our previous work trying to debunk the supposed causal link between playing video games and acting out what takes place in them. A brief investigation revealed that the game was a very basic app-based game that was generally receiving poor reviews and had been downloaded fewer than one thousand times. However, one review, clearly sarcastic, said it was “great for teaching kids about drug dealing”—and this, it transpired, was the source of the concern. Pointing this out was all that it took to defuse the situation and withdraw the impending safeguarding alert.

Clearly, this is an issue that requires dialogue between stakeholders, but from the training we could see that in a lot of cases professionals needed reassurance that a little critical thinking can de-escalate these well-intentioned but potentially harmful concerns very quickly. One only needs to reflect upon the panic around the Momo Challenge (Phippen & Bond, 2019) to see the impact of knee-jerk reactions, rather than critical thinking, in response to these safeguarding alerts.

The Overarching Observation

We are mindful that, throughout this book, and in drawing upon observations from the training sessions, we might be seen as being critical of professionals and as casting them as the problem in online safeguarding. In reporting on observations, however, it is important to do so objectively and without prejudice. It is clear from our discussions with young people that many do not trust the adults who care for them when it comes to resolving issues related to online harms, and we cannot avoid reporting this.

However, we should stress that, in the majority of cases, the professionals we have worked with on this project, and more widely in our practice, come from a well-intentioned place and want to do their best to support the young people they work with. Without effective support and training, however, their knowledge falls back upon what they have developed from their own social lives, their own use of digital technology, what they discuss with peers, and what they learn from the media. This is coupled with increasing demands from regulators and government bodies to be compliant with ever-changing guidance and poorly understood legal contexts. During the Headstart project, there have been three iterations of Keeping Children Safe in Education, changes in the inspection process around online safety, reams of non-statutory guidance, signposting to numerous resources, and changes in curriculum, all of which professionals are expected to respond to with little national support. As we have stated in Chapter 2, while the statutory demands make it clear that professionals should be trained in online safeguarding, deliver education in online safeguarding and have technical measures in place to ensure children and young people are “safe from online harm”, there is little guidance on what good online safeguarding training and education looks like—just that they have to do it.

As we observed above regarding professional feedback prior to launch, we have also not had any new behaviours raised in training sessions. When asked for observations about the tool, professionals tended to focus upon specific technologies, and in some cases asked for lists of apps to be concerned about (we will discuss this in more detail below), or for ways to advise parents/carers to track and monitor young people’s devices. We should bear in mind that these discussions took place after the professionals had completed the pre-sessional online materials, in which there is a great deal of messaging about the need to refocus from technology to behaviours and support.

Many professionals admit that they do not really know what young people do online, and have a minimal understanding of things like what privacy settings actually are and how they work. By way of example, in discussing YouTube in one training session, none of the professionals knew what sorts of videos young people watch on YouTube, that videos could be listed as “Unlisted” or what that meant, or that you could turn off comments (let alone how to). Yet figures show that 80% of people between 15 and 25 use YouTube (Statista 2021b), and it had the highest reach to this age group of any social media site in 2020 (Statista 2021a).

Professionals feel they have to have all of the answers—to be the digital white knight and protect children in their care from the harms of being online—because they are told this is their statutory duty. However, this is not what we hear from young people: they do not want all of the answers, they want to get help and support. Even a professional evolving their response to a young person disclosing harm from “how could you have done that, how could you be so stupid?” to “OK, this happens, let’s see what we can do about it” is progress. This does not require the professional to be in possession of a list of the “top ten online harms this month” or to be on top of the latest case law related to children and illegal data collection. It just requires the confidence to realise this is not about technology, it is about supporting the child, and there are networks across the stakeholder space that can provide answers even if the professional does not have them immediately to hand.

One thing we have noticed in our discussions with professionals is that moving the discourse from prevention to harm reduction is generally viewed as positive and relatable. A lot of professionals were more comfortable with this approach, but had never considered it for online safeguarding. While harm reduction is well established in public health challenges (e.g. see Inciardi & Harrison, 1999), there is little work that considers online safeguarding from a harm reduction perspective.

By way of example, in one training session, where we had discussed at length the need to move away from the technology and look at the behaviour—and that just because something is disclosed on a platform the professional does not recognise does not mean they cannot help the young person—one attendee said, “but what should I do when they say they buy drugs on snapchat?”. We have touched on this topic before, in Chapter 3. Their view was that this was problematic because, firstly, they did not know what Snapchat was and, secondly, surely it was worse because it was online. However, when we unpacked it from a harm reduction perspective (“let’s ignore the technology for the minute; what advice would you give to a young person saying they’d purchased some MDMA from someone in a pub car park?”), they could see that the advice could be the same, and that it was about supporting the young person rather than panicking about what Snapchat was.

We are often told by professionals that they have tried to talk to young people about online issues and that “they don’t want to talk about them”. This seems entirely in conflict with our own experiences with young people. Indeed, very recently, during a follow-up session with a college we are working with, we conducted a focus group with some of their students. The college has been using the tool for a year and wants to push through a “culture change” across the institution that is youth centric and develops confidence to disclose online harms. One aspect of this they wished to explore was a conversation with some students to understand what would work when discussing online safeguarding in their tutor sessions. One thing that came out very strongly from the students we talked with was that “discussion and questions” are far more useful than “PowerPoints and videos”. They also made a very telling point: while they did not expect the tutor to have all of the answers, they did expect them to be able to manage the discussion, making sure everyone had a chance to talk and the session was not taken over by the loudest voices.

Throughout all of the focus groups with young people on this project, as well as in our wider practice, we have found young people very willing to talk about their online lives. When we have asked, they have been more than happy to explain the intricacies of the apps, games and websites they spend their time on, and willing to share the practices they use to manage the risks they face.

Professionals perceive high levels of risk to young people online and want to do all they can to keep them safe. However, their perceptions are often flawed by a lack of understanding of what young people are doing online, and by a lack of willingness to explore this with each individual young person. They are looking for a one-size-fits-all approach that simply does not exist. We are also mindful that a lot of professionals ask for “the” resource that will make all of these problems go away. Again, a preventative mindset, and a need to be the white knight, results in their needing “something” to stop all of this. This is in contrast to the needs expressed by young people: as the discussion above shows, they do not want all of the answers, because there are few clear answers—they want help.