International Perspectives: The datafied lives of children and young people
07 Nov, 2022

In Autumn 2021, we interviewed a family of four who live in a semi-detached home in a suburban area of Lombardia, Northern Italy. The kids, Francesco and Stefano, were aged 6 and 4 when Italy went through its first, and most severe, lockdown in March and April 2020. Since Francesco was already in primary school, the family got their first domestic internet connection and bought a laptop and a tablet so that he could participate in remote learning activities. Since then, the two boys, now 8 and 6 respectively, have turned into skilled and passionate gamers: the basement hosts two second-hand Wii consoles, and they play Fortnite together whenever they have some time off from school or sports activities. They also access Roblox and other free gaming apps on their father’s old phone, and their school encourages them to do exercises on Google Classroom. Francesco’s and Stefano’s everyday life is not dissimilar to that of their peers. Yet their experiences introduce a number of issues we would like to discuss in this post:
- The pervasiveness of datafication, and its tremendous expansion during the COVID-19 pandemic;
- The tangible effects of data, algorithms and automated decision-making on children’s and families’ lives;
- The cultural premises and consequences of datafication.
The pervasive datafication of children’s lives
Francesco’s and Stefano’s experience first reminds us of the increasing datafication of education. Educational institutions, from early years settings to higher education, have grown accustomed to using data-driven educational technologies in a variety of ways: letting worried parents monitor their child’s daily routines in nurseries, personalising the curriculum around every student’s individual needs, predicting outcomes and preventing risks, providing insights into the processes of learning, and moulding students’ emotions to maximise learning. In some kindergartens, AI-powered humanoid robots work alongside human teachers, telling folk tales and asking pre-schoolers follow-up questions to test their listening comprehension. Indeed, the use of various EdTech tools and apps has become so pervasive in early childhood settings that scholars have accused the early years sector of being “obsessed with data”. The COVID-19 pandemic, with its repeated lockdowns and other social distancing measures, has undoubtedly expanded and accelerated the extraction of students’ data, with privacy principles being suspended or “coronawashed”.
Remote learning meant that almost 1.6 billion children all over the world were coerced into using a variety of data-intensive educational online platforms (UNESCO, 2020). In fact, according to a study carried out by Human Rights Watch, nearly 90 percent of the educational apps and platforms (N = 194) used for learning during the COVID-19 pandemic harvested children’s data and shared it with marketers and data brokers, turning children’s data into a highly profitable commodity.
Many families also had to buy new devices, including tablets and computers, so that every household member could learn or work from home. In some countries, such as the US and Singapore, many schools provided students with devices for home-based learning, but this help often came at a price: student activity monitoring software pre-installed on the school-issued devices. Such software (e.g. Gaggle) is mainly used to “protect student safety and ensure their well-being” (Jones & Najera Mendoza, 2021: 3). However, there is insufficient evidence that social media surveillance or content monitoring of students’ accounts is an effective strategy to address schools’ growing concerns about cyberbullying, students’ mental health (including indicators of self-harm and depression), or school violence, including the detection of risky behaviours. Such monitoring solutions not only scan students’ online searches, the content of their emails and other documents, and their online communications, but also provide schools with a range of control functionalities: blocking inappropriate material, tracking logins to school-related applications, viewing students’ screens in real time, closing web browser tabs, and even taking direct control of input devices. This creates a variety of concerns related to children’s rights, privacy and free speech, with no real option for students to opt out of such surveillance.
Thus, during the pandemic, our already media-rich homes became filled with even more technologies that extract data from the flow of everyday life practices and identities, and convert it into profitable resources for platforms. The growing market of domestic IoT (Internet of Things) devices, including smart speakers, internet-connected toys, home hubs and many smart appliances, expands the range and detail of “data shared in the home”. These devices also support multiple mediatized parenting practices, such as sharenting (sharing one’s parental joys and challenges online) and documenting children’s lives. They also enable “caring dataveillance”: the use of wearable devices and other technological gadgets and apps to make parenting easier, from keeping babies healthy to applying parental controls that keep growing children safe online, or using tracking technologies to keep an eye on children while physically distant.
As with parenthood and parenting practices, children’s and young people’s leisure time, self-expression and sociality have also become increasingly digitalised, and therefore dependent upon the commercially driven business logic of data capitalism. Spontaneous practices like play and informal interactions with peers are now coded, standardised and manipulated by algorithms.
In sum, children today are conceived and raised in a digital world where almost all aspects of our lives, on- and offline, are increasingly monitored, analysed, and transformed into quantifiable data. With what consequences?
In our own book, we have documented ways in which algorithmic classifications can exclude children from access to certain types of education, health insurance or loans, exacerbating inequalities and fully pre-determining children’s experiences and life course, while leaving us with little or no control over the ways in which algorithms represent us and make decisions on our behalf. Algorithms are indeed biased (whether because they are trained on biased historical data or because biases are built into their design), as well as opaque and unexplainable. So why do we trust data, algorithms and automation? These are key challenges as we continue to investigate the role of digital technologies in children’s lives.