Beyond ChatGPT: The very near future of machine learning and early childhood
31 Jan, 2023
1. Ada and Jenni
Four-year-old Ada is playing in the late evening, after dinner, and asks Jenni to draw her a picture of a kitten exploring the moon, and, after the bath, to tell her a story about the kitten’s adventures. Jenni doesn’t have a body, but Ada imagines she does somewhere. For now, Jenni’s replies come out of one of the tiny white smart speakers that are in most rooms in Ada’s house. Jenni confirms that the picture and the story are ready whenever Ada is.
Once she has bathed and cleaned her teeth, Ada and her mum curl up together on Ada’s bed. A small screen in Ada’s bedroom, which could easily have been mistaken for a static picture frame, now glows gently with the picture Ada requested. The kitten is very cute, and has giant cartoon-like eyes. Ada loves it! “Jenni, can you ask Grandma to read tonight?” Ada asks.
The first time Ada’s mum heard Jenni speak using Ada’s grandmother’s voice, it was disconcerting; Ada’s grandmother sadly passed away six months before Ada was born, and there were very few videos of grandma to show Ada. Now, when Jenni reads, Ada and her mother listen together, and both find the stories and the voice soothing, though often for different reasons. Neither of them has heard this story before. It didn’t exist until Ada asked Jenni to create it.
As Ada’s grandmother’s voice tells the tales of the kitten’s adventures on the moon – which included a surprising escape from a dog, hiding inside a cheesy lunar crater, and a rocket trip – both Ada and her mum fall asleep (parenting is pretty exhausting, even if you’re not reading the last bedtime story).
2. The reality of Jenni
This might sound like science fiction (or a dystopian future) but all the technologies to make this happen exist; an enterprising individual or company will inevitably tie them together.
Digital assistants have already been with us for a decade through Alexa and Siri, and have become exceedingly good at understanding spoken language. These assistants speak with natural-sounding but clearly synthetic voices; however, Microsoft has announced a new artificial intelligence (AI) model called VALL-E that can simulate a specific person’s speech from only a three-second audio sample of that person’s voice.
However, the biggest advance in AI that makes this scenario possible is the transformer neural network, which allows computers to keep track of many concepts in written text, even when those concepts are spaced a long way apart. Transformers were initially developed to greatly improve translation between languages, but a recent model, GPT-3 (the T stands for Transformer), can tell a story based on a few ideas such as those provided by Ada, summarise complex documents, or even answer homework questions. Transformer models are not restricted to producing text; they can also generate original pictures or music based on a text description. Figure 1 shows a picture drawn by DALL·E using Ada’s words.
3. Implications for young children and the adults around them
Today’s four-year-olds will be growing up with smart speakers and AI tools that can generate words, images, voices, music and video. Just as the written word and the printing press changed the relationship people have with memory, stories, play and work, changes of a similar magnitude are being driven today by machine learning tools.
Current commentary has tended to focus on the disruptive, negative and scary potential impacts of these tools, reminiscent of the moral panic that has accompanied many technological innovations, including books and TV. Yet these tools are in their infancy, with best practical and ethical practices still being developed. Some early commentary and experiments are already contributing to figuring out how best to use them.
Today’s children likely won’t be afraid of these tools. They certainly won’t be replaced by them. Today’s children will create, play and work with the next generation of ChatGPT, DALL-E and hundreds of similar tools. The challenge for adults around young children is how to help children understand the opportunities, advantages, limitations, and risks of using these tools.
There are many questions to consider when exploring these new tools. Even just from the story of Ada and Jenni, the sorts of issues tomorrow’s children will need to explore include:
- How can children learn to judge what AI content is factually correct vs fictional?
- How will children understand death when they can video chat with both their living grandpa and their dead grandma?
- How will children develop respect for creative works and ownership when AI can create pictures or music ‘in the style of’ any artist?
As we wrestle with the implications of these tools, we must ensure we’re helping today’s children understand and make the best practical and ethical uses of tomorrow’s machine learning tools.