Regulators and lawmakers globally have long called for greater protections for children online, but only recently have the necessary safeguarding measures become requirements. The General Data Protection Regulation (GDPR) and UK Data Protection Act (DPA), for example, require parental consent for the processing of a child’s data (where consent is the legal basis for processing), foregrounding the need for special provisions for children’s internet use as well as greater opportunities for digital parenting.
Assessments can, of course, also be made retroactively of existing features and services, with steps taken thereafter to rectify issues and minimise any service’s, feature’s or product’s impact on children. However, our aim is to ensure that, looking forward, digital best practice regarding children’s rights and wellbeing begins at the design stage. It is vital that businesses involve engineers and developers in implementing policy and Safety by Design principles, as described by the Australian eSafety Commissioner.
Child Rights Impact Assessment
And now, with the adoption of General Comment 25 by the UN Committee on the Rights of the Child, child rights impact assessments have been instituted as a specific recommendation for the business sector. The timing of the Comment’s adoption coincides with the ongoing development and testing of a Child Rights Impact Assessment (CRIA). The document is set to become a series of decision trees to be used by engineers and developers at the design stage of new products and features. These decision trees will include assessment of the age-appropriateness of a feature or service, and guidance will be provided on strategies for minimising risk to children.
The CRIA being developed embodies those principles and constitutes an operationalisation of child-rights due diligence. It takes the widely known 4 Cs of risk (Content, Contact, Conduct and Contract) as its organising categories, each providing a lens through which the impacts of services and features may be assessed. The authors of the CRIA also propose the addition of two interconnected categories: AI and behavioural modification.
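To make the decision-tree idea concrete for engineers and developers, the sketch below shows one way such an assessment could be modelled in code. This is purely illustrative: the node structure, the example “Contact” question and the guidance strings are hypothetical, not taken from the CRIA itself.

```python
from dataclasses import dataclass
from typing import Optional

# The six organising categories discussed in the text: the established
# 4 Cs plus the two proposed additions.
CATEGORIES = [
    "Content", "Contact", "Conduct", "Contract",
    "AI", "Behavioural modification",
]

@dataclass
class DecisionNode:
    """One yes/no question in a category's decision tree.

    A node with no branches is a leaf carrying mitigation guidance.
    """
    question: str
    yes: Optional["DecisionNode"] = None   # follow-up if answered "yes"
    no: Optional["DecisionNode"] = None    # follow-up if answered "no"
    guidance: str = ""                     # advice returned at a leaf

def assess(node: DecisionNode, answers: dict) -> str:
    """Walk the tree using pre-collected answers; return leaf guidance."""
    while True:
        branch = node.yes if answers.get(node.question) else node.no
        if branch is None:
            return node.guidance
        node = branch

# Hypothetical fragment of a "Contact" tree for a new chat feature.
contact_tree = DecisionNode(
    question="Does the feature let unknown adults contact a child?",
    yes=DecisionNode(
        question="Can contact be limited to approved users by default?",
        yes=DecisionNode("", guidance="Ship with approved-contacts on by default."),
        no=DecisionNode("", guidance="High risk: redesign before launch."),
    ),
    no=DecisionNode("", guidance="Low contact risk; document and proceed."),
)

answers = {
    "Does the feature let unknown adults contact a child?": True,
    "Can contact be limited to approved users by default?": True,
}
print(assess(contact_tree, answers))
```

A real CRIA tree would of course carry many more questions per category, but the shape is the same: each answer narrows the path until the assessment reaches concrete risk-minimisation guidance.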
The uptake and progression of AI long ago outpaced existing regulation, and while many codes of practice and guidance documents have been released, lawmakers have not yet tackled the potential harms that the use of AI, in certain ways and for certain purposes, may pose, especially to younger users. Behavioural modification is, or can be, a consequence, intentional or otherwise, of AI.
For example, a video streaming service may recommend to a child another piece of content for them to view next. It is vital that we as a society consider how a child's agency can be safeguarded when they are still unaware of the black-box mechanisms behind what is presented to them as a choice. In this way, AI and behavioural modification are intertwined. Together with the existing groupings, they will comprise the A, B, Cs of risk.
The CRIA, then, will build on the existing knowledge base, creating opportunities for standardised practice across industry with the best interests of children at heart. It will sit at the nexus of regulatory work, developing governance frameworks and industry trust and safety activity.