The Ethical Implications of Using AI in Kinship, Guardian, and Parenting Assessments

16.10.2024 04:35 AM By Glenn Payne

As Artificial Intelligence (AI) technology continues to develop, its applications in the family services industry have the potential to revolutionise assessments for kinship care, guardianship, and parenting. AI offers the possibility of automating data collection, identifying risks, and even predicting outcomes based on historical data, making the process more efficient. However, the use of AI in such sensitive areas also raises significant ethical concerns that must be carefully considered.


While AI can enhance decision-making and support social workers or assessors, the ethical challenges surrounding its use—particularly in child welfare and family services—must not be underestimated.


At One Future, while we are closely monitoring advancements in AI, we are not currently using it to inform our assessments. Until the ethical challenges outlined below are adequately addressed, AI will remain a tool we observe with interest, but it will not yet influence or assist in our assessment process. Our priority remains maintaining personal, human-centred care in every assessment we conduct.


1. Bias in AI Algorithms

One of the most significant ethical issues with AI is algorithmic bias. AI systems are trained on historical data, which can reflect societal biases. If an AI model is built using biased data, the outcomes it generates may also be biased. In kinship care and guardianship assessments, this could mean that certain demographic groups—based on race, socioeconomic status, or family structure—are unfairly flagged as higher risk, leading to unjust treatment.

A recent example occurred in the United Kingdom, where an AI system used in child protection faced criticism for disproportionately flagging families from minority and low-income backgrounds. Similar risks could emerge in Australia if AI is applied without sufficient oversight and diversity in its data sets.
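One simple form of the oversight described above is a disparate-impact audit: comparing how often a risk model flags families from different demographic groups. The sketch below is purely illustrative (it does not describe any real system, and the data, group labels, and 0.8 threshold are assumptions, with the threshold borrowed from the common "four-fifths" heuristic):

```python
# Illustrative bias-audit sketch with synthetic data; not any production system.
from collections import defaultdict

def flag_rates(records):
    """Return the fraction of cases flagged 'high risk' per demographic group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in records:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group flag rate.
    A common heuristic (the 'four-fifths rule') treats values below 0.8
    as a signal that the model may need a bias review."""
    return min(rates.values()) / max(rates.values())

# Synthetic assessment outcomes: (group, flagged_as_high_risk)
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", True), ("B", False)]

rates = flag_rates(records)            # group A: 0.25, group B: 0.75
ratio = disparate_impact_ratio(rates)  # 0.25 / 0.75, well below 0.8
```

A check like this cannot prove a model is fair, but a low ratio is exactly the kind of red flag that human reviewers should investigate before any AI output influences a family's assessment.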


2. Transparency and Accountability

AI systems, especially those built on complex algorithms, can sometimes operate as a "black box," meaning it’s difficult to understand how they arrive at specific conclusions. In family services, this lack of transparency could have serious implications for accountability. If an AI system flags a family as unsuitable for guardianship, it’s crucial that the reasons for that decision are clear and understandable. Without transparency, families may find it hard to challenge decisions that could have life-altering consequences for them and their children.


In social services, decisions involving a child's well-being must be transparent and defensible. Any system that makes recommendations or assessments must allow professionals and families to understand the underlying factors and logic.


3. Privacy and Data Security

AI relies heavily on data to function effectively, and in family services this data is often sensitive, including personal information about children, families, and their living conditions. Data privacy therefore becomes a paramount concern. If data is improperly handled or security is compromised, families' privacy could be breached, leading to severe ethical and legal ramifications.


AI systems in social services also carry the risk of data misuse, where sensitive family data could potentially be shared, sold, or used beyond its intended purpose.


4. Human Oversight and AI Dependency

AI has the potential to assist human assessors in making more informed decisions, but it must not replace human judgement. Social care assessments are deeply personal and require nuanced understanding, empathy, and cultural sensitivity—qualities that AI lacks. There is a danger that over-reliance on AI could lead to less personalised care, where decisions are made based solely on data patterns, rather than a full understanding of the unique circumstances of each family.


A report by UNICEF highlighted the risks of “datafication” in child welfare, where human professionals could become overly dependent on data-driven insights, potentially neglecting the complex human factors that are essential in these cases.


5. Informed Consent and Family Autonomy

When using AI in family services, gaining informed consent is essential. Families must be made aware of how AI is used in their assessments and the implications it may have for them. Without informed consent, families might not understand how AI will influence decisions about their care and guardianship, which can lead to distrust.


Additionally, families should have the right to challenge AI-generated assessments. There needs to be a clear, accessible process for families to dispute decisions they feel are incorrect or unfair.



6. Long-Term Impact and Dehumanisation

AI can be a powerful tool for identifying patterns and risks, but if not used thoughtfully it can also dehumanise the assessment process. The family services sector is inherently human-centric, relying on relationships and trust between families and professionals. Over-reliance on AI risks turning assessment into a mechanical process, where the rich, personal aspects of family life are reduced to data points.


The Role of AI in Family Services

AI holds great potential in transforming the landscape of kinship, guardianship, and parenting assessments, offering tools that can streamline processes, draw on current research and evidence, and identify risks more efficiently. However, ethical considerations must remain at the forefront when adopting AI in family services. Addressing bias, ensuring transparency, protecting privacy, and maintaining human oversight are critical to ensuring that AI serves as a beneficial tool without compromising the integrity or fairness of the assessment process.



Glenn Payne

Executive Manager, One Future Group
http://www.onefuture.com.au/

About the Author: Glenn has 25 years' experience in social services, specialising in aged care, disability, and family services. He has held executive roles in technology and strategy and has won international awards for innovation. Glenn is focused on improving outcomes through technology in the sector.