Artificial Intelligence in Social Care IT Systems
Julie Tyas, Social Care Safety Officer in our Governance, Risk and Compliance team, has written this article about Artificial Intelligence in Social Care IT Systems, which you can read below.
You can find out more about what our Governance, Risk and Compliance team do here: Governance Risk and Compliance
Introduction
The introduction of Artificial Intelligence (AI) into social care systems has the potential to reshape the social care landscape, bringing opportunities to improve service delivery, enhance operational efficiency, and support proactive interventions. However, whilst these capabilities may help to address some critical challenges, they must be balanced against wider ethical considerations, particularly regarding the rights, dignity, and autonomy of people who draw on care services. It is crucial that these challenges are identified and mitigated: AI must align with both ethical principles and the needs of diverse groups of people, with their rights and interests remaining at the centre of all technological innovation in social care.
The Role of AI in Social Care
AI in social care presents a variety of potential benefits, such as improved service delivery, enhanced operational efficiency, earlier intervention, and more time for face-to-face work by reducing administrative burden.
However, it is now widely accepted amongst commentators and stakeholders active in this space that these advantages must be weighed against potential risks. The stakes are particularly high in social care because people often experience heightened vulnerability. Many rely on these systems for essential support, and the introduction of AI into these environments must respect the delicate balance between innovation and the preservation of human agency and dignity.
Governance framework for the responsible use of AI in Social Care IT
Autonomy, dignity and person-centred care
Social care emphasises individualised and empathetic support, values that could be compromised if decision-making becomes overly automated. The risk of “technological determinism,” where technology dictates outcomes without room for human intervention, could threaten the core principles of social work.
People risk losing agency over their care and support if AI-driven decisions are imposed without human oversight. This could alienate individuals and undermine person-centred care principles. Automated systems may fail to consider context or nuance in individual cases that would be identified by a human. People may feel devalued when AI-driven impersonal processes dominate their interactions. Upholding the dignity of individuals requires that AI be a tool to enhance, not replace, person-centred care.
Governance requirement
Governance processes must prioritise the perspectives and needs of people with lived experience. The Oxford University Institute of Ethics in AI argues that the role of human rights and trusting relationships between people and care providers should be central to AI use in social care, stating that “Its use should centre on values underlying high quality care, such as autonomy, person-centredness, and wellbeing”. Only with this level of governance will AI deployment in social care systems align with the core principles of social work.
Involving other relevant stakeholders, such as advocacy groups and social care professionals, in the design, development and evaluation of AI systems can help to ensure the technology is fit for purpose. Forums that gather feedback from these groups can be invaluable for this purpose.
AI can most certainly be a force for good in this area. An LGA survey, carried out from February to April 2024 and answered by 7,068 registered social workers, found that the majority of social workers felt encumbered by the paperwork they had to complete. Using AI responsibly to free up social worker time from administrative tasks, and replacing those tasks with time for face-to-face involvement in which decisions are made jointly with people, can certainly enhance person-centred practice.
Bias and fairness
AI systems may inadvertently perpetuate or amplify biases present in training data. This can lead to discriminatory outcomes, disproportionately affecting marginalised communities. For instance, risk assessment tools that rely on historical data may unfairly label certain groups as higher risk, leading to unequal treatment. Predictive tools in child protection systems may disproportionately flag minority or low-income families for intervention and those without digital literacy or resources may be excluded from AI-supported systems. Bias can arise not only from data, but also from the design and application of algorithms and from the population groups who use the system, further compounding disparities.
Governance requirements
Bias in AI must be actively addressed, and regular oversight mechanisms are required to evaluate AI systems for bias, accuracy, and compliance. This can be achieved by having:
I. diverse stakeholder input during AI design and evaluation,
II. diverse and representative datasets,
III. performance metrics that prioritise equitable outcomes for all demographics,
IV. refinement of algorithms to minimise the risk of unfair outcomes,
V. continuous auditing, ideally with independent partner organisations.
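One part of continuous auditing can be sketched in code. The minimal illustration below, in Python, compares how often an AI tool flags cases for intervention across demographic groups (a demographic-parity check). The groups, log data and threshold interpretation are hypothetical; a real audit would use agreed metrics, governance sign-off and independent review.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of 'flagged' outcomes per demographic group.

    `decisions` is a list of (group, flagged) pairs, where `flagged`
    is True when the tool marked the case for intervention.
    """
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, was_flagged in decisions:
        totals[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in flag rates between any two groups.

    A large gap is a prompt to investigate, not proof of unfairness.
    """
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit log: (demographic group, flagged for intervention)
log = [("A", True), ("A", False), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False), ("B", False)]
rates = selection_rates(log)
gap = demographic_parity_gap(rates)
print(rates)  # group B is flagged twice as often as group A
print(gap)    # 0.25
```

A gap like this would feed into the oversight mechanisms above: it does not decide anything by itself, but it gives auditors a concrete, repeatable number to track over time.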
The World Health Organisation’s “ethics by design” methodology, which mitigates biases at the outset when new AI technologies are developed, is a good rule to go by.
Transparency and Accountability
The complexity of AI models can make decision-making processes opaque, resulting in "black box" systems whose decisions are not easily explainable to people who draw on care or to other stakeholders. This opacity undermines trust and complicates accountability when errors occur. Without explainability, people may not understand how or why certain decisions are made and may struggle to challenge outcomes produced by AI systems. From a governance perspective, ensuring AI systems are transparent and their decisions traceable is non-negotiable to protect people's rights.
Governance requirements
Particularly in the realm of social care and support, AI should augment rather than replace human decision-making. Any critical decision affecting a person's wellbeing must involve a qualified professional who can consider the nuances of the individual case, and AI should not compromise the need for this expertise.
To maintain trust and accountability, AI systems should provide clear, comprehensible explanations of their decision-making processes. People who draw on care, their carers and advocates must be able to challenge AI-driven decisions effectively.
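The kind of explanation this requires can be illustrated with a deliberately transparent scoring model. In the Python sketch below, every factor's contribution to a decision is visible and can be challenged item by item; all factor names and weights are hypothetical, invented purely for illustration, and no real assessment tool is implied.

```python
# Hypothetical weights for a transparent additive score.
# Positive weights raise the score; negative weights lower it.
WEIGHTS = {
    "lives_alone": 2.0,
    "recent_hospital_admission": 3.0,
    "has_informal_carer": -1.5,
}

def explain_score(case):
    """Return the total score and a per-factor breakdown.

    Because the model is additive, the breakdown is the explanation:
    each factor's contribution can be shown to, and queried by, the
    person the decision affects.
    """
    contributions = {
        factor: WEIGHTS[factor] * value
        for factor, value in case.items()
        if factor in WEIGHTS
    }
    return sum(contributions.values()), contributions

case = {"lives_alone": 1, "recent_hospital_admission": 1,
        "has_informal_carer": 1}
score, breakdown = explain_score(case)
for factor, contribution in breakdown.items():
    print(f"{factor}: {contribution:+.1f}")
print(f"total: {score:.1f}")  # 3.5
```

Complex models cannot always be reduced to a breakdown like this, which is exactly why governance frameworks ask whether a simpler, inherently explainable approach would serve the decision equally well.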
Data privacy and confidentiality
AI relies heavily on data, often involving sensitive personal and health information, such as information about physical health, mental well-being, and socio-economic conditions. The potential for misuse or breaches poses significant risks to people, including breaches of confidentiality, identity theft, or discrimination. Frameworks may not effectively communicate how AI will use people’s data, causing challenges in ensuring informed consent.
Where anonymised data is used, advanced analytics may be able to re-identify people by cross-referencing with other data sets. In fragmented provider landscapes, data-sharing inefficiencies can exacerbate these challenges. Ensuring compliance with stringent data protection regulations is critical to safeguarding people’s confidentiality.
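The re-identification risk described above can be made concrete with a k-anonymity check: how small is the smallest group of records sharing the same combination of indirect identifiers? The Python sketch below uses a tiny hypothetical dataset; the field names are invented for illustration, and real anonymisation assessments involve far more than this single measure.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size sharing the same quasi-identifier values.

    If k is small (e.g. 1), at least one person is unique on these
    attributes and could be re-identified by cross-referencing with
    other datasets, even though names have been removed.
    """
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Hypothetical "anonymised" records: names removed, but age band,
# postcode district and care package type retained.
records = [
    {"age_band": "80-89", "postcode": "M1", "package": "domiciliary"},
    {"age_band": "80-89", "postcode": "M1", "package": "domiciliary"},
    {"age_band": "40-49", "postcode": "M2", "package": "residential"},
]
k = k_anonymity(records, ["age_band", "postcode", "package"])
print(k)  # 1: the third record is unique, hence re-identifiable
```

A result of k = 1 on supposedly anonymised data is exactly the scenario the paragraph above warns about, and is why data-sharing arrangements between providers need governance beyond simply stripping names.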
The integration of AI monitoring tools, such as predictive analytics and health tracking, can lead to excessive surveillance of people with lived experience, potentially infringing on privacy and creating a culture of mistrust. Ethical deployment requires finding the balance between proactive care and respecting personal boundaries.
Governance requirements
Robust data governance policies must be implemented to safeguard people’s information. This includes ensuring transparency about how data is collected, stored, and utilised, and securing informed consent. As stressed by the European Commission on Ethical AI, AI must be developed in alignment with legal standards such as GDPR and data protection laws.
Concluding Thoughts
The adoption of AI in Social Care IT systems holds transformative potential to streamline operations, increase early intervention and improve outcomes for people. However, it also demands a rigorous commitment to ethical practices that protect and empower people who are subject to it. From a governance, risk, and compliance perspective, frameworks that ensure AI aligns with the values of social care (equity, dignity, and compassion) must be an integral part of every stage of the deployment of AI in social care systems.
Additionally, organisations should promote ongoing learning and education to keep pace with technological and regulatory advancements. Social care workers must be trained to understand the ethical implications of AI and how to use these tools responsibly. This ensures that the technology is applied in ways that enhance service delivery while respecting the rights and dignity of people with lived experience.
By addressing these ethical challenges proactively, we can harness the benefits of AI while safeguarding the trust and rights of those we serve, ensuring that the technology supports rather than undermines the core mission of social work.
References:
(Ref 01) Skills for Care: Ethics and artificial intelligence in adult social care
(Ref 02) Heder, Mihaly: AI and the resurrection of Technological Determinism
(Ref 03) European Commission on Ethical AI: High-Level Expert Group on Artificial Intelligence.
(Ref 04) Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J: Ethical considerations using ChatGPT in Health Care (Journal of Medical Internet Research 2023;25:e48009)
(Ref 05) Oxford University Institute of Ethics in AI: Oxford Statement on the responsible use of generative AI in Adult Social Care
(Ref 06) Local Government Association: Employer standards survey for registered social workers 2024: National summary
(Ref 07) World Health Organization (WHO): Ethics and governance of artificial intelligence for health
(Ref 08) NHS AI Lab and Transformation Directorate: Pilot projects and ethical considerations in deploying AI for predictive analytics and monitoring.