Top tips for care providers

AI has driven significant cultural and industrial shifts worldwide. Its impact is visible in everyday activities and is increasingly relevant to adult social care. The staff-hour reductions and cost efficiencies achieved in other industries show why demand for AI is so high. However, AI is not always the appropriate solution. These Top Tips provide guidance on understanding what AI is, when it should be implemented, and how to begin the journey of AI adoption. 

Top Tips

The success and reliability of AI systems depend on the quality of the data you provide. It is important to make sure that all the data being used is accurate and complete, and that it is well structured and organised.  
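As a purely illustrative sketch of what "complete and well structured" can mean in practice, the check below flags care records with missing or empty fields before they are passed to any AI tool. The field names and record are invented examples, not a real data standard:

```python
# Illustrative completeness check for care records (field names are
# hypothetical examples, not a required schema).

REQUIRED_FIELDS = ["resident_id", "date", "care_notes"]

def missing_fields(record):
    """Return the required fields that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# Example record with an empty care_notes field
record = {"resident_id": "R001", "date": "2024-09-16", "care_notes": ""}
print(missing_fields(record))  # ['care_notes']
```

A simple gate like this, run before data reaches an AI system, helps ensure the tool only ever sees records that are fit for purpose.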

You will need to make sure that you are complying with the UK General Data Protection Regulation (UK GDPR) when using AI tools. In particular, you will need to consider:  

  • Where the data is processed and whether that is within the European Economic Area; you can normally find this in the technology’s privacy policy; 
  • How you will tell people about how their data is being used.

If in doubt, speak to your software supplier. If you are using a free or online tool, you should not use personally identifiable information because you would be providing confidential information to a third party. As you would not have a contract with the technology supplier, you will have no way of guaranteeing the safety of that information or how the data will be used. 

Under UK law, people have rights over how their data is used. You must tell people how you are processing their data in the course of your activities. The free Better Security Better Care programme can provide support on fulfilling your data protection obligations.

You should consider how your organisation can use AI ethically. The Oxford Project on the Responsible Use of Generative AI in social care has developed a principles framework and guidance on how to use this in practice. They have identified the following core principles for the ethical use of AI: 

  • Truth 
  • Transparency 
  • Equity 
  • Trust 
  • Accessibility 
  • Humanity 
  • Responsiveness 

AI can show biases based on the data it is trained on and this can affect its outputs. 

Bias in AI means systematic unfairness in the algorithms and data. Biases can come from historical and social inequalities in the training data (the data fed into the AI model), from algorithm design, or from human involvement in data curation and labelling.  

Bias in AI can reinforce stereotypes, cause discrimination, and affect decisions based on biased outputs. 

Equity: Ensuring Fairness for All - Digital Care Hub 

The Oxford Project on the Responsible Use of Generative AI in social care has written guidance on equity in AI that gives a useful example of how to mitigate bias. 

As with any new technology, having the training and development in place to make sure that staff are skilled and confident in using AI is essential. The success of AI will depend on the data that is given to it and how people feel about using it.  

Consider identifying super users or digital champions within your staff teams who can act as the internal experts on your AI systems. They can help other users with their questions and issues. Super users also have a crucial role during system implementations or upgrades, as they will help train other users and ensure smooth transitions and adoption.  

Skills for Care have produced guidance for introducing digital champions in social care organisations. They recommend that organisations should provide digital champions with: 

  • The right equipment and technology that is relevant to the skills they're supporting.  
  • Access to learning and continual professional development opportunities.  

You should also make sure that your super users have enough time to support best practice and share learning with your teams.  

“We are expecting to receive proper training and access to continued learning so that we understand the AI technology we are expected to use, the risks of using it in the remits of our work and proper procedures to mitigate and respond to risks. There should be different levels of AI training and contact persons in the company that can support people with lower levels of training. But every member of staff should have basic awareness on AI if it is being used in the company. AI awareness should form part of the care certificate.”

Source: Care workers’ guidance and statement of expectations on the responsible use of AI and particularly generative AI in adult social care 16 September 2024 

It is recommended that you always include a human review of any AI-generated outputs to check accuracy and fairness and to ensure they meet your organisational standards and policies. You may need to implement review guidelines or quality controls so that human processes continue to sit alongside any AI developments.  

It is best to define the human role within any use of AI and to produce an AI Policy, which addresses such questions as:

  • Who will be responsible for the decisions and tasks of that AI?  
  • How does your organisation oversee its use of AI, and who is accountable for this oversight?
  • How will you approach AI adoption – do you use a particular framework or decision tool to assess the appropriateness of AI? 

This is a key step in determining what your organisation needs from any new AI tool and analysing whether a specific AI technology can deliver it. It is best practice to define what you hope to achieve before choosing any technology. You may wish to follow this checklist: 

  • Create a specification by listing what you hope to achieve with the technology; 
  • Speak to the people you support and staff to get their feedback; 
  • Define the potential benefits and risks of using this type of technology; 
  • Identify measurable outcomes you can use to assess success. 

Adopting any new AI technology involves cost. Following these steps will help guide you through the financial process:  

  • Draw up a table comparing the benefits and drawbacks of the AI technology; 
  • Compare this against the cost of the AI tool and consider if it is value for money, considering things like hardware and software costs, as well as more indirect costs such as staff training and system changes;  
  • Then feed this information into a financial plan to track your spending and income; stay alert to hidden costs and be ready to adjust your plan as needed. 
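As a purely illustrative sketch of the value-for-money comparison described above, the example below tallies hypothetical first-year costs against an estimated benefit. All figures are made up for illustration and are not real prices or savings:

```python
# Illustrative value-for-money comparison for an AI tool.
# All figures below are hypothetical examples, not real costs.

def first_year_cost(software, hardware, training, system_changes):
    """Total direct and indirect cost for year one, in pounds."""
    return software + hardware + training + system_changes

# Hypothetical annual figures in pounds
total_cost = first_year_cost(software=3000, hardware=1200,
                             training=1500, system_changes=800)

# Estimated annual benefit: e.g. staff hours saved x hourly staff cost
hours_saved_per_week = 10
hourly_cost = 15
annual_benefit = hours_saved_per_week * 52 * hourly_cost

print(f"Total first-year cost: £{total_cost}")        # £6500
print(f"Estimated annual benefit: £{annual_benefit}")  # £7800
print("Worth pursuing" if annual_benefit > total_cost else "Reconsider")
```

The point of the sketch is simply that indirect costs (training, system changes) belong in the total alongside the headline software price before any judgement about value for money is made.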

All organisations should consider having an AI policy, even if that policy is that staff cannot use AI for work purposes. This way, staff and stakeholders are aware of your organisation’s stance on AI adoption and how you plan to use it. This is important, as there are still concerns and mistrust around the use of AI.   

If you are part of the policymaking process within your organisation, it may be useful to set out how different roles will interact with AI, as this can differ between senior leadership and front-line care delivery staff. Including a review checklist can also help staff know when ongoing human review should take place. Disseminating the AI policy amongst the workforce should also be part of the policy process, so that the organisation can ensure everyone is included and consulted in the AI journey. 

Please see this template AI policy drafted by the Humber and North Yorkshire ICB. 

To continuously improve your use of AI systems, actively seek feedback from both human reviewers and, where possible, the AI system itself. Use this feedback to identify areas for improvement and make necessary adjustments to the human review process. This can form part of your AI policy or be written as a stage in your organisation’s AI process and journey. As the AI evolves and your organisation’s needs change, adapt the review process accordingly.