Question
How can social workers protect against bias when using AI tools?
Answer
Although social workers are typically users rather than developers of AI systems, they play an important role in safeguarding ethical standards when incorporating AI tools into their practice. One key protection is to thoroughly vet AI providers by asking specific questions about the evidence base behind the algorithm, such as:
- What kind of research supports its use?
- Have independent evaluations been conducted?
- How does the algorithm perform across diverse populations?
Another critical protection is ensuring transparency about the tool’s purpose and functionality. Social workers should seek access to summaries or documentation describing how the AI tool makes decisions, especially in high-stakes areas like risk assessment for future violence or mental health crises. Understanding whether the tool has been validated and tested for potential biases helps social workers evaluate whether it is appropriate for use with a particular population.
Finally, social workers can advocate for informed consent and give clients the option to opt out of AI-driven components when feasible. This respects the principle of autonomy and acknowledges that clients may have concerns about how their data is used. Informed discussions, combined with transparency about algorithm limitations, help ensure that AI tools enhance, rather than undermine, ethical and equitable clinical care.
This Ask the Expert is an edited excerpt from the course 'The Ethics of Artificial Intelligence in Social Work Practice,' presented by Allan Barsky, JD, MSW, PhD.