AI in ADHD Assessments: A Useful Tool—But Not a Clinician
- Sarah-Jane Butler

- Apr 17
- 3 min read

Short Read
AI is increasingly being used in ADHD assessments, but it should remain firmly in a supporting role. Diagnosis is a clinical judgement, not a technological output.
Long Read
AI is already part of clinical practice
There is a growing conversation about artificial intelligence in healthcare, often framed in extremes—either as a breakthrough solution or a looming risk. In reality, AI is already quietly embedded in everyday clinical work. Tools such as ChatGPT are being used to support documentation, structure information, and improve written communication.
In ADHD assessment, where large amounts of information are gathered and synthesised, this is particularly relevant.
In my view, the question is not whether clinicians should use AI, but how they use it, and where they draw the line.
Where AI genuinely adds value
ADHD assessments are detailed and often complex. They involve piecing together developmental history, educational experiences, occupational functioning, and multiple sources of evidence. One of the most practical benefits of AI is its ability to bring structure to this complexity. It can help organise information into clear, coherent reports, making them easier to follow for both clients and other professionals.
I also find AI useful in supporting the drafting process. It can help shape report sections and correspondence in a way that is concise and well-structured. However, this is always grounded in my own clinical notes, and every output is carefully reviewed and refined. AI can improve how something is written, but it does not determine what is said.
There is also a role for AI in improving clarity. ADHD reports need to be both clinically robust and accessible. Used well, AI can help refine language so that reports are easier to understand without losing their professional integrity.
Where I draw a firm boundary
Despite these advantages, there is a clear limit to what AI should be doing in clinical work.
AI does not diagnose ADHD. It does not interpret nuance, weigh conflicting evidence, or understand the individual behind the information presented. These are not technical tasks—they are clinical judgements that rely on training, experience, and accountability.
There is also a risk in how convincing AI can be. It is very good at producing text that sounds authoritative, even when it is based on incomplete or imperfect information. In ADHD assessments, this can lead to over-simplified narratives, inflated symptom descriptions, or conclusions that appear more certain than the evidence supports.
For me, this is precisely why AI must remain in a supporting role. It can assist with structure and clarity, but it should never be allowed to shape the clinical decision itself.
Responsibility cannot be outsourced
As a practitioner regulated by the Health and Care Professions Council, I retain full responsibility for every aspect of an assessment, including diagnosis, formulation, and recommendations. This does not change with the introduction of new tools.
AI does not share accountability. It does not carry professional responsibility. That remains entirely with the clinician.
Why this matters
For clients, the use of AI should not be a concern when it is used appropriately. In fact, it can improve the overall quality of the assessment by allowing clinicians to spend less time on administrative tasks and more time focusing on the person in front of them.
However, it is important that clients understand what AI is—and what it is not. It is not a diagnostic tool, and it is not making decisions about their care. It is simply supporting the process behind the scenes.
Looking ahead
AI will almost certainly become more embedded in ADHD assessment over time. We are likely to see tools that can summarise clinical interviews, identify patterns across datasets, and support ongoing monitoring of symptoms and outcomes.
These developments may improve efficiency and consistency, but they do not replace the need for clinical judgement. ADHD assessment is not just about identifying patterns—it is about understanding those patterns in the context of a person’s life.
In my view, the future of AI in this area is not about replacement, but augmentation. It will support clinicians in working more effectively, but the responsibility for interpretation and decision-making will remain firmly human.
Final thought
AI is a powerful and useful tool when used with care. But in ADHD assessment, the most important element remains unchanged:
A clinician applying expertise, judgement, and accountability to understand the individual sitting in front of them.