This is the question list you send before you waste a single meeting. It is designed to flush out the real issues: where data goes, what is stored, whether the supplier can meet DTAC expectations, and what clinical safety evidence exists for deployment.
How to use this
Copy/paste the sections into one email. Ask for written answers + evidence links. If they dodge in writing, they will dodge after go-live.
The questions (grouped so you can triage answers fast)
A) Data processing + retention
Where is audio processed (country/region)?
Is audio stored at all? If yes: where, for how long, and can we disable storage?
Are transcripts stored? For how long?
Do you retain prompts/outputs/logs, and what’s the retention policy?
B) Model training + secondary use
Is any customer data used to train models (now or in future)?
If ‘no’, is that contractual and does it cover subcontractors?
If ‘yes/optional’, what is the opt-in mechanism and how is it documented?
C) Access control + auditability
How is user access managed (SSO, RBAC)?
Do you provide audit logs of who accessed what and when?
Can we export logs for governance review and incident investigation?
D) Security controls (minimum bar)
Encryption in transit/at rest?
Pen test frequency and summary availability?
Incident response SLA (notify within X hours)?
Subprocessor list + change notification process?
E) Information Governance alignment (UK practicalities)
What DPIA support do you provide (templates, data flow diagrams)?
Do you support local patient transparency materials (posters/leaflets)?
How do you handle opt-out scenarios operationally?
F) Clinical safety evidence (developer side)
Do you have clinical risk management evidence aligned to NHS clinical safety expectations (developer responsibilities)?
What are your known failure modes and mitigations?
How do you capture and respond to safety feedback from deployments?
G) Deployment safety (adopter responsibilities)
What is your recommended clinician operating procedure (review/correction/sign-off)?
What training package exists and how is competency evidenced?
Do you provide ‘safe defaults’ (eg, minimised storage, conservative outputs)?
H) DTAC evidence pack
Which DTAC domains do you already evidence (and how)?
Provide documents/links for: clinical safety, data protection, technical security, interoperability, usability/accessibility.
What is your timeline for any gaps?
I) Interoperability + record integrity
How do outputs enter the clinical record (integration vs manual)?
Can output be structured to match local templates?
How do you prevent ‘duplicate note drift’ across systems?
J) Commercial + operational (the hidden cost)
Pricing model (per user / per site / per volume)?
What is included in implementation (training, configuration, support)?
What is the support route during clinics (real-time vs email)?
K) Exit + deletion
If we terminate, what data remains and why?
What deletion confirmation do we get?
Can we export outputs/metadata for governance record-keeping before deletion?
Red flag answers (treat as ‘no’)
‘We can’t share subprocessors’, ‘we’re working on retention’, ‘we don’t provide audit logs’, ‘we can’t confirm where processing occurs’, or ‘we train on data but it’s fine’ without a clear opt-in and contract language.
10-minute scoring (decide whether to proceed)
Step 1 — Set ‘must-have’ thresholds
Example thresholds: no audio storage (or strictly controlled), clear retention, auditable access logs, written stance on training, and an evidence pack aligned to DTAC expectations.
Step 2 — Score 0/1 for each threshold
If any threshold is 0, pause and request remediation in writing. Don’t ‘hope it will be fine’ during pilot.
Step 3 — Only then consider ‘nice-to-haves’
UX polish, speed, templates, and workflow extras only matter once governance and safety are defensible.
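The three scoring steps above can be sketched in a few lines of Python. This is a minimal illustration, not a definitive tool: the threshold names are hypothetical labels standing in for the example thresholds in Step 1, and you would substitute your own must-have list.

```python
# Hypothetical must-have thresholds from Step 1; rename to match your own list.
MUST_HAVES = [
    "no_audio_storage",        # or strictly controlled storage
    "clear_retention_policy",
    "auditable_access_logs",
    "written_training_stance",
    "dtac_evidence_pack",
]

def score_vendor(answers: dict) -> tuple:
    """Score 0/1 per threshold (Step 2); return (score, failed thresholds).

    `answers` maps each must-have to True only if it is evidenced in writing.
    A missing answer counts as 0 -- unanswered is treated as a 'no'.
    """
    failed = [k for k in MUST_HAVES if not answers.get(k, False)]
    return len(MUST_HAVES) - len(failed), failed

def decision(answers: dict) -> str:
    """If any threshold is 0, pause; only a clean sheet unlocks Step 3."""
    score, failed = score_vendor(answers)
    if failed:
        return "PAUSE: request written remediation for " + ", ".join(failed)
    return "PROCEED: now weigh nice-to-haves (UX, speed, templates)"
```

Treating a missing answer as a failure mirrors the red-flag rule earlier: an answer you cannot get in writing is a 'no'.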