Streamlining manual output checking

Lead: Rachael Williams (MHRA)

Proposal

Summary

This workshop will bring the community together to brainstorm strategies and tools for streamlining manual output checking in TREs.

Whilst automated output checking is going some way to minimising bottlenecks, manual verification is seen by many as a crucial step in maintaining confidentiality and in making sure that outputs align with the intentions of individual projects. This manual step can be time-consuming and prone to errors.

Participants will brainstorm effective techniques for optimising this process, including best practices to enhance efficiency without compromising governance.

By the end of the workshop, it is hoped that participants will be equipped with practical insights to enhance the speed and accuracy of manual output checking, ultimately improving the overall research workflow.

Preparation

Please bring your experience, ideas, and questions around what works and what doesn’t in the world of manual output checking, such as checklist development, collaborative workflows, and quality assurance measures.

Target audience

All those involved in manual output checking, whether from a policy and procedures perspective or with hands-on experience.

Session

Summary

The room discussed how manual output checking can be connected to more automated methods, for instance SACRO, and how organisations can transition from purely manual checking to more automated approaches.

Tips were also shared on how to make manual checking simpler.

Raw notes

  • SACRO provides a set of drop-in tools that researchers use alongside R or Stata; at the point they want to create an output, they type “acro”. It then runs the checks and produces the output, highlighting any potential issues. Sometimes the automated checks won’t apply, in which case an exception request can be submitted (see the sketch after these notes).

  • For machine learning models there are a number of things that can be checked. The gap (where work is ongoing) is how to apply the methods used for traditional statistical outputs to machine learning models. A pool of expertise is being built in this area.

  • Organisations that have always used a TRE, where researchers are used to following set procedures, may see lower volumes of exception requests than organisations that have previously used other models of data access.

  • TIPS

    • Try to encourage researchers to do more than the minimum, for example by undertaking the Safe Principles course so they understand how to police themselves.

    • Request a set format/template for outputs, to make both automated and manual checks easier (a minimal example of the kind of check this enables is sketched after these notes).

    • Encourage the project lead to approve outputs before they are submitted for automated checks.

    • Options such as Azure Cognitive Search have previously been used successfully in the financial sector to search for individuals’ PII.

    • SACRO also provides options for adding comments that can be shared between researcher and reviewer throughout the process.

  • KEY MESSAGES

    • Early engagement with, and education of, your researchers is key.

    • Explore tools that can do automated checking, but allow for the ongoing conversation between researcher and reviewer.

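To make the SACRO workflow in the notes concrete, below is a minimal sketch of the Python flavour of the same pattern, using the open-source acro package (the R and Stata interfaces follow the same “type acro” idea). This is a sketch based on the package’s documented usage, not a definitive implementation: the dataset, file paths, column names, output identifier, and exception text are illustrative assumptions, and exact method names may vary by version.

```python
# Minimal sketch of a checked output via the Python "acro" package.
# Paths, column names, output id, and exception text are illustrative.
import pandas as pd
from acro import ACRO

acro = ACRO(suppress=False)        # record check results rather than auto-suppress
df = pd.read_csv("safe_data.csv")  # hypothetical dataset available inside the TRE

# Build the table through acro rather than pandas directly, so the
# disclosure checks (e.g. low cell counts) run as the output is created.
table = acro.crosstab(df.year, df.grant_type)

# If a check flags an output the researcher still believes is safe,
# a justification can be attached as an exception request.
acro.add_exception("output_0", "Aggregate counts only; justification attached.")

# Write the outputs plus the checking report for the human reviewer.
acro.finalise("outputs", "json")
```
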
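As a small illustration of the “set format/template” tip: if researchers always submit frequency tables as CSV files in a known layout, even a short script can triage them before a human looks. The sketch below assumes conventions that are not from the notes (numeric counts in a CSV whose first column holds row labels) and an assumed threshold of 10, which is not any particular TRE’s policy.

```python
# Sketch: flag cells in a counts table that fall below a minimum
# threshold, so the manual reviewer can jump straight to problem cells.
# The layout (CSV, first column = row labels) and threshold are assumptions.
import sys
import pandas as pd

THRESHOLD = 10  # assumed minimum safe cell count


def flag_small_cells(path: str) -> pd.DataFrame:
    """Return the (row, column, count) triples that fall below THRESHOLD."""
    table = pd.read_csv(path, index_col=0)
    # Keep only offending cells, then flatten to one row per cell.
    small = table[table < THRESHOLD].stack().dropna()
    return small.rename("count").rename_axis(["row", "column"]).reset_index()


if __name__ == "__main__":
    issues = flag_small_cells(sys.argv[1])
    if issues.empty:
        print("No cells below threshold; pass to manual review.")
    else:
        print("Cells needing reviewer attention:")
        print(issues.to_string(index=False))
```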