How we’re creating a toolkit for evaluating digital health products
Read about how we’re moving towards a stronger evidence base in digital health.
Blog originally published on the Department of Health and Social Care’s digital blog.
Imagine you’re a product manager who works in public health. You lead a team in developing a digital health product for quitting smoking. The product has thousands of active users across the UK.
Your organisation wants to know what effect the product has had on people’s health, but there’s no budget available to evaluate it. The team did not build in indicators from the start, so you’re unsure whether it’s achieving its intended health outcomes — primarily, helping people quit smoking.
At Public Health England (PHE), we’re working on a project to enable PHE and the wider health system to better demonstrate the impact, cost-effectiveness and benefit of digital health products to public health.
We’re developing an evaluation toolkit, which supports product managers and the rest of their delivery team in building an evaluation strategy into their project from the start. The toolkit helps teams understand if their digital health product has achieved its intended health outcomes.
During the alpha phase of this project, we tested the value proposition behind the evaluation toolkit by supporting the PHE Couch to 5K team in building their own evaluation strategy for their Couch to 5K app. We used our discovery research to define how the evaluation service should work, and which steps are crucial to the evaluation process.
Based on our findings, we defined the stages of carrying out an evaluation as:
We tested this process with the Couch to 5K team over a series of workshops, with positive results. The team found that:
A crucial part of defining your digital health product’s outcomes is creating a logic model. We tested the logic model template in the evaluation toolkit with several teams:

- The Couch to 5K team at PHE, who used it to kick-start their evaluation journey.
- The Health Checks team at PHE, who are working on preventing cardiovascular disease in 40 to 74 year olds and used it to understand their outcomes as a team.
- The Vitamins project team at the Department of Health and Social Care (DHSC), who are working on distributing vitamins to low income families and used the logic model to understand their intended health outcomes.
- The Digital Health Intelligence team at PHE, who used it to align their team around their outcomes.
- NHS Digital, who used a logic model to create project-specific indicators that measure the benefit of the NHS.UK platform in improving health literacy.
These tests allowed us to create a template that helps teams decide on the intended health outcomes of their digital health product, and how they’ll achieve them. As well as helping people define their outcomes, the template helped teams and their wider stakeholders align on their goals for the project. A business analyst at the Department of Health and Social Care shared that:
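To make the idea concrete, a logic model can be thought of as a structured map from the resources you put in, through what the product does, to the health outcomes you hope to achieve. The sketch below uses the standard logic model stages (inputs, activities, outputs, outcomes, impact); the field names and example entries are illustrative, not taken from the toolkit’s actual template.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal, illustrative representation of a logic model."""
    inputs: list[str] = field(default_factory=list)      # resources you put in
    activities: list[str] = field(default_factory=list)  # what the product does
    outputs: list[str] = field(default_factory=list)     # direct, countable results
    outcomes: list[str] = field(default_factory=list)    # changes in behaviour or health
    impact: list[str] = field(default_factory=list)      # long-term, population-level change

# Hypothetical example for a stop-smoking app (entries are invented for illustration)
quit_smoking = LogicModel(
    inputs=["delivery team", "app platform", "behavioural science evidence"],
    activities=["send daily motivational notifications", "track smoke-free days"],
    outputs=["number of active users", "notifications opened"],
    outcomes=["users report being smoke-free at 4 weeks"],
    impact=["reduced smoking prevalence in the population"],
)

print(quit_smoking.outcomes[0])
```

Laying the stages out like this is what lets a team trace each intended outcome back to a specific activity, which is why the template also worked as an alignment exercise.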
We carried out three rounds of usability testing with people from digital delivery teams at PHE, DHSC, charities and health start-ups. Based on our findings, we decided to focus on product managers as our primary users because they:
We learned that people were most trusting of evaluation advice that came from colleagues. As a result, we set up online evaluation communities on Slack and KHub, to give people a space to share evaluation advice.
We also worked closely with partners at NICE, the NHS Service Manual and the apps library to ensure that the evaluation toolkit fits with their work and can be linked to from their platforms. This way, evaluation advice will spread through the health system via colleagues.
In our first iterations of the prototype, we included a bank of common indicators. People could browse indicators in their subject area, to get a measure of how well they were meeting their intended health outcomes.
During our research we learned that people felt confident choosing indicators without the bank, and that the indicators people chose were often very specific to their product. In the next rounds of testing we redesigned the indicators section to include guidance only, with no bank of common indicators.
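A product-specific indicator typically pairs a measure (how the data is collected) with a target (the threshold that counts as success). The sketch below is a hypothetical structure assumed for illustration — the name, fields and example values are not from the toolkit.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """Illustrative product-specific indicator (hypothetical structure)."""
    name: str
    measure: str   # how the data is collected
    target: float  # threshold that counts as success

    def met(self, observed: float) -> bool:
        """Return True if the observed value meets or exceeds the target."""
        return observed >= self.target

# Invented example for a stop-smoking app
four_week_quit = Indicator(
    name="4-week quit rate",
    measure="self-reported smoke-free status at 28 days",
    target=0.20,
)

print(four_week_quit.met(0.24))  # observed 24% against a 20% target
```

Because indicators like this are tied to one product’s specific outcome and data source, a generic bank of common indicators added little — which matches what the research found.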
At this early stage in the design process, it was important to know whether the service would work for people with access needs. Static versions of the landing page and logic model wireframes from the evaluation toolkit were tested with users who had:
We learned a lot from these sessions, and made changes to the prototype:
We also carried out a number of testing sessions with academics from Edinburgh University, King’s College London and Imperial College London to get feedback on the evaluation process and check that our explanation of evaluation was correct. The feedback we received was that we needed to define ‘evaluation’ and ‘evaluation methods’ more clearly, so we spent time refining these definitions until we reached agreement. We also validated the process for evaluation, and added the ‘analyse your data’ page to the homepage after hearing that this was a key part of the evaluation process we had been missing.
Throughout alpha, we worked on making the language around evaluation understandable to non-evaluation experts. We carried out sense checking sessions, where evaluation and non-evaluation experts gave feedback on the evaluation toolkit. Our findings from these exercises helped us to:
Alongside the evaluation toolkit, we’re developing an evaluation culture at PHE. A culture that allows time for evaluation and fosters the skills needed to carry it out is crucial to ensuring that evaluation is adopted.
Throughout the alpha phase, we researched and prototyped ways to build the evaluation culture at PHE. We created online channels for an evaluation community, which will continue to grow throughout the project. The Slack community immediately gained interest from people in the public health sector, with very little promotion.
We hosted an evaluation event that brought evaluators and those interested in evaluation together to share best practice. During the event, we encouraged people to share what they wanted out of an evaluation community. As the project continues, we’ll continue exploring what evaluation training could look like, building on the work done during the proof of concept.
We have held face-to-face testing sessions with delivery teams and have received positive feedback about these sessions. During usability sessions, people expressed a need to grow their evaluation skills.
For this reason, we’re further exploring the idea of evaluation training, where people can take part in a day-long course on evaluation. The evaluation toolkit would support the training, and teams could continue to use it afterwards.
Evaluation may also form a part of DHSC’s spend controls, pipeline guidance and assurance process. We’re working to build it into our approvals and spend control process, so that funding is distributed on the basis of health outcomes. This should incentivise teams to carry out evaluation.
We’ll now move into private beta phase. We’ll continue working with a multidisciplinary team, bringing in academic experts in evaluation and developers to build the toolkit. Our team will continue to create an evaluation service that works for delivery teams, so they can understand the impact that their digital health product is having on users’ health outcomes.