We started by systematically reviewing the literature to understand the different approaches and criteria used to assess the quality and impact of eHealth tools. We searched the PubMed, Cochrane, Web of Science, Scopus, and ProQuest databases for studies published in English between January 2012 and January 2022, which yielded 675 results, of which 40 studies met the inclusion criteria. We followed the PRISMA guidelines and the Cochrane Handbook for Systematic Reviews of Interventions to ensure a systematic process. Similar measures from the different papers, frameworks, and initiatives were aggregated into 36 unique criteria grouped into 13 clusters. Using a sociotechnical approach, we classified these criteria into technical, social, and organizational assessment criteria.
A peer-reviewed paper transparently reporting the method and outcomes of the systematic literature review has been published in JMIR Human Factors and can be accessed here: humanfactors.jmir.org/2023/1/e45143
We wanted to address some of the challenges identified in the foundational work by validating the initial list of assessment criteria from our systematic review through a diversified expert panel. Expert consensus helped define which criteria were must-haves, which were less important, and whether any criteria were missing and should be added to the validated framework. Beyond the final list of criteria, discussions with the experts also addressed their diverse perspectives on how to make the assessment instrument as accessible and usable as possible.
We conducted a two-round modified Delphi process to validate and refine the initial list of 55 assessment criteria synthesized from our systematic literature review of pre-existing assessment frameworks. Consensus was defined a priori as at least 75% agreement. Both voting rounds were conducted electronically. Between the rounds, one-to-one semi-structured interviews were conducted to explore the perspectives of diverse stakeholders on key challenges and directional decisions regarding the proposed assessment instrument.
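As a minimal illustration of the a priori consensus rule described above (a criterion reaches consensus when at least 75% of voting experts agree), the check can be sketched as follows. The function name and the sample vote tallies are hypothetical; only the 75% threshold comes from the study design.

```python
# Sketch of the a priori consensus rule used in the Delphi rounds:
# a criterion reaches consensus when >= 75% of the voting experts agree.
# The threshold is from the study design; the function and tallies are hypothetical.

CONSENSUS_THRESHOLD = 0.75

def reaches_consensus(votes_agree: int, votes_total: int) -> bool:
    """Return True if the share of agreeing votes meets the 75% threshold."""
    if votes_total == 0:
        return False
    return votes_agree / votes_total >= CONSENSUS_THRESHOLD

# Hypothetical tallies for two criteria voted on by a 57-expert panel:
print(reaches_consensus(45, 57))  # 45/57 ≈ 0.79 → True
print(reaches_consensus(40, 57))  # 40/57 ≈ 0.70 → False
```

Criteria that fail the threshold in Round 1 would typically be discussed (here, in the between-round interviews) and re-voted in Round 2.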
A peer-reviewed paper transparently reporting the method and outcomes of the Delphi study has been published in npj Digital Medicine and can be accessed here: www.nature.com/articles/s41746-023-00982-w
The Ethics Committee of Northwest and Central Switzerland (EKNZ) determined that ethical approval was not needed for this study according to the Federal Act on Research involving Human Beings, article 2 paragraph 1 (reference number Req-2022-01499). Only the research team from the research partner (FHNW) has access to participants’ data (this does not include the practice partners).
All study results are anonymised. Members of the expert panel who are publicly recognised or attributed a personal quote have waived their anonymity rights and have explicitly approved the public use of their name and the direct quote attributed to them.
The expert panel was composed of 57 experts from 18 countries and 9 stakeholder groups: eHealth experts, clinicians, patients and patient advocates, researchers, pharma executives, insurance and reimbursement experts, regulatory experts, investors, and eHealth technology providers.
The following experts waived their anonymity rights and agreed to be publicly recognised for their contributions to this project. The remaining experts did not waive their anonymity and therefore remain anonymous.
The project team is deeply grateful to all the experts who generously shared their time and expertise to help shape this work. The experts below are listed in alphabetical order; clicking on a photo leads to their LinkedIn profile.