Ronquest N, Clinkscales M, Paret K. How are United States ICER's evidence ratings determined?: A systematic review of ICER's evidence ratings in evidence reports for new drugs in 2020 and 2021. Poster presented at ISPOR Europe 2022; November 7, 2022; Vienna, Austria. [abstract] Value Health. 2022 Dec 1;25(12):299-300. doi: 10.1016/j.jval.2022.09.1479

OBJECTIVES: Although the United States Institute for Clinical and Economic Review's (ICER's) value assessment framework is designed to align with methods used by major global health technology assessment agencies, its method of rating the evidence for each intervention's comparative clinical effectiveness is unique. The objective of this research was to understand how evidence ratings were assigned to interventions recently reviewed by ICER and what factors may influence ICER's rating decisions.

METHODS: Based on a systematic review of all evidence reports for drugs published in 2020 and 2021, we summarized the characteristics of interventions that received each level of rating and the frequency of rating revisions between draft and final reports. Evaluations in which the rating changed were critically assessed to summarize the potential influence of stakeholder comments on the rating.

RESULTS: In total, 45 interventions were reviewed across 17 assessments published in 2020 and 2021; 8 assessments provided multiple ratings per intervention (6/17 for different subpopulations, 8/17 for different comparators), which resulted in 68 total ratings for combinations of interventions, populations, and comparators. Although the most common ratings were promising but inconclusive (P/I: 19% [13/68]) and insufficient (I: 19% [13/68]), more than one-third of cases were rated B+ or better (B+: 24% [16/68], A: 13% [9/68]). Ratings of B+ or better were given to interventions that resulted in substantial improvement in a key clinical endpoint compared with the selected comparator. When only indirect efficacy evidence was available, the ratings were C+ or below. There were only five instances in which evidence ratings changed between draft and final reports.

CONCLUSIONS: Although a considerable proportion of evidence ratings in recent ICER reports were B+ or better, stakeholder input rarely changed the ratings. Future research is warranted to better characterize the magnitude and likelihood of net health benefit that define each evidence rating.
