Methodology

Based on feedback on the previous University Report Card release, we have significantly revised or expanded certain metrics to ensure a fairer and more accurate assessment in the 2015 University Report Card. These changes are available for review in the version 2.0 Methodology, as well as in a separate document detailing each adjustment made to the methodology and metrics.

 

The following describes the original methodology employed to develop the first version of the University Global Health Impact Report Card.

All elements of the evaluation, including selection of universities, selection of metrics, data collection, scoring and grading, were conducted by Universities Allied for Essential Medicines (UAEM) between January 2012 and March 2013.

 

SELECTION OF UNIVERSITIES

The Report Card seeks to evaluate the global health impact of leading research universities in the U.S. and Canada. For the purposes of this evaluation, “leading research universities” are defined as those that receive the highest levels of funding from the primary public funding agency for medical research in their respective countries: the National Institutes of Health (NIH) in the U.S., or the Canadian Institutes of Health Research (CIHR) in Canada.

The evaluation list was selected based on FY 2011 funding figures from the NIH’s and CIHR’s publicly available funding databases (RePORTER and Funded Research Information, respectively). The sample was limited to the 60 highest-funded universities in order to focus on institutions that are likely to be significant drivers of medical research, innovation, and education, as well as sufficiently analogous for meaningful comparison. Before the release of the evaluation, six institutions were removed from the sample due to limited data and/or lack of applicability.

 

SELECTION OF EVALUATION METRICS

To provide a comprehensive overview of the global health impact of leading universities, the Report Card measures 14 key performance indicators in three general categories:

INNOVATION:
How well are universities filling the research gap that exists for neglected global diseases that receive comparatively little private investment?

ACCESS:
How well are universities ensuring that their biomedical discoveries are disseminated in an equitable and socially responsible manner?

EMPOWERMENT:
How well are universities preparing the next generation of global health leaders to respond to the access and innovation crises, and to what extent are they engaging in global partnerships that will empower global health leaders in developing countries?

The 14 specific metrics were selected on the basis of the following criteria:

  • Significance as indicators of global health impact
  • Availability of standardized data sources for all evaluated institutions
  • Consistent measurability and comparability across evaluated institutions
  • Ability of evaluated institutions to concretely improve performance on these metrics

NOTE: Specific information on each evaluation metric’s significance, data source, and potential for university improvement can be found in the detailed data “pop-outs” for each institution listed in the left-hand column.

To view this detailed information, simply mouse over the “?” symbol located in the upper right corner of each question box.


Because the universities selected for evaluation still vary in significant ways (e.g., levels of research funding, student body size, and public vs. private status), Report Card metrics and scoring systems are designed to minimize the impact of such differences.

Most importantly, almost all quantitative metrics used in the Report Card are normalized with respect to degree of institutional funding, as a proxy for university size. When evaluating a university’s investment in neglected disease (ND) research, for example, the Report Card considers the percentage of that institution’s overall medical research funding devoted to ND research projects, rather than an absolute dollar amount devoted to ND research. This enables meaningful comparison across institutions while minimizing or eliminating the impact of variations in size, budget, or resources.
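As an illustration of this normalization, consider the following minimal sketch in Python; the funding figures and function name are hypothetical and are not drawn from the Report Card itself:

def nd_funding_share(nd_funding_usd, total_medical_funding_usd):
    # Return neglected-disease (ND) research funding as a percentage of
    # total medical research funding, rather than an absolute dollar amount.
    if total_medical_funding_usd <= 0:
        raise ValueError("total medical research funding must be positive")
    return 100.0 * nd_funding_usd / total_medical_funding_usd

# A large and a small institution can earn the same normalized value:
print(nd_funding_share(12_000_000, 400_000_000))  # 3.0 (%)
print(nd_funding_share(1_500_000, 50_000_000))    # 3.0 (%)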

For categorical metrics, the Report Card employs pre-defined sets of discrete categories by which all universities can be uniformly evaluated, and for which performance is again likely to be independent of variations in university size, funding, capacity or resources.
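For example, a categorical metric can be scored with a fixed mapping from response category to raw score, applied identically to every institution regardless of its size or budget. The categories and point values below are purely illustrative, not the Report Card’s actual scale:

# Hypothetical mapping from response category to raw score (illustrative only)
POLICY_SCORES = {
    "no policy": 0,
    "policy under development": 2,
    "policy adopted": 4,
    "policy adopted and publicly reported": 5,
}

def score_categorical(response):
    # Apply the same pre-defined categories to every institution.
    return POLICY_SCORES[response]

print(score_categorical("policy adopted"))  # 4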

 

DATA SOURCES AND COLLECTION

Report Card evaluation data can be separated into two general categories based on source and method of collection:

CATEGORY 1 –
Data obtained by accessing publicly available sources, such as university websites, online databases, and search engines; these data were collected by UAEM student members, staff, and interns.


CATEGORY 2 –
Data obtained through self-reporting from university officials in response to standardized survey questionnaires designed by UAEM and provided to all evaluated institutions.

Each subsection of the Report Card (Innovation, Access, Empowerment) includes a combination of:

a.) metrics based entirely on data from public sources (CATEGORY 1), and

b.) metrics based either entirely on self-reported data (CATEGORY 2), or, when possible, based on self-reported data supplemented/verified through public data (i.e. CATEGORY 1 + CATEGORY 2).

This combination of metrics enables evaluation of universities that did not respond, or refused to respond, to requests for self-reported data. Furthermore, the Report Card metrics are weighted such that a university receiving maximum or near-maximum scores on all CATEGORY 1 metrics can still earn a section grade of at least “B-” even if no self-reported data were submitted. Thus, non-responding institutions are not precluded from receiving a competitive score.

 

QUALITY AND CONSISTENCY OF DATA

For CATEGORY 1, UAEM took the following steps to address quality and consistency of data collection:

  • Prospectively developed standard operating procedures (SOPs) and standardized data extraction forms, including uniform search terms to which all investigators were required to adhere;
  • Implemented quality assurance procedures to ensure that investigators were obtaining consistent results from the collection procedures;
  • Where possible, had multiple investigators independently and concurrently perform the same data collection and search processes to ensure consistency of the data;
  • Applied standardized scoring across all institutions (see “Scoring” below).

For CATEGORY 2, data quality and consistency, including concerns about questionnaire non-response, were addressed through the following:

  • Provided identical questionnaires to all institutions;
  • Developed a standardized process for identifying and verifying contacts to receive questionnaires at each institution;
  • Used standardized scripts and communication strategies to deliver the questionnaire to all institutions and conduct consistent follow up via e-mail, phone, and other contact methods;
  • Structured self-report questions such that the variable in question was either dichotomous or categorical, rather than continuous, to maximize the consistency and likelihood of responses from institutions;
  • Applied standardized scoring of responses across all institutions;
  • Measured response rates both for the entire questionnaire and for individual questions.

 

SCORING AND GRADING

For each of the 14 metrics, universities were first assigned a raw score from 0 to 5 based on a standardized scoring scale applied to the data gathered for each institution. A standardized weighting multiplier from 0.5 to 5 was then applied to each metric. Weighting multipliers were based on the source of evaluation data (public vs. self-reported) and the relative importance of the metric in question as determined by UAEM. The Report Card displays the weighted score for each metric, which is the product of a university’s raw score and the weighting multiplier.
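The arithmetic described above can be summarized in a short sketch; the example values are hypothetical, and only the 0–5 raw scale, the 0.5–5 multiplier range, and the product rule come from the methodology:

def weighted_score(raw_score, multiplier):
    # Raw scores run from 0 to 5; weighting multipliers run from 0.5 to 5.
    assert 0 <= raw_score <= 5 and 0.5 <= multiplier <= 5
    return raw_score * multiplier

# e.g. a raw score of 4 on a metric weighted at 2.5 yields 10 weighted points
print(weighted_score(4, 2.5))  # 10.0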

Grades for each subsection (Innovation, Access, Empowerment) are determined by the sum of a university’s weighted scores for all questions in that subsection. For each subsection, a standard grading scale was developed that establishes the minimum number of aggregate weighted points required to receive a given grade. Overall grades are determined by a scale based on the sum of the minimum weighted points needed to receive a given grade in each of the three subsections.
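A grading scale of this kind can be represented as an ordered list of minimum point thresholds; the thresholds below are hypothetical placeholders rather than the Report Card’s actual cutoffs:

# Hypothetical subsection grading scale: (minimum aggregate weighted points, grade),
# ordered from highest threshold to lowest.
SUBSECTION_SCALE = [(40, "A"), (34, "A-"), (28, "B+"), (22, "B"), (16, "B-"), (10, "C"), (0, "D")]

def subsection_grade(total_weighted_points, scale=SUBSECTION_SCALE):
    # Return the highest grade whose minimum point threshold is met.
    for minimum, grade in scale:
        if total_weighted_points >= minimum:
            return grade
    return scale[-1][1]

print(subsection_grade(23.5))  # "B"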

It is important to note that Canadian institutions were excluded from two metrics – one in the Innovation subsection and one in the Empowerment subsection – for which there was no clear Canadian analogue to the US data source on which those metrics were based. To account for these exclusions, the Innovation, Empowerment, and overall grading scales for Canadian institutions were proportionally adjusted according to the ratio of the maximum possible weighted score a Canadian institution could receive for those sections to the maximum a US institution could receive. Additionally, the overall ranking of universities is determined by the percentage of the total possible weighted points each university received under the appropriate grading scale.
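The proportional adjustment for Canadian institutions can be sketched as scaling each grade threshold by the ratio of the Canadian maximum possible weighted score to the US maximum; the point totals below are hypothetical:

def adjust_scale_for_canada(us_scale, us_max_points, canadian_max_points):
    # Scale every grade threshold by the ratio of maximum possible weighted scores.
    ratio = canadian_max_points / us_max_points
    return [(minimum * ratio, grade) for minimum, grade in us_scale]

# If an excluded metric removes 10 of 50 possible weighted points (ratio 0.8):
us_scale = [(40, "A"), (30, "B"), (20, "C"), (0, "D")]
print(adjust_scale_for_canada(us_scale, 50, 40))
# [(32.0, 'A'), (24.0, 'B'), (16.0, 'C'), (0.0, 'D')]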