This page provides responses to the most frequently asked questions about the Report Card project. If you do not find an answer to your question here, you are welcome to use the form below to contact us.
- Who produced this Report Card?
- Where does the data used in the Report Card come from?
- What are the main publicly available data sources used in the Report Card?
- What is the G-FINDER report, and does it really provide a comprehensive picture of neglected disease research?
- How did UAEM collect data from universities for the metrics that relied partially or wholly on self-reported information?
- How does the Report Card fairly evaluate non-responding universities?
- How does the Report Card fairly evaluate universities with varying sizes and research budgets?
- Does the Report Card measure all possible ways in which universities impact global health?
- Does the Report Card evaluate university global health research and training activities beyond neglected diseases?
- Does the Report Card address research on chronic/non-communicable diseases (NCDs) like cancer, heart disease or mental health, which are increasingly prevalent in the developing world?
Please use the form below to submit questions, feedback and any other input on the Report Card:
Who produced this Report Card?
This project was conceived, developed and produced by Universities Allied for Essential Medicines (UAEM), an international nonprofit organization of graduate and undergraduate students in medicine, research, law, and related fields. Students from a wide range of U.S. and Canadian institutions, including many of those evaluated by the Report Card, contributed to the project. Research and analysis for the Report Card were conducted over the course of 2012 and early 2013. Funding for the project was provided by the Doris Duke Charitable Foundation, the Open Society Foundations, the Perls Foundation and the Moriah Fund.
Where does the data used in the Report Card come from?
As detailed in the methodology, this evaluation is based on a combination of metrics derived from (a) publicly available information and (b) self-reported data from evaluated institutions.
To promote fair evaluation and methodological rigor, we used standardized, authoritative, publicly accessible data sources for as many metrics as possible. Six of our 14 metrics rely entirely on publicly available data sources, while another four are derived from a combination of publicly available and self-reported data.
Self-reported data was sought only for metrics on which public information was limited or inconsistent. Even then, we verified this data wherever possible – for example, we asked respondents to include names, descriptions, and course catalog links for courses on innovation or access to medicines issues, which we then verified and supplemented with our own searches of online university course catalogs.
What are the main publicly available data sources used in the Report Card?
The most significant sources of publicly available data used in this evaluation are:
- National Institutes of Health RePORTER
- Canadian Institutes of Health Research (CIHR) Funded Research Information
- G-FINDER (Global Funding of Innovation for Neglected Diseases) Public Search Tool
- Association of University Technology Managers (AUTM) STATT Database
- List of signatories to the Statement of Principles and Strategies for the Equitable Dissemination of Medical Technologies
- List of signatories to the Nine Points to Consider in Licensing University Technology
- University websites, technology transfer office websites, and online course catalogs
What is the G-FINDER report, and does it really provide a comprehensive picture of neglected disease research?
As one of our most significant sources of publicly available data, G-FINDER deserves particular attention and explanation. The G-FINDER report is produced annually by the nonprofit organization Policy Cures with funding from the Bill and Melinda Gates Foundation. It is a comprehensive survey of worldwide funding for research and development of innovative neglected disease treatments, medicines and health technologies, compiling grant data from more than 100 funders, including USAID, the Bill and Melinda Gates Foundation, and the Howard Hughes Medical Institute.
The G-FINDER report also establishes a specific, inclusive and empirically grounded definition of “neglected diseases.” That definition and the specific diseases included are detailed here.
For these reasons, we consider G-FINDER a “gold-standard” data source for both defining neglected diseases and cataloging the extent of research in these areas, and UAEM relied heavily on it for both purposes in developing our own evaluation. While the G-FINDER data and definitions may not capture every university research project that could conceivably relate to neglected diseases, we are confident that it is the most rigorous and comprehensive record of neglected disease research funding available today, and that relying on it is more methodologically sound than relying on self-reported estimates of research investment from universities with varying definitions of neglected diseases and varying methods of budgeting and accounting.
How did UAEM collect data from universities for the metrics that relied partially or wholly on self-reported information?
Separate questionnaires were developed for each of the Report Card’s three sections (access, innovation and empowerment). Questionnaires were provided in online format using Qualtrics, a leading survey tool. Each section questionnaire was e-mailed to the officials best suited to provide data for that section – vice presidents, provosts or equivalent heads of research for the innovation questionnaire; technology transfer officials for access; and deans or equivalent heads of medical, public health and law schools for the empowerment section.
Extended open reporting periods were provided for each of the three section questionnaires – in October and November 2012 for innovation and access, and in January 2013 for empowerment. During these periods, relevant university officials were contacted at least twice each, including e-mail and/or phone follow-ups to those from whom we did not receive an initial response.
Finally, UAEM sent provisional scores and grades to every university President’s or Chancellor’s office in advance of the public release, providing one more opportunity to submit updated or missing data for metrics based on self-reported information. Several institutions responded with additional information, which was incorporated into the publicly released grades. Several more acknowledged receipt of the pre-release notice.
How does the Report Card fairly evaluate non-responding universities?
We took great care to weight the Report Card metric scores so that a non-responding institution that earned high marks on the metrics based on public information could still receive a competitive score. As the website indicates, three of the top five highest-ranking institutions did not respond to at least one of the questionnaires, yet still scored very competitively. It is also important to note, however, that because transparency and disclosure are values we sought to emphasize in every aspect of this project, universities that did respond to the questionnaire for a given section received minimum credit on that section’s self-reported metrics (typically 1 point out of 5), regardless of the substance of their response.
How does the Report Card fairly evaluate universities with varying sizes and research budgets?
Because the universities selected for evaluation vary in significant ways (e.g. funding levels, student body size, public vs. private status), we designed the Report Card metrics and scoring systems to minimize the impact of such differences.
Most importantly, almost all quantitative metrics are “normalized” with respect to institutional funding, total number of licenses executed, or another school-specific variable that serves as a proxy for university size. For example, rather than scoring a university on the absolute dollar amount of funding devoted to neglected disease research, or the absolute number of non-exclusive licenses executed in a given year, we divided these numbers by a relevant total for that school (total NIH funding or total licenses executed) to arrive at a percentage for each institution. All institutions with percentages falling in the same scoring range received the same score, regardless of absolute institutional size. This approach enabled meaningful comparison across institutions while minimizing the impact of variations in size, budget, or resources.
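To make the normalization concrete, the following is a minimal sketch of the calculation in Python. The scoring bands, point values and dollar figures shown are hypothetical illustrations only – not the Report Card’s actual thresholds.

```python
# Minimal sketch of percentage-based normalization and binning.
# BANDS below is a hypothetical example, not the Report Card's
# actual scoring ranges.

def normalized_score(nd_funding, total_funding, bands):
    """Convert an absolute dollar amount into a size-independent score.

    nd_funding    -- dollars devoted to neglected disease research
    total_funding -- the school's total research funding (e.g. total NIH funding)
    bands         -- (minimum percentage, score) pairs, sorted from
                     highest threshold to lowest
    """
    percentage = 100.0 * nd_funding / total_funding
    for threshold, score in bands:
        if percentage >= threshold:
            return score
    return 0

# Hypothetical bands: 5% or more of total funding earns 5 points, and so on.
BANDS = [(5.0, 5), (3.0, 4), (2.0, 3), (1.0, 2), (0.5, 1)]

# Two schools of very different absolute size receive the same score,
# because both devote 3% of their funding to neglected disease research.
print(normalized_score(30_000_000, 1_000_000_000, BANDS))  # large school -> 4
print(normalized_score(1_500_000, 50_000_000, BANDS))      # small school -> 4
```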
For non-quantitative metrics, the Report Card employs pre-defined sets of discrete categories by which all universities can be uniformly evaluated, and for which performance is again likely to be independent of variations in university size, funding, capacity or resources. For example, on the first question in the access section, each university’s public commitment to socially responsible licensing was sorted into one of five pre-defined categories based on the specificity and detail of the commitment the school had made. All universities falling into the same category received the same score.
Does the Report Card measure all possible ways in which universities impact global health?
We acknowledge that, as with any assessment tool, our metrics are imperfect. It would not be feasible, especially in this first iteration of the Report Card, to produce an evaluation that measured every single contribution a given university has made to advancing global health. Furthermore, these metrics capture only a snapshot in time – because of limitations in the time period of available data and the time required to compile and produce this evaluation, significant university initiatives launched within the past 12-18 months may not be captured.
We also recognize that many individuals and research groups within lower-ranked universities are doing groundbreaking, high-impact work that may not be specifically highlighted or fully accounted for by our methodology. Our intention is that the Report Card be viewed as an assessment of each institution as a whole in relation to its peers; it should in no way be seen as discrediting outstanding individual efforts.
Still, drawing conclusions (to an acceptable degree of statistical certainty) from much smaller amounts of representative data – as when generalizing from a research sample to a larger population – is fundamental to the scientific method and the research enterprise as a whole. We took great pains to develop a wide range of methodologically rigorous metrics in order to capture a diversity of significant global health contributions. We believe that our metrics are rigorous and fair, providing a methodologically sound “snapshot” of university contributions to several of the most critical global health domains.
Does the Report Card evaluate university global health research and training activities beyond neglected diseases?
The Report Card includes several metrics intended to capture activities in broader global health areas, particularly in the Empowerment section. Empowerment question 1 credits institutions for offering global health programs or study tracks, while question 2 evaluates schools on the percentage of research funding received from the Fogarty International Center, the NIH’s primary institute for funding research and training focused explicitly on international health. In several cases, universities have requested credit for research or partnership programs that are already counted by these metrics.
At the same time, we acknowledge that the Report Card emphasizes research on neglected diseases and access to health technologies originating at universities. These are areas of global health that universities are uniquely positioned to impact, but which have traditionally been overlooked or under-emphasized – as the Report Card’s general findings of lower performance in these areas confirm. These metrics can also be reliably measured using high-quality, consistent, publicly accessible data sources, such as G-FINDER.
We found that systematically quantifying, evaluating, and comparing broader global health research and programmatic activity beyond the metrics included here is a much more challenging endeavor. For example, field-based clinical programs are often funded by a wide range of government, foundation, and industry sources with widely varying degrees of public grant disclosure. We came to the difficult conclusion that systematically evaluating such activities beyond the above-mentioned metrics was not feasible for the first version of this project, and that attempting to do so would result in an unacceptable sacrifice of methodological rigor. However, we are actively seeking ways to more fully capture these additional contributions in the next iteration of the Report Card, and we welcome suggestions for doing so.
Does the Report Card address research on chronic/non-communicable diseases (NCDs) like cancer, heart disease or mental health, which are increasingly prevalent in the developing world?
The Access section of the Report Card evaluates university activities that are absolutely essential to addressing the growing global NCD epidemic. When it comes to NCD research, the primary challenge is not that universities are failing to devote a large percentage of research dollars to cancer or heart disease, or other leading NCDs; it’s that their innovations are likely to come to market at astronomical prices unless they are patented and licensed in a socially responsible manner.
This is exactly the issue in the recent Indian court ruling on Novartis’ leukemia drug Gleevec. The basic research behind Gleevec was conducted largely in academic laboratories, but the resulting technology was ultimately transferred to the drug company Novartis, which sought to enforce exclusive intellectual property rights in India on tenuous grounds in order to reduce competition from more affordable generic alternatives. Today, NCD innovations regularly come to market at prices of tens of thousands of dollars per patient per year (the U.S. price of Gleevec is approximately $70,000 per patient per year). Such medicines and treatments simply won’t reach low-income patients in the developing world unless steps are taken to promote locally affordable versions.
The bottom line is this: While we laud universities’ extensive and important research on globally prevalent non-communicable diseases, institutions that are seriously committed to impacting global health through NCD research must be vigorously employing socially responsible licensing strategies to enable affordable generic production of resulting medicines in developing countries without delay.