SDR 11-399 – HSR&D Study
Describing Variation in IRB Efficiency, Quality and Procedures
Daniel E. Hall, MD MDiv MHSc
VA Pittsburgh Healthcare System University Drive Division, Pittsburgh, PA
Aram Dobalian, PhD MPH JD
VA Greater Los Angeles Healthcare System, Sepulveda, CA
Funding Period: July 2012 - December 2014
Data demonstrate unjustifiable variation in the quality and efficiency of IRB review. We therefore attempted to improve the quality and efficiency of the VA IRB review process through an in-depth analysis of IRB processes at 10 VA sites, including the Central IRB.
Using a combination of systems engineering, qualitative, and quantitative methods, we: (1) developed current-state process flow maps of the IRB review processes at each site; (2) measured variation in the efficiency of IRB review in terms of IRB review times; (3) measured variation in the quality of IRB review using the IRB Researcher Assessment Tool (IRB-RAT), the Common Rule Criteria for IRB review, and the Office of Human Research Protections' algorithm for determining the appropriate type of review (e.g., expedited, exempt, full board); and (4) identified high-impact, high-feasibility initiatives for process improvement.
(1) To develop flow maps of the IRB review process, collaborators from the Veterans Engineering Resource Center/Office of Systems Redesign (VERC/OSR) at the Pittsburgh VA conducted a 1-day site visit at each of the 10 study sites. During the visit, the VERC conducted focus groups with IRB stakeholders (e.g., IRB Chairs, IRB reviewers and investigators) to map the review process beginning with the submission of a new IRB protocol and ending with its final determination, either approval or disapproval. Analysis focused on comparing similarities and differences between sites to identify opportunities for process improvement.
(2) To measure variation in IRB review times, we collected IRB records from each site for a random sample of up to 45 protocols (15 exempt, 15 expedited, 15 full board) that received a final determination between January 2010 and December 2011. Two trained coders extracted the time for each incremental step in the review process, allowing us to calculate overall and incremental review times.
(3) We measured the quality of IRB review in three ways:
(a) Trained coders analyzed the sampled IRB materials for evidence of the Common Rule Criteria. The Common Rule tasks IRBs with assessing 8 criteria, such as risk minimization and equitable participant selection. The coders examined the documents for evidence that each criterion was evaluated and rated how thoroughly it was evaluated. Analysis focused on comparing the thoroughness of evaluation between sites and across each of the 8 criteria.
(b) Coders also used OHRP algorithms to determine the appropriate level of review (exempt, expedited, full board) based on the merits of the approved protocol. Analysis focused on comparing the actual review level with the expected review level based on the coders' assessment.
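As a rough illustration of how such a determination branches, consider the following minimal Python sketch; the function name, boolean inputs and logic are simplified assumptions for illustration only, since the actual OHRP algorithm weighs many more regulatory conditions:

```python
def expected_review_level(fits_exempt_category: bool,
                          minimal_risk: bool,
                          fits_expedited_category: bool) -> str:
    """Highly simplified sketch of the top-level review determination:
    exempt categories are checked first, then expedited review for
    minimal-risk research in an eligible category; everything else
    goes to the full board."""
    if fits_exempt_category:
        return "exempt"
    if minimal_risk and fits_expedited_category:
        return "expedited"
    return "full board"
```

For example, a minimal-risk protocol that fits an expedited category but no exempt category would be expected to receive expedited rather than full board review.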
(c) We sent an anonymous online survey to 252 IRB members and 428 investigators who serve on or use the 10 VA IRBs in our study. The survey included a revised and shortened version of the IRB-RAT developed by the team for use in VA settings. Analysis focused on identifying the IRB functions and activities, at each site and across all sites, that were in greatest need of improvement.
(4) We conducted video teleconferences at each site to share our findings regarding review times and quality and to facilitate brainstorming of initiatives for quality improvement.
(1) Although the flow maps demonstrate substantial similarity in IRB review processes across sites, each site's process was unique, and there were substantial differences between sites. The greatest variability was noted in the roles of the information security officers (ISOs) and privacy officers (POs), which often involved inefficient re-work. We also noted that the Research and Development Committee (R&DC) typically added little or no value to the review process despite adding approximately 14 days to the review time. We therefore recommend clarifying the overlapping roles of the IRB, ISO and PO to optimize process design. We also recommend changing policy to expedite R&DC review by authorizing the chair of the R&DC to certify the approvals of the HRPP subcommittees, including the IRB.
(2) Two raters extracted IRB review times from 48 exempt, 106 expedited and 124 full board protocols. The 2 raters agreed on 3038 of 3090 time points (98.3%) abstracted from a sub-sample of 72 (22.4%) protocols. Total review times ranged from 24 to 310 days, with means of 93 ± 51, 107 ± 54 and 131 ± 63 days for expedited, exempt and full board protocols, respectively. Multivariable models using robust variance estimation controlling for site and review level demonstrate that, on average, expedited protocols were reviewed 45 days faster than full board reviews (95% CI = 31-59 days). There was no significant difference between full board and exempt reviews. We found significant between-site differences, with some sites approaching a consensus panel goal of 60 days for IRB review and other sites needing improvement to reach this goal.
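The summary statistics reported here (mean ± SD review times by level, and raw percent agreement between the two coders) are straightforward to reproduce; a minimal Python sketch, with hypothetical review times standing in for the study's records:

```python
from statistics import mean, stdev

# Hypothetical total review times in days, grouped by review level
# (illustrative values only, not the study's data)
times = {
    "expedited": [60, 85, 93, 110, 140],
    "exempt": [70, 95, 107, 120, 150],
    "full board": [80, 110, 131, 160, 175],
}

for level, days in times.items():
    print(f"{level}: mean {mean(days):.0f} ± {stdev(days):.0f} days")

def percent_agreement(rater_a, rater_b):
    """Share of abstracted time points on which two coders agree exactly."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)
```

Raw percent agreement such as the 98.3% reported above is the simplest reliability measure; for categorical determinations the study also reports chance-corrected agreement (kappa).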
(3a) We determined the quality of IRB review in terms of the Common Rule Criteria in 117 expedited and 134 full board protocols. Inter-rater agreement was good in a sub-sample of 74 protocols (kappa = 0.76). Across all 10 IRBs, 94.3% of the criteria were assessed. However, only 32.3% of the assessments were explained with substantive detail; most (62.0%) were unexplained (e.g., a checked box on a checklist). Multinomial logistic regression controlling for site, criterion and review type demonstrates that expedited reviews are less likely to have explained assessments than full board reviews. Significant interactions between site and criteria demonstrate differences that might guide criterion- and site-specific initiatives for quality improvement.
(3b) We applied the OHRP algorithm to determine the appropriate level of review for 63 exempt, 120 expedited and 138 full board protocols. Inter-rater agreement on OHRP level of review was excellent in a sub-sample of 67 protocols (kappa = 0.97). Agreement across all 10 IRBs between the IRB's determination and the expected level of review based on OHRP criteria was 73.5% (kappa = 0.59, p < 0.001). Site-specific agreement ranged from 13.3% to 95.1%. Poisson regression showed that IRBs conducted full board reviews more frequently than expected (p < 0.001), with significant variability across sites. Overall, almost half (48.9%) of protocols given full board review could have been expedited according to OHRP criteria, and this proportion varied from 0.0% to 86.7% across sites. System redesign at some sites might reduce unnecessary variation and improve efficiency.
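Cohen's kappa, used here to correct raw agreement for chance, can be computed directly from two raters' label lists; a minimal self-contained sketch (illustrative, not the study's analysis code):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal label frequencies."""
    n = len(labels_a)
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all categories either rater used
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)
```

Note that the same raw agreement can yield different kappa values depending on how often each rater uses each category, which is why the text reports both 73.5% agreement and kappa = 0.59.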
(3c) Response rates for the revised IRB-RAT were 65.5% for IRB members and 52.6% for investigators. The survey included 27 statements describing IRB activities (e.g., "An IRB that is open to reversing its earlier decisions"). Respondents indicated how well each statement described both their "actual" and "ideal" IRB. For each statement, we calculated the difference between the ratings of the ideal and actual IRBs. Using random effects modeling, we identified the statements that fell outside the 95% CI, corresponding to those functions closest to (and furthest from) the ideal IRB. The stakeholders agreed that the IRB was closest to the ideal when protecting human subjects, treating investigators with respect and taking appropriate action for alleged scientific misconduct. The IRB was furthest from the ideal regarding the timeliness of review, allocation of sufficient resources to the IRB and provision of complete rationales for required changes to or disapprovals of protocols. We also identified noteworthy site-level differences in the ratings of each IRB activity. Our method may help IRB stakeholders to identify and monitor site- and activity-specific initiatives for quality improvement.
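The ideal-minus-actual gap score at the heart of this analysis is simple to compute; a minimal Python sketch with hypothetical ratings, using a normal-approximation confidence interval as a simplified stand-in for the study's random effects model (all names and values here are illustrative assumptions):

```python
from statistics import mean, stdev

def gap_scores(ideal, actual):
    """Per-respondent ideal-minus-actual gap for one IRB-RAT statement;
    larger gaps flag functions further from the ideal IRB."""
    return [i - a for i, a in zip(ideal, actual)]

def mean_gap_ci(gaps, z=1.96):
    """Mean gap with a normal-approximation 95% CI (a simplified
    stand-in for the random effects model used in the study)."""
    m = mean(gaps)
    se = stdev(gaps) / len(gaps) ** 0.5
    return m, (m - z * se, m + z * se)

# Hypothetical 1-5 ratings for one statement from four respondents
gaps = gap_scores(ideal=[5, 5, 4, 4], actual=[4, 4, 2, 2])
```

A statement whose confidence interval lies well above the intervals of the other statements marks a function furthest from the ideal, such as timeliness of review in the findings above.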
(4) Video teleconferences were held at 9 of the 10 sites. The VERC/OSR facilitated brainstorming sessions unique to each site that culminated in specific, high impact, high feasibility initiatives for process improvement.
Our findings demonstrate that IRB review in the VA performs better than expected in terms of both review times and review quality. However, there is significant room for improvement. We have made concrete recommendations to improve the guidance regarding the roles of the ISO, PO and R&DC in the review process. We have also developed tools that can be used for ongoing assessment of quality improvement initiatives. In particular, we think the revised IRB-RAT could be used periodically to track the perceived performance of IRBs throughout the VA.
External Links for this Project
NIH Reporter Grant Number: I01HX000839-01
Dimensions for VA is a web-based tool available to VA staff that enables detailed searches of published research and research projects. VA staff with VA-Intranet access can find more information at vaww.hsrd.research.va.gov/dimensions/. VA staff not currently on the VA network can access Dimensions by registering for an account using their VA email address.
DRA: Health Systems
DRE: Research Infrastructure
MeSH Terms: none