New Tools to Improve the Rigor of Implementation Research: The Implementation Research Logic Model (IRLM) and Longitudinal Implementation Strategies Tracking System (LISTS)
JD Smith, PhD
Seminar date: 6/3/2021
Description: The complex nature of implementation research presents a number of methodologic challenges. The Implementation Research Logic Model (IRLM) (Smith, Li, & Rafferty, 2020) was developed to provide researchers with a means of specifying the relationships between determinants, strategies, mechanisms, and outcomes. These critical elements of implementation research were previously contained in siloed models and frameworks, and conceptually integrating them has been challenging for many in the field. The IRLM has demonstrated acceptability and preliminary effectiveness at improving the quality of implementation research grant proposals, and it is useful for planning, executing, reporting, and synthesizing implementation research. The goal was to increase the rigor and reproducibility of implementation research and to provide greater transparency to the complex processes involved in the study of practice and organizational change. The IRLM is semi-structured and aids researchers in the concrete planning, evaluation, and execution of initiatives, which can later enable causal pathway analyses that identify why implementations of evidence-based interventions (EBIs) do or do not succeed and that explain observed implementation, system, and clinical/patient outcomes.

Second, the specification and tracking of implementation strategies is at the core of implementation science. Yet few feasible methods exist to obtain the detail about strategy use, modification, and change over time that is needed for causal inference and interpretability of findings. Within the NCI IMPACT Research Consortium, Smith and colleagues developed and are testing a new tool: the Longitudinal Implementation Strategies Tracking System (LISTS) (Smith et al., 2020). LISTS uses a timeline follow-back procedure to specify the reporting elements of Proctor et al. (2013), to capture prospective or unintended discontinuations, additions, or modifications (informed by the FRAME-IS; Stirman et al., 2021), and to capture the study units (e.g., clinics) for each reported strategy and its changes. LISTS represents a feasible approach to longitudinal tracking that is less burdensome than alternatives. A soon-to-be open-source REDCap data acquisition platform will support adherence to LISTS and facilitate synthesis across units within projects, as well as across cross-project initiatives involving similar interventions and service delivery contexts.