Since 2011, CRΘM expertise in outcomes development, validation, and statistical analysis has supported several projects within CRΘM and in collaboration with our local clinical and biomedical research communities. In addition to scholarly peer-reviewed work, many invitations from around the world to present on these important topics have been accepted. The projects include the following:
Developing patient-centered patient reported outcomes
Dr. Tractenberg developed a new model for the development of patient-centered patient reported outcomes (PRO, published January 2017, https://doi.org/10.1371/journal.pone.0171114), which highlights how to design a PRO so that the patient’s perspective is treated as “privileged” or primary. The manuscript describes the model and presents results showing that it yields a qualitatively different PRO than the typical model, which prioritizes clinical or research (“symptom-centered”) perspectives on instrument content. Research is ongoing (2017-) to determine how the two types of PRO (patient-centered vs. symptom-centered) can contribute to basic, translational, and comparative effectiveness research.
Introducing the idea that scientific research results are undermined when measurement properties are not considered (or are known to be weak) – in big and small data
Dr. Tractenberg was invited to present a talk on key measurement concepts in the evaluation of scientific (and other) claims at the Federal Trade Commission (FTC) in the US; Drs. Tractenberg and Yumoto have collaborated on presentations for the FTC (March 2014) and at the Center for Innovation to Implementation (Ci2I) at the Veterans Affairs (VA) of Palo Alto, CA (April 2016); similarly themed presentations have also been given by Dr. Tractenberg in Glasgow, Scotland (2013) and Timișoara, Romania (2016). These presentations are important both because they were all invited and because of the diversity of applications for which they are perceived to be relevant (FTC: general and specific science; VA: health services research; Glasgow: neurointensivist big data; Romania: biomedical research).
Establishing measurement invariance – and when it does not exist
Drs. Tractenberg and Yumoto collaborated on an important paper, published in 2016 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4763720/), documenting the invariance of a battery of cognitive tests over time and across race groups. They recently (October 2016) submitted for peer review a follow-up paper outlining a detailed series of analyses of a common test of odor identification, for which there is no evidence of measurement invariance.
Building, validating, and using Bayesian Networks for diagnosis and the detection of clinically meaningful change
Drs. Yumoto and Tractenberg have been collaborating on adapting a potentially powerful statistical modeling tool, the Bayesian Network (BN), for clinical outcomes. They presented the extensive modeling that is required to support such a network at the 2009 Bayesian Biostatistics Conference (Houston, TX) and at the 2009 JSM meeting (Washington, DC). This method is critical to the purposes of CRΘM generally because it capitalizes on existing data to support decision making at the individual patient level. Constructing a BN for a clinical setting, and particularly to estimate the likelihood that a clinically relevant change has occurred over time, requires a specific model of clinically relevant changes in a variety of outcomes. This means that all of the clinical outcomes measured on any given patient can be taken into account in a probabilistic – and objectively quantified – manner, enhancing consistency in decision making across patients, clinicians, and sites in the case of multi-site studies. This is particularly important for rare diseases and/or clinical trials with slow recruitment, when decision making may tend most to drift or change over time, patients, or sites. Bayesian Network building could be useful in clinical studies to identify inclusion criteria, or in pilot projects to evaluate clinical trial outcomes composed of more than one endpoint. Drs. Tractenberg and Yumoto are currently (2016-) collaborating on a proof-of-concept study using Bayesian Networks to support patient and clinician decision making in Parkinson’s disease.
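To make the inference concrete, the sketch below shows how a minimal network of this kind combines two observed clinical outcomes into a posterior probability of clinically meaningful change. The outcome names and all probabilities are invented for illustration, and the sketch assumes the outcomes are conditionally independent given the change node; a real clinical BN would be far richer.

```python
# Hypothetical two-outcome Bayesian Network: a latent binary node
# "meaningful change" with two observed clinical outcomes as children.
# All node names and probabilities are invented for illustration only.

P_CHANGE = 0.30                          # prior P(meaningful change)
P_OUTCOME_GIVEN = {
    # P(outcome is abnormal | change) and P(outcome is abnormal | no change)
    "grip_decline":   {"change": 0.80, "no_change": 0.15},
    "memory_decline": {"change": 0.70, "no_change": 0.20},
}

def posterior_change(evidence):
    """Posterior P(change | observed outcomes) via Bayes' rule, assuming
    the outcomes are conditionally independent given the change node."""
    like_c, like_nc = P_CHANGE, 1.0 - P_CHANGE
    for outcome, abnormal in evidence.items():
        p_c = P_OUTCOME_GIVEN[outcome]["change"]
        p_nc = P_OUTCOME_GIVEN[outcome]["no_change"]
        like_c *= p_c if abnormal else (1.0 - p_c)
        like_nc *= p_nc if abnormal else (1.0 - p_nc)
    return like_c / (like_c + like_nc)

# Both outcomes abnormal: the probability of meaningful change rises sharply.
print(round(posterior_change({"grip_decline": True, "memory_decline": True}), 3))  # 0.889
```

The same machinery extends to any number of outcomes, which is what lets every measured outcome on a patient contribute to one objectively quantified decision.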
Defining and testing “reliability”
Dr. Tractenberg developed a new practical definition of reliability that makes classical test theory assumptions – which underpin the common conceptualizations of “test reliability” and “measurement error” – testable. Drs. Tractenberg and Yumoto co-authored a paper (2012) demonstrating how this definition differs from the classical test theory definition of reliability, and also showing how, with their definition, the assumptions for classical reliability can be tested for the first time. Dr. Tractenberg had previously (2010) authored an editorial outlining key features of classical reliability, and how important it is for clinical researchers (and clinical trialists) to understand the many forms – and limitations – of classical (and modern) definitions of “reliability”.
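The classical notion at issue can be illustrated with a short simulation: classical test theory models an observed score as true score plus independent error, and defines reliability as the proportion of observed-score variance attributable to true scores. This is the textbook formulation, not the testable definition proposed in the 2012 paper; the numbers are simulated for illustration.

```python
import random
import statistics

# Classical test theory: observed score X = T + E, with true score T and
# error E independent.  Reliability = var(T) / var(X).  This is a textbook
# illustration, not the testable definition from the 2012 paper.

random.seed(1)
n = 20_000
true_scores = [random.gauss(50, 10) for _ in range(n)]    # var(T) ≈ 100
observed = [t + random.gauss(0, 5) for t in true_scores]  # var(E) ≈ 25

reliability = statistics.variance(true_scores) / statistics.variance(observed)
print(round(reliability, 2))  # near 100 / 125 = 0.80
```

The catch the 2012 paper addresses is that in practice the true scores are never observed, so the classical assumptions behind this ratio ordinarily go untested.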
Revisiting “negative” treatment trials in Alzheimer’s disease and beyond
Drs. Pietrzak and Tractenberg are each involved in a series of “sensitivity analyses” that will re-evaluate several clinical trials to estimate the association of the trial outcomes (“negative”) with the analytic method that was employed (linear and logistic regressions). Approaches to these trials include validating a multi-dimensional representation of the outcome (measurement model) and re-estimating treatment effects on this model, and applying latent growth mixture models to estimate the treatment effects with explicit partitioning of measurement error in the effect and outcome. In 2016, Dr. Tractenberg collaborated with a team studying an established knowledge elicitation-and-modeling methodology (DSEV), which was presented at the 2016 BayesiaLab User Conference (Nashville, TN). Drs. Tractenberg and Yumoto are collaborating on a presentation about negative results from simulation studies, in a topic-contributed session focusing on the importance of negative results that Dr. Tractenberg organized for JSM 2017 (Baltimore, MD).
Qualified change: a method to appropriately analyze change in Likert rated items
In 2000, Dr. Tractenberg published an algorithm designed to make change in Likert-rated (or similarly ordinal/nominal) items amenable to interpretable analysis. Drs. Tractenberg and Yumoto published a 2013 paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4514524/) about the application, and psychometric properties, of the method in the analysis of change in quality of life. In October 2016, Dr. Tractenberg presented a talk integrating this qualified change method with her new patient-centered model for PRO development at the Center for Innovation to Implementation (Ci2I) at the Veterans Affairs (VA) of Palo Alto, CA.
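The published algorithm is not reproduced here, but the general idea of treating change in an ordinal item as a classification rather than an arithmetic difference can be sketched as follows; the function name and category labels are illustrative only.

```python
# Illustrative sketch only: the published qualified change algorithm has
# details beyond this.  The point is that change in a Likert-rated item is
# classified (not subtracted), keeping the analysis on ordinal footing.

def classify_change(pre, post, scale_max=5):
    """Classify change in a single Likert-rated item (1..scale_max)."""
    if not (1 <= pre <= scale_max and 1 <= post <= scale_max):
        raise ValueError("ratings must lie on the Likert scale")
    if post > pre:
        return "improved"
    if post < pre:
        return "worsened"
    return "unchanged"

pre_post = [(2, 4), (5, 5), (3, 1)]  # hypothetical pre/post ratings
print([classify_change(p, q) for p, q in pre_post])
# ['improved', 'unchanged', 'worsened']
```

Classifying rather than differencing avoids treating ordinal ratings as if the distance between adjacent scale points were equal and meaningful.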
Variability as a performance-based biomarker for neurological injury or disease
Collaboration between CRΘM and the Neuroscience Research Center, National Rehabilitation Hospital
Variability is an outcome that Dr. Tractenberg has been studying since 2009. It may serve as a performance-based biomarker for brain health, and is currently being studied in the context of grip force dynamometry as a non-invasive, low-technology clinical outcome for stroke, and potentially as a biomarker for the noninvasive detection of neuropathology in individuals with active neurodegeneration but no cognitive symptoms of mild cognitive impairment. Drs. Tractenberg and Pietrzak published a paper (2011) outlining different methods for summarizing variability in patient performance as a clinical endpoint.
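To illustrate what “summarizing variability” can mean in practice, the sketch below computes three common within-person variability summaries for a series of hypothetical grip-force trials; these are generic choices, not necessarily the methods compared in the 2011 paper.

```python
import statistics

# Three common within-person variability summaries for a trial series.
# The grip-force values are hypothetical; the methods are generic choices,
# not necessarily those compared in the 2011 paper.

def variability_summaries(trials):
    mean = statistics.fmean(trials)
    sd = statistics.stdev(trials)
    # Root mean squared successive difference: sensitive to trial-to-trial swings
    rmssd = (sum((b - a) ** 2 for a, b in zip(trials, trials[1:]))
             / (len(trials) - 1)) ** 0.5
    return {"sd": sd, "cv": sd / mean, "rmssd": rmssd}

grip_force = [31.2, 29.8, 33.1, 30.5, 32.0, 28.9]  # hypothetical trials (kg)
print({k: round(v, 3) for k, v in variability_summaries(grip_force).items()})
```

The choice among such summaries matters because two patients with identical mean performance can differ sharply in trial-to-trial consistency.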
A second project at the Neuroscience Research Center evaluates tapping and grip dynamometry as clinical outcomes in stroke. To date our work has resulted in four posters describing our results. These include the finding that variability derived from a finger tapping task was highly sensitive for distinguishing stroke patients (n=43) from their caregiver controls (n=10) at hospital admission, but at discharge finger tapping was no longer statistically different between the two groups. This suggests that finger tapping is sensitive to the changes (improvements) that these patients experienced while hospitalized. By contrast, in this same sample, grip dynamometry was highly sensitive for distinguishing the same patients from controls at intake, and at discharge the dynamometry results were still sensitive to group differences, although less so. The question this pilot study sought to answer was whether there are clinical variables relating to the stroke that can explain the amount of recovery reflected in the change from “distinguishing” to “not statistically different” for the tapping task.
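The kind of admission comparison described above can be illustrated with invented data: per-subject variability of inter-tap intervals for patients and controls, summarized as a standardized mean difference (Cohen’s d with pooled standard deviation). The numbers below are hypothetical and do not reproduce the poster results.

```python
import statistics

# Hypothetical per-subject SDs of inter-tap intervals (ms) at admission.
# Values are invented for illustration; they do not reproduce the posters.
patients = [58.0, 61.5, 70.2, 66.8, 59.9, 73.4, 64.1, 68.0]
controls = [41.2, 38.7, 44.5, 40.1, 43.0]

def cohens_d(a, b):
    """Standardized mean difference with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.fmean(a) - statistics.fmean(b)) / pooled_var ** 0.5

print(round(cohens_d(patients, controls), 2))  # a large effect, well above 0.8
```

A large standardized difference at admission that shrinks toward zero at discharge is the pattern that motivated the pilot study’s question about predictors of recovery.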
Measurement invariance, over time and across race groups
Collaboration between CRΘM and the Rush University Alzheimer’s Disease Research Center (2014-2015)
Drs. Yumoto and Tractenberg completed a study of the measurement invariance of cognitive tests over time (do the tests give the same information about individuals at three annual visits?) and across race groups (do the tests give the same information at any given visit irrespective of the race group an individual belongs to?). We found that, across groups (non-Hispanic Caucasian and African American), the same model was a good fit to the cognitive test scores at each of the three visits, supporting the characterization of equivalence of the measurement model across groups at each of the three annual visits. The within-group, over-time results next supported the characterization of measurement equivalence of the measurement model within each group over time. Finally, we put both groups and all three visits into the same model; these results supported a single model (measurement invariance over time and race groups). We were then able to compare change over time in the race groups confidently, assured we were looking at the same construct over time and across groups. The African American factor score means were found to be significantly lower than those estimated for non-Hispanic Caucasian study participants. The paper was published in 2016 and is publicly available here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4763720/