Research Reproducibility 2018: Poster Abstracts

For Authors

Please send any corrections to UtahRR@lists.utah.edu.

Poster dimensions should be no larger than 4’x4’ (four feet wide by four feet high). Posters may be smaller, but not larger due to the poster holders.

The poster session will be from 5 p.m. to 6 p.m. Posters may be set up during check-in. 

Poster presenters may be asked to record a one-minute spiel about their poster for archival purposes. The recording may be done during or after the conference, depending on time.

Bring business cards! Stick them in the poster holder, around the edges of your poster.

 


Poster Abstracts

 

Replicable Clinician Decision Methods with Adequately Detailed Computer Protocols Provide Personalized Care Instructions

Alan H. Morris, MD (University of Utah)

Background and Aims The current approach to clinical decision-making is based on individual experts considering large amounts of information. These individual experts make variable decisions when faced with the same information. Much of this variation is unwarranted and not linked to good evidence. Medical care quality is reduced by unwarranted variation in clinical decisions, both between institutions and between physicians. Major reductions in unwarranted variation generally require effective physician decision-support. However, this decision-support should provide personalized instructions matched to individual patient needs, a goal that almost all current guidelines and protocols fail to meet. We used a replicable clinician decision method that produces consistent, context-sensitive, personalized patient care instructions from detailed computer protocols. Protocol rules applied prospectively, at the time of clinician decision-making, produce a replicable decision (the same decision from different clinicians) (Figure 1); protocol rules derived retrospectively from database analysis do not. We aim to describe the differences between our method and guideline or protocol rules generated from retrospective data analysis. Efforts to extract reliable conclusions from meta-analyses of aggregated clinical trial data cannot overcome the deficiencies of commonly used non-replicable methods. Methods We developed and implemented clinically detailed computer protocols for management of mechanical ventilation, IV fluids, and blood glucose, and for pulmonary function interpretation, over the past 30 years. We exported protocols across cultures and across medical disciplines, with 95% clinician compliance with protocol instructions. Results Common paper-based protocols are so variable that comparison of study results becomes difficult and leads to conflicting results. Figure 2 illustrates variability in insulin dosing recommendations for 12 published protocols (gray area): IV insulin infusion rates (insulin units/hour); the Van den Berghe protocol (dashed line); and eProtocol-insulin, one of the detailed, context-sensitive computer protocols developed at Intermountain's LDS Hospital. Conclusions Implementation of detailed computer protocols can increase the quality of care and the scientific credibility of clinical research and care.
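To make concrete what "replicable" means here, the sketch below (hypothetical thresholds and doses, not the actual eProtocol-insulin logic) shows a deterministic, context-sensitive rule: any clinician supplying the same patient data receives the same instruction.

```python
# Illustrative only: a deterministic, context-sensitive protocol rule.
# Thresholds and doses are hypothetical, NOT the eProtocol-insulin values.

def insulin_rate_instruction(blood_glucose_mg_dl: float,
                             current_rate_u_hr: float) -> str:
    """Return a dosing instruction from explicit rules; the same inputs
    always yield the same instruction, regardless of which clinician asks."""
    if blood_glucose_mg_dl < 70:
        return "Stop insulin infusion and treat hypoglycemia per protocol."
    if blood_glucose_mg_dl < 110:
        return f"Decrease infusion to {max(current_rate_u_hr - 0.5, 0):.1f} units/hr."
    if blood_glucose_mg_dl <= 150:
        return f"Continue infusion at {current_rate_u_hr:.1f} units/hr."
    return f"Increase infusion to {current_rate_u_hr + 1.0:.1f} units/hr."

# Two clinicians entering the same data get an identical instruction:
print(insulin_rate_instruction(180, 2.0))  # -> "Increase infusion to 3.0 units/hr."
```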

 

Human Cognitive Limitations Prevent Physicians From Reaching Their Clinical Practice and Research Goals

Alan H. Morris, MD (University of Utah)

Background and Aims The Hippocratic (expert, authoritarian) model of clinician decision-making requires clinicians to manage complex information and use judgment to make the "best decision." Medical decision-makers intend to consistently improve clinical outcomes for individual patients. This goal requires decisions to be tightly linked to best evidence. However, it has been known for decades that clinicians do not consistently make decisions, or carry out actions, linked to best evidence. One contributor to this failure is limited human cognition due to the limits of short-term (working) memory. Psychologists have known for decades that short-term memory is limited; the currently estimated limit is 4 ± 1 constructs before decisions become degraded. We aimed to identify an important barrier to clinical use of decision-support with detailed computer protocols. Methods We used detailed computer protocols based on best evidence to standardize clinician decision-making. We identified a 12-step method for protocol development. Multiple clinicians following the protocol instructions make the same decision when faced with the same patient data. The protocols respond to changing patient context and generate personalized medicine instructions matched to the needs of the individual patient. Results Collaborating physicians followed the personalized medicine protocol instructions 95% of the time and eliminated differences between institutions, including the National University Hospital of Singapore (Figure 1; Figure 2 shows a partial glucose protocol screen image). We have used detailed computer protocols for decades, but some clinicians resist adopting protocols. We realized that the first step for such protocol use is the recognition that clinicians, like all humans, overestimate their abilities (the illusory superiority principle). Conclusions Detailed computer protocol use, and distribution to sites uninvolved in protocol development, is feasible. Recognizing clinician (human) cognitive limitations is an important first step.

 

Standardization With Computer Protocols is Necessary to Assure Replicable Interpretations of Pulmonary Function Tests (PFT)

Alan H. Morris, MD, Olinto Linares, PhD, Matthew Hegewald, MD (University of Utah; Intermountain Healthcare)

Background and Aims PFT interpretation in developed and developing countries is variable and associated with unwarranted variation. As a result, miscategorizations of patients occur and inappropriately influence both diagnoses and management of patients with pulmonary disorders. Miscategorizations also occur because of the high frequency of technical performance errors and of clinician interpretation errors. Non-replicable clinical decision methods rely on clinician judgment to fill the logic gaps in commonly used protocols and guidelines. It is clear that detailed computer protocol replicable clinician decision methods can be exported across cultures and across medical disciplines. These methods can translate research results to clinical practice, achieving clinician compliance rates of 95% with protocol instructions. They standardize clinician decisions and behavior while retaining personalized patient care instructions. We have used a detailed computer protocol to interpret PFTs according to ATS/ERS guidelines for thousands of patients during the past 5 years. Interpretation of the FEV1/FVC ratio depends on different prediction and interpretation strategies that use different variables (Figure 1; GLI12: Global Lung Initiative 2012, GOLD: Global Initiative for Chronic Obstructive Lung Disease, NHANESIII: National Health and Nutrition Examination Survey III). We aimed to quantify the NHANESIII and GLI12 spirogram prediction differences in a clinical population. Methods We used the adequately detailed computer protocol to interpret PFTs according to the ATS-1994 and ATS/ERS-2005 spirogram interpretations. We compared NHANESIII and GLI12 reference equation spirogram results in patients with restriction based on plethysmographic TLC measurement (shaded boxes in Figure 1). Results We categorized PFT result differences between the NHANESIII and GLI12 reference equations (Figure 2a, b). Conclusions Standardized PFT interpretations allow scientifically rigorous comparisons that lead to credible clinical conclusions. Systematic use of such protocols could standardize and improve the diagnosis and categorization of patients with respiratory disease.
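The sketch below illustrates, with hypothetical numbers rather than the authors' protocol, how two widely published interpretation strategies can categorize the same spirogram differently: the GOLD fixed FEV1/FVC < 0.70 criterion versus a lower limit of normal (LLN) derived from a reference equation such as NHANESIII or GLI12.

```python
# Simplified illustration of divergent spirogram categorization.
# The LLN value would come from a reference equation (e.g., NHANESIII or GLI12);
# the 0.70 fixed ratio is the GOLD criterion. Example numbers are hypothetical.

def obstruction_fixed_ratio(fev1: float, fvc: float) -> bool:
    """GOLD-style fixed-ratio criterion."""
    return fev1 / fvc < 0.70

def obstruction_lln(fev1: float, fvc: float, lln_ratio: float) -> bool:
    """Criterion using a reference-equation lower limit of normal."""
    return fev1 / fvc < lln_ratio

fev1, fvc = 2.4, 3.5   # measured litres (hypothetical patient)
lln = 0.65             # hypothetical LLN for an older patient
print(obstruction_fixed_ratio(fev1, fvc))   # True  -> "obstructed"
print(obstruction_lln(fev1, fvc, lln))      # False -> "not obstructed"
```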

 

Development of a Research Experience and an Undergraduate Course for Learning About Reproducibility Issues in the Behavioral and Neural Sciences

Jonathan Amburgey, Ph.D., Brian Avery, Ph.D., Cierra Woods, & Carrie Graham (Westminster College)

Our goal is to mentor and involve undergraduate students in reproducible research and to develop an undergraduate course exploring recent high-profile replication crises, practices, and reforms affecting research progress in the behavioral, neural, and biological sciences. We created a research experience opportunity for learning about past and present reproducibility practices among researchers in the behavioral and neural sciences and, as part of an exploratory study, mentored two research assistants (RAs) on how to investigate the ways in which scientists in these fields are addressing two interrelated issues relevant for understanding failures to replicate: statistical power and p-hacking. As part of the study, the RAs reviewed a 20-year period of peer-reviewed articles discussing these two issues using the PsycINFO database, uncovering periods of inconsistent scholarly attention, which we speculate may represent waxing and waning concern among scientists with implications for the validity and reliability of scientific research. We are currently collaborating to develop an interdisciplinary undergraduate course on these replication crises, practices, and reforms. The aim of the course is to educate students about the complex issues surrounding reproducibility in science by highlighting methodological and data modeling assumptions and procedures, teaching students how to evaluate replication-oriented research through hands-on participation in replication experiments and studies, teaching them how to review assumptions and research decisions, and having students propose a replication experiment or study that incorporates modern replication approaches as a way to integrate their learning and apply course content and skills. We hope that sharing our interdisciplinary approach to mentoring and teaching students about reproducibility issues will help facilitate interest in reproducibility curriculum and pedagogy, and we offer a roadmap as well as examples from our teaching to help guide future educational efforts.

 

Identification of High-Level Formalisms that Support Translational Research Reproducibility

Danielle Groat, PhD; Ram Gouripeddi, MBBS, MS; Yu Kuei Lin, MD; Willard Dere, MD; Mary Murray, MD; Per Gesteland, MD; Julio Facelli, PhD (University of Utah)

Conducting research on complex diseases often requires multiple data streams (e.g., omics, images, clinical, research, patient-generated, and exposome data). We propose to extend OpenFurther (OF), a platform that can federate and integrate data on-the-fly, with a process workflow module (PWM) that will guide researchers through the data assimilation process and build multi-source data platforms on demand. Type 1 diabetes (T1D) was selected as a use-case for development based on the complexity of the disease and its complications. After conducting informal interviews with T1D researchers, we identified potential data sources and use-case scenarios for T1D. We reviewed the status of the data sources and the process needs of the use-cases, which inform the design criteria for an architecture capable of building scalable data platforms on demand. The informal interviews revealed several high-level processes for conducting translational research. Those processes were condensed into three areas: 1) identifying the status of data sources and defining a strategy for their collection, 2) developing data assimilation methods that support complex analyses of the data for signal detection, and 3) managing the data to support knowledge discovery, secondary use, and reproducibility. The identified processes will be stored as a library of reusable process constructs that describe the data and their processes. This library, along with process and workflow platforms (e.g., Pegasus) that have enabled reproducibility in other scientific domains, constitutes the PWM. Researchers will be able to construct workflows by combining data and processes, as well as to annotate and track workflows and their results. The system will evaluate workflows for feasibility, and acceptable workflows will be executed by OF. OF with the PWM will support additional data services such as advanced analyses related to signal detection, sequential integration and temporal reasoning capabilities, and, more importantly, incorporate reproducibility practices into the system.
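A minimal sketch of what a reusable process construct and a composed workflow might look like is shown below; the field names and the feasibility check are illustrative assumptions, not the OpenFurther or PWM implementation.

```python
# Hypothetical sketch of reusable process constructs composed into a workflow.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessConstruct:
    name: str
    inputs: List[str]     # data elements the step consumes
    outputs: List[str]    # data elements the step produces

@dataclass
class Workflow:
    steps: List[ProcessConstruct] = field(default_factory=list)

    def is_feasible(self, available: List[str]) -> bool:
        """Check that every step's inputs are satisfied by source data or by
        the outputs of earlier steps (a stand-in for a feasibility evaluation)."""
        have = set(available)
        for step in self.steps:
            if not set(step.inputs) <= have:
                return False
            have |= set(step.outputs)
        return True

wf = Workflow([
    ProcessConstruct("assess_cgm_source", ["cgm_raw"], ["cgm_clean"]),
    ProcessConstruct("integrate_with_ehr", ["cgm_clean", "ehr"], ["t1d_dataset"]),
])
print(wf.is_feasible(["cgm_raw", "ehr"]))  # True
```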

 

Improving Clinical Trial Research Reproducibility using Reproducible Informatics Methods

Jianyin Shao, Ramkiran Gouripeddi, Julio C Facelli (University of Utah)

Evidence-based medicine is based on well-designed and reproducible research. Clinical trials are the gold standard of experimental design for evaluating clinical interventions on patients and populations. However, many clinical trials are unable to reach their expected recruitment targets in a timely manner. One obstacle to clinical trial enrollment is the costly and laborious screening process using manual chart review of patients' medical records. Automatic matching of patients to clinical trial eligibility criteria can reduce the time and effort of manual chart review and may allow referral of patients by physicians at the point of care with higher accuracy. In previous studies, using distributional semantics methods, we found that the calculated semantic similarity between clinical trial eligibility criteria is comparable to human experts' decision-making. In this study, we examine the feasibility and performance of automatically matching patients in the MIMIC-III database with randomly selected ICU-related clinical trials using similar semantic matching methods, such as concept bag and hierarchical concept bag representations, with several different similarity metrics. Automatic matching of patients to clinical trials could reduce subjective biases in evaluating eligibility criteria against patient records and reach a wider pool of patient candidates. Automatic matching in multi-site clinical trials could reduce the effect of site-specific confounding factors. In addition, such an approach could inform patients about potentially matching trials. Incorporation of this reproducible informatics method would enhance the reproducibility and validity of a clinical trial study.
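As a rough illustration of the concept-bag idea (not the study's actual implementation; the concept labels and the Jaccard metric here are assumptions), a patient record and a trial's eligibility criteria can each be reduced to a set of clinical concepts and compared by set overlap:

```python
# Illustrative concept-bag matching: a patient record and a trial's eligibility
# criteria reduced to sets of concept identifiers (placeholder labels here,
# not real terminology codes) and scored by set overlap.

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity between two concept bags."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

patient_concepts = {"type_1_diabetes", "mechanical_ventilation", "sepsis"}
trial_criteria = {"type_1_diabetes", "mechanical_ventilation",
                  "age_over_18", "pregnancy_excluded"}

# Higher scores would flag likely candidates for manual review.
print(f"similarity = {jaccard(patient_concepts, trial_criteria):.2f}")  # 0.40
```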

 

Do positive result bias and inappropriate controls lead to a lack of reproducibility in regard to “What is an epileptic seizure?” in animal models of absence epilepsy?

Daniel S. Barth, Jeremy A. Taylor, Jon D. Reuter, and F. Edward Dudek (University of Utah; University of Colorado, Boulder)

Epilepsy is a chronic brain disorder characterized by spontaneous recurrent seizures. A seizure is an abnormal period of hyperactive and hypersynchronous brain activity, but normal brain function also entails periods of greatly increased activity and synchronous oscillations. Epilepsy can be either genetic (inherited) or acquired (from brain injury), typically with different types of seizures. Research has focused on development of animal models of different epilepsy syndromes in order to better understand the underlying mechanisms, and thus to find new therapies to treat epileptic seizures, which are often intractable. At the last Conference, we reported data from the Barth lab using fluid-percussion injury that challenged at least one animal model of acquired epilepsy; that is, the seizure-like events or spike-wave discharges (SWDs) in this model do not have the electrical properties of injury-induced epilepsy. Instead, these SWDs appeared similar to the brief and relatively benign absence seizures characteristic of a common genetic epilepsy. We found, however, that these SWDs also appeared different from absence seizures. Sensory stimuli (e.g., a tone or click) could reliably block SWDs, which is difficult to reconcile with absence epilepsy. Strains of standard laboratory rats have SWDs that are virtually identical to the SWDs in genetic models of absence epilepsy. Appropriately timed administration of a reward could alter the timing of the SWDs. Absence epilepsy is primarily a childhood disorder, but SWDs in these animal models are most prominent in older rats. Finally, wild rats captured on the CU campus also showed SWDs with properties virtually identical to the SWDs in models of absence epilepsy. This poster will describe the difficulties encountered when we question the validity of the controls, and thus the validity of the animal models of absence epilepsy.

 

Enabling Reproducible Computational Modeling: The Utah PRISMS Ecosystem

Albert Lund, Ram Gouripeddi, Nicole Burnett, Le-Thuy Tran, Peter Mo, Randy Madsen, Mollie Cummins, Kathy Sward, Julio Facelli (University of Utah)

Computational modeling is the use of computers to simulate the behavior of complex systems and predict what might happen in real systems under changing conditions. Reproducibility is a hallmark of computational modeling, including machine learning. Frequently cited reasons for lack of reproducibility in computational modeling include improper documentation of the data and processes used in the modeling exercise: the choice of modeling algorithms and their parameters, data pre- and post-processing methods, train/test data, and the software environment used for model development, among others. In order to reduce this burden and provide tools for computational modeling reproducibility, the Utah PRISMS Ecosystem includes a computational modeling framework for modeling air quality data and using it for exposomic research. This framework includes a metadata repository that describes data used as inputs for modeling (Data Domain), the computational models used (Model Domain), the software environment (Build Domain), modeling processes invoked in a particular instantiation (Invocation Domain), and the modeled output data (Data Domain). In addition, the computational modeling framework includes a data integration platform for semantically integrating data for and from models, and process and workflow tools to automate modeling experiments. This framework will support replicability and reproducibility of air quality computational modeling. In addition, it will enable informed use of modeled data in translational research and is generalizable to different biomedical domains at multiple scales.
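A minimal sketch of the kind of metadata record such a framework might capture for a single modeling run is shown below, organized by the domains named above; the field names and values are hypothetical, not the actual PRISMS schema.

```python
# Hypothetical metadata record for one air-quality modeling run, organized by
# the domains described above. Not the actual PRISMS metadata schema.
model_run_metadata = {
    "data_domain": {
        "inputs": ["pm2.5_sensor_feed_2018-01", "meteorology_feed_2018-01"],
        "outputs": ["pm2.5_gridded_estimates_2018-01"],
    },
    "model_domain": {
        "algorithm": "land-use regression",
        "parameters": {"grid_resolution_m": 250, "cv_folds": 10},
    },
    "build_domain": {
        "language": "python 3.6",
        "packages": {"numpy": "1.15.4", "scikit-learn": "0.20.1"},
    },
    "invocation_domain": {
        "executed_at": "2018-11-01T09:30:00-06:00",
        "train_test_split_seed": 42,
    },
}
```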

 

Towards Reproducible Translational Research using Templates

Ramkiran Gouripeddi, Mollie Cummins, Bernie LaSalle, Katherine Sward, Will Dere, Julio Facelli (University of Utah)

Requirements and processes in translational research are expressed in natural language, making them subject to human interpretation and ambiguities. Using templates in other domains such as software engineering has enabled better precision and translation of requirements into developed products [1]. To enable better reproducibility, the Utah Center for Clinical and Translational Science (CCTS) is developing, evaluating and deploying templates for performing translational research. Translational research requisition processes are presented as templates or in natural language. In the absence of a template, natural language requests and processes are cast into templates using state-of-the-art pattern matching methods. These requirements and processes are automatically checked for conformance with existing templates [1,2]. Those meeting the conformance checks are passed on to the respective CCTS Foundations and/or service cores for their fulfillment. Those failing the conformance checks are sent back to the investigator for review, and then discussed with the CCTS Foundations/Cores. This template-based approach will be useful for various types of translational research services including: requesting and designing statistical analysis plans [3,4], research data and informatics support requests [5], trial recruitment services [6], biospecimen and DNA analysis services, and clinical study services. Templates have been shown to be effective in increasing the precision of natural language requirements and reducing ambiguities [1]. When implemented, we envision that this informatics-based process will enable efficiency, quality and reproducibility of translational research processes and results. 1. C. Arora, M. Sabetzadeh, L. Briand and F. Zimmer, "Automated Checking of Conformance to Requirements Templates Using Natural Language Processing," IEEE Transactions on Software Engineering, vol. 41, no. 10, pp. 944-968, Oct. 2015. doi: 10.1109/TSE.2015.2428709. 2. T. Molka, D. Redlich, M. Drobek, A. Caetano, X.-J. Zeng, and W. Gilani, "Conformance Checking for BPMN-based Process Models," in Proceedings of the 29th Annual ACM Symposium on Applied Computing, New York, NY, USA, 2014, pp. 1406–1413. 3. R. Gouripeddi, M.R. Cummins, R. Madsen, B. LaSalle, A. Redd, X. Ye, A. Presson, T. Greene, J. Facelli, Streamlining Study Design and Statistical Analysis, Research Reproducibility Conference, November 14, 2016, S.J. Quinney Law School, University of Utah, Salt Lake City. http://campusguides.lib.utah.edu/UtahRR16/abstracts, DOI: http://doi.org/10.5281/zenodo.180452. 4. R. Gouripeddi, M. Cummins, R. Madsen, B. LaSalle, A. Redd, X. Ye, A. Presson, S. Harper, T. Greene, Julio Cesar Facelli, Streamlining Study Design and Statistical Analysis for Quality
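To make the idea concrete, the sketch below checks a natural-language service request against one hypothetical requirements template using simple pattern matching; the template wording is illustrative, not one of the CCTS templates or the method of Arora et al. [1].

```python
# Illustrative conformance check of a service request against a simple
# requirements template. The template is hypothetical, not a CCTS template.
import re

# Template: "<Requester> requests <service> for study <ID> by <YYYY-MM-DD>."
TEMPLATE = re.compile(
    r"^(?P<requester>[\w .]+) requests (?P<service>[\w ]+) "
    r"for study (?P<study_id>\w+) by (?P<date>\d{4}-\d{2}-\d{2})\.$"
)

def conforms(request: str):
    """Return the extracted fields if the request matches the template,
    otherwise None (the request would be sent back for review)."""
    m = TEMPLATE.match(request)
    return m.groupdict() if m else None

print(conforms("Dr. Smith requests statistical analysis plan "
               "for study IRB00123 by 2018-12-01."))
print(conforms("Please help with some statistics soon."))  # None -> review
```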

 

Prolonged Care of the Burn Patient in a Non-Burn Facility Following a Mass Casualty Incident (MCI): An Integrated Regional Approach

Annette F. Matherly, Tallie Casucci, Chris Stratford, Cory Ipson, John Rennemyer, Hendrik Alberts, Abdulrahman Alnami, Dave Barillo, Carolyn Blayney, Elisha Brownson, Jan Buttrey, Amalia Cochran, Kevin Chung, Lyndsay Deeter, Linda S. Edelman, Niknam Eshraghi, John L. Hick, Daniel F. Hourihan, Steve Ikuta, Arpana Jain, Sable Kersmann, Christopher K. Lake, Stefanos Lakonios, Kevin M. McCulley, Tanner Morley, Ron Pinheiro, Gemma Ryan, Yaron Shoham, Micah J. Smith, Len Sterling, Peter P. Taillac, and Sandra J. Yovino (University of Utah)

Introduction: A burn mass casualty incident (BMCI) with a significant number of patients creates unique challenges due to the scarcity of burn centers and the complexity of initial care required. The Western Region Burn Disaster Consortium (WRBDC) had no standardized educational delivery method that taught prolonged care of the burn patient to non-burn center providers. Method: A diverse team of 41 experts was assembled from 14 states and two countries. Contributors were tasked with aggregating essential evidence-based clinical information to inform 96 hours of decision-making by community hospital and trauma center clinicians practicing in facilities without a burn center who receive patients during a BMCI. The curriculum would include three tiered options for a fluctuating resource environment to ensure optimal care could be delivered despite the constraints of a large-scale incident. Results: Four patient phases of care were identified. E-learning modules containing 22 objectives and pertinent education were added to an existing web platform. Modules include supplemental material and quick reference sheets in addition to essential, better, and best care recommendations. Content is web-based in order to facilitate access by healthcare providers and includes ongoing process improvement methodologies and analytics to ensure information remains relevant. Conclusion: Standardized recommendations ensure rational patient care practices can be delivered by healthcare professionals who may have limited knowledge and expertise in burn care. While this project was specific to BMCI, it has applicability to other specialties and can be utilized as a model for development of similar educational initiatives.

 

Effects of repeated evaluation on acceptability rating of sentences: A partial replication of Zervakis & Mazuka (2012)

Alexander Cipro (University of Utah)

I replicated an experiment conducted in 2012 by Dr. Zervakis and Dr. Mazuka, who were at Duke University at the time of the experiment. I was able to communicate with Dr. Zervakis about accurately replicating test items that she no longer had access to. The experiment tested whether the 'syntactic satiation effect' held over exposure to multiple sentence types. The syntactic satiation effect occurs when someone who is repeatedly exposed to a problematic syntactic structure, whether ungrammatical or difficult to process, becomes more willing to accept that structure as acceptable. I tested this hypothesis via an online survey. The survey consisted of five blocks of 100 sentences each. The survey was constructed from 4 ungrammatical, 4 difficult, and simple sentence types. A control survey consisted of unrelated sentences in the first four blocks. The final block was identical for both surveys and contained the test item to be examined. All of the participants were native English speakers over the age of eighteen. Participants were asked to rate each sentence on a 1-7 Likert scale, from completely unacceptable to acceptable. After collecting and analyzing the data, I found that the experimental group on average rated each sentence type as more acceptable than the control group. The previous experiment showed a significant effect for all but two of the sentence types, as did mine. My findings were predicted by the hypothesis and mirrored the original findings.
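A sketch of this kind of group comparison is shown below; the data values and column names are invented, and this is not the actual analysis script.

```python
# Illustrative comparison of mean acceptability ratings by group and
# sentence type. Data values and column names are invented.
import pandas as pd

ratings = pd.DataFrame({
    "group":         ["experimental"] * 4 + ["control"] * 4,
    "sentence_type": ["ungrammatical", "difficult"] * 4,
    "rating":        [4, 5, 3, 5, 3, 4, 2, 4],   # 1-7 Likert scale
})

# Mean rating per sentence type for each group; the satiation hypothesis
# predicts higher means in the experimental (repeatedly exposed) group.
print(ratings.groupby(["group", "sentence_type"])["rating"].mean())
```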

 

Connecting Department RPT Cultures, Publishing Expectations, and Reproducibility

Rachel Blume, Allyson Mower (University of Utah)

Academic institutions as employers expect employees categorized as faculty to spend a portion of their time investigating an area of a discipline on an individual basis, to maintain that investigation during the time of employment, and to describe their findings to others in the field. Employment statements studied for this poster focused heavily on the expectation of originality in an author's scholarly expression, arrived at through independent work and a sustained level of research. Quality of the output rose above both quantity and impact. Yet many academic departments push for a certain number of journal articles to be published before tenure as part of their departmental culture. Do tenure requirements and department cultures impact reproducibility? Many librarians have advocated against the rapidly expanding growth of scholarship and its impact on the publishing market, particularly given the exponential inflation rate of journals and dwindling library budgets. Although extensive research exists exploring what librarians have named the "Serials Crisis," little has been done to investigate the possible effects publish-or-perish culture has on the quality of research, and thereby its reproducibility. In setting standards and guidelines for employment, university departments put pressure on the researcher to focus on "the least publishable unit" in order to reach a quantified goal. The question, therefore, becomes whether this focus on the amount of research consequently shifts the concentration of research outcomes away from the assurance that work adheres to quality or reproducibility standards of practice. Furthermore, the separation of greater scientific works into distinct publishing units may complicate the replication of that work, since data sets and findings are distinct from one another rather than found in a single, more comprehensive piece.

 

Acoustic features of vowels in clear and conversational speech: A partial replication of Ferguson and Quené (2014)

Finlay, B.J. and Vonessen, J.S. (University of Utah)

This study was a partial replication of Ferguson and Quené (2014), who found that vowels produced in a clear speaking style have a longer duration, greater dynamic formant movement, and are hyper-articulated relative to vowels produced in a conversational speaking style. My partner and I sought to verify these effects through our replication study, using the same materials and stimuli as the original study. Our participants were 20 native English speakers over the age of 18. All participants took part in two separate sessions. In the first session, they were instructed to read the materials in their typical, conversational speaking style. In the second session, they were asked to read the materials as if they were speaking to an individual with hearing loss. The participants were recorded and analyzed using the acoustic analysis software Praat, which allowed us to examine the fine physical characteristics of their speech. We extracted the vowel sounds from the participants' speech and analyzed them for five dependent variables: duration, F1, F2, vector length, and trajectory length. We were able to verify some of the results of the original study, including a very successful replication of the duration effect of vowels in clear speech. Overall, all of the vowels we acoustically analyzed changed between the clear and conversational speaking styles for at least one of our dependent variables. Similar to the intent of the original study, we hope that by examining the fine physical details and differences of vowels in clear speech and conversational speech we can cultivate our understanding of which acoustic cues are most salient for intelligibility.
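One common operationalization of the two formant-movement measures is sketched below, assuming F1/F2 values sampled at successive points across the vowel, with vector length taken as the straight-line F1-F2 distance from first to last point and trajectory length as the summed distance over successive points; the sample values are invented and this is not our analysis script.

```python
# Illustrative computation of vector length and trajectory length from
# F1/F2 values (Hz) sampled across a vowel; the sample values are invented.
import math

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# (F1, F2) at successive time points across one vowel token
formants = [(520, 1450), (540, 1500), (565, 1580), (580, 1620)]

# Vector length: straight-line distance in F1-F2 space, first to last sample
vector_length = euclid(formants[0], formants[-1])

# Trajectory length: summed distances between successive samples
trajectory_length = sum(euclid(a, b) for a, b in zip(formants, formants[1:]))

print(f"vector length = {vector_length:.1f} Hz")
print(f"trajectory length = {trajectory_length:.1f} Hz")
```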

 

Performing Reproducible Translational Research by Integrating Immunomes and Exposomes

Ram Gouripeddi, Danielle Groat, Albert Lund, Andrew Miller, Katherine Sward, Julio Facelli (University of Utah)

The immune system defends hosts from pathogenic agents and responds with inflammatory processes to various environmental factors. In addition, the development, adaptation and modulation of the immune system are significantly influenced by the environment, to a greater extent than by genetic factors [1–3]. These combinatorial interactions between the immune system, the environment and the genome lead to variations in disease phenotypes and responses to therapeutic interventions. Therefore, understanding disease mechanisms and developing interventions need to consider the complex interplay of endogenous immunological processes with the effects of the modern environment, requiring generation of a complete picture of environmental exposures and immune responses along with genetic factors. The immunome is defined as the complete set of all genes and their translated immunological proteins, many of them resulting from various environmental stimuli [4–6]. The exposome encompasses the life-course of environmental exposures (including lifestyle factors) from prenatal periods onward and complements the genome by providing a comprehensive description of lifelong exposure history [7]. Immunome research requires the integration of diverse data types to support different research use-cases. While there are gaps and sparseness in the data points needed to generate sufficiently complete immunomes and exposomes, using available data with an understanding of their limitations could enable reproducible research. In order to systematically support these needs, we are developing a scalable computational infrastructure, the Utah PRISMS Informatics Ecosystem (UPIE) [8], that enables generation, integration and utilization of immunomes and exposomes for translational research. UPIE is a semantically consistent, metadata-driven, event-based big data infrastructure. It includes methods and processes for collection of sensor and person-generated data; selection and invocation of computational models for filling spatio-temporal gaps in exposomic data; data integration; and participant- and researcher-facing tools. In this presentation, we discuss how our metadata-driven and uncertainty-characterizing approaches within UPIE enable the generation of immunomes and exposomes, and how their integration is essential for the reproducibility of research. We explain the generalizability of this multi-scale and multi-omics platform for providing robust pipelines for reproducible research in conditions such as asthma and type 1 diabetes mellitus using publicly available immunome data from sources such as ImmPort [9]. References 1. Brodin, P. et al. Variation in the Human Immune System Is Largely Driven by Non-Heritable Influences. Cell 160, 37–47 (2015). 2. Björkstén, B. Environmental Influences on the Development of the Immune System: Consequences for Disease Outcome. Window Oppor. Pre-Pregnancy 24 Mon. Age 61, 243–254 (2008). 3. Morrot, A. et al. Metabolic Symbiosis and Immunomodulation: How Tumor Cell-Derived Lactate May Disturb Innate and Adaptive Immune Responses. Front. Oncol. 8, (2018). 4. Ortutay, C. & Vihinen, M. Immunome: A reference set of genes and proteins for systems biology of the human immune system. Cell. Immunol. 244, 87–89 (2006). 5. El-Chemaly, S. et al. The Immunome in Two Inherited Forms of Pulmonary Fibrosis. Front. Immunol. 9, (2018). 6. Biancotto, A. & McCoy, J. P. Studying the Human Immunome: The Complexity of Comprehensive Leukocyte Immunophenotyping. Curr. Top. Microbiol. Immunol. 377, 23–60 (2014). 7. Wild, C. P. 
Complementing the Genome with an “Exposome”: The Outstanding Challenge of Environmental Exposure Measurement in Molecular Epidemiology. Cancer Epidemiol. Prev. Biomark. 14, 1847–1850 (2005). 8. Sward, K., Patwari, N., Gouripeddi, R. & Facelli, J. An Infrastructure for Generating Exposomes: Initial Lessons from the Utah PRISMS Platform. in (2017). 9. Bhattacharya, S. et al. ImmPort, toward repurposing of open access immunological assay data for translational and clinical research. Sci. Data 5, (2018).

 

Reproducibility of drug class reviews for formulary decision-making

Fiander M, Gonzales V, Alonso-Martinez E, LaFleur J (University of Utah)

Background: The Drug Regimen Review Center (DRRC) contracts to provide reviews of efficacy and safety for drugs or drug classes being considered for inclusion in a public insurer's preferred drug list (PDL), similar to a formulary. The requirements for these reports do not dictate a particular methodology, but the objective is to provide decision makers with evidence to inform decisions. Prior to mid-2016, these reviews were researched and written entirely by pharmacists, and while the reports fulfilled their objectives and included evidence following the commonly accepted hierarchy of evidence, methodologies were not always reported in sufficient detail to be reproducible. Objective: To formalize methods of conduct and reporting for the DRRC reports by following methodological guidance on the conduct and reporting of systematic reviews. Methods: In mid-2016, the new head of the DRRC consulted with an information specialist (IS) experienced in systematic reviewing and literature searching to discuss options. Changes to conduct were discussed with reference to guidance in the Cochrane Handbook for Systematic Reviews of Interventions and methodologies employed for the Rapid Review service offered by the Canadian Agency for Drugs and Technologies in Health (CADTH). Initial discussions focused on literature search sources and reporting, but expanded to other processes of evidence synthesis. Results: Process results will be presented, including changes in scoping searches, bibliographic database searching, search strategy refinements, processes for documenting article eligibility, title and abstract screening, full-text review, and included evidence types. Our poster will also provide an analysis of evidence included prior to the changes implemented in mid-2016, to determine whether structured searching identifies evidence not included in pre-2016 reports.

 

Centralized scientific communities are more likely to generate non-replicable results

Valentin Danchev, Andrey Rzhetsky, James A. Evans (University of Chicago)

Growing concerns that most published results, including those widely agreed upon, may be false are rarely examined against the rapidly expanding biomedical literature. Exact replications occur only on small scales due to prohibitive expense and limited professional incentive. We introduce a novel, high-throughput replication approach aligning 51,292 published claims about drug-gene interactions with high-throughput experiments performed through the NIH LINCS L1000 program. We propose that the likelihood of a published claim to replicate in future experiments depends in part on how scientific communities are networked in an increasingly collaborative system of science. We show (1) that claims reported in a single paper replicate 19% more frequently than expected, while those reported in multiple papers and widely agreed upon replicate 45% more frequently, manifesting collective correction in science. Nevertheless, (2) among the 2,493 claims reported in multiple papers, centralized scientific communities perpetuate claims that are less likely to replicate, even if widely agreed upon in the literature and irrespective of heterogeneity in high-throughput experiments, demonstrating how centralized, overlapping collaborations weaken collective inquiry. Decentralized research communities involve more independent teams and use more diverse methodologies, generating the most robust, replicable results. Our findings highlight the importance of science policies that foster decentralized collaboration to promote robust scientific advance. Our large-scale approach holds promise for simultaneously evaluating the robustness and replicability of numerous published claims.