Research Reproducibility 2016: Poster Abstracts

This conference will feature prominent speakers and opportunities to explore the concept of reproducibility.

For Authors

Please send any corrections to

Poster dimensions should be no larger than 4’x4’ (four feet wide by four feet high). Posters may be smaller, but not larger due to the poster holders.

The poster session will be from 1 p.m. to 2 p.m. Posters may be set up during check-in. Lunch opens at 12:30. Please try to finish lunch so that you can stand by your poster at 1 p.m.

Poster presenters may be asked to record a one-minute spiel about their poster for archival purposes. The recording may be done during or after the conference, depending on time.

Bring business cards! Stick them in the poster holder, around the edges of your poster.

There will be an award for the overall best poster!

About the Poster Session

The Poster Session will showcase cutting-edge research and works-in-progress in pursuit of making research true. Presenting a poster is a great opportunity, especially for students and new researchers, to obtain interesting and valuable feedback on ongoing research from conference attendees. A Best Poster Award will be presented based on the quality of research work, poster design, and oral presentation.

Asymmetric Interpretation of Noninferiority Trials Biases Results in Favor of the Therapy Designated "New"

Scott K Aberegg, MD, MPH
Pulmonary and Critical Care, University of Utah

Background: Noninferiority trials are used to compare a new therapy against a comparator of established efficacy. The CONSORT statement for reporting of noninferiority trials allows for flexibility in trial design and recommends an asymmetric interpretation of results that is biased in favor of the treatment that is designated as “new.”

Methods: In this descriptive study, we searched the five highest impact general medical journals (NEJM, JAMA, Lancet, BMJ, Annals of Internal Medicine) for trials published between 2011 and 2016 where the primary endpoint was tested using a noninferiority hypothesis. We tabulated design characteristics, results, and reporting of conclusions to determine if there is evidence that flexibility in design and asymmetric interpretation of results may bias the conclusions of these trials individually and collectively in favor of the “new” therapy.

Results: We identified 201 noninferiority comparisons (including co-primary endpoints) in 169 distinct trials during the five years investigated. For 127 (63%) of these comparisons, the choice of delta was not justified or described. Delta choice was justified concretely for 44 (22%) of the comparisons and vaguely for 30 (15%) of the comparisons. For 123 (61%) of the comparisons, the primary test of the noninferiority hypothesis was one-sided, and for 62 (31%) of the comparisons, the one-sided alpha value was greater than 0.025 (range 0.0123 to 0.10). For 168 comparisons where an absolute risk reduction (ARR) was presented or could be calculated, the mean delta (pre-specified margin of noninferiority) was 9.7%, and among 60 of these 168 comparisons where mortality was the primary outcome or part thereof, the mean delta was 5.8%. Among all comparisons with a calculable ARR, the mean observed delta was -0.27% (negative differences favoring the “new” therapy); among those where the point estimate favored the “new” therapy the mean observed delta was -4.62%, and among those where the point estimate favored the “old” therapy the mean observed delta was +4.18%. Among all 201 comparisons, authors concluded the “new” therapy was noninferior in 158 (79%) of the comparisons, superior in 26 (13%), and inferior in 3 (1.5%). In 21/26 (81%) of the cases where the “new” therapy was declared superior, the 95% confidence interval of the result included values within the negative zone of indifference (the “mirror” of delta). In 23 (11%) of the comparisons, the “new” therapy was statistically significantly worse than the “old” therapy using a 2-sided test with alpha of 0.05, but was not called inferior because the confidence interval used in the planned hypothesis test extended into the zone of noninferiority. The number of statistically significant results favoring “new” versus “old” based on conventional 2-sided testing with alpha of 0.05 was identical, with 26/201 (13%) favoring “new” and 26/201 (13%) favoring “old”.

Conclusion: Noninferiority trials published over a 5-year period in 5 high impact general medical journals showed variability in design parameters and were as likely to show statistically significant results favoring the “old” therapy as the “new” therapy. However, because of bias inherent in the design, interpretation, and reporting of noninferiority trials, superiority was declared 8.7 times more frequently than inferiority. Options for addressing this bias include disallowing a claim of superiority when the confidence interval includes values within the negative (“mirror”) delta, or performing conventional superiority testing when a statistically significant difference favors the “old” therapy.

Creating an Easy Way to Generate Regression Statistics Tables for Research Papers Using a New Set of R Functions

Ragheed Al-Dulaimi
School of Medicine - Epidemiology, University of Utah

R functions are a useful and efficient way to save time on routine, repetitive tasks and to validate analysis results. In daily practice, we face situations where changes to the descriptive and regression output tables are requested, such as when the data are modified or variables need to be recoded. The raw R output and the Word table then have to be recreated. This process is usually not time efficient and can introduce errors. We present an easy-to-use, flexible set of R functions for linear, logistic, Cox proportional hazards, and repeated-measures models. Each function produces a polished RTF table with the desired output for the specific model (estimates, 95% confidence intervals, and p-values) for both the univariate and the multivariate analyses. The user enters only three parameters: the dataset containing the necessary variables, the outcome variable name, and the file path for the output. These functions can easily produce well-organized tables, quickly accommodate requested changes in the analysis, and support reproducible research.
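
The authors' implementation is a set of R functions; purely to illustrate the three-parameter interface and the combined univariate/multivariate output they describe, here is a minimal Python-flavored sketch (function and column names are hypothetical, and it writes CSV rather than RTF):

```python
# Illustrative sketch only: the authors' implementation is a set of R functions.
# This Python analogue shows the three-parameter interface they describe
# (dataset, outcome variable, output path); names and output format are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf


def regression_table(data: pd.DataFrame, outcome: str, out_path: str) -> pd.DataFrame:
    """Fit univariate models for each predictor plus one multivariate model,
    then write estimates, 95% CIs, and p-values to a single table."""
    predictors = [c for c in data.columns if c != outcome]
    rows = []

    def summarize(fit, label):
        ci = fit.conf_int()
        for term in fit.params.index:
            if term == "Intercept":
                continue
            rows.append({"analysis": label, "term": term,
                         "estimate": fit.params[term],
                         "ci_low": ci.loc[term, 0], "ci_high": ci.loc[term, 1],
                         "p_value": fit.pvalues[term]})

    for p in predictors:                      # univariate: one model per predictor
        summarize(smf.ols(f"{outcome} ~ {p}", data=data).fit(), "univariate")
    formula = f"{outcome} ~ " + " + ".join(predictors)
    summarize(smf.ols(formula, data=data).fit(), "multivariate")

    table = pd.DataFrame(rows)
    table.to_csv(out_path, index=False)       # the authors write RTF; CSV keeps the sketch simple
    return table
```

A single call such as regression_table(df, "outcome", "table1.csv") could then be rerun whenever the data are modified or variables are recoded, which is the time saving the abstract emphasizes.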

Protocol for a systematic review of effectiveness of integrated treatment for survivors of interpersonal violence who have substance abuse disorders

Karla Arroyo
College of Social Work, University of Utah

Although the comorbidity between substance abuse and Intimate Partner Violence has been widely studied and established, there is little evidence of integrated treatment approaches that focus on IPV survivors with substance abuse diagnosis (Collins, 1999).

Following the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P), the research team created a protocol for a systematic review and meta-analysis project that is planned to start August 2016. Preliminary and initial searches will be conducted in partnership with a librarian to ensure a comprehensive and reproducible review. This systematic review will include experimental and quasi-experimental designs. Because there may not be enough RCTs, two groups will be created to cluster studies with higher rigor (RCTs) and those with less rigorous research designs (quasi-randomized and non-randomized). The two groups will be assessed and analyzed separately unless either group contains no more than one study. Two researchers will independently screen studies, extract data, and assess for biases. The Covidence online software tool will be used to carry out the screening and data extraction tasks if finances allow; otherwise an Excel spreadsheet will be used. Discrepancies will be resolved by consensus and by engaging a third author to resolve remaining disagreements. Meta-analysis will be conducted within and across study types for comparisons between two means and for dichotomous outcomes; statistical heterogeneity will be evaluated before deciding between random- and fixed-effects methods. If more than 10 studies meet inclusion criteria, contour-enhanced funnel plots will be used to investigate publication bias. While the test for reporting biases will depend strongly on the degree of heterogeneity observed, it is expected that re-victimization will be a dichotomous outcome for which the arcsine test will be used.

A SAS program module that systematically implements a Case-Crossover study design for the investigation of drug safety

Zachary Burningham1, Chia-Chen Teng1, Tao He1, Ahmad Halwani2, Brian C. Sauer1
1 Salt Lake City Veterans Affairs Medical Center, Health Services Research and Development (IDEAS) Center and Division of Epidemiology, Department of Internal Medicine, University of Utah
2 Salt Lake City Veterans Affairs Medical Center, Health Services Research and Development (IDEAS) Center and Huntsman Cancer Institute, Division of Hematology and Hematologic Malignancies, Department of Internal Medicine, University of Utah

There has been a growing trend in the adoption of the Case-Crossover (CCO) study design among pharmacoepidemiologists interested in examining drug safety. The CCO study design is sensitive to case and control window duration alterations. Biologic plausibility and the unique characteristics of the drug and condition pair under study play a considerable role in window duration adjustment. We developed a CCO program module that follows a systematic approach to implementing a CCO study design, with the capability of handling multiple drug-condition pairs and risk window specifications. SAS version 9.4 (SAS Institute, Cary, NC) served as the software engine for the CCO module. The CCO module comprises multiple components and SAS programs. However, the user only needs to specify parameters located in “,” which then coordinates the execution of other subprograms after establishing all the necessary data connections. In, the user specifies the location of the input files that highlight the drug-condition pairs to be analyzed and the output location of the results. In addition, the user can also modify the study characteristics specification file, which allows the user to define the study period, alter case and control window length, adjust control window lag, and increase the number of control windows. The CCO module utilizes conditional Mantel-Haenszel (M-H) Odds Ratios (OR) in estimating incidence ratios, as is typically performed in a 1:many matched case-control study. We evaluated the ability of the CCO module to perform as intended by utilizing data comprised of simulated treatment effects under different distributional relationships (i.e., acute, accumulative, and insidious) between drug exposures and outcomes. Our CCO module successfully detected simulated drug-condition relationships characterized as acute and was also successful in examining accumulative drug-condition relationships by increasing case and control risk window duration.
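
The module's effect estimates rest on the stratified Mantel-Haenszel odds ratio; as a language-neutral illustration of that estimator (not the authors' SAS code), a short Python sketch over per-subject strata might look like this:

```python
# Illustrative sketch only (the authors' module is implemented in SAS).
# Mantel-Haenszel odds ratio over per-subject strata: each stratum is a 2x2 table
# (a, b, c, d) = (exposed cases, unexposed cases, exposed controls, unexposed controls).
def mantel_haenszel_or(strata):
    num = 0.0
    den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d                     # stratum total
        num += a * d / n
        den += b * c / n
    return num / den                          # assumes at least one discordant stratum


# Case-crossover toy example: one stratum per subject, case window vs. one control window.
# Subjects 1 and 2: exposed in the case window only -> (1, 0, 0, 1)
# Subject 3: exposed in the control window only -> (0, 1, 1, 0)
# Concordant subjects (exposed or unexposed in both windows) contribute nothing.
print(mantel_haenszel_or([(1, 0, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0)]))  # 2.0, the discordant-pair ratio
```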

Research Data Service

Ryan Butcher, Ram Gouripeddi, Randy Madsen, Peter Mo, Bernie LaSalle
CCTS - BMIC, University of Utah

Data obtained from electronic health records (EHR) have great potential for solving clinical research questions. However, the ability to reproduce a successful request for data from one scenario to another can vary significantly. Successful delivery of useful data requires effective communication between the clinical research team, data query experts, and data analysts. It is important that the high-level vision of the researcher is translatable to each member of the project team in such a way that the verbal conversations, electronic communications, and other interactions can be structured or distilled into discretely understood variables.

Over the past 3 years, the Biomedical Informatics Core, Center for Clinical and Translational Science, University of Utah has developed a Research Data Service (RDS) to deliver data to researchers covering a broad spectrum of clinical topics. In order to make RDS work in a reproducible manner we conceptualized relevant processes and designed workflows that streamline data requesting, researcher engagement, query mediation, abstraction, data extraction, delivery, and support with analytics. In addition, we developed methods for reusable data extraction, delivery (such as iterative sample data), storage, and data quality analysis. This kind of approach provides a consistent workflow for tracing provenance through the life-cycle of a data request and reproducing steps within a given request or a new request. Another important benefit of this approach is that researcher expectations can be managed early in the process, leading to a productive allocation of resources. For example, a researcher’s ideal scenario may not be possible; however, a scenario that is still of clinical significance, and a useful step in the right direction, may be worth pursuing. Finally, we will discuss the ongoing challenges of working with teams of various technical and clinical backgrounds and how we use each data request to improve the RDS process.

Assessing Data Practices at the University of Utah

Rebekah Cummings
Marriott Library, University of Utah

Reproducibility is the gold standard of research, but it often relies on a series of data management dependencies that are not common scientific practice. Data must be managed and documented to a degree that makes them understandable, meaningful, and reproducible to a secondary audience, depending on their definition of the term ‘reproducibility’. Even small pieces of missing information, like an unusual method that wasn’t captured or a small piece of code used for analysis, can render results irreproducible. Data and code (where applicable) must be made available for secondary analysis, preferably in an open repository with supporting documentation, contact information, and clear rights attached. The University of Utah Campus Data Group – comprised of representatives from the Marriott Library, Eccles Health Science Library, Faust Law Library, and Research Computing & Center for High Performance Computing – strives to help researchers meet these dependencies by offering data management, storage, and sharing services and technical infrastructure for our research community. This poster will report on the Campus Data Group’s 2016 University of Utah Research Data Services Assessment, which set out to answer three research questions:

  1. What types of data and in what quantity are being produced at The University of Utah?
  2. How are research data produced by University of Utah researchers currently managed, stored, backed up, described, and shared?
  3. What data services can the Marriott Library, Eccles Health Sciences Library, Faust Law Library, and Research Computing & Center for High Performance Computing provide that will best support the needs of our research community?

The results of this assessment will shed light on current data practices at the University of Utah and inform the creation of new services and infrastructure to increase research transparency and replicability.

A low-cost, low-barrier clinical trials registry to support effective recruitment

Mollie Cummins, Ram Gouripeddi, Julio Facelli
College of Nursing, University of Utah

Recruitment of an adequate sample of study participants who are representative of the target population is a key factor in reproducible research. However, clinical trials often struggle to recruit participants, for a variety of reasons. Emerging recruitment approaches that leverage the electronic health record (EHR) to support effective recruitment hold enormous potential to improve the quality of recruitment in clinical trials. However, many institutions currently lack a clinical trials registry with structured representation of inclusion and exclusion criteria. This type of database is a necessary prerequisite for EHR based approaches to recruitment. In this presentation, we describe our design for a basic, low barrier institutional clinical trials registry.

Aside from a clinical trials registry, common sources of partial information about research at academic health sciences centers include entities such as the UU’s Institutional Review Board (IRB), the uTRAC, the Institutional Animal Care and Use Committee (IACUC), and the Human Use Subcommittee (HUS) and Radioactive Drug Research Committee (RDRC) databases. However, these databases support specific functions in the conduct of research - at the UU, human subjects’ protection, research services billing, animal and radiation safety (respectively); and only provide a partial and disparate description of the studies.

We propose a low-barrier, low-cost database composed of discrete and computable fields describing clinical trials, compliant with World Health Organization (WHO) registry standards, with additional fields that provide a structured representation of inclusion and exclusion criteria. Based upon REDCap, records could be partially populated by data streams from other university information systems, with additional reporting completed using REDCap’s integrated survey functionality. The establishment of a clinical trials registry enables more advanced informatics approaches to improving research reproducibility, including effective recruitment strategies. This basic, low-barrier design for a clinical trials registry could meet the immediate need for such a registry at UU or other institutions.
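
As an illustration of what "discrete and computable fields" could look like, here is a hypothetical Python sketch of one registry record and a helper that evaluates a structured criterion; the field names loosely follow WHO Trial Registration Data Set items plus the structured criteria proposed above, and none of them are the actual UU schema:

```python
# Hypothetical sketch; all field names, values, and identifiers are illustrative only.
trial_record = {
    "registry_id": "UU-2016-0001",             # hypothetical local identifier
    "public_title": "Example trial",
    "recruitment_status": "recruiting",
    "conditions": ["type 2 diabetes"],
    "interventions": ["drug A", "placebo"],
    "inclusion_criteria": [                     # discrete, computable criteria
        {"variable": "age_years", "operator": ">=", "value": 18},
        {"variable": "hba1c_percent", "operator": ">=", "value": 7.0},
    ],
    "exclusion_criteria": [
        {"variable": "pregnant", "operator": "==", "value": True},
    ],
}


def patient_meets(criterion, patient):
    """Evaluate one structured criterion against a patient record (a dict of variables)."""
    ops = {">=": lambda x, y: x >= y, "<=": lambda x, y: x <= y, "==": lambda x, y: x == y}
    return ops[criterion["operator"]](patient[criterion["variable"]], criterion["value"])


patient = {"age_years": 54, "hba1c_percent": 8.2, "pregnant": False}
eligible = (all(patient_meets(c, patient) for c in trial_record["inclusion_criteria"])
            and not any(patient_meets(c, patient) for c in trial_record["exclusion_criteria"]))
print(eligible)  # True for this hypothetical patient
```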

Rigorous testing of null hypotheses with appropriate controls would improve reproducibility of experimental models of acquired epilepsy

F. Edward Dudek
Neurosurgery, University of Utah

Epilepsy is a chronic brain disorder characterized by spontaneous seizures. A seizure is an abnormal period of hyperactive and hypersynchronous brain activity. Epilepsy can be either genetic (inherited) or acquired (from brain injury), usually with different types of seizures. Research has focused on development of animal models of epilepsy in order to better understand the underlying mechanisms and to find new therapies to treat epileptic seizures, which are often intractable. In the last 5-10 years, three or four animal models of acquired epilepsy have been described with seizures that do not have the electrical properties of injury-induced epilepsy; rather, their properties are similar to the brief and relatively benign absence seizures characteristic of a common genetic epilepsy, and often seen in control animals. Although the rationale for these models of acquired epilepsy appeared to be based on an apparent similarity to clinical scenarios, it is possible that the null hypothesis was not rigorously tested; several important issues raise concerns about whether these models actually recapitulate the human condition (e.g., brain injury of appropriate type and sufficiently severe?). The research groups who have published on these models of acquired epilepsy claim to “have performed control experiments” (paraphrase), but important details about how these controls were undertaken are sketchy. Other laboratories have not been able to reproduce these models, which raises concerns about whether these brain injury models even develop epilepsy (i.e., the experimental animals appear the same as the controls, with absence-like seizures). These laboratories have been well-funded for 10-20 years, and use these models to develop new therapies. This poster will describe the difficulties encountered when we begin to question the validity of the controls, and thus the validity of the animal models and the conclusions that derive from them.

Streamlining Study Design and Statistical Analysis

Ram Gouripeddi, Mollie Cummins, Randy Madsen, Bernie LaSalle, Andrew Redd, Xiangyang Ye, Angela Presson, Tom Greene, Julio Facelli
Department of Biomedical Informatics, University of Utah

Key factors causing irreproducibility of research include those related to inappropriate study design methodologies and statistical analysis [1]. In modern statistical practice, irreproducibility can arise from statistical issues (false discoveries, p-hacking, overuse/misuse of p-values, low power, poor experimental design) and computational issues (data, code, and software management) [2]. Addressing these requires understanding the processes and workflows practiced by an organization, and the development and use of metrics to quantify reproducibility.

Within the Foundation of Discovery - Population Health Research, Center for Clinical and Translational Science, University of Utah, we are undertaking a project to streamline the study design and statistical analysis workflows and processes. As a first step we met with key stakeholders to understand current practices by eliciting example statistical projects, and then developed process information models for different types of statistical needs using Lucidchart. We then reviewed these with the Foundation’s leadership and Standards Committee to arrive at ideal workflows and models, and defined key measurement points (such as those around study design, analysis plan, final report, requirements for quality checks, and double coding) for assessing reproducibility. As next steps we will use our findings to embed analytical and infrastructural approaches within the statisticians’ workflows. These will include data and code dissemination platforms such as Box, Bitbucket, and GitHub; documentation platforms such as Confluence; and workflow tracking platforms such as Jira. These tools will simplify and automate the capture of communications as a statistician works through a project. Data-intensive processes will use process-workflow management platforms such as Activiti [3], Pegasus [4], and Taverna [5]. These strategies for sharing and publishing study protocols, data, code, and results across the spectrum [6], active collaboration with the research team, automation of key steps, and decision support will ensure the quality of statistical methods and the reproducibility of research.


  1. ‘Reproducibility and reliability of biomedical research’, symposium organised by the Academy of Medical Sciences, BBSRC, MRC and Wellcome Trust, April 2015.
  2. National Academies of Sciences, Engineering, and Medicine. Statistical Challenges in Assessing and Fostering the Reproducibility of Scientific Results: Summary of a Workshop. Washington, DC: The National Academies Press, 2016. doi:10.17226/21915.
  3. Activiti.
  4. Pegasus.
  5. Apache Taverna.
  6. Peng, R. 2011. Reproducible research in computational science. Science 334(6060):1226-1227.


An Infrastructure for Reproducible Exposomic Research

Ram Gouripeddi, Phillip Warner, Randy Madsen, Peter Mo, Nicole Burnett, Jingran Wen, Albert Lund, Ryan Butcher, Mollie Cummins, Julio Facelli, Katherine Sward
Department of Biomedical Informatics, University of Utah

Understanding the effects of the modern environment on human health requires generating a complete picture of environmental exposures, behaviors, and socio-economic factors. The concept of an exposome encompasses the life-course of environmental exposures (including lifestyle factors) from prenatal periods onward and complements the genome by providing a comprehensive description of lifelong exposure history [1]. Exposomic research requires the integration of diverse data types to support different research use-cases. While there exist gaps and sparseness in the data points needed to generate sufficiently complete exposomes, using available data with an understanding of their limitations can enable reproducible research.

In order to systematically generate air quality exposomes for the Pediatric Research using Integrated Sensor Monitoring Systems (PRISMS) grant, we are developing a scalable computational infrastructure. Eliciting use-cases, we conceptually designed a data model to integrate different types of data as related to individuals and populations. Supporting proper use of such heterogeneous data requires the discovery, storage, and presentation of metadata about these data. We use a graph database implementation of OpenFurther’s metadata repository [2] for authoring and storage of these metadata. Using the OpenFurther platform we are developing a metadata-driven big data infrastructure that generates an event-document store (EDS) of integrated data as needed for different use-cases. The EDS captures the spatio-temporal variations of various events (e.g., air pollutant concentrations, occurrence of conditions) and the locations of the individuals and populations. In addition, to fill gaps in measurements and combine different data sources, we use mathematical models with characterized uncertainties. Our metadata-driven approach ensures reproducibility as it informs the end-user not only about the specifics of the data but also about its limitations (including reducible and exposure uncertainties) for use in different use-cases. It is generalizable for integrating multi-scale and multi-omics data and provides a robust pipeline for reproducible research data delivery.
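
As a rough illustration of what one entry in such an event-document store might contain (the field names below are hypothetical, not the project's schema), consider:

```python
# Hypothetical sketch of one entry in the event-document store (EDS);
# field names and values are illustrative only.
exposure_event = {
    "event_type": "pollutant_concentration",
    "parameter": "PM2.5",
    "value": 12.4,
    "unit": "ug/m^3",
    "timestamp": "2016-07-01T14:00:00-06:00",
    "location": {"latitude": 40.7608, "longitude": -111.8910},   # measurement site
    "source": "regulatory_monitor",             # vs. modeled estimate or personal sensor
    "uncertainty": {"type": "measurement", "std_dev": 1.1},      # characterized uncertainty
}
```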


  1. C. P. Wild, “The exposome: from concept to utility,” Int. J. Epidemiol., vol. 41, no. 1, pp. 24–32, Feb. 2012.
  2. An Informatics Architecture for an Exposome, R. Gouripeddi, Session II06 – Secondary Use of Data for Research (Interactive Learning), AMIA 2016 Joint Summits on Translational Science, March 22nd, 2016, San Francisco.


A Conceptual Architecture for Reproducible On-demand Data Integration for Complex Diseases

Ramkiran Gouripeddi, Karen Eilbeck, Mollie Cummins, Katherine Sward, Bernie LaSalle, Kathryn Peterson, Randy Madsen, Phillip Warner, Willard Dere, Julio C. Facelli
Department of Biomedical Informatics, University of Utah

Eosinophilic esophagitis is a complex and emerging condition characterized by poorly defined phenotypes and associated with both genetic and environmental factors. Understanding such diseases requires researchers to seamlessly navigate across multiple scales (e.g., metabolome, proteome, genome, phenome, exposome) and models (sources using different stores, formats, and semantics), interrogate existing knowledge bases, and obtain results in formats of choice to answer different types of research questions. All of this needs to be done in a way that supports the reproducibility and sharability of the methods used for selecting data sources, designing research queries, executing queries, and understanding results and their quality.

We present a higher level of formalizations for building multi-source data platforms on-demand based on the principles of meta-process modeling and provide reproducible and sharable data query and interrogation workflows and artifacts. A framework based on these formalizations consists of a layered abstraction of processes to support administrative and research end users:

  • Top layer (meta-process): An extendable library of computable generic process concepts (PC), stored in a metadata repository (MDR) [1], that describe steps/phases in the translational research life cycle.
  • Middle layer (process): Methods to generate on-demand queries by assembling instantiated PC into query processes and rules. Researchers design query processes using PC, and evaluate their feasibility and validity by leveraging metadata content in the MDR.
  • Bottom layer (execution): Interaction with a hyper-generalized federation platform (e.g., OpenFurther [1]) that performs complex interrogation and integration queries that require consideration of interdependencies and precedence across the selected sources.

This framework can be implemented using process exchange formats (e.g., DAX, BPMN) and scientific workflow systems (e.g., Pegasus [2], Apache Taverna [3]). All content (PC, rules, and workflows), assembly, and execution mechanisms are sharable. The content, design, and development of the framework are informed by a user-centered design methodology and consist of researcher- and integration-centric components to provide robust and reproducible workflows.


  1. Gouripeddi R, Facelli JC, et al. FURTHeR: An Infrastructure for Clinical, Translational and Comparative Effectiveness Research. AMIA Annual Fall Symposium. 2013; Wash, DC.
  2. Pegasus. The Pegasus Project. 2016;
  3. Apache Software Foundation. Apache Taverna. 2016;

A transparent and reproducible pipeline for extracting clinical lab information from a nationwide healthcare system's (VHA) Corporate Data Warehouse (CDW)

Ahmad Halwani, Jared Hansen, Zach Burningham, Clarke Low, Tina Huynh, Brian Sauer
Internal Medicine, University of Utah

Background: Clinical lab information provides a unique opportunity to assess real-world treatment effectiveness and safety with higher granularity and validity compared to administrative data. Unfortunately, there is significant heterogeneity in how this information is encoded across time and geography. Efforts to clean these data have varied by group and across lab concepts. This presents a significant barrier to the use of reliable and reproducible CDW clinical lab information in comparative effectiveness research.

Methods: We defined a conceptual framework for retrieval of lab information from VHA’s CDW using five features: Logical Observation Identifiers Names and Codes (LOINC) codes, test names, topography, unit, and unit reference ranges. This was then implemented as a framework in R comprising 7 discrete modules. Each module corresponds to a defined task in the conceptual framework: Concept -> LOINC/test name -> cleaned LOINC/test name -> LOINC/test name internal identifier -> fact information retrieval -> topography selection -> unit and reference range cleaning and harmonization. Clinical information, including search strings, databases queried, review decisions by subject-matter experts, and query results, is kept in tables separate from the modules, allowing documentation of query results for easy review and reproduction of any and all stages of the pipeline.
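
The framework itself is written in R; the following Python sketch is only meant to illustrate the staged flow and the separation of reviewable configuration tables from code, using toy in-memory tables and hypothetical names:

```python
# Illustrative sketch only: the authors' framework consists of 7 R modules driven by
# configuration tables kept outside the code. All names and values here are toy examples.
import pandas as pd

# Configuration tables (in the real pipeline these live in separate, reviewable files).
term_table = pd.DataFrame({
    "loinc": ["6690-2", "804-5"],
    "test_name": ["WBC count blood", "WBC count body fluid"],
    "topography": ["blood", "body fluid"],
    "expert_approved": [True, False],          # subject-matter-expert review decision
})
unit_map = pd.DataFrame({"raw_unit": ["K/uL", "10*3/uL"],
                         "standard_unit": ["10^3/uL", "10^3/uL"],
                         "factor": [1.0, 1.0]})

# Toy lab facts as they might come back from the warehouse.
facts = pd.DataFrame({"loinc": ["6690-2", "6690-2", "804-5"],
                      "value": [7.2, 8.1, 5.0],
                      "raw_unit": ["K/uL", "10*3/uL", "K/uL"]})

# Stages: keep approved terms -> retrieve matching facts -> restrict topography -> harmonize units.
approved = term_table[term_table["expert_approved"]]
selected = facts.merge(approved[["loinc", "topography"]], on="loinc")
selected = selected[selected["topography"] == "blood"]
harmonized = selected.merge(unit_map, on="raw_unit")
harmonized["value_std"] = harmonized["value"] * harmonized["factor"]
print(harmonized[["loinc", "value_std", "standard_unit"]])
```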

Results: Using this framework, we retrieved the peripheral blood total white count of patients with hematologic malignancies. In a cohort of about 300,000 patients diagnosed and/or treated for a hematologic malignancy in the VHA between 2001 and 2016, we identified ~11x10^6 potential total WBC counts based on LOINC codes and lab test name. Of those, ~9x10^6 were mapped to the correct topography, and the overwhelming majority of those (99%) were mapped to a harmonized unit and reference range.

Conclusion: Our framework allows transparent and reproducible retrieval of clinical lab information from the VA CDW.

Development of a provider feedback SSIS dashboard using SAS analytic modules specifically designed for transparent and reusable workflows using Veteran Affairs health care data

Tao He, Celena Peters, Zachary Burningham, Chris Leng, Tina Hyun, Brian C. Sauer
Salt Lake City Veterans Affairs Medical Center, Health Services Research and Development (IDEAS) Center and Division of Epidemiology, Department of Internal Medicine, University of Utah

The reproducibility of data analysis and reuse of standardized processes are increasingly recognized as critical to the mission of healthcare operational and research endeavors. The purpose of this abstract is to explain how we operationalized generalized workflows to perform real-time provider profiling and feedback, using Microsoft Business Intelligence tools to execute analytic workflows that integrate our SAS®-based Transparent ReUsable Statistical Tools (TRUST) modules. SQL Server Integration Services (SSIS) is a platform that helps extract, transform, and load data for analytic treatment and display in SQL Server Reporting Services (SSRS) and SharePoint, respectively. The Microsoft tools, unfortunately, are limited in their analytic capabilities; integration with SAS® or R is needed for implementing predictive and inferential analytics. The Veterans Affairs maintains a SAS GRID that supports parallel processing and has some efficiency advantages over SQL. We developed a workflow package with well-defined SQL stored procedures and TRUST modules designed for providers to evaluate the use of opioids for their patient panel within the VA. We developed a tool to extend the SSIS platform for SAS support. SAS programs can be executed in SSIS as SQL code within our development environment. We can easily detect and flag SAS errors and provide users a SAS execution log file to locate where the error occurred. Our opioid feedback dashboard was developed using our SSIS workflow package with SAS extensions. This project is being extended to other clinical domains and drug surveillance projects.


Andrew Hersh
Pulmonary and Critical Care, University of Utah

We are currently examining the replicability rates of Critical Care RCTs as a function of trial attributes (e.g., blinding, total n, loss to follow-up, funding sources, etc.). While we do not expect to have completed data collection and analysis of our entire cohort (>1100 trials identified), we hope to have completed a subset of these trials. Potential posters could include discussion of the development of our database, novel methods we have developed for assessing when two trials are similar enough to reliably compare, the effects of trial attributes on their replicability, and secular changes in replicability in the Critical Care literature.


Imtiaz Khan
Computing and Information Systems, Cardiff Metropolitan University, UK

Poor study design and failure to fully narrate or understand the experimental methodology have been identified as the second biggest factor for irreproducibility in preclinical research. In this discovery-like environment, methodologies are predominantly idiosyncratic and adaptive in nature, while collaborations are multidisciplinary and multi-institutional. Therefore, irrespective of enumeration and standardization, communication and interpretation of methodologies with traditional narrative (textual, verbal, or video) approaches remain subjective. Addressing this reality, we have developed a non-narrative approach for communicating experimental methodologies – a virtual laboratory environment called ProtocolNavigator. Focusing on cell biology research, in ProtocolNavigator’s virtual bench researchers emulate their real-life laboratory activities as the basis for curation, instead of documenting the methodology. The emulation leads to the automatic depiction of a time-integrated interactive map of the experiment that includes action patterns, manipulations, and data acquisition represented by activity icons. Immersing themselves within this virtual laboratory and navigating through the map, researchers divulge activity patterns, which in turn provide a language-independent visual perception of experimental design and provenance trails for samples, data, and metadata. Importantly, this immersive experience delivers contextualization and virtual experience, which facilitates identification of variations, knowledge abstraction, and assessment. We have undertaken an extensive ethnographic study to measure the impact of virtualization and visualization within a multidisciplinary, multi-institutional stem cell research team. We found evidence that the experimental design is viewed as a “big picture” for collaborative work that enhances understanding of each other’s work and collective intelligence. Yet at the same time it forged a Panopticon-like perception within the team, where members became conscious of being watched and monitored by others. We suggest that this collective intelligence and surveillance perception could be exploited to forge a bottom-up approach to establishing good laboratory practice that may eventually lead to better reproducibility.

Making Systematic Reviews Reproducible: Include a Librarian on the Team

Mellanye Lackey
Eccles Health Sciences Library, University of Utah

Objective: Research has shown that the quality and reproducibility of systematic reviews increases when a librarian is a part of the team. Librarians bring unparalleled expertise and experience to systematic review teams.

Methods: At the University of Utah, librarians are highly trained on all aspects of conducting SRs. The Systematic Review Core of the Population Health Foundation of the Center for Clinical and Translational Science has expertise to educate groups about best practices for conducting a true, replicable systematic review. Librarians help clarify the scope of the review and assist the group with registering a review protocol. By fully participating in the process, from protocol to final manuscript co-authorship, librarians have more opportunity to make sure the primary methodology for a systematic review, literature searching, is transparent and as reproducible as possible.

Results: Several teams have already successfully integrated librarians into their review teams. An “Evidence Retrieval and Knowledge Synthesis Librarian” is available to work full time on systematic reviews. If groups do not have funds to fully include a librarian, they should still meet with librarians for a preliminary consultation. Partnering to ensure the highest quality of replicable research is central to the mission of the SR core.

Conclusion: Interested groups should email for more information. They may also request a collaboration from the Study Design and Biostatistics Center in the Population Health Foundation.


Librarians’ Recommendations to Improve Search Strategies in Cochrane Systematic Review Protocols

Mellanye Lackey
Eccles Health Sciences Library, University of Utah

Objective: To improve the overall quality of systematic reviews via librarian peer review of search strategies in Cochrane Systematic Review protocols; to increase librarian participation in systematic review search strategy peer review; to highlight the importance of librarian peer review; and to increase library capacity for systematic review search support.

Methods: Five librarians reviewed search strategies submitted in three Cochrane Systematic Review protocols to the Anaesthesia, Critical and Emergency Care (ACE) group. Cochrane currently recommends seeking the help of an information professional when conducting a systematic review, but does not require verification of this assistance for submission or acceptance. The PRESS checklist directed the evaluation process of the search strategies.

Results: The librarians recommended amending the three search strategies. For each systematic review protocol search strategy, the librarians discovered relevant citations that were not returned with the submitters’ searches. The PRESS checklist provided a concise framework for evaluating the search strategies. Librarians gained fresh experience reviewing systematic review searches and conducting peer reviews using the Cochrane Collaboration’s processes.

Conclusions: The librarians intend to continue reviewing search strategies submitted in Cochrane protocols. This initiative hopes to demonstrate that librarian input can improve the quality of systematic reviews.

Research Reproducibility: A Scopus Database Citation Analysis

Ayaba Logan
Library Science and Informatics, Medical University of South Carolina

Is all research reproducible? Do certain types of research, such as reviews, lend themselves to being reproducible based on design? And how have librarians/informationists/information specialists contributed to the reproducibility of reviews as a type of research? These are the three questions this citation analysis of a Scopus search on research reproducibility aims to answer or clarify. An abstract-only search for the terms reproducibility, reproducible, repeatable, repeatability: (ABS (reproducibility) OR ABS (reproducible) OR ABS (repeatability) OR ABS (repeatable)) was conducted, resulting in 268,451 documents. A secondary abstract-only search was conducted by adding librarian professional titles to the search, resulting in 13 documents. A third abstract-only search was conducted by adding literature search to the original search string: (ABS (reproducibility) OR ABS (reproducible) OR ABS (repeatability) OR ABS (repeatable)) AND ABS (literature search), resulting in 506 documents. A final abstract-only search was conducted with the original search string plus generalizable, which resulted in 81 documents, only 0.0302% of the original document results. All available CSV files were downloaded for analysis in Excel. Based on the count of document results in Scopus, by title, information specialists/librarians support about 2.57% of the published scientific literature on reproducibility. Two of the searches, generalizable and information specialist/librarian by title, demonstrate the need for more published research that discusses, in the abstract, the people involved and its generalizability. Further analysis will dig into the characteristics of those document results to better understand the types of published research described as reproducible or generalizable.

Reproducible Research and Electronic Notebooks

Daureen Nesdill
Marriott Library, University of Utah

Reproducibility of research is an increasing concern as researchers move from print to a hybrid print/electronic to a totally electronic laboratory. Funding agencies have responded to this concern by addressing the 2013 White House OSTP mandate that ensures publications and data resulting from research projects they fund are freely available to other researchers and to the public. The funding agencies are also requesting a data management plan. They realize that research projects must be adequately managed so, in addition to being freely available, the research is reproducible. In response to the mandate researchers are now implementing cloud-based electronic notebooks into their workflow. The University of Utah is implementing a site license for LabArchives to be used by researchers not in clinical research. These tools are great for organizing and sharing ongoing research within the research group, but to increase efficiency and the reproducibility of the research, users need to plan ahead. What are the best practices for using electronic notebooks? What are the best practices for conducting research in your discipline? LabArchives allows for incorporating most file types into their system, but the size limit for any file is 250MB. Therefore, the system also allows for the linking of large files to servers outside the system. How is this accomplished to ensure the integrity of datasets? Is a codebook or lab diary being utilized so naming and research procedures are available to all in the group? These and additional concerns will be addressed in this presentation.


Electronic Lab Notebooks on Campus

Daureen Nesdill1, Darell Schmick2
1Marriott Library, University of Utah
2Eccles Health Sciences Library, University of Utah

The complex and expensive electronic lab notebooks (ELN) originating in the pharmaceutical and chemical industries have given way to cloud-based, inexpensive and user-friendly versions targeted for academia. Vendor representatives are strongly pushing site licenses. Are site licenses needed or even appropriate? What issues exist in implementing ELNs?
At the University of Utah a pilot study was initiated to answer these questions. Researchers were recruited from all colleges across campus and provided with an account for one year for their research group. Research groups were interviewed throughout the year to determine both the positive and negative reactions to using ELNs. The results of the research will ultimately determine if the University of Utah initiates a site license.

Uncertainty Quantification and Reproducibility in the Biomedical Domain

Pflieger L, Hernandez R, Facelli JC
Department of Biomedical Informatics, University of Utah

Computational modeling and simulation are being used with increasing frequency in biomedical science to accelerate research discovery, translation, and systemic healthcare transformation. The ability of such models to reproduce predictions and results depends on adequate characterization and quantification of model uncertainty. The objective of this study was to evaluate the status of uncertainty quantification (UQ) in the biomedical domain and identify gaps when compared with other disciplines in which UQ is a standard practice. We performed a literature search of peer-reviewed research using PubMed, Embase, and Scopus covering publications from 1976 through 2016. The search included articles pertaining to biological research, patient diagnosis, treatment, or health risk, and we identified biomedical application areas, types of uncertainty analysis, and the methodologies used. Using the well-established Verification, Validation and Uncertainty Quantification (VVUQ) methodologies from engineering and the physical sciences, we provide a gap analysis to identify potential areas for future research and to inform best-practice guidelines.

Reproducible Environments for Computing Research

Robert Ricci, Eric Eide
School of Computing, University of Utah

Repeating research in computer science requires more than just code and data: it requires an appropriate environment in which to run experiments. In some cases, this environment appears fairly straightforward: it consists of a particular operating system and set of required libraries. In many cases, however, it is considerably more complex: the execution environment may be an entire network, may involve complex and fragile configuration of the dependencies, or may require large amounts of resources in terms of computation cycles, network bandwidth, or storage. The result is that when one tries to repeat published results, creating an environment sufficiently similar to one in which the experiment was originally run can be troublesome; this problem only gets worse as time passes. What the computer science community needs, then, are environments that have the explicit goal of enabling repeatable research. This poster will outline the problem of repeatable research environments, present a set of requirements for such environments, and describe Apt, an active facility run by the University of Utah that attempts to address them.

Investigator Responsibilities: Creating a Culture of Transparency

Erin Rothwell
College of Nursing, University of Utah

Purpose: For decades, the Food and Drug Administration (FDA) has required investigators conducting studies involving FDA-regulated interventions to complete and sign a “Statement of Investigator” that lists several commitments they agree to before the agency will authorize them to initiate a clinical trial. However, neither the Common Rule nor the Office for Human Research Protections (OHRP) requires investigators to commit to a similar statement or code of conduct for research. A public, signed investigator oath may create a culture of investigator responsibility to adhere to those ethical standards improving not only the rigor, but transparency of their research.

Methods: Interviews were conducted with investigators in the intermountain west (n=17) to explore the acceptability of an “oath by investigators” given to research subjects along with the consent form. Statements for an investigator oath were created based on data from the interviews. The initial items of an investigator oath were edited and revised through a Delphi method among national experts (n=12) in biobanking, informed consent and/or research ethics.

Results: Five major categories related to consent and investigator responsibilities were identified from the analysis of the interview data. These included: 1) changing purpose of consent; 2) awareness and accountability of investigator behavior; 3) inconsistent nature of research; 4) similarities and differences between research and clinical care; and 5) components of the investigator oath. The initial draft of the investigator oath had 17 statements and, after the second round from the Delphi panelists, 11 statements remained.

Conclusions: To strengthen the culture of investigator responsibility to conduct research with more rigor and transparency, a public commitment by the investigator might help. Future research should explore how an investigator oath impacts not only investigator responsibilities, but how it may improve trust and support among the public.

Standard Enabled Workflow for Synthetic Biology

Meher Samineni
College of Computer Engineering, University of Utah

The issue of reproducibility of experimental data in the field of synthetic biology has been well documented (Peccoud et al., Nature Biotechnology 2011). The Synthetic Biology Open Language (SBOL) is a standard that has been developed for the expression of information regarding genetic constructs and that implements the engineering principles of modularity and hierarchical representation (Galdzicki et al., Nature Biotechnology 2014). Tools exist to support the visualization of data in SBOL format. For example, SBOL Designer can be used to visually lay down the framework of more complicated parts, while iBioSim can be utilized to model the interactions between genetic parts (Myers et al., Bioinformatics 2009). The iBioSim tool also includes a round-trip conversion to the Systems Biology Markup Language (SBML), enabling a robust model of the interactions between the various components of an overall genetic design (Roehner et al., ACS Synthetic Biology 2015; Nguyen et al., ACS Synthetic Biology 2016). Lastly, the SBOL community is close to completely implementing its solution to the systemic issue of reproducibility; the final step is adoption of the standard by labs and by the tools that are already integrated into lab procedures. This presentation will demonstrate a complete SBOL-enabled workflow, which is capable of enhancing the reproducibility of genetic designs by capturing data from the sequence to the behavioral level. Several example designs from the literature have been encoded into SBOL with varying degrees of completeness; this task is made more difficult by the same issues which affect reproducibility in a more general sense -- incomplete or overly-generalized representations of genetic systems or sequences, nonspecificity with regard to the specific species of interactants, and a lack of cohesive data maintenance and availability after publication. Models which can be successfully reproduced can be simulated and produce results quite similar to experimental data.

Improving Clinical Trial Cohort Definition Criteria and Enrollment with Distributional Semantic Matching

Jianyin Shao, Ramkiran Gouripeddi, Julio C. Facelli
Department of Biomedical Informatics, University of Utah

Evidence-based medicine relies on well-designed, well-performed, reproducible research. Clinical trials are the gold standard for evaluating clinical interventions on patients and populations. Current approaches for clinical trial cohort recruitment have multiple issues that have been extensively reported in the literature. Among these issues, and relevant to the topic of this conference, are (1) the ambiguities observed in eligibility criteria cohort definitions and the variability in interpretation and queries made by research coordinators, which could be associated with the lack of reproducibility of certain clinical results; and (2) challenges in enrolling and retaining the expected number of participants, which could result in underpowered studies due to a reduced number of participants.

Using distributional semantic methods, we aim to automatically match extracted clinical concepts within clinical trial criteria and patient data. In our initial work we use a bag of concepts and a bag of negated concepts, respectively, to represent the clinical trial inclusion and exclusion criteria. Concept Bag algorithms are used to calculate a match score between the trial criteria and patient EHR data by measuring the similarity between the bags of concepts. We extracted clinical concepts using MetaMap and tested our methods using a well-curated set of trials from and patient data. Results from this pilot study will inform the development of a trial criteria-patient matching framework as a service-oriented architecture that integrates with EHR systems and clinical workflows, engaging providers in the recruitment process at the point of care. Such an automated system will improve the reproducibility of clinical research by (1) reducing the selection bias possibly introduced by trial investigators, and (2) increasing statistical power and reducing the false discovery rate by facilitating enrollment of an appropriate number of participants in clinical research.
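
The Concept Bag scoring details are not reproduced here; as a minimal sketch of the underlying idea (set overlap between criteria concepts and patient concepts, with exclusion criteria held in a separate negated bag), consider:

```python
# Minimal sketch of the bag-of-concepts idea; the authors' Concept Bag algorithm and
# scoring are not reproduced here. In practice the concepts would be UMLS CUIs extracted
# by MetaMap; plain strings are used below to keep the example self-contained.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0


def match_score(inclusion: set, exclusion: set, patient: set) -> float:
    """Reward overlap with inclusion concepts, penalize overlap with excluded (negated) ones."""
    return jaccard(inclusion, patient) - jaccard(exclusion, patient)


trial_inclusion = {"diabetes mellitus", "hypertension"}
trial_exclusion = {"pregnancy"}
patient_concepts = {"diabetes mellitus", "hypertension", "asthma"}

print(match_score(trial_inclusion, trial_exclusion, patient_concepts))  # ~0.67 for this toy patient
```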

Metadata Discovery and Integration to Support Reproducible Research using the OpenFurther Platform

Jingran Wen, Peter Mo, Randy Madsen, Ryan Butcher, Phillip Warner, Ramkiran Gouripeddi, Julio C. Facelli
Department of Biomedical Informatics, University of Utah

Modern biomedical research often requires reusing and combining (federating and/or integrating) data from multiple disparate sources such as clinical and electronic health record data (phenotypes), public and private genomic annotations (genotypes), proteomics, metabolomics, biospecimen collections, and environmental data. Each data source embeds within itself different meanings (semantics) and structural (syntactic) descriptions about the data, either explicitly or implicitly. Metadata, as described by the FAIR [1] (Findable, Accessible, Interoperable, and Reusable) principles, is a requirement for reproducible research, which requires discovery of these metadata and their understanding to facilitate proper use of data. The current state of the art requires a great deal of manual human curation, which renders these procedures non-scalable and consequently of limited practical value in the emerging big-data biomedical science paradigm.

To overcome these limitations, we are prototyping a computational infrastructure that supports automated and semi-automated mapping of metadata artifacts and terminologies. First, we advanced OpenFurther’s metadata repository [2] to adapt metadata specifications developed by the bioCADDIE consortium [3] to store metadata for scalable interoperability between systems for creating, managing, and using data. Second, we applied machine learning methods for automatically discovering metadata. Our preliminary results show that machine learning models were able to classify protein structure, genetic variant, and general English corpus data with an average accuracy of 99%. Finally, we will use the findings from this work to develop a metadata and semantics discovery and mapping framework that will be agnostic to specific mapping algorithms or tools, as many of these are domain-specific and also dependent on data, and will choose the best available solution based on mapping performance, making it scalable and suitable for emerging big data applications. This will allow proper reuse, federation, and integration of the metadata-enriched data as needed for supporting reproducible research.
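
The authors' classifiers and training corpora are not described in detail here; as an illustrative sketch of the general approach (learning to label short data/metadata snippets by type), a tiny scikit-learn pipeline on made-up examples might look like:

```python
# Illustrative sketch only: the authors' classifiers and corpora are not reproduced here.
# This shows the general shape of the approach -- learn to label short text/metadata
# snippets by data type -- on a tiny made-up training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "ATOM 1 N MET A 1 20.154 29.699 5.276",            # protein structure (PDB-like)
    "HETATM 2 O HOH A 2 10.000 12.000 8.000",
    "chr7 117559590 rs123456 CTT C",                   # genetic variant (VCF-like)
    "chr17 43094000 rs654321 T C",
    "the patient reported mild headache and fatigue",  # general English
    "we walked to the store and bought some bread",
]
labels = ["structure", "structure", "variant", "variant", "english", "english"]

model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      LogisticRegression(max_iter=1000))
model.fit(snippets, labels)
print(model.predict(["chr1 12345 rs111 A G"]))          # expected: ['variant']
```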

  1. Wilkinson MD, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3:160018.
  2. Gouripeddi R, Facelli JC, et al. FURTHeR: An Infrastructure for Clinical, Translational and Comparative Effectiveness Research. AMIA Annual Fall Symposium. 2013; Wash, DC.
  3. WG3 Members. (2015). WG3-MetadataSpecifications: NIH BD2K bioCADDIE Data Discovery Index WG3 Metadata Specification v1. Zenodo. 10.5281/zenodo.28019