DNA sequencing technology has clearly matured: it is continuously being optimized for performance while becoming increasingly affordable, opening an era of reliable, predictable growth in high-quality data that demands substantial computational resources and data storage. With the rise of varied applications, whether in cancer research, infectious disease, or other therapy areas, and with the launch of large initiatives such as the Precision Medicine Initiative (PMI), the need to address the ensuing data explosion is imminent.
Oncology is the dominant sector currently benefiting from next-generation sequencing, followed by applications in inherited and rare disease research, infectious disease, the microbiome, and others. Agriculture is expected to soon benefit as well from the technological developments that currently help propel both biomedical research and the clinical sector.
Different organizations deal with the explosion of data differently. For example, academic institutions and core facilities that must tightly manage their data tend to shift the challenge to the receiving side of the equation: the individual researcher or group that ordered the data. This often means that the data stays on the local server for the shortest possible time and/or is stored long-term on hard drives. That can work for smaller labs or individuals with only a few data sets, but it is clearly not a solution for higher-throughput labs that process hundreds to thousands of samples per year. For them, storing that data and keeping it accessible in both the short and long term remains a challenging issue. This is particularly important in the clinical sector, where new discoveries affect the analyses and new tools are developed on an ongoing basis. Hence, some of the "old" data may need to be pulled from the archives for re-processing with a newer, better-developed algorithm. This has immediate implications: the data must be readily accessible, and the right accompanying patient consent form must be in place. Another consideration in managing data at this scale is how, and to whom, the data is made accessible via the different channels. These are some of the areas in which the end-users and industry professionals interviewed shared their thoughts with us for our latest report.
While many uncertainties have yet to be addressed, the companies covered in the recently released deep-dive report, NGS Data Analysis and Interpretation Analysis Report, are pursuing varied approaches to building winning solutions to the data explosion challenges. This report is unique in that it is not a predictive market research report; rather, it builds on data gathered from numerous end-user interviews. We conducted close to 30 interviews, including with researchers from AstraZeneca, Biogen, Children's Hospital of Eastern Ontario, Erasmus Medical Center, Memorial Sloan Kettering Cancer Center, Toma Biosciences, UCSF Medical Center, University of Minnesota, Stanford University, Texas A&M University, and VIB, in addition to industry professionals (see the figure below, which summarizes the end-user interviews).
Take advantage of our current special offer and purchase this new report at a 10% discount with promo code FIRST10OFFECO, valid until September 30, 2016.
Download the Table of Contents to learn more about the report specifics.