Figure 1.5 Digital data life cycle.
IoT technology [34] will allow much better connectivity between the equipment in the LotF. This will enable better, quicker, and more precise control of lab equipment, as well as more effective capture of raw data from that equipment. This in turn will allow the next stage in the life cycle – "Analyze Data" – to happen sooner and with more data of higher quality. This improved interconnectedness in the lab will be made possible by the same 5G communication technology that will make the devices and products in the home of 2025 more networked and more remotely controllable.
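By way of illustration, the short Python sketch below shows one way a networked instrument might publish raw readings over MQTT, a messaging protocol widely used in IoT deployments, so that downstream analysis services can subscribe to them as they arrive. The broker address, topic name, and payload fields are purely hypothetical, and the snippet assumes the paho-mqtt 1.x client library; it is a sketch of the pattern, not a prescription for any particular instrument or vendor.

```python
# Illustrative sketch only: the broker, topic, and payload fields are hypothetical.
import json
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client library

BROKER_HOST = "broker.lab.example.org"   # hypothetical lab MQTT broker
TOPIC = "lotf/plate-reader-01/raw"       # hypothetical per-instrument topic

def publish_reading(client: mqtt.Client, well: str, absorbance: float) -> None:
    """Publish a single raw reading so downstream 'Analyze Data' services can subscribe to it."""
    payload = {
        "instrument_id": "plate-reader-01",
        "well": well,
        "absorbance": absorbance,
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    client.publish(TOPIC, json.dumps(payload), qos=1)  # qos=1: at-least-once delivery

if __name__ == "__main__":
    client = mqtt.Client()                # paho-mqtt 1.x style constructor
    client.connect(BROKER_HOST, 1883)     # default MQTT port
    client.loop_start()
    publish_reading(client, well="A1", absorbance=0.42)
    client.loop_stop()
    client.disconnect()
```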
As improved instrument interconnectedness and IoLT enable more data to be captured by more instruments more effectively, the issue of how to manage the inevitable data flood and make the deluge useful comes to the fore. The biggest initiative in 2020 to maximize the benefits of so‐called big data [35] revolves around the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. These state that "for those wishing to enhance the reusability of their data holdings," those data should be FAIR. In the LotF, the FAIR principles will need to be fully embedded in the lab culture and operating model. Implementing FAIR [36] is very much a change process rather than simply the introduction of new technology. If fully implemented, though, FAIR will make it massively easier for the vast quantities of digital assets generated by organizations to be made much more useful. Data science as a discipline, and data scientists (a role that can currently be considered equivalent to that of "informatician"), will grow enormously in both importance and number. Organizations that are almost purely data driven will thrive, with any lab work they feel the need to do being outsourced via LaaS [37] to flexible, cost‐effective LotFs that operate per the REAF process.
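To make the idea of data being "born FAIR" more concrete, the sketch below shows the kind of metadata record that could accompany a dataset as it leaves the instrument. Every identifier, URL, and field name here is hypothetical and is intended only to illustrate how the four FAIR principles translate into machine-readable metadata.

```python
# Illustrative sketch of "data born FAIR": all names, identifiers, and URLs are hypothetical.
import json

dataset_record = {
    # Findable: a globally unique, persistent identifier plus rich, searchable metadata
    "identifier": "doi:10.9999/example.lotf.0001",
    "title": "Kinase panel absorbance readings, plate 17",
    "keywords": ["kinase", "absorbance", "plate-reader"],
    # Accessible: a resolvable location and an explicit access protocol
    "access_url": "https://data.lab.example.org/datasets/plate-17",
    "access_protocol": "HTTPS",
    # Interoperable: an open, community format and vocabulary rather than a proprietary dump
    "format": "application/json",
    "vocabulary": "https://ontology.example.org/assay-terms",
    # Reusable: licence and provenance so people and machines know how the data arose
    "license": "CC-BY-4.0",
    "provenance": {
        "instrument_id": "plate-reader-01",
        "protocol_id": "proto-absorbance-v3",
        "operator": "anonymised",
        "generated_utc": "2025-01-15T10:30:00Z",
    },
}

print(json.dumps(dataset_record, indent=2))
```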
Supporting the growth of FAIR requires the data that is generated in these LaaS LotFs to be easily transferable back to the requester/customer in a format which the lab can generate easily, accurately, and reproducibly, and which the customer can import and interpret, again, easily, accurately, and reproducibly. This facile interchange of “interoperable” data will be enabled by the widespread adoption of data standards such as SiLA and Allotrope. We describe these new data standards in more detail in the following section.
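As a simple illustration of what "interoperable" exchange implies in practice, the sketch below shows a consumer-side check that an incoming record declares its schema, version, and explicit units before it is imported. The field names and the schema tag are invented for illustration; they are not the actual SiLA or Allotrope schemas, which we describe in the following section.

```python
# Illustrative consumer-side check: field names and the schema tag are hypothetical,
# not the actual SiLA or Allotrope schemas.
from typing import Any

REQUIRED_FIELDS = {"schema", "schema_version", "instrument_id", "measurements"}

def validate_exchange_record(record: dict[str, Any]) -> None:
    """Reject records that the customer cannot interpret unambiguously."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"Record is missing required fields: {sorted(missing)}")
    for m in record["measurements"]:
        # Every value must carry an explicit unit so it means the same thing on both sides.
        if "value" not in m or "unit" not in m:
            raise ValueError(f"Measurement without explicit value/unit: {m}")

record = {
    "schema": "example-assay-result",      # hypothetical schema name
    "schema_version": "1.0",
    "instrument_id": "plate-reader-01",
    "measurements": [{"well": "A1", "value": 0.42, "unit": "AU"}],
}
validate_exchange_record(record)   # raises if the record cannot be safely interpreted
```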
Two additional, significant data considerations for the LotF are those of data security and data privacy, just as they are now. The more LotF services that are operated outside the "firewall" of an organization, and the more that future labs are driven by data, the more risks potentially arise from accidental or malicious activities. Making sure that those risks are kept low, through continued diligence and robust data security practices, will ensure that the LotF is able to develop and operate to its full capability. Similarly, in labs that work with human‐derived samples (blood, tissues, etc.), the advent of regulations such as the General Data Protection Regulation (GDPR) [38, 39], along with the historical stringency surrounding informed consent [40] over what can happen to human samples and the data that arise from their processing, will put even more pressure on the organizations that generate and are accountable for human data to ensure these data are effectively secured. Improved adherence to the FAIR data principles, especially Findability and Accessibility, will ensure that LotFs working with human‐derived materials can be responsive to data privacy requests and are not compromised.
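One common technique for reducing privacy risk when human-derived data must leave the lab is pseudonymization of sample identifiers. The sketch below is purely illustrative: the identifier format and the handling of the secret key are hypothetical, and a production system would manage keys far more carefully.

```python
# Illustrative sketch of pseudonymising sample identifiers before data leave the lab;
# the identifier format and the secret-key handling shown here are hypothetical.
import hashlib
import hmac
import os

# In practice the key would live in a secrets manager, not an environment-variable default.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymise(sample_id: str) -> str:
    """Return a stable pseudonym: the same sample always maps to the same token,
    but the original identifier cannot be recovered without the key."""
    return hmac.new(PSEUDONYM_KEY, sample_id.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymise("DONOR-2025-0042"))  # the donor-identifying string never leaves the lab
```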
Going hand in hand with the data explosion of the past decade has been the evolution of the now ubiquitous, key operational technology of "Cloud Computing." As explained by one of the originating organizations in this area, "cloud computing is the delivery of computing services – including servers, storage, databases, networking, software, analytics, and intelligence – over the Internet (the cloud) to offer faster innovation, flexible resources, and economies of scale." [41] In the context of the LotF, assuming that the equipment in the lab is fully networked, cloud computing means that all the data generated by the lab can be quickly, fully, and securely captured and stored on remote infrastructure (servers). This book is not the place to describe cloud computing in detail, but it should be sufficient to say that the LotF will not be reliant on IT hardware close to its location (i.e. on‐site) but will be highly reliant on fast, reliable, highly available networks and efficient, cost‐effective cloud computing.
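As a concrete, if simplified, illustration of this pattern, the sketch below pushes a raw result file from a networked instrument to cloud object storage, using AWS S3 via the boto3 library as one example of such a service. The bucket name, key layout, and file path are hypothetical.

```python
# Illustrative sketch using AWS S3 (via boto3) as one example of cloud object storage;
# the bucket name, key layout, and local file path are hypothetical.
import boto3

s3 = boto3.client("s3")  # credentials are taken from the environment or instance role

def archive_raw_result(local_path: str, instrument_id: str, run_id: str) -> str:
    """Push a raw result file from a networked instrument to remote (cloud) storage."""
    key = f"raw/{instrument_id}/{run_id}/{local_path.rsplit('/', 1)[-1]}"
    with open(local_path, "rb") as fh:
        s3.put_object(
            Bucket="lotf-raw-data",            # hypothetical bucket
            Key=key,
            Body=fh,
            ServerSideEncryption="AES256",     # encrypt at rest
        )
    return key  # the returned key can be stored alongside the experiment record

# archive_raw_result("/data/plate-17.csv", "plate-reader-01", "run-2025-01-15-001")
```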
Finally, there is a data and modeling technology that has been established in industries outside the life sciences for many years and that could play a growing role in a LotF that is more automated and more remote. This is the technology termed the "digital twin." [42, 43] We say more on this exciting technology in Section 1.2.5.1.
1.2.5 New Technology
In any future‐looking article we can only make some best guesses as to the new technologies and science that could be important during the next 5–10 years. In this section we make some suggestions as to what new technologies we feel will impact the LotF, and what new science will be happening in those future labs. In the first part of this section, we focus on new technologies. In the second part, we suggest some scientific areas which we feel will grow in importance and hence might drive the evolution of the LotF and the technology that is adopted in that new lab environment.
New technologies will undoubtedly play a major role in driving the development of the critical components within the LotF, but their introduction and usage need to be appropriate to the type of lab being used. The role of the new technologies must be aligned to the future challenges and needs of the lab environment. These needs include, more specifically:
Flexibility and agility of the experiment cycles, balancing between prediction (in silico) and physical (in vitro) experiments
Improved data collection and experiment capture (e.g. “data born FAIR”)
Reproducibility of the experiment processes
Enhancements to the scientists' UX and capabilities in the lab.
To emphasize these aspects, we focus on three broad areas in this section:
1 Lab automation integration and interoperability
2 Quantum computing and the LotF
3 Impact of AI and machine learning (ML).
1.2.5.1 Lab Automation Integration and Interoperability
Lab instrument integration and interoperability to support higher levels of lab automation have been evolving quickly and will continue to do so, driven by pressure from scientists and lab managers, above all, to have better ways to manage and control their equipment [44–46]. Capabilities as diverse as chemical synthesis [47] and next‐generation sequencing (NGS) [48] are seeking to better automate their workflows to improve speed and quality and to align with the growing demands of AI in support of generative and experimental design as well as decision‐making [49]. An additional stimulus toward increased automation, integration, and interoperability is that of experiment reproducibility. The reproducibility crisis that exists in science today is desperately in need of resolution [50]. It is manifested not only in the inability to confidently replicate externally published experiments, but also in the inability to reproduce internal experiments – those performed within individual organizations. Poor reproducibility and uncertainty over experimental data will also reduce confidence in the outputs from AI; the mantra "rubbish in, rubbish out" will thus continue to hold true! Having appropriate automation and effective data management can support this vital need for repeatability, for example of biological protocols [51]. This will be especially important to support and justify the lab‐as‐a‐service (LaaS) business model, which we have mentioned previously. It is our belief that the increased reliability and enhanced data‐gathering capability offered by increased automation initiatives in the LotF will be one important way to help address the challenge of reproducibility.
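To illustrate how automation supports repeatable protocols, the sketch below expresses a trivial liquid-handling step as code using an Opentrons-style Python protocol API, one widely used open protocol format; the labware, deck positions, and volumes are illustrative only. A protocol written this way can be version-controlled, shared, and re-run identically, which is precisely what reproducibility demands.

```python
# A minimal sketch of a machine-readable, re-runnable protocol, written against an
# Opentrons-style Python Protocol API; labware choices, deck slots, and volumes are illustrative.
from opentrons import protocol_api

metadata = {
    "protocolName": "Illustrative serial transfer",
    "author": "LotF example",
    "apiLevel": "2.13",
}

def run(protocol: protocol_api.ProtocolContext) -> None:
    # Declaring labware and instruments in code makes the setup reproducible and auditable.
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", 1)
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", 2)
    pipette = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])

    # The same transfer runs identically every time, and every step is logged automatically.
    pipette.transfer(100, plate["A1"], plate["B1"])
```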