to digital twin technology and improved remote control, massively improved computational technology combined with advances in AR and VR will allow operators, who may be located nowhere near the lab in which their experiment is running, to don appropriate AR/VR headsets and walk into an empty space that will “feel” as though they are right inside the lab, or even right inside the experiment itself. The potential for scientists to “walk” into the active site of an enzyme and “manually” dock the molecules they have designed, or for an automation operator to “step into” a reaction vessel running the large‐scale manufacture of, say, a chemical intermediate to check that there are no clumps or localized issues (e.g. overheating), will revolutionize how the LotF operates, making it more likely to succeed and, importantly, safer.
One final, obvious application of digital twin technology is where the LotF is not even on Earth. Running experiments in low or zero gravity can lead to interesting, sometimes unexpected findings [62], and numerous experiments have been performed on the International Space Station [63]. But expecting a trained astronaut to be able to run effectively any experiment or protocol, from organic synthesis to genetic manipulation, is asking a great deal. Digital twin technology could make the LotF in zero gravity a much more compelling proposition [64].
Returning to the area of instrument integration and interoperability, a more practical consideration is how different instruments communicate with each other, and how the data they generate is shared.
Within any lab there is, and always will be, a wide range of instruments from different manufacturers, typically bought over several years to support the business workflows. This “kit diversity” creates a challenge when you want to define a protocol that links two or more instruments which do not share the same control language. SiLA‐2 [65] is a communication standard [66] for lab instruments, such as plate readers, liquid handling devices, and other analytical equipment, designed to enable interoperability. As indicated throughout this section, the ability to fully connect devices will enable a more flexible and agile lab environment, making it possible to track, monitor, and remotely control automation assets. This will further enable enhanced robotic process automation (RPA), as well as easier scale‐up and transfer to remote parties. Devices connected for one workflow will be easily repurposable for other tasks without monolithic communication design and coding; the sketch below illustrates the idea.
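To make that interoperability idea concrete, the following is a minimal conceptual sketch in Python, not the actual SiLA‐2 API (real SiLA‐2 devices expose standardized “features” over a network protocol); the Device interface, instrument classes, and command names here are invented for illustration. The point is that a protocol written against a shared abstraction can be re‐pointed at different vendors’ instruments without rewriting the workflow.

```python
from abc import ABC, abstractmethod

# Hypothetical common interface standing in for a standardized instrument API.
class Device(ABC):
    @abstractmethod
    def run_command(self, command: str, **params) -> dict:
        """Execute a named command and return its result as a dictionary."""

class LiquidHandler(Device):
    def run_command(self, command: str, **params) -> dict:
        # In a real lab this would call the vendor's driver or a SiLA-2 connector.
        print(f"[liquid handler] {command} {params}")
        return {"status": "ok"}

class PlateReader(Device):
    def run_command(self, command: str, **params) -> dict:
        print(f"[plate reader] {command} {params}")
        return {"status": "ok", "absorbance": [0.12, 0.34, 0.56]}

def dispense_and_read(handler: Device, reader: Device, volume_ul: float) -> dict:
    """A protocol written once against the shared interface,
    reusable with any pair of compliant instruments."""
    handler.run_command("Dispense", volume_ul=volume_ul, plate="P1")
    return reader.run_command("ReadAbsorbance", plate="P1", wavelength_nm=450)

if __name__ == "__main__":
    result = dispense_and_read(LiquidHandler(), PlateReader(), volume_ul=50.0)
    print(result)
```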
Data in all its forms will remain the dominant high‐value output from lab experiments. As with protocols and communications, there need to be standards to support full data integration and interoperability within and between research communities. Over the years, data standards have evolved to support many aspects of the life science process, whether for the registration of new chemical entities [67], images [68], or macromolecular structures [69], or for describing the experiment data itself. Analytical instrument data (e.g. from nuclear magnetic resonance [NMR] spectrometers, chromatographs, and mass spectrometers) are produced by a myriad of instruments, and the need to analyze and compare data from different machines, and to support access across the data life cycle in a retrievable format, has driven the creation of the Allotrope data format (ADF) [70]. This is a vendor‐neutral format, generated initially for liquid chromatography, with plans to expand to other analytical data; a simplified sketch of the vendor‐neutral idea follows. These wide community‐driven efforts, such as those from Allotrope, SLAS, IMI [71], or the Pistoia Alliance [72], highlight the value of research communities coming together in life sciences, as happens in other industries such as financial services and telecoms. Such collaborative efforts will be needed even more in future.
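As an illustration only, the sketch below stores a chromatogram in a self‐describing, vendor‐neutral container so that any downstream tool can read it without the producing instrument's software. It uses a plain HDF5 layout via h5py; the group and field names are invented for the example and are not the actual Allotrope ADF schema.

```python
import h5py
import numpy as np

# Hypothetical, simplified vendor-neutral layout; NOT the real ADF schema.
def write_chromatogram(path, retention_time_s, intensity, instrument):
    with h5py.File(path, "w") as f:
        run = f.create_group("chromatography_run")
        run.attrs["instrument"] = instrument            # provenance metadata
        run.attrs["detector_wavelength_nm"] = 254
        run.create_dataset("retention_time_s", data=retention_time_s)
        run.create_dataset("intensity_au", data=intensity)

def read_chromatogram(path):
    # Any consumer can read the data back without vendor-specific software.
    with h5py.File(path, "r") as f:
        run = f["chromatography_run"]
        return dict(run.attrs), run["retention_time_s"][:], run["intensity_au"][:]

if __name__ == "__main__":
    t = np.linspace(0, 600, 601)                         # a 10-minute run sampled at 1 Hz
    signal = np.exp(-((t - 300) ** 2) / (2 * 15.0 ** 2)) # one synthetic peak
    write_chromatogram("run_001.h5", t, signal, instrument="LC-Vendor-X")
    meta, rt, inten = read_chromatogram("run_001.h5")
    print(meta["instrument"], rt.shape, round(float(inten.max()), 3))
```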
In summary, the use of open standards will be critical for the success of the LotF as the range of devices grows and science drives change. There will need to be reliable, robust ways for instruments, workflows, and data to be shared and accessible in order to support flexible, open‐access, and cross‐disciplinary collaboration, innovation, and knowledge exchange. The automation in the LotF will need to be effective across many different sizes and types of labs – from large, high‐throughput labs doing screening or sequencing, to midsize labs with some automation workbenches, to the long tail of labs with a few specialist instruments. In planning for a new lab, creating a holistic vision of the design will be a key first step. That vision should include the future processes your lab will want to tackle, as well as the potential new technologies to be deployed in the lab, e.g. IoT, AR, or voice control. Additionally, those involved will need to acquire new skills to help implement these changes, so investment in staff and their training remains vital. Furthermore, there will likely be an ecosystem of lab environments, both local and more widely distributed, to consider; the LotF will be smarter and more efficient, but not simply through the adoption of a single device.
1.2.5.2 Quantum Computing and the Lab of the Future
The field of quantum computing is moving so fast that any review or update is soon superseded by the next breakthrough [73]. Consequently, this section focuses on introducing some of the concepts of quantum computing and how it might impact the LotF [74] and its workflows going forward, especially those in the design (model/predict) and analyze stages.
Quantum computers differ [75] from our current classical computers in that they offer a fundamentally different way of performing calculations: they exploit quantum‐mechanical phenomena such as superposition and entanglement [76, 77], using qubits (which can exist in multiple states simultaneously) rather than bits (binary states, 0 and 1). This gives them the potential to solve problems and process tasks that would take even the fastest classical computer hundreds if not thousands of years. Such performance will enable the simulation of highly complex systems, such as the behavior of biological molecules and other life science challenges [78]. A key concept in this area is the “quantum supremacy” [79] threshold, where quantum computers cross over from being an interesting, purely research‐driven project to doing things that no classical computer can. Quantum supremacy describes the situation in which a quantum computer performs a calculation that, for all practical purposes, no classical computer could complete in a reasonable time. There is much discussion about whether we have reached quantum supremacy, but it does seem clear that, for the foreseeable future, quantum computers will not be used for everyday activities but for highly specific, truly value‐adding and accelerated tasks, much as exascale supercomputers are used today. One further critical step will be needed to ensure that quantum computers can operate outside the research domain: the creation of quantum compilers [80] that make code run efficiently on the new hardware, just as traditional compilers were needed for classical computers. A small illustration of the qubit concept follows.
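To make the qubit idea more tangible, here is a tiny, purely classical state‐vector sketch using nothing more than numpy; it simulates two qubits being placed in superposition and then entangled, a combined state that no pair of classical bits can represent. This illustrates the concepts only and is not how a real quantum device is programmed.

```python
import numpy as np

# A minimal state-vector illustration of qubits (simulated classically):
# two classical bits hold one of four values; two qubits carry amplitudes
# over all four basis states simultaneously.
zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                 # entangling gate

# Start in |00>, put the first qubit into superposition, then entangle the pair.
state = np.kron(zero, zero)                  # |00>
state = np.kron(H, np.eye(2)) @ state        # (|00> + |10>) / sqrt(2)
state = CNOT @ state                         # Bell state (|00> + |11>) / sqrt(2)

probabilities = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{basis}>) = {p:.2f}")         # 0.50 for |00> and |11>, 0 otherwise
```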
There are good parallels between quantum computers and today's exascale supercomputers and clusters when thinking about the impact on life science and broader research. There are only a limited number of supercomputers at the moment, owing to their cost and the skills needed to fully utilize them. Consequently, researchers have to bid for time slots at the regional and national centers which house them. It is likely that the same will happen with quantum computers, with regional/national [81–83] centers being established to support scientists who use them remotely to process their calculations and build their models. Quantum cloud computing [84] and associated services will likely evolve in the same way that existing cloud compute and storage infrastructure and business models have evolved over the past decade.
We now focus on how quantum computing will more directly impact the LotF and the experiments run within it. The researchers involved will, at least in the early years, have to balance their “classical” and “quantum” calculation time with their physical experimental effort to drive their insights and decision‐making. Experiments will still be performed on complex systems, but they will be influenced even more by work done in silico. Experiment cycles will likely become more rapid, since the ability to perform quicker calculations will accelerate progress and encourage research in areas that have until now been hard to explore, for example molecular simulations of larger biological entities. The sketch below illustrates this kind of compute‐guided experiment loop.
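As a purely conceptual sketch of such a compute‐guided cycle (the scoring and experiment functions below are placeholders, not real services or quantum calculations), the idea is to rank candidates cheaply in silico and spend scarce lab, or quantum, time only on the most promising subset.

```python
import random

def in_silico_score(candidate: str) -> float:
    """Stand-in for a (classical or quantum) calculation that predicts
    how promising a candidate molecule or condition is."""
    return random.random()

def run_physical_experiment(candidate: str) -> float:
    """Stand-in for the automated lab run that measures the real outcome."""
    return random.random()

def experiment_cycle(candidates, budget):
    # Rank everything cheaply in silico, then use limited physical
    # (or quantum) resources only on the top-ranked subset.
    ranked = sorted(candidates, key=in_silico_score, reverse=True)
    return [(c, run_physical_experiment(c)) for c in ranked[:budget]]

if __name__ == "__main__":
    library = [f"compound_{i:03d}" for i in range(100)]
    for name, measurement in experiment_cycle(library, budget=5):  # only 5 lab runs
        print(name, round(measurement, 3))
```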
Data will remain a key element of the workflow. Being able to send data to quantum computers and to retrieve the outputs from them quickly will help to influence the next experiment. However, with the rarity of quantum computing, careful planning of the holistic workflow