      The notion that data is “the new oil” [21] has come to such prominence in recent times. While it may be viewed by many as hackneyed, and by many more as fundamentally flawed [22], the idea carries a lot of credence as we move toward a more data‐driven global economy. One of the main flaws in the oil analogy is that few organizations are able to refine their data suitably into the next piece of the value chain, whereas oil has a very clear refining process and value chain. Furthermore, the “Keep it in the Ground” [23, 24] sustainability movement makes the data‐oil analogy perhaps even less useful. However, within the LotF, and in a more open, collaborative global R&D world, experimental data, both raw and refined, will grow in criticality. Without doubt, data will remain a primary asset arising from the LotF.

      At this point, then, it is worth considering how data and data management fit into the processes that drive the two fundamental lab types referred to earlier, namely (i) the hypothesis‐driven, more research/discovery‐oriented lab and (ii) the protocol‐driven, more “manufacturing”‐like lab.

      1.2.4.1 Data in the Hypothesis‐driven, Research Lab

Schematic illustration of the Hypothesis‐Experiment‐Analyze‐Share (HEAS) cycle.

      1.2.4.2 Data in the Protocol‐driven Lab

      There are many close similarities between the linear Request‐Experiment‐Analyze‐Feedback (REAF) process and the Hypothesis‐Experiment‐Analyze‐Share (HEAS) cycle, especially in the Experiment/Observe and Analyze/Report steps, but the REAF process does not start with an idea or hypothesis. REAF represents a service: it starts with a formal request, for example to run a protocol to manufacture a good or to test a sample, and ends with a product or a set of results that can be fed back to the original customer or requester. As we noted in Section 1.2.4.1 above, it is increasingly likely that the LotF will be set up with a Laboratory as a Service (LaaS) mentality; REAF may therefore be much more broadly representative of how labs of the future will operate.

Schematic illustration of the Request‐Experiment‐Analyze‐Feedback (REAF) process.
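      To make the REAF flow more concrete, the following is a minimal sketch, in Python, of how a LaaS‐style request might be represented and tracked from submission through experiment and analysis to feedback. The class, field, and protocol names (ServiceRequest, ExperimentResult, "compound-purity-assay-v3", etc.) are illustrative assumptions only, not part of any existing LotF system or standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Hypothetical data structures for a Request-Experiment-Analyze-Feedback (REAF) flow.
# Names and fields are illustrative assumptions, not an existing standard.

@dataclass
class ServiceRequest:                      # "Request": a formal ask from a customer
    request_id: str
    requester: str
    protocol: str                          # e.g. "compound-purity-assay-v3"
    samples: List[str]

@dataclass
class ExperimentResult:                    # "Experiment": raw outputs per sample
    sample_id: str
    raw_measurements: List[float]

@dataclass
class Feedback:                            # "Analyze" + "Feedback": summarized results
    request_id: str
    summary: dict
    completed_at: str

def run_protocol(request: ServiceRequest) -> List[ExperimentResult]:
    """Stand-in for automated execution of the requested protocol."""
    return [ExperimentResult(s, raw_measurements=[98.7, 99.1]) for s in request.samples]

def analyze(request: ServiceRequest, results: List[ExperimentResult]) -> Feedback:
    """Reduce raw measurements to the summary the requester asked for."""
    summary = {r.sample_id: sum(r.raw_measurements) / len(r.raw_measurements)
               for r in results}
    return Feedback(request.request_id, summary,
                    completed_at=datetime.now(timezone.utc).isoformat())

if __name__ == "__main__":
    req = ServiceRequest("REQ-0042", "remote-customer-17",
                         "compound-purity-assay-v3", samples=["S1", "S2"])
    feedback = analyze(req, run_protocol(req))
    print(feedback)      # fed back to the original requester
```

      In a service‐oriented LotF, each stage of this flow would typically be executed by automation and logged against the original request identifier.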

      1.2.4.3 New Data Management Developments

      So what new developments in data management will be prevalent in both the hypothesis‐ and the protocol‐driven labs of 2030? In the previous two sections we asserted that these labs will be populated by fewer people, that there will be more robotics and automation, and that experiment throughput will be much higher, often on more miniaturized equipment. Building on these assertions, perhaps the most impactful developments in the data space will be:

      1 The all‐pervasiveness of the internet of things (IoT) [25, 26]. In the LotF this will lead to the growth of internet of laboratory things (IoLT) environments, driven also by ubiquitous 5G communications capability (a sketch of an IoLT device message appears after this list).

      2 The widespread adoption of the findable, accessible, interoperable, and reusable (FAIR) data principles, which state that all data should be FAIR [27] (an example metadata record is sketched after this list).

      3 The growing use of improved experimental data and automation representation standards, e.g. SiLA [28] and Allotrope [29].

      4 Data security and data privacy. These two areas will continue to be critical considerations for the LotF.

      5 The ubiquity of “Cloud.” The LotF will not be able to operate effectively without access to cloud computing.

      6 Digital twin approaches. These will complement both the drive toward labs operating more as a service and the demand from remote service customers to see into, and directly control from afar, what is happening in the lab; a minimal twin sketch appears after this list. Technologies such as augmented reality (AR) will also help to enable this (see Sections 1.2.5 and 1.2.6).

      7 Quantum computing [30–33]. As quantum computing moves from research to production it will impact just about everything we do in life, not just the LotF. Arguably, it might have its biggest impact in the more computationally intensive parts of the hypothesis‐ and protocol‐driven LotF, e.g. Idea/Hypothesis design and Analyze/Insight, but it will still disrupt the LotF massively. We say more on this in Sections 1.2.5 and 1.2.6.
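      As a purely illustrative example for point 1, the sketch below shows the kind of timestamped, structured message an IoLT‐connected instrument (here an imagined balance) might emit. The topic convention and payload fields are assumptions; a real deployment would publish such messages over a protocol such as MQTT to a lab message broker and capture them downstream.

```python
import json
import random
from datetime import datetime, timezone

# Hypothetical IoLT telemetry message from a connected balance.
# Topic structure and field names are illustrative assumptions only.

def balance_reading(device_id: str) -> dict:
    """Build one timestamped reading as a structured, machine-readable message."""
    return {
        "topic": f"lab7/balances/{device_id}/mass",          # assumed topic convention
        "payload": {
            "device_id": device_id,
            "mass_g": round(random.uniform(0.95, 1.05), 4),  # simulated measurement
            "units": "g",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    msg = balance_reading("BAL-001")
    # In a real IoLT setup this JSON would be published to a broker (e.g. via MQTT)
    # and picked up by data capture, monitoring, and analytics services.
    print(json.dumps(msg, indent=2))
```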
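      For point 2, the snippet below sketches what a FAIR‐oriented metadata record for an experimental dataset might contain: a persistent identifier to make it findable, a resolvable access location, community formats and vocabularies for interoperability, and license and provenance details to support reuse. The field names, identifiers, and URLs are hypothetical examples, not a formal FAIR schema.

```python
import json

# Illustrative metadata record loosely aligned with the FAIR principles.
# Field names, identifiers, and URLs are hypothetical examples.

dataset_record = {
    # Findable: globally unique, persistent identifier plus rich description
    "identifier": "doi:10.9999/example.lab.2030.0042",
    "title": "Purity assay results for sample batch S-2030-17",
    "keywords": ["purity assay", "HPLC", "lab of the future"],
    # Accessible: a resolvable location and clear access conditions
    "access_url": "https://data.example-lab.org/datasets/0042",
    "access_rights": "restricted - request via data steward",
    # Interoperable: community formats and vocabularies
    "format": "text/csv",
    "vocabulary": "https://example.org/assay-ontology/v2",
    # Reusable: license and provenance so others can trust and reuse the data
    "license": "CC-BY-4.0",
    "provenance": {
        "instrument": "HPLC-07",
        "protocol": "compound-purity-assay-v3",
        "generated": "2030-03-14T09:26:53Z",
    },
}

print(json.dumps(dataset_record, indent=2))
```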
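      Finally, for point 6, the following is a minimal sketch of the digital twin idea: a software object that mirrors the state of a physical instrument from its telemetry and queues commands that a remote customer (perhaps through an AR interface) could issue from afar. The class, method, and field names are assumptions for illustration only.

```python
from datetime import datetime, timezone

# Minimal digital twin sketch: a software mirror of a physical instrument.
# Class, method, and field names are illustrative assumptions.

class IncubatorTwin:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.observed = {}          # last state reported by the physical device
        self.pending_commands = []  # commands queued for the physical device

    def update_from_telemetry(self, telemetry: dict) -> None:
        """Mirror the latest reported state of the real instrument."""
        self.observed = dict(telemetry)
        self.observed["received_at"] = datetime.now(timezone.utc).isoformat()

    def request_setpoint(self, temperature_c: float) -> dict:
        """Queue a remote-control command; the real device would confirm later."""
        command = {"device_id": self.device_id, "set_temperature_c": temperature_c}
        self.pending_commands.append(command)
        return command

if __name__ == "__main__":
    twin = IncubatorTwin("INC-03")
    twin.update_from_telemetry({"temperature_c": 36.8, "door_open": False})
    twin.request_setpoint(37.0)   # issued from afar, e.g. via an AR dashboard
    print(twin.observed, twin.pending_commands)
```

      A production twin would, of course, synchronize bidirectionally with the real device and its control system rather than simply queueing commands.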
