Multiblock Data Fusion in Statistics and Machine Learning - Tormod Næs

that clear (for examples, see Chapter 8 on complex relations). We will try, however, to remain as consistent as possible and give extra explanations of terms at the appropriate places. In the rest of this chapter we delineate our potential audience, give some examples of why multiblock methods are necessary, and present an overview of the types of problems encountered. Moreover, we give some history and briefly discuss some fundamental concepts needed in the rest of the book. We end with the notation used throughout this book and a list of abbreviations.

      1.2 Potential Audience

      Our ambition is to serve different types of audiences. The first set of users consists of practitioners in the natural and life sciences, for example in bioinformatics, sensometrics, chemometrics, statistics, and machine learning. They will mainly be interested in the question of how to perform multiblock data analysis and what to use in which data analysis situation; they may benefit from reading the main text and studying the examples. The second set of users consists of method developers. They want to know what is already available and spot niches for further development; apart from the main text and the examples, they may also be interested in the elaborations. The final set of users consists of computer scientists and software developers. They want to know which methods are worth building software for and may also study the algorithms.

      1.3 Types of Data and Analyses

      1.3.1 Supervised and Unsupervised Analyses

      In any multiblock data analysis, we first have to choose between the two main paradigms: unsupervised and supervised analysis. Unsupervised analysis refers to exploratory analysis looking for structure and connections in the data, either in a single data block or across data blocks, typically using dimension reduction (maximisation or minimisation of some criterion combined with orthogonalisation) or clustering techniques. It is crucial that the roles of the blocks are exchangeable: we can change the order of the blocks without changing the solution.
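
      As a small, hedged illustration of the unsupervised paradigm, the sketch below performs PCA on the concatenated, block-scaled data (a SUM-PCA-like analysis). It is not taken from the text: numpy is assumed to be available, the random blocks merely stand in for real measurements, and the scaling choice is one option among several.

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 50))   # block 1: 20 samples, 50 variables
X2 = rng.normal(size=(20, 8))    # block 2: same samples, 8 variables

def block_scale(X):
    """Centre a block and scale it to unit total sum of squares,
    so that a large block does not dominate the joint analysis."""
    Xc = X - X.mean(axis=0)
    return Xc / np.linalg.norm(Xc)

# Concatenate the scaled blocks; the block roles are exchangeable:
# swapping X1 and X2 only permutes columns and leaves the scores unchanged.
X = np.hstack([block_scale(X1), block_scale(X2)])

# Dimension reduction by SVD: common scores T and loadings P.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
T = U[:, :2] * s[:2]   # sample scores shared by both blocks
P = Vt[:2].T           # variable loadings, rows ordered as [block 1; block 2]
```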

      Supervised analysis refers to predictive data analysis, where the emphasis is on a single block of data, Y (the dependent block or response), which is connected to one or more blocks of data, Xm (the independent blocks or predictors), through regression or classification. The roles of the blocks now matter: some blocks are regarded as dependent and some as independent.

      Figure 1.7 L-shape data of consumer liking studies.

      1.3.2 High-, Mid- and Low-level Fusion

      Figure 1.1 High-level, mid-level, and low-level fusion for two input blocks. The Z's represent the combined information from the two blocks that is used for making the predictions. The upper part of the figure represents high-level fusion, where the results from two separate analyses are combined. The middle part illustrates mid-level fusion, where components extracted from the two data blocks are combined before further analysis. The lower part illustrates low-level fusion, where the data blocks are simply concatenated into one data block before further analysis takes place.
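
      To make the distinction concrete, here is a hedged sketch of the two lower fusion levels (high-level fusion is illustrated in Elaboration 1.2 below). numpy and scikit-learn are assumed; the random data and the choice of PCA and linear regression are our illustrative choices, not prescribed by the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X1 = rng.normal(size=(30, 40))   # predictor block 1
X2 = rng.normal(size=(30, 10))   # predictor block 2
y = rng.normal(size=30)          # response

# Low-level fusion: Z is simply the concatenation of the raw blocks.
Z_low = np.hstack([X1, X2])
low_model = LinearRegression().fit(Z_low, y)

# Mid-level fusion: extract a few components per block first,
# then concatenate the component scores into Z.
T1 = PCA(n_components=3).fit_transform(X1)
T2 = PCA(n_components=3).fit_transform(X2)
Z_mid = np.hstack([T1, T2])
mid_model = LinearRegression().fit(Z_mid, y)
```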

      ELABORATION 1.2

      High-Level Supervised Fusion

      A possible drawback of this strategy compared to low-level and feature-level (mid-level) fusion is that it does not provide further insight into how the different measurements relate to each other or how they can best be combined in predicting the outcome. On the other hand, high-level fusion of prediction results for new samples does not generally require the individual predictors to be developed from the same samples. In other words, when two (or more) predictors are to be combined for a new sample, they do not need to come from the same data source: one can simply plug in the new data and obtain predictions that can be combined as described below. In this sense it is more flexible (Ballabio et al., 2019) than low- and feature-level fusion. Doeswijk et al. (2011) showed that fusing classifiers most often gives similar or improved prediction results compared to using only one of them. An overview of the use of high-level fusion (and other methods) can be found in Borràs et al. (2015).
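
      A minimal sketch of this 'plug in and combine' idea, assuming scikit-learn and numpy (the logistic-regression classifiers and the random data are our illustrative choices): each predictor is developed from its own block, and fusion happens only at the prediction stage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X1 = rng.normal(size=(40, 20))    # block 1 (e.g., one instrument)
X2 = rng.normal(size=(40, 6))     # block 2 (e.g., another instrument)
y = rng.integers(0, 2, size=40)   # class labels

clf1 = LogisticRegression().fit(X1, y)   # developed from block 1 only
clf2 = LogisticRegression().fit(X2, y)   # developed from block 2 only

# For a new sample, plug its two blocks into the two predictors and
# collect the individual class predictions ("votes") for fusion.
x1_new = rng.normal(size=(1, 20))
x2_new = rng.normal(size=(1, 6))
votes = [clf1.predict(x1_new)[0], clf2.predict(x2_new)[0]]
```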

      A simple way of combining classifiers is to use voting, based on counting the number of times the classifiers agree. Different types of voting schemes have been proposed in the literature. One of them is simple democratic majority voting, which means that the group/class that gets the highest number of votes is chosen; in the case of ties, the result is inconclusive. An alternative strategy is 75% voting, which means that at least 75% of the votes should be for the same class before a decision can be made.
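
      The two schemes are easy to state in code. The sketch below uses only the Python standard library; the function names are ours.

```python
from collections import Counter

def majority_vote(votes):
    """Democratic majority voting; ties are inconclusive (None)."""
    (top, n_top), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == n_top:   # another class has as many votes
        return None
    return top

def threshold_vote(votes, threshold=0.75):
    """Require at least `threshold` of the votes (e.g., 75%) to fall
    on one class before a decision is made; otherwise inconclusive."""
    top, n_top = Counter(votes).most_common(1)[0]
    return top if n_top / len(votes) >= threshold else None

print(majority_vote(["A", "A", "B"]))        # A
print(majority_vote(["A", "B"]))             # None (tie)
print(threshold_vote(["A", "A", "A", "B"]))  # A (75% reached)
```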

      Fusing quantitative predictors is most easily done using averages or weighted averages, with weights depending on the prediction errors of the individual predictors as determined by, for instance, cross-validation. This strategy has similarities with ensemble methods such as bagging and boosting (see, e.g., Freund (1995)). In machine learning, high-level supervised fusion is found in the sub-domain 'ensemble learning'.
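
      As a hedged sketch of such weighted averaging (numpy and scikit-learn assumed), the weights below are taken inversely proportional to each predictor's cross-validated mean squared error; inverse-MSE weighting is one reasonable choice, not the only one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X1 = rng.normal(size=(50, 15))   # predictor block 1
X2 = rng.normal(size=(50, 5))    # predictor block 2
y = rng.normal(size=50)          # quantitative response

models, blocks = [LinearRegression(), LinearRegression()], [X1, X2]

# Cross-validated MSE per predictor (scikit-learn reports negated MSE).
mse = np.array([
    -cross_val_score(m, X, y, cv=5,
                     scoring="neg_mean_squared_error").mean()
    for m, X in zip(models, blocks)
])

# Better predictors (lower MSE) get larger weights; weights sum to one.
weights = (1.0 / mse) / (1.0 / mse).sum()

# Fit on all data and fuse the predictions as a weighted average.
preds = np.column_stack([m.fit(X, y).predict(X)
                         for m, X in zip(models, blocks)])
y_fused = preds @ weights
```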

      1.3.3 Dimension Reduction

