
Introduction to Graph Neural Networks - Zhiyuan Liu


      The matrix product of two matrices A ∈ ℝ^{m×n} and B ∈ ℝ^{n×p} can be denoted as C ∈ ℝ^{m×p}, where

C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}

      It can be proved that the matrix product is associative but not necessarily commutative. In mathematical language,

(AB)C = A(BC)

      holds for arbitrary matrices A, B, and C (provided that the products are well defined). Yet

AB = BA

      is not always true.
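
      The following NumPy sketch is a minimal illustration of these facts; the matrix shapes and the random seed are arbitrary choices for the example, not anything fixed by the text.

```python
import numpy as np

# Minimal illustration of the matrix product, associativity, and the lack of
# commutativity; shapes and seed are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # A in R^{2x3}
B = rng.standard_normal((3, 4))   # B in R^{3x4}
D = rng.standard_normal((4, 5))   # D in R^{4x5}

C = A @ B                         # C_ij = sum_k A_ik * B_kj, so C in R^{2x4}
print(C.shape)                    # (2, 4)

# Associativity: (AB)D == A(BD), up to floating-point rounding.
print(np.allclose((A @ B) @ D, A @ (B @ D)))   # True

# Commutativity generally fails, even for square matrices.
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
print(np.allclose(X @ Y, Y @ X))               # False (almost surely)
```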

      For each n × n square matrix A, its determinant (also denoted as |A|) is defined as

\det(A) = \sum_{k_1 k_2 \cdots k_n} (-1)^{\tau(k_1 k_2 \cdots k_n)} a_{1 k_1} a_{2 k_2} \cdots a_{n k_n}

      where the sum is taken over all permutations k_1 k_2 ⋯ k_n of 1, 2, …, n, and τ(k_1 k_2 ⋯ k_n) is the inversion number of the permutation k_1 k_2 ⋯ k_n, i.e., the number of inverted pairs in the permutation.
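
      As a sanity check of this definition, the short Python sketch below evaluates the permutation sum directly and compares it with NumPy's np.linalg.det; the 3 × 3 matrix is an arbitrary example.

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Determinant via the permutation formula: a sum over all permutations,
    each term signed by (-1) raised to the inversion number of the permutation."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # tau = inversion number of the permutation k_1 k_2 ... k_n
        tau = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = (-1.0) ** tau
        for i in range(n):
            term *= A[i, perm[i]]
        total += term
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(det_by_permutations(A), np.linalg.det(A))  # both are approximately 22.0
```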

      If matrix A is a square matrix, which means that m = n, the inverse matrix of A (denoted as A^{-1}) satisfies that

A A^{-1} = A^{-1} A = I

      where I is the n × n identity matrix. A^{-1} exists if and only if |A| ≠ 0.
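
      A small NumPy check of these statements (the 2 × 2 matrix is an arbitrary example with nonzero determinant):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.det(A))                     # approximately 5.0 (nonzero), so A^{-1} exists
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))    # True: A A^{-1} = I
print(np.allclose(A_inv @ A, np.eye(2)))    # True: A^{-1} A = I
```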

      The transpose of matrix A is represented as A^T, where

(A^T)_{ij} = A_{ji}

      There is another frequently used product between matrices called the Hadamard product. The Hadamard product of two matrices A ∈ ℝ^{m×n} and B ∈ ℝ^{m×n} is a matrix C ∈ ℝ^{m×n}, where

C_{ij} = A_{ij} B_{ij}
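
      The NumPy sketch below contrasts the transpose, the Hadamard (element-wise) product, and the ordinary matrix product; the integer matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

print(A.T)       # transpose: (A^T)_ij = A_ji, shape (3, 2)
print(A * B)     # Hadamard product: element-wise, requires equal shapes, result (2, 3)
print(A @ B.T)   # ordinary matrix product: inner dimensions must match, result (2, 2)
```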

      • Tensor: An array with arbitrary dimension. Most matrix operations can also be applied to tensors.
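
      For instance, in NumPy a tensor is simply an n-dimensional array, and element-wise operations and axis permutations carry over from the matrix case; the shape below is an arbitrary example.

```python
import numpy as np

T = np.arange(24, dtype=float).reshape(2, 3, 4)   # a third-order tensor
print(T.shape)                            # (2, 3, 4)
print((T * T).shape)                      # element-wise product, same shape
print(np.transpose(T, (2, 0, 1)).shape)   # generalized transpose: permute axes to (4, 2, 3)
```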

      Let A be a matrix in ℝ^{n×n}. A nonzero vector v ∈ ℂ^n is called an eigenvector of A if there exists a scalar λ ∈ ℂ such that

Av = \lambda v

      Here the scalar λ is an eigenvalue of A corresponding to the eigenvector v. If matrix A has n linearly independent eigenvectors {v_1, v_2, …, v_n} with corresponding eigenvalues {λ_1, λ_2, …, λ_n}, then it can be deduced that

A [v_1 \; v_2 \; \cdots \; v_n] = [v_1 \; v_2 \; \cdots \; v_n] \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)

      Let V = [v_1 v_2 ⋯ v_n]; then it is clear that V is an invertible matrix. We have the eigendecomposition of A (also called diagonalization)

A = V \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) V^{-1}

      It can also be written in the following form:

A = [v_1 \; v_2 \; \cdots \; v_n] \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} [v_1 \; v_2 \; \cdots \; v_n]^{-1}

      However, not all square matrices can be diagonalized in this form, because a matrix may not have n linearly independent eigenvectors. Fortunately, it can be proved that every real symmetric matrix has an eigendecomposition.
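
      The following NumPy sketch checks the eigendecomposition on a real symmetric matrix (an arbitrary 2 × 2 example), which is guaranteed to be diagonalizable:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # real symmetric, hence diagonalizable
lam, V = np.linalg.eig(A)                     # columns of V are eigenvectors v_i
print(np.allclose(A @ V, V @ np.diag(lam)))   # True: A V = V diag(lambda_1, ..., lambda_n)
A_rec = V @ np.diag(lam) @ np.linalg.inv(V)   # A = V diag(lambda) V^{-1}
print(np.allclose(A, A_rec))                  # True
```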

      As eigendecomposition can only be applied to certain matrices, we introduce the singular value decomposition, which is a generalization that applies to all matrices.

      First we need to introduce the concept of singular value. Let r denote the rank of A^T A; then there exist r positive scalars σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0 such that for 1 ≤ i ≤ r, v_i is an eigenvector of A^T A with corresponding eigenvalue σ_i^2. Note that v_1, v_2, …, v_r are linearly independent. The r positive scalars σ_1, σ_2, …, σ_r are called the singular values of A. Then we have the singular value decomposition

A = U \Sigma V^T

      where U ∈ ℝ^{m×m} and V ∈ ℝ^{n×n} are orthogonal matrices and Σ is an m × n matrix defined as follows:

\Sigma = \begin{bmatrix} \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}

      In fact, the column vectors of U are eigenvectors of AA^T, and the column vectors of V are eigenvectors of A^T A.
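
      A NumPy sketch of the decomposition and of this relation between the singular values and the eigenvalues of A^T A (the 4 × 3 random matrix and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# Full SVD: A = U Sigma V^T with U (4 x 4) and V (3 x 3) orthogonal.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))          # True

# The squared singular values are the eigenvalues of A^T A (and of A A^T).
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]    # eigenvalues, sorted descending
print(np.allclose(np.sqrt(eigvals), s))        # True
```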

      Uncertainty is ubiquitous in the field of machine learning; thus, we need probability theory to quantify and manipulate it. In this section, we review some basic concepts and classic distributions in probability theory that are essential for understanding the rest of the book.

      In probability theory, a random variable is a variable that takes its value at random. For instance, if we denote a random variable by X, which has two possible values x_1 and x_2, then the probability of X being equal to x_1 is P(X = x_1). Clearly, the following equation holds:

P(X = x_1) + P(X = x_2) = 1

      Suppose there is another random variable Y that has y_1 as a possible value. The probability that X = x_1 and Y = y_1 is written as P(X = x_1, Y = y_1), which is called the joint probability of X = x_1 and Y = y_1.

      Sometimes we need to know the relationship between random variables, such as the probability of X = x_1 on the condition that Y = y_1, which can be written as P(X = x_1 | Y = y_1). We call this the conditional probability of X = x_1 given Y = y_1. With the concepts above, we can write the following two fundamental rules of probability theory:

P(X = x) = \sum_{y} P(X = x, Y = y)

P(X = x, Y = y) = P(Y = y \mid X = x) P(X = x)

      The former is the sum rule, while the latter is the product rule.
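
      The sketch below checks both rules on a small, made-up joint distribution table for two binary random variables X and Y (the numbers are arbitrary and only need to sum to one):

```python
import numpy as np

# Joint distribution P(X, Y): rows index the values of X, columns the values of Y.
P_xy = np.array([[0.10, 0.30],
                 [0.25, 0.35]])
print(P_xy.sum())                                       # 1.0

# Sum rule (marginalization): P(X = x) = sum_y P(X = x, Y = y)
P_x = P_xy.sum(axis=1)
print(P_x)                                              # [0.4 0.6]

# Product rule: P(X = x, Y = y) = P(Y = y | X = x) * P(X = x)
P_y_given_x = P_xy / P_x[:, None]                       # conditional distribution
print(np.allclose(P_y_given_x * P_x[:, None], P_xy))    # True
```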

