(4.2)–(4.4); if more than two matrices are at stake, this very same rule can be iteratively applied.
In the case of a product of matrices, one should write

$$\left( AB \right)^{T}=\left( \left[ a_{i,k} \right]\left[ b_{k,l} \right] \right)^{T}=\left[ \sum_{k=1}^{n} a_{i,k}\,b_{k,l} \right]^{T} \tag{4.115}$$

based on Eqs. (4.2), (4.46), and (4.47); upon application of Eq. (4.105), one gets

$$\left( AB \right)^{T}=\left[ \sum_{k=1}^{n} a_{l,k}\,b_{k,i} \right] \tag{4.116}$$

– so the row index (i.e. i) and the column index (i.e. l) were exchanged; rearranging the factors within each summand then gives

$$\left( AB \right)^{T}=\left[ \sum_{k=1}^{n} b_{k,i}\,a_{l,k} \right] \tag{4.117}$$

– where commutativity of the multiplication of scalars was meanwhile taken advantage of. A further utilization of Eq. (4.105) converts Eq. (4.117) to

$$\left( AB \right)^{T}=\left[ \sum_{k=1}^{n} b_{i,k}^{T}\,a_{k,l}^{T} \right] \tag{4.118}$$

– with $b_{i,k}^{T}$ and $a_{k,l}^{T}$ denoting the generic elements of $B^{T}$ and $A^{T}$, respectively; the definition of multiplication as per Eq. (4.47) may accordingly be applied backward to write

$$\left( AB \right)^{T}=\left[ b_{i,k}^{T} \right]\left[ a_{k,l}^{T} \right] \tag{4.119}$$

a shorter version of Eq. (4.119) reads

$$\left( AB \right)^{T}=B^{T}A^{T} \tag{4.120}$$

stemming from Eqs. (4.2) and (4.46). Hence, the transpose of a product of matrices equals the product of the transposes of the factor matrices, effected in reverse order. As expected, this property can likewise be applied to any number of factor matrices.
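For readers who wish to check Eq. (4.120) numerically, a minimal Python/NumPy sketch follows; the matrix sizes and random entries are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, so the check is reproducible
A = rng.random((2, 3))           # (2 x 3) matrix
B = rng.random((3, 4))           # (3 x 4) matrix
C = rng.random((4, 2))           # (4 x 2) matrix

# Eq. (4.120): the transpose of a product equals the product of the
# transposes of the factor matrices, effected in reverse order
assert np.allclose((A @ B).T, B.T @ A.T)

# iterated application to three factors: ((AB)C)^T = C^T (AB)^T = C^T B^T A^T
assert np.allclose((A @ B @ C).T, C.T @ B.T @ A.T)
```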
If matrix A is a scalar matrix, say $\alpha I_{n}$, then Eq. (4.120) still applies, viz.

$$\left( \alpha I_{n} B \right)^{T}=B^{T}\left( \alpha I_{n} \right)^{T} \tag{4.121}$$

since there are no significant off‐diagonal elements, the matrix at stake is intrinsically symmetric as per Eq. (4.107) – so one may write

$$\left( \alpha B \right)^{T}=B^{T}\alpha I_{n}=B^{T}\alpha \tag{4.122}$$

along with Eq. (4.64). The order of placement of factors in the right‐hand side of Eq. (4.122) is, in turn, arbitrary as per Eq. (4.24) – so one ends up with

$$\left( \alpha B \right)^{T}=\alpha B^{T} \tag{4.123}$$

this is the conventional form of expressing the result of transposing the product of a scalar by a matrix, which degenerates to the product of the said scalar by the transpose of the said matrix.
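Again as a quick numerical check of Eq. (4.123) – a small Python sketch, with an arbitrarily chosen scalar and matrix:

```python
import numpy as np

alpha = 2.5                        # plain scalar
B = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])         # (3 x 2) matrix

# Eq. (4.123): transposing the product of a scalar by a matrix degenerates
# to the product of the said scalar by the transpose matrix
assert np.allclose((alpha * B).T, alpha * B.T)
```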
4.5 Inversion of Matrices
When addressing algebraic operations involving matrices, analogues as close as possible to the algebraic operations applying to plain scalars have been systematically sought so far; this was possible with addition of matrices, as well as subtraction of matrices, seen as addition to the symmetric as per Eq. (4.44) – where elements in corresponding positions of the two matrices undergo a one‐by‐one transformation. In the case of multiplication of matrices, the elements of each row of the first one are multiplied by the elements of each column of the second matrix, in corresponding positions; hence, a row‐by‐column product is at stake, more complex than the one-by-one approach in addition of matrices.
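The contrast between the one‐by‐one and the row‐by‐column rationales may be made concrete with a short Python sketch; the (2 × 2) matrices below are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# addition proceeds one-by-one: element (i,j) of A + B is a_ij + b_ij
S = A + B                          # [[6, 8], [10, 12]]

# multiplication proceeds row-by-column: element (i,l) of AB collects the
# products of the elements of row i of A by those of column l of B
P = A @ B                          # [[19, 22], [43, 50]]
assert P[0, 0] == A[0, 0] * B[0, 0] + A[0, 1] * B[1, 0]   # 1*5 + 2*7 = 19
```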
Division of matrices poses an insurmountable problem, though; first of all, the said hypothetical algebraic operation would be possible only when a square matrix played the role of divisor and its rank (i.e. the maximum number of linearly independent rows, or columns for that matter) further coincided with its order – unlike what happens with a plain scalar, where being different from zero suffices for it to serve as divisor. Unfortunately, the approach here must be of a matrix‐by‐matrix nature – so a new matrix, called the inverse, is to be first calculated and then multiplied by the dividend. Remember that a/b (with a and b denoting plain scalars) may also be seen as a × b⁻¹ – with the latter factor referring to the algebraic inverse, or reciprocal, 1/b; and that a × a⁻¹ is merely unity, so it is reasonable to expect that the product of a matrix by its inverse equals the matrix equivalent of unity, or identity matrix. The general strategy to calculate the inverse matrix will be developed below – with further algorithms to be discussed at a later stage, as well as the major features associated with matrix inversion.
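The expectation that the product of a matrix by its inverse retrieves the identity matrix can be illustrated numerically; the sketch below resorts to NumPy's general‐purpose inversion routine (np.linalg.inv), with an arbitrarily chosen full‐rank (2 × 2) matrix standing in for the divisor:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])         # square matrix, with rank equal to order

A_inv = np.linalg.inv(A)           # matrix analogue of the reciprocal 1/b

# the product of a matrix by its inverse equals the identity matrix,
# irrespective of the order of the two factors
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))

# "division" of a matrix C by A is then effected as C @ A_inv,
# i.e. multiplication of the dividend by the inverse of the divisor
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.allclose((C @ A_inv) @ A, C)
```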
4.5.1 Full Matrix
The inverse (n × n) matrix $A^{-1}$, of a given (n × n) matrix A, satisfies, by definition,

$$A^{-1}A=AA^{-1}=I_{n} \tag{4.124}$$

therefore, if $A^{-1}$ is described by

$$A^{-1}\equiv\left[ \bar{a}_{i,j} \right] \tag{4.125}$$

then one may insert Eqs. (4.125) and (4.1) with m = n to transform Eq. (4.124) to

$$\left[ a_{i,j} \right]\left[ \bar{a}_{i,j} \right]=I_{n} \tag{4.126}$$

– corresponding to $AA^{-1}$ being equal to $I_{n}$. Recalling the algorithm of multiplication of matrices as per Eq. (4.47), one finds that Eq.