Mathematics for Enzyme Reaction Kinetics and Reactor Performance. F. Xavier Malcata
with a scalar quantity being now at stake.
4 Matrix Operations
Matrix is a nuclear concept in linear algebra; arrays of (real) numbers have a long history associated with the solution of linear equations – and records indicate that the Italian mathematician Girolamo Cardano first brought a related method from China to Europe in 1545, using his book Ars Magna as vehicle. The first explicit mention of a matrix appeared in 1851, at the hands of James J. Sylvester, an English mathematician – although in the context of determinants. Since he was interested in the determinant formed from a rectangular array of numbers, and not in the array itself, he coined the word matrix from the Latin mater, meaning womb (i.e. the place from which something else originates); it fell to his collaborator Arthur Cayley to ascribe the modern sense to the concept of matrix.
Being an array of numbers arranged as m rows × n columns, and enclosed by a pair of square brackets, [ai,j] with i = 1, 2, …, m and j = 1, 2, …, n, a real matrix actually originates from ℝm×n. It is termed rectangular when m ≠ n, and square when m = n; it reduces to a row vector when m = 1, and to a column vector when n = 1. The main diagonal is formed by elements of the type ai,i; if all entries below the main diagonal are zero, the matrix is said to be upper triangular – and lower triangular when all entries above the main diagonal are nil. A diagonal matrix is both upper and lower triangular, i.e. all elements off the main diagonal are zero; if all elements in the diagonal are, in turn, equal to each other, then a scalar matrix arises. The most important scalar matrices are the square (m × m) identity matrices, containing only 1's in the main diagonal, and denoted as Im. A nil matrix is formed only by zeros, and is usually denoted as 0m×n.
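The taxonomy above can be illustrated with a minimal Python sketch over plain nested lists; the helper names are ours, chosen for illustration, and are not part of the book's notation:

```python
# Illustrative classifiers for square matrices, following the definitions above.

def is_upper_triangular(A):
    """True when all entries below the main diagonal are zero."""
    return all(A[i][j] == 0
               for i in range(len(A)) for j in range(len(A[0])) if i > j)

def is_lower_triangular(A):
    """True when all entries above the main diagonal are zero."""
    return all(A[i][j] == 0
               for i in range(len(A)) for j in range(len(A[0])) if i < j)

def is_diagonal(A):
    # A diagonal matrix is both upper and lower triangular.
    return is_upper_triangular(A) and is_lower_triangular(A)

def is_scalar(A):
    # A scalar matrix is diagonal, with all diagonal entries equal to each other.
    return is_diagonal(A) and len({A[i][i] for i in range(len(A))}) == 1

def identity(m):
    """The (m x m) identity matrix I_m: 1's in the main diagonal, 0's elsewhere."""
    return [[1 if i == j else 0 for j in range(m)] for i in range(m)]

assert is_diagonal(identity(3)) and is_scalar(identity(3))
assert is_upper_triangular([[1, 2], [0, 3]])
assert not is_lower_triangular([[1, 2], [0, 3]])
```

Note that the identity matrix passes every test, since it is simultaneously upper triangular, lower triangular, diagonal, and scalar.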
When elements symmetrically placed relative to the main diagonal are the same, the matrix is termed symmetric; all diagonal matrices are obviously symmetric. The sum of the elements in the main diagonal of matrix A is denoted as its trace, abbreviated to tr A. When the rows and columns of matrix A, with generic element ai,j, are exchanged – with elements retaining their relative location within each row and each column – its transpose AT results; it is accordingly described by [aj,i]. Finally, the requirement for equality of two matrices is their sharing the same type (i.e. identical number of rows and identical number of columns) and the same spread (i.e. identical numbers in homologous positions).
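These definitions translate directly into a short sketch, again using plain nested lists and illustrative helper names of our own:

```python
def transpose(A):
    """Exchange rows and columns: element (j, i) of the result is a_{i,j}."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def trace(A):
    """Sum of the elements in the main diagonal of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

def is_symmetric(A):
    # Elements symmetrically placed about the diagonal coincide, i.e. A equals A^T.
    return A == transpose(A)

S = [[1, 7], [7, 2]]
assert is_symmetric(S)
assert trace(S) == 3
assert transpose([[1, 2, 3], [4, 5, 6]]) == [[1, 4], [2, 5], [3, 6]]
```

The last assertion also shows that transposition turns a (2 × 3) matrix into a (3 × 2) one, consistent with the [aj,i] description.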
4.1 Addition of Matrices
Consider a generic (m × n) matrix A, defined as
$$\mathbf{A}\equiv\begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}\tag{4.1}$$
– or via its generic element (and thus in a more condensed form)
$$\mathbf{A}\equiv[a_{i,j}]\tag{4.2}$$
where subscript i refers to the ith row and subscript j to the jth column; if another matrix, B, also of type (m × n), is defined as
$$\mathbf{B}\equiv[b_{i,j}]\tag{4.3}$$
then A and B can be added according to the algorithm
$$\mathbf{A}+\mathbf{B}\equiv[a_{i,j}]+[b_{i,j}]\equiv[a_{i,j}+b_{i,j}]\tag{4.4}$$
– so the sum will again be a matrix of (m × n) type.
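The entry-by-entry algorithm of the addition rule can be sketched in Python as follows, with nested lists standing in for matrices (the function name is ours):

```python
def mat_add(A, B):
    """Entry-by-entry sum of two (m x n) matrices, per the algorithm above."""
    # Both matrices must share the same type (same number of rows and columns).
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must share the same type"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]
B = [[10, 20, 30], [40, 50, 60]]
assert mat_add(A, B) == [[11, 22, 33], [44, 55, 66]]
```

As the assertion illustrates, the sum of two (2 × 3) matrices is again a (2 × 3) matrix.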
Addition of matrices is commutative; in fact,
$$\mathbf{A}+\mathbf{B}=[a_{i,j}]+[b_{i,j}]=[a_{i,j}+b_{i,j}]\tag{4.5}$$
may be handled as
$$\mathbf{A}+\mathbf{B}=[b_{i,j}+a_{i,j}]\tag{4.6}$$
in view of Eq. (4.4) – where the commutative property of addition of scalars was taken advantage of; after using Eq. (4.5) backward, one gets
$$\mathbf{A}+\mathbf{B}=[b_{i,j}]+[a_{i,j}]\tag{4.7}$$
from Eq. (4.6), and finally
$$\mathbf{A}+\mathbf{B}=\mathbf{B}+\mathbf{A}\tag{4.8}$$
as per Eqs. (4.2) and (4.3) – thus confirming the initial statement.
If a third matrix C is defined as
$$\mathbf{C}\equiv[c_{i,j}]\tag{4.9}$$
then one can write
$$\mathbf{A}+(\mathbf{B}+\mathbf{C})=[a_{i,j}]+\left([b_{i,j}]+[c_{i,j}]\right)\tag{4.10}$$
together with Eqs. (4.2), (4.3), and (4.9); based on Eq. (4.4), one has that
$$\mathbf{A}+(\mathbf{B}+\mathbf{C})=[a_{i,j}]+[b_{i,j}+c_{i,j}]\tag{4.11}$$
and a further utilization of Eq. (4.4) leads to
$$\mathbf{A}+(\mathbf{B}+\mathbf{C})=[a_{i,j}+(b_{i,j}+c_{i,j})]=[(a_{i,j}+b_{i,j})+c_{i,j}]\tag{4.12}$$
– along with the associative property borne by addition of scalars. One may repeat the above reasoning by first associating A and B, viz.
$$(\mathbf{A}+\mathbf{B})+\mathbf{C}=\left([a_{i,j}]+[b_{i,j}]\right)+[c_{i,j}]\tag{4.13}$$
at the expense of Eqs. (4.2), (4.3), and (4.9), with Eq. (4.4) allowing transformation to
$$(\mathbf{A}+\mathbf{B})+\mathbf{C}=[a_{i,j}+b_{i,j}]+[c_{i,j}]=[(a_{i,j}+b_{i,j})+c_{i,j}]\tag{4.14}$$
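Both properties established in this section can be spot-checked numerically; the sketch below redefines an entry-by-entry addition helper (an illustrative name of our own) and verifies commutativity and associativity on small (2 × 2) examples:

```python
def mat_add(A, B):
    """Entry-by-entry matrix sum, as per the addition algorithm."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[9, 10], [11, 12]]

# Commutativity: A + B coincides with B + A.
assert mat_add(A, B) == mat_add(B, A)
# Associativity: (A + B) + C coincides with A + (B + C).
assert mat_add(mat_add(A, B), C) == mat_add(A, mat_add(B, C))
```

A numerical check on one example does not prove either property, of course; the proofs rest on the derivations above, which reduce both statements to the corresponding properties of scalar addition.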