MATRIX AND ITS APPLICATION IN SCIENCE

The development of determinants, which arose from the study of the coefficients of systems of linear equations, led to the introduction and development of the concept of a matrix and the subject of linear algebra. In 1693, Leibniz, one of the founders of calculus, used determinants, and in 1750, Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer’s rule).
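Cramer’s rule expresses each unknown as a ratio of two determinants: x_i = det(A_i)/det(A), where A_i is the coefficient matrix with column i replaced by the right-hand side. A minimal sketch in Python, assuming NumPy and an illustrative 2x2 system:

import numpy as np

def cramer_solve(A, b):
    # Solve Ax = b by Cramer's rule; practical only for small systems.
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's rule requires a nonsingular matrix")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                      # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                  # [0.8  1.4]

For more than a handful of unknowns, Gaussian elimination (discussed below) is far cheaper than computing n + 1 determinants.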
In the late 1700s, Lagrange’s work on bilinear forms made the first implicit use of matrices. Lagrange’s goal was to characterize the maxima and minima of multivariable functions; his method is now known as the method of Lagrange multipliers. To accomplish this, he first required that the first-order partial derivatives be zero and that a condition on the matrix of second-order partial derivatives hold; this condition is now known as positive or negative definiteness. Although Lagrange did not explicitly use matrices, that is essentially what his condition involves.
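In modern terms, Lagrange’s second-order condition is a condition on the eigenvalues of the matrix of second partial derivatives (the Hessian) at a critical point. A minimal sketch, assuming NumPy and the illustrative function f(x, y) = x^2 + 3y^2, whose first-order partials vanish at the origin:

import numpy as np

# Hessian of f(x, y) = x**2 + 3*y**2 (constant for this quadratic).
H = np.array([[2.0, 0.0],
              [0.0, 6.0]])

eigenvalues = np.linalg.eigvalsh(H)   # real eigenvalues of a symmetric matrix
if np.all(eigenvalues > 0):
    print("positive definite: the critical point is a minimum")
elif np.all(eigenvalues < 0):
    print("negative definite: the critical point is a maximum")
else:
    print("indefinite or semidefinite: the test is inconclusive")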
Gauss developed elimination around 1800 and used it to solve least squares problems in celestial computations, and later in computations to measure the earth and its surface (geodesy is the branch of applied mathematics concerned with measuring or determining the shape of the earth and with precisely locating points on its surface). Although Gauss’s name is attached to this technique for eliminating variables from a system of linear equations, there was earlier work on the subject.
Chinese manuscripts dating back several centuries have been discovered that explain how to solve a system of three equations in three unknowns by “Gaussian” elimination. For many years, Gaussian elimination was considered part of the development of geodesy rather than mathematics. The first printed appearance of Gauss-Jordan elimination was in a handbook on geodesy by Wilhelm Jordan. Many people mistakenly believe that the Jordan in “Gauss-Jordan elimination” is Camille Jordan, the famous mathematician.
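The procedure itself reduces the coefficient matrix to triangular form and then back-substitutes. A minimal sketch, assuming NumPy, an illustrative three-equation system, and no pivoting (a practical implementation would use partial pivoting for numerical stability):

import numpy as np

def gaussian_elimination(A, b):
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 2.0]])
b = np.array([4.0, 5.0, 6.0])
print(gaussian_elimination(A, b))          # agrees with np.linalg.solve(A, b)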
For matrix algebra to flourish, proper notation and a definition of matrix multiplication were required. Both requirements were met at roughly the same time and in the same place: in 1848, in England, J. J. Sylvester first used the term “matrix”, derived from the Latin word for “womb”, as a name for an array of numbers.
Arthur Cayley’s work of 1855 sowed the seeds of matrix algebra. Cayley defined matrix multiplication so that the matrix of coefficients for the composite transformation ST is the product of the matrix for S and the matrix for T. He went on to investigate the algebra of these compositions, including matrix inverses. In his 1858 memoir on the theory of matrices, Cayley stated the famous Cayley-Hamilton theorem: a square matrix is a root of its characteristic polynomial. The use of a single letter, A, to represent a matrix was critical in the evolution of matrix algebra. The formula det(AB) = det(A)det(B) provided an early connection between matrix algebra and determinants. “There would be many things to say about this theory of matrices, which seems to me to come before the theory of determinants,” Cayley wrote.
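Both results are easy to check numerically. A minimal sketch, assuming NumPy and arbitrary illustrative 2x2 matrices; for a 2x2 matrix the characteristic polynomial is p(t) = t^2 - trace(A)t + det(A):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Cayley-Hamilton: p(A) = A^2 - trace(A)*A + det(A)*I is the zero matrix.
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, 0))                              # True

# det(AB) = det(A)det(B)
B = np.array([[0.0, 1.0], [5.0, 2.0]])
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))     # True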
Mathematicians also attempted to develop an algebra of vectors; however, there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra involving a non-commutative vector product (that is, v × w need not equal w × v) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann’s text also introduced the product of a column matrix and a row matrix, which results in what is now called a simple, or rank-one, matrix. In the late nineteenth century, the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis, in which he represented general matrices, which he called dyadics, as sums of simple matrices, which he called dyads. Later, the physicist P. A. M. Dirac introduced the term “bra-ket” for what is now known as the scalar product of a “bra” (row) vector times a “ket” (column) vector, and “ket-bra” for the product of a ket times a bra, which yields a simple matrix, as above. Our convention of identifying column matrices with vectors was introduced by physicists in the twentieth century.
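The bra-ket and ket-bra products are easy to demonstrate. A minimal sketch, assuming NumPy and illustrative vectors:

import numpy as np

ket = np.array([[1.0], [2.0], [3.0]])    # 3x1 column matrix (ket)
bra = np.array([[4.0, 5.0, 6.0]])        # 1x3 row matrix (bra)

bra_ket = bra @ ket                      # 1x1 scalar product: [[32.]]
ket_bra = ket @ bra                      # 3x3 simple matrix

print(bra_ket)
print(np.linalg.matrix_rank(ket_bra))    # 1, i.e. a rank-one matrix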
Matrices remained closely linked to linear transformations, and by 1900 they were just a finite-dimensional case of the developing theory of linear transformations. Peano proposed the modern definition of a vector space in 1888, and abstract vector spaces whose elements were functions soon appeared. Renewed interest in matrices, particularly in their numerical analysis, came with the work of John von Neumann and Alan Turing, giants in the twentieth-century development of stored-program computers. In 1947, von Neumann and Herman Goldstine introduced condition numbers for analyzing round-off errors. In 1948, Turing introduced the LU decomposition of a matrix, in which L is a lower triangular matrix with 1s on its diagonal and U is an echelon matrix. The LU decomposition is commonly used to solve a sequence of n systems of linear equations, each with the same coefficient matrix. A decade later, the usefulness of the QR decomposition was recognized, in which Q is a matrix whose columns are orthonormal vectors and R is a square, upper triangular, invertible matrix with positive entries on its diagonal.
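The practical point of LU is that the factorization is computed once and then reused for every new right-hand side. A minimal sketch, assuming SciPy is available and using an illustrative coefficient matrix (SciPy’s lu_factor uses partial pivoting, so it computes PA = LU):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)                   # one O(n^3) factorization

# Each new right-hand side now costs only two triangular solves.
for b in (np.array([10.0, 12.0]), np.array([1.0, 0.0])):
    x = lu_solve((lu, piv), b)
    print(x, np.allclose(A @ x, b))      # solution and a residual check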
QR factorization is employed in algorithms for computer computations such as solving equations and determining eigenvalues.
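A minimal sketch of both uses, assuming NumPy and an illustrative symmetric matrix: a linear system is solved via A = QR (so Rx = Q^T b is triangular), and the unshifted QR iteration approximates the eigenvalues:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

# (1) Solve Ax = b with QR.
Q, R = np.linalg.qr(A)             # Q: orthonormal columns, R: upper triangular
x = np.linalg.solve(R, Q.T @ b)    # in practice, back substitution on R
print(np.allclose(A @ x, b))       # True

# (2) QR iteration: T <- RQ; for this symmetric matrix the iterates
# converge to a diagonal matrix whose entries are the eigenvalues.
T = A.copy()
for _ in range(50):
    Qk, Rk = np.linalg.qr(T)
    T = Rk @ Qk
print(np.sort(np.diag(T)))         # approx [1.382, 3.618]

Practical eigenvalue routines add shifts and deflation to speed up this basic iteration.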
Unaware of Cayley’s work, Frobenius published an important paper on linear substitutions and bilinear forms in 1878, in which he proved significant results on canonical matrices as representatives of equivalence classes of matrices. He cites Kronecker and Weierstrass, who had considered special cases of his results in 1874 and 1868, respectively.
Frobenius also demonstrated that a matrix satisfies its characteristic equation. His 1878 paper also contains the definition of the rank of a matrix, which he used in his work on canonical forms, as well as the definition of orthogonal matrices.

Weierstrass used an axiomatic definition of a determinant in his lectures, and, after his death, it was published in 1903 in the note On Determinant Theory. Kronecker’s lectures on determinants were also published that year, after his death. With these two publications, the modern theory of determinants was in place, but matrix theory took slightly longer to become a fully accepted theory. Bôcher’s Introduction to Higher Algebra (1907) was an important early text that placed matrices in their proper position within mathematics. Turnbull and Aitken wrote influential texts in the 1930s, and Mirsky’s An Introduction to Linear Algebra (1955) carried matrix theory to its present prominence as one of the most important undergraduate mathematics topics.

 
