For example, let's look at the augmented matrix of the above system. Performing Gauss-Jordan elimination gives us the reduced row echelon form, which tells us that z is a free variable, and hence the system has infinitely many solutions. At this point you might be asking, "Why all the fuss over homogeneous systems?"
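As a stand-in illustration (the coefficients here are chosen for this sketch, not taken from the original example), consider a homogeneous system whose augmented matrix reduces as follows:

\[
\left[\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 1 & 1 & 0 \end{array}\right]
\;\xrightarrow{\text{Gauss--Jordan}}\;
\left[\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
\]

The third column has no pivot, so z is free: every solution has the form \((x, y, z) = z\,(-2, 1, 1)\), and there are infinitely many solutions.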
One reason that homogeneous systems are useful and interesting is their relationship to non-homogeneous systems. It is often easier to work with the homogeneous system, find its solutions, and then generalize those solutions to the non-homogeneous case. Hence if we are given a matrix equation to solve and have already solved the homogeneous case, we need only find a single particular solution to the equation in order to determine the whole set of solutions.
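In symbols (a standard restatement, not quoted from the original packet): if \(x_p\) is any one solution of \(Ax = b\), then the full solution set is

\[
\{\, x_p + x_h \;:\; A x_h = \mathbf{0} \,\},
\]

since \(A(x_p + x_h) = Ax_p + Ax_h = b + \mathbf{0} = b\), and conversely any solution \(x\) of \(Ax = b\) satisfies \(A(x - x_p) = \mathbf{0}\).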
Homogeneous Linear Systems

Author: c o

Description: To introduce homogeneous linear systems and see how they relate to other parts of linear algebra.
Introduction and Preliminaries

Background: In this packet, we assume familiarity with solving linear systems, inverse matrices, and Gaussian elimination.

Definition: A linear equation is said to be homogeneous when its constant part is zero. A remarkable result of this section is that a linear combination of the basic solutions is again a solution to the system. Even more remarkable is that every solution can be written as a linear combination of these solutions.
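To see why linear combinations of solutions are again solutions, one can check it directly (a standard one-line verification, stated here for completeness): if \(Ax_1 = \mathbf{0}\) and \(Ax_2 = \mathbf{0}\), then for any scalars \(c_1, c_2\),

\[
A(c_1 x_1 + c_2 x_2) = c_1 A x_1 + c_2 A x_2 = c_1 \mathbf{0} + c_2 \mathbf{0} = \mathbf{0}.
\]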
Therefore, if we take a linear combination of the two solutions to Example [exa:basicsolutions], the result is also a solution. For example, we could take the following linear combination:
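With the two basic solutions written generically as \(X_1\) and \(X_2\) (generic labels; the specific vectors are those of Example [exa:basicsolutions]), one such combination is

\[
3X_1 + 2X_2,
\]

where the coefficients 3 and 2 are arbitrary placeholders; by the verification above, any choice of scalars yields another solution.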
Another way to learn more about the solutions of a homogeneous system is to consider the rank of the associated coefficient matrix.
We now define what is meant by the rank of a matrix: the rank is the number of leading entries (pivots) in a row-echelon form of the matrix. From our above discussion, we know that this system will have infinitely many solutions. If we consider the rank of the coefficient matrix of this system, we can find out even more about the solution.
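For instance (an illustrative matrix, not one from the original packet):

\[
A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \end{bmatrix}
\;\longrightarrow\;
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix},
\]

which has two leading entries, so \(\operatorname{rank}(A) = 2\).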
Note that we are looking at just the coefficient matrix, not the entire augmented matrix. The relevant theorem states that if the coefficient matrix of a homogeneous system in n variables has rank r < n, then the system has infinitely many solutions. Consider our above Example [exa:basicsolutions] in the context of this theorem: the rank is less than the number of variables, which tells us that the solution will contain at least one parameter. The rank of the coefficient matrix can tell us even more: the solution will contain exactly n − r parameters. You can check that this is true in the solution to Example [exa:basicsolutions]. The terminology introduced next applies only to square matrices. A square matrix is one in which the number of rows is equal to the number of columns.
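As a quick check against the stand-in system shown earlier (an illustration, with three variables and rank 2):

\[
n - r = 3 - 2 = 1,
\]

so the solution set has one free parameter, matching the single free variable z.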
Suppose a square matrix is the coefficient matrix of some homogeneous system. Since the matrix is square, the system has as many equations as variables. The matrix is said to be nonsingular if the system has a unique solution, and singular if the system has an infinite number of solutions. The terms "singular" and "nonsingular" apply only to square matrices. Note that, by the above theorems, a square matrix is singular if and only if it has at least one free variable when it is put into echelon form, which in turn is true if and only if an echelon form of the matrix has at least one row containing only zeros.
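Two small examples (matrices chosen here for illustration):

\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\quad\text{is nonsingular,}
\qquad
\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}
\quad\text{is singular,}
\]

since row-reducing the second matrix produces the zero row \((0 \;\; 0)\), leaving one free variable in the corresponding homogeneous system, while the first reduces with no zero rows and admits only the trivial solution.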
I would like to add some geometric perspective to all this, using ideas from the previous section. You should try to develop your higher-dimensional geometric intuition! Of course, if you pick two lines at random, it is very unlikely that they are identical. The intersection of two planes through the origin is a line, unless the planes happen to be identical. When you add a third plane to the intersection, you are most likely intersecting that plane with a line, and the result will be a single point (namely, the origin), except in the unlikely case that the line happens to lie entirely in the plane.
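For a concrete picture (the equations are chosen here for illustration): the planes \(x + y + z = 0\) and \(x - z = 0\) in \(\mathbb{R}^3\) both pass through the origin, and solving the pair as a homogeneous system gives the line

\[
(x, y, z) = t\,(1, -2, 1), \qquad t \in \mathbb{R}.
\]

Adding a third plane, say \(y = 0\) (another illustrative choice), forces \(-2t = 0\), cutting the line down to the single point at the origin.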