General form of systems of linear equations; basic definitions. Solving systems of linear algebraic equations: solution methods and examples


Solving systems of linear algebraic equations (SLAEs) is undoubtedly the most important topic in a linear algebra course. A great number of problems from all branches of mathematics reduce to solving systems of linear equations. This explains the purpose of this article. The material of the article is selected and structured so that with its help you can

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations by considering detailed solutions to typical examples and problems.

Brief description of the article material.

First, we give all the necessary definitions, concepts and introduce notations.

Next, we will consider methods for solving systems of linear algebraic equations in which the number of equations is equal to the number of unknown variables and which have a unique solution. First, we will focus on Cramer's method, second, we will show the matrix method for solving such systems of equations, and third, we will analyze the Gauss method (the method of sequential elimination of unknown variables). To consolidate the theory, we will solve several SLAEs in different ways.

After this, we will move on to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknown variables or the main matrix of the system is singular. Let us formulate the Kronecker-Capelli theorem, which allows us to establish the compatibility of SLAEs. Let us analyze the solution of systems (if they are compatible) using the concept of a basis minor of a matrix. We will also consider the Gauss method and describe in detail the solutions to the examples.

We will also dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations. We will give the concept of a fundamental system of solutions and show how the general solution of an SLAE is written using the vectors of the fundamental system of solutions. For better understanding, we will look at a few examples.

In conclusion, we will consider systems of equations that can be reduced to linear ones, as well as various problems in the solution of which SLAEs arise.


Definitions, concepts, designations.

We will consider systems of p linear algebraic equations with n unknown variables (p can be equal to n) of the form

a 11 x 1 + a 12 x 2 + … + a 1n x n = b 1
a 21 x 1 + a 22 x 2 + … + a 2n x n = b 2
…
a p1 x 1 + a p2 x 2 + … + a pn x n = b p

Here x 1 , x 2 , …, x n are the unknown variables, a ij (i = 1, …, p; j = 1, …, n) are the coefficients (some real or complex numbers), and b 1 , b 2 , …, b p are the free terms (also real or complex numbers).

This form of writing an SLAE is called the coordinate form.

In matrix form this system of equations is written as A·X = B, where A is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of free terms.

If we append the column matrix of free terms to the matrix A as its (n+1)th column, we get the so-called extended matrix of the system of linear equations. Typically, the extended matrix is denoted by the letter T, and the column of free terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns every equation of the system into an identity. For these values of the unknown variables the matrix equation A·X = B also becomes an identity.

If a system of equations has at least one solution, then it is called consistent.

If a system of equations has no solutions, then it is called inconsistent.

If an SLAE has a unique solution, then it is called definite; if it has more than one solution, it is called indefinite.

If the free terms of all equations of the system are equal to zero, then the system is called homogeneous; otherwise it is called inhomogeneous.

Solving elementary systems of linear algebraic equations.

If the number of equations of a system is equal to the number of unknown variables and the determinant of its main matrix is not equal to zero, then such an SLAE will be called elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system all unknown variables are equal to zero.

We began to study such SLAEs in high school. When solving them, we took one equation, expressed one unknown variable in terms of the others and substituted it into the remaining equations, then took the next equation, expressed the next unknown variable and substituted it into the other equations, and so on. Or we used the addition method, that is, we added two or more equations to eliminate some unknown variables. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are the Cramer method, the matrix method and the Gauss method. Let's sort them out.

Solving systems of linear equations using Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations is equal to the number of unknown variables and the determinant of the main matrix of the system is different from zero, that is, det A ≠ 0.

Let Δ be the determinant of the main matrix of the system, and let Δ 1 , Δ 2 , …, Δ n be the determinants of the matrices obtained from A by replacing the 1st, 2nd, …, nth column, respectively, with the column of free terms.

With this notation, the unknown variables are calculated by the formulas of Cramer's method: x i = Δ i / Δ, i = 1, 2, …, n. This is how the solution of a system of linear algebraic equations is found using Cramer's method.
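For the computational side, here is a small sketch of Cramer's method in Python with NumPy; the 3×3 system in it is made up purely for illustration and is not an example from this article.

```python
import numpy as np

def cramer(A, b):
    """Solve A·x = b by Cramer's rule (assumes det(A) != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    delta = np.linalg.det(A)                 # determinant of the main matrix
    if np.isclose(delta, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule is not applicable")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                         # replace the i-th column with the free terms
        x[i] = np.linalg.det(Ai) / delta     # x_i = delta_i / delta
    return x

# made-up example with the unique solution (1, 2, 3)
print(cramer([[2, 1, -1], [1, 3, 2], [3, -2, 4]], [1, 13, 11]))
```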

Example.

Solve the system of linear equations using Cramer's method.

Solution.

The main matrix of the system is composed of the coefficients of the unknown variables. Let's calculate its determinant (if necessary, see the article on calculating the determinant of a matrix):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer’s method.

Let's compose and calculate the necessary determinants: Δ 1 is obtained by replacing the first column of matrix A with the column of free terms, Δ 2 by replacing the second column, and Δ 3 by replacing the third column of matrix A with the column of free terms:

We find the unknown variables using the formulas x i = Δ i / Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the complexity of calculating determinants when the number of equations in the system is more than three.

Solving systems of linear algebraic equations using the matrix method (using an inverse matrix).

Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n by n and its determinant is nonzero.

Since det A ≠ 0, the matrix A is invertible, that is, the inverse matrix A -1 exists. If we multiply both sides of the equality A·X = B on the left by A -1 , we get a formula for the column matrix of unknown variables: X = A -1 ·B. This is how a solution of a system of linear algebraic equations is obtained by the matrix method.
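A minimal sketch of the matrix method in Python (NumPy assumed; the system is again a made-up one):

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, -2.0, 4.0]])
B = np.array([1.0, 13.0, 11.0])

if not np.isclose(np.linalg.det(A), 0.0):   # det A != 0, so the inverse matrix exists
    X = np.linalg.inv(A) @ B                # X = A^(-1) * B
    print(X)                                # -> [1. 2. 3.]
```

In numerical practice `np.linalg.solve(A, B)` is preferred to forming the inverse explicitly, but the formula above mirrors the textbook derivation.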

Example.

Solve the system of linear equations by the matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Since the determinant of the main matrix of the system is nonzero, the SLAE can be solved using the matrix method. Using the inverse matrix, the solution of this system can be found as X = A -1 ·B.

Let's construct the inverse matrix A -1 using the matrix of algebraic complements (cofactors) of the elements of matrix A (if necessary, see the article on finding the inverse matrix):

It remains to calculate the column matrix of unknown variables by multiplying the inverse matrix by the column matrix of free terms (if necessary, see the article on matrix multiplication):

Answer:

or in another notation x 1 = 4, x 2 = 0, x 3 = -1.

The main problem when finding solutions to systems of linear algebraic equations using the matrix method is the complexity of finding the inverse matrix, especially for square matrices of order higher than third.

Solving systems of linear equations using the Gauss method.

Suppose we need to find a solution of a system of n linear equations with n unknown variables, the determinant of whose main matrix is different from zero.

The essence of the Gauss method consists in sequentially eliminating unknown variables: first x 1 is eliminated from all equations of the system starting from the second, then x 2 is eliminated from all equations starting from the third, and so on, until only the unknown variable x n remains in the last equation. This process of transforming the equations of the system to successively eliminate the unknown variables is called the forward pass of the Gauss method. After the forward pass is completed, x n is found from the last equation; using this value, x n-1 is calculated from the penultimate equation, and so on, until x 1 is found from the first equation. The process of calculating the unknown variables while moving from the last equation of the system to the first is called the backward pass of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a 11 ≠ 0, since we can always achieve this by rearranging the equations of the system. Let's eliminate the unknown variable x 1 from all equations of the system, starting with the second. To do this, to the second equation of the system we add the first multiplied by (-a 21 /a 11 ), to the third equation we add the first multiplied by (-a 31 /a 11 ), and so on, to the nth equation we add the first multiplied by (-a n1 /a 11 ). The system of equations after such transformations will take the form

where a ij (1) = a ij - (a i1 /a 11 )·a 1j and b i (1) = b i - (a i1 /a 11 )·b 1 for i, j = 2, 3, …, n.

We would have arrived at the same result if we had expressed x 1 in terms of other unknown variables in the first equation of the system and substituted the resulting expression into all other equations. Thus, the variable x 1 is excluded from all equations, starting from the second.

Next, we proceed in a similar way, but only with part of the resulting system, which is marked in the figure

To do this, to the third equation of the system we add the second multiplied by (-a 32 (1) /a 22 (1) ), to the fourth equation we add the second multiplied by (-a 42 (1) /a 22 (1) ), and so on, to the nth equation we add the second multiplied by (-a n2 (1) /a 22 (1) ). The system of equations after such transformations will take the form

where a ij (2) = a ij (1) - (a i2 (1) /a 22 (1) )·a 2j (1) and b i (2) = b i (1) - (a i2 (1) /a 22 (1) )·b 2 (1) for i, j = 3, 4, …, n. Thus, the variable x 2 is excluded from all equations, starting from the third.

Next, we proceed to eliminating the unknown x 3, while we act similarly with the part of the system marked in the figure

We continue the forward pass of the Gauss method in this way until the system takes a triangular form.

From this moment we begin the backward pass of the Gauss method: we calculate x n from the last equation as x n = b n (n-1) / a nn (n-1) ; using the obtained value of x n we find x n-1 from the penultimate equation, and so on, until we find x 1 from the first equation.
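A rough sketch of the forward and backward passes in Python (NumPy assumed; the row rearrangement is included only to avoid dividing by zero, and the test system is made up):

```python
import numpy as np

def gauss_solve(A, b):
    """A minimal sketch of the Gauss method: forward elimination + back substitution.
    Assumes a square system with a nonzero determinant."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # forward pass: eliminate x_k from the equations below the k-th one
    for k in range(n - 1):
        pivot = np.argmax(np.abs(A[k:, k])) + k        # rearrange equations if needed
        A[[k, pivot]], b[[k, pivot]] = A[[pivot, k]], b[[pivot, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # backward pass: compute x_n, x_{n-1}, ..., x_1
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve([[2, 1, -1], [1, 3, 2], [3, -2, 4]], [1, 13, 11]))   # -> [1. 2. 3.]
```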

Example.

Solve the system of linear equations by the Gauss method.

Solution.

Let us eliminate the unknown variable x 1 from the second and third equations of the system. To do this, to both sides of the second and third equations we add the corresponding sides of the first equation, multiplied by suitable factors:

Now we eliminate x 2 from the third equation by adding to its left and right sides the left and right sides of the second equation, multiplied by a suitable factor:

This completes the forward pass of the Gauss method; we begin the backward pass.

From the last equation of the resulting system of equations we find x 3 = -1.

From the second equation we get x 2 = 0.

From the first equation we find the remaining unknown variable x 1 = 4 and thereby complete the backward pass of the Gauss method.

Answer:

x 1 = 4, x 2 = 0, x 3 = -1.

Solving systems of linear algebraic equations of general form.

In general, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, have a single solution, or have infinitely many solutions. This statement also applies to systems of equations whose main matrix is square and singular.

Kronecker–Capelli theorem.

Before finding a solution of a system of linear equations, it is necessary to establish whether it is consistent. The answer to the question of when an SLAE is consistent and when it is inconsistent is given by the Kronecker–Capelli theorem:
In order for a system of p equations with n unknowns (p can be equal to n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system be equal to the rank of the extended matrix, that is, Rank(A)=Rank(T).
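In code, the Kronecker–Capelli check amounts to comparing two ranks. A small sketch in Python (NumPy assumed; the system is made up and happens to be inconsistent):

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0],
              [1.0, 1.0,  1.0]])
b = np.array([1.0, 3.0, 2.0])

T = np.column_stack([A, b])                  # extended matrix (A | b)
rank_A = np.linalg.matrix_rank(A)
rank_T = np.linalg.matrix_rank(T)

if rank_A != rank_T:
    print("inconsistent: no solutions")
elif rank_A == A.shape[1]:
    print("consistent and definite: a unique solution")
else:
    print("consistent and indefinite: infinitely many solutions")
```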

Let us consider, as an example, the application of the Kronecker–Capelli theorem to determine the compatibility of a system of linear equations.

Example.

Find out whether the system of linear equations has solutions.

Solution.

Let's use the method of bordering minors. There is a second-order minor different from zero. Let's look at the third-order minors bordering it:

Since all the bordering minors of the third order are equal to zero, the rank of the main matrix is equal to two.

In turn, the rank of the extended matrix is equal to three, since there is a third-order minor

different from zero.

Thus, Rank(A) ≠ Rank(T); therefore, by the Kronecker–Capelli theorem, we conclude that the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of a system using the Kronecker–Capelli theorem.

But how to find a solution to an SLAE if its compatibility is established?

To do this, we need the concept of a basis minor of a matrix and a theorem about the rank of a matrix.

A nonzero minor of the highest order of the matrix A is called a basis minor.

From the definition of a basis minor it follows that its order is equal to the rank of the matrix. A nonzero matrix A can have several basis minors; it always has at least one basis minor.

For example, consider a matrix whose third row is the sum of its first and second rows.

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

Those of its second-order minors that are nonzero are basis minors.

The second-order minors that are equal to zero are not basis minors.

Matrix rank theorem.

If the rank of a matrix of dimension p by n is equal to r, then all rows (and columns) of the matrix that do not take part in forming the chosen basis minor are linearly expressed in terms of the corresponding rows (and columns) that form the basis minor.

What does the matrix rank theorem tell us?

If, according to the Kronecker–Capelli theorem, we have established the compatibility of the system, then we choose any basis minor of the main matrix of the system (its order is equal to r), and exclude from the system all equations that do not form the selected basis minor. The SLAE obtained in this way will be equivalent to the original one, since the discarded equations are still redundant (according to the matrix rank theorem, they are a linear combination of the remaining equations).

As a result, after discarding unnecessary equations of the system, two cases are possible.

    If the number of equations r in the resulting system is equal to the number of unknown variables, then it will be definite and the only solution can be found by the Cramer method, the matrix method or the Gauss method.

    Example.

    .

    Solution.

    The rank of the main matrix of the system is equal to two, since there is a nonzero second-order minor. The rank of the extended matrix is also equal to two, since the only third-order minor is zero

    and the second-order minor considered above is different from zero. Based on the Kronecker–Capelli theorem, we can assert the consistency of the original system of linear equations, since Rank(A)=Rank(T)=2.

    As the basis minor we take the nonzero second-order minor found above. It is formed by the coefficients of the first and second equations:

    The third equation of the system does not participate in the formation of the basis minor, so we exclude it from the system based on the theorem on the rank of the matrix:

    This is how we obtained an elementary system of linear algebraic equations. Let's solve it using Cramer's method:

    Answer:

    x 1 = 1, x 2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n, then on the left-hand sides of the equations we leave the terms that form the basis minor, and we transfer the remaining terms to the right-hand sides of the equations of the system with the opposite sign.

    The unknown variables (r of them) remaining on the left-hand sides of the equations are called main.

    The unknown variables (there are n - r of them) that end up on the right-hand sides are called free.

    Now we assume that the free unknown variables can take arbitrary values, while the r main unknown variables are expressed through the free ones in a unique way. These expressions can be found by solving the resulting SLAE using Cramer's method, the matrix method or the Gauss method.

    Let's look at it with an example.

    Example.

    Solve a system of linear algebraic equations .

    Solution.

    Let's find the rank of the main matrix of the system by the method of bordering minors. Let's take a 11 = 1 as a nonzero minor of the first order. Let's start searching for a nonzero minor of the second order bordering this minor:

    This is how we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is three. The rank of the extended matrix is also equal to three, that is, the system is consistent.

    We take the found non-zero minor of the third order as the basis one.

    For clarity, we show the elements that form the basis minor:

    We leave the terms involved in the basis minor on the left side of the system equations, and transfer the rest with opposite signs to the right sides:

    Let's give the free unknown variables x 2 and x 5 arbitrary values, that is, we set x 2 = C 1 and x 5 = C 2 , where C 1 and C 2 are arbitrary numbers. In this case, the SLAE will take the form

    Let us solve the resulting elementary system of linear algebraic equations using Cramer’s method:

    Hence we obtain the main unknown variables expressed through C 1 and C 2 .

    In your answer, do not forget to indicate free unknown variables.

    Answer:

    where C 1 and C 2 are arbitrary numbers.

Summarize.

To solve a system of general linear algebraic equations, we first determine its compatibility using the Kronecker–Capelli theorem. If the rank of the main matrix is ​​not equal to the rank of the extended matrix, then we conclude that the system is incompatible.

If the rank of the main matrix is ​​equal to the rank of the extended matrix, then we select a basis minor and discard the equations of the system that do not participate in the formation of the selected basis minor.

If the order of the basis minor is equal to the number of unknown variables, then the SLAE has a unique solution, which we find by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then on the left-hand sides of the system's equations we leave the terms with the main unknown variables, transfer the remaining terms to the right-hand sides, and give the free unknown variables arbitrary values. From the resulting system of linear equations we find the main unknown variables by Cramer's method, the matrix method or the Gauss method.
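As a rough illustration of this whole procedure, here is a sketch using SymPy (the system and variable names are made up; `linsolve` returns the solution set expressed through the free unknowns):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

# a made-up consistent system with fewer independent equations than unknowns
eqs = [
    sp.Eq(x1 + 2*x2 - x3 + x4, 1),
    sp.Eq(2*x1 + 4*x2 - 2*x3 + 2*x4, 2),   # twice the first equation: redundant
    sp.Eq(x1 - x2 + x3, 0),
]

A, b = sp.linear_eq_to_matrix(eqs, [x1, x2, x3, x4])
print(A.rank(), A.row_join(b).rank())      # equal ranks -> consistent (Kronecker-Capelli)

# the solution set, parameterized by the free unknowns
print(sp.linsolve(eqs, x1, x2, x3, x4))
```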

Gauss method for solving systems of linear algebraic equations of general form.

The Gauss method can be used to solve systems of linear algebraic equations of any kind without first testing them for compatibility. The process of sequential elimination of unknown variables makes it possible to draw a conclusion about both the compatibility and incompatibility of the SLAE, and if a solution exists, it makes it possible to find it.

From a computational point of view, the Gaussian method is preferable.

See its detailed description and worked examples in the article on the Gauss method for solving systems of linear algebraic equations of general form.

Writing a general solution to homogeneous and inhomogeneous linear algebraic systems using vectors of the fundamental system of solutions.

In this section we will talk about consistent homogeneous and inhomogeneous systems of linear algebraic equations that have an infinite number of solutions.

Let us first deal with homogeneous systems.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations with n unknown variables is a collection of (n - r) linearly independent solutions of this system, where r is the order of a basis minor of the main matrix of the system.

If we denote linearly independent solutions of a homogeneous SLAE as X (1) , X (2) , …, X (n-r) (X (1) , X (2) , …, X (n-r) are column matrices of dimension n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C 1 , C 2 , …, C (n-r) , that is, X = C 1 ·X (1) + C 2 ·X (2) + … + C (n-r) ·X (n-r) .

What does the term general solution of a homogeneous system of linear algebraic equations mean?

The meaning is simple: this formula specifies all possible solutions of the original SLAE; in other words, taking any set of values of the arbitrary constants C 1 , C 2 , …, C (n-r) and using the formula, we obtain one of the solutions of the original homogeneous SLAE.

Thus, if we find a fundamental system of solutions, then we can describe all solutions of this homogeneous SLAE as X = C 1 ·X (1) + C 2 ·X (2) + … + C (n-r) ·X (n-r) .

Let us show the process of constructing a fundamental system of solutions to a homogeneous SLAE.

We select a basis minor of the original system of linear equations, exclude all other equations from the system and transfer all terms containing free unknown variables to the right-hand sides of the equations of the system with opposite signs. Let's give the free unknown variables the values 1,0,0,…,0 and calculate the main unknowns by solving the resulting elementary system of linear equations in any way, for example, by Cramer's method. This will give X (1) , the first solution of the fundamental system. If we give the free unknowns the values 0,1,0,…,0 and calculate the main unknowns, we get X (2) . And so on. If we assign the values 0,0,…,0,1 to the free unknown variables and calculate the main unknowns, we obtain X (n-r) . In this way a fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written in the form X = C 1 ·X (1) + C 2 ·X (2) + … + C (n-r) ·X (n-r) .

For inhomogeneous systems of linear algebraic equations, the general solution is represented in the form X = X 0 + C 1 ·X (1) + … + C (n-r) ·X (n-r) , where C 1 ·X (1) + … + C (n-r) ·X (n-r) is the general solution of the corresponding homogeneous system and X 0 is a particular solution of the original inhomogeneous SLAE, which we obtain by giving the free unknowns the values 0,0,…,0 and calculating the values of the main unknowns.
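A small SymPy sketch of this structure (the system is made up for illustration): `nullspace()` gives a fundamental system of solutions of the homogeneous system, and a particular solution of the inhomogeneous one is obtained by setting the free parameters to zero.

```python
import sympy as sp

# a made-up system A*X = B with infinitely many solutions (rank 2, four unknowns)
A = sp.Matrix([[1, 2, -1, 1],
               [2, 4, -2, 2],
               [1, 1,  1, 0]])
B = sp.Matrix([1, 2, 0])

# fundamental system of solutions of the homogeneous system A*X = 0:
# nullspace() returns n - r linearly independent column solutions X(1), ..., X(n-r)
fundamental = A.nullspace()
print(fundamental)

# general solution of the inhomogeneous system, parameterized by free constants
general, params = A.gauss_jordan_solve(B)
# a particular solution is obtained by setting the free parameters to zero
particular = general.subs({p: 0 for p in params})
print(particular)
```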

Let's look at examples.

Example.

Find the fundamental system of solutions and the general solution of a homogeneous system of linear algebraic equations .

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of the extended matrix. Let's find the rank of the main matrix using the method of bordering minors. As a nonzero minor of the first order, we take the element a 11 = 9 of the main matrix of the system. Let's find a bordering nonzero minor of the second order:

A minor of the second order, different from zero, has been found. Let's go through the third-order minors bordering it in search of a non-zero one:

All third-order bordering minors are equal to zero; therefore, the rank of the main and of the extended matrix is equal to two. As the basis minor we take the nonzero second-order minor found above. For clarity, let us mark the elements of the system that form it:

The third equation of the original SLAE does not participate in the formation of the basis minor, therefore, it can be excluded:

We leave the terms containing the main unknowns on the left-hand sides of the equations, and transfer the terms with the free unknowns to the right-hand sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is equal to two. To find X (1) , we give the free unknown variables the values x 2 = 1, x 4 = 0, and then find the main unknowns from the system of equations.

A system of m linear equations with n unknowns is a system of the form

where a ij and b i (i = 1, …, m; j = 1, …, n) are some known numbers and x 1 , …, x n are the unknowns. In the notation for the coefficients a ij , the first index i denotes the number of the equation, and the second index j the number of the unknown that this coefficient multiplies.

We will write the coefficients for the unknowns in the form of a matrix , which we'll call matrix of the system.

The numbers b 1 , …, b m on the right-hand sides of the equations are called free terms.

A collection of n numbers c 1 , …, c n is called a solution of the given system if each equation of the system becomes a true equality after the numbers c 1 , …, c n are substituted for the corresponding unknowns x 1 , …, x n .

Our task will be to find the solutions of the system. Three situations may arise: the system has no solutions, has exactly one solution, or has infinitely many solutions.

A system of linear equations that has at least one solution is called consistent. Otherwise, i.e. if the system has no solutions, it is called inconsistent.

Let's consider ways to find solutions to the system.


MATRIX METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

Matrices make it possible to briefly write down a system of linear equations. Let a system of 3 equations with three unknowns be given:

Consider the matrix of the system and the column matrices of the unknowns and of the free terms.

Let's find the product

i.e., as a result of the product we obtain the left-hand sides of the equations of this system. Then, using the definition of matrix equality, this system can be written in the form

or shorter AX=B.

Here the matrices A and B are known, and the matrix X is unknown. It needs to be found, because its elements are the solution of this system. This equation is called a matrix equation.

Let the determinant of the matrix A be different from zero, |A| ≠ 0. Then the matrix equation is solved as follows. Multiply both sides of the equation on the left by the matrix A -1 , the inverse of the matrix A: A -1 AX = A -1 B. Since A -1 A = E and EX = X, we obtain the solution of the matrix equation in the form X = A -1 B.

Note that since the inverse matrix can only be found for square matrices, the matrix method can only solve those systems in which the number of equations coincides with the number of unknowns. However, writing the system in matrix form is also possible when the number of equations is not equal to the number of unknowns; then the matrix A is not square and it is therefore impossible to find a solution of the system in the form X = A -1 B.

Examples. Solve systems of equations.

CRAMER'S RULE

Consider a system of 3 linear equations with three unknowns:

The third-order determinant corresponding to the matrix of the system, i.e. composed of the coefficients of the unknowns,

is called the determinant of the system.

Let's compose three more determinants as follows: in the determinant D we successively replace the 1st, 2nd and 3rd columns with the column of free terms:

Then we can prove the following result.

Theorem (Cramer's rule). If the determinant of the system Δ ≠ 0, then the system under consideration has one and only one solution, and x 1 = Δ 1 /Δ, x 2 = Δ 2 /Δ, x 3 = Δ 3 /Δ.

Proof. Consider a system of 3 equations with three unknowns. Let's multiply the 1st equation of the system by A 11 , the algebraic complement of the element a 11 , the 2nd equation by A 21 , and the 3rd by A 31 :

Let's add these equations:

Let's look at each of the brackets and at the right-hand side of this equation. By the theorem on the expansion of a determinant along the elements of the 1st column, the bracket multiplying x 1 is equal to Δ.

Similarly, it can be shown that the brackets multiplying x 2 and x 3 are equal to zero.

Finally, it is easy to notice that the right-hand side is equal to Δ 1 .

Thus, we obtain the equality Δ·x 1 = Δ 1 .

Hence, x 1 = Δ 1 /Δ.

The equalities x 2 = Δ 2 /Δ and x 3 = Δ 3 /Δ are derived similarly, from which the statement of the theorem follows.

Thus, we note that if the determinant of the system Δ ≠ 0, then the system has a unique solution and vice versa. If the determinant of the system is equal to zero, then the system either has an infinite number of solutions or has no solutions, i.e. incompatible.

Examples. Solve system of equations


GAUSS METHOD

The previously discussed methods can be used to solve only those systems in which the number of equations coincides with the number of unknowns, and the determinant of the system must be different from zero. The Gauss method is more universal and is suitable for systems with any number of equations. It consists in the successive elimination of unknowns from the equations of the system.

Consider again a system of three equations with three unknowns:

.

We will leave the first equation unchanged, and from the 2nd and 3rd we will eliminate the terms containing x 1 . To do this, we divide the second equation by a 21 , multiply it by -a 11 , and then add the first equation to it. Similarly, we divide the third equation by a 31 , multiply it by -a 11 , and then add the first equation to it. As a result, the original system will take the form:

Now we eliminate the term containing x 2 from the last equation. To do this, we divide the third equation by its coefficient of x 2 , multiply it by minus the coefficient of x 2 in the second equation, and add the second equation to it. Then we will have the system of equations:

From the last equation it is then easy to find x 3 , then from the 2nd equation x 2 , and finally from the 1st equation x 1 .

When using the Gaussian method, the equations can be swapped if necessary.

Often, instead of writing a new system of equations, they limit themselves to writing out the extended matrix of the system:

and then bring it to a triangular or diagonal form using elementary transformations.

Elementary transformations of a matrix include the following (a small code sketch is given after the list):

  1. rearranging rows or columns;
  2. multiplying a row by a nonzero number;
  3. adding another row to a given row.
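As an illustration of these transformations applied to an extended matrix (a sketch with a made-up augmented matrix; NumPy assumed):

```python
import numpy as np

# made-up extended (augmented) matrix of a system, [A | b]
M = np.array([[2.0, 1.0, -1.0, 1.0],
              [1.0, 3.0,  2.0, 13.0],
              [3.0, -2.0, 4.0, 11.0]])

M[[0, 1]] = M[[1, 0]]          # 1. swap two rows
M[1] = 0.5 * M[1]              # 2. multiply a row by a nonzero number
M[2] = M[2] - 3.0 * M[0]       # 3. add to one row another row times a number

print(M)   # the solution set of the corresponding system is unchanged
```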

Examples: Solve systems of equations using the Gauss method.


Thus, the system has an infinite number of solutions.

Definition. A system of m equations with n unknowns is written in general form as follows:

where a ij are the coefficients and b i are constants (the free terms).

The solutions of the system are n numbers that, when substituted into the system, turn each of its equations into an identity.

Definition. If a system has at least one solution, then it is called consistent. If a system does not have a single solution, then it is called inconsistent.

Definition. A system is called definite if it has only one solution and indefinite if it has more than one.

Definition. For a system of linear equations, the matrix

A (composed of the coefficients of the unknowns) is called the matrix of the system, and the matrix

A * (with the column of free terms appended) is called the extended matrix of the system.

Definition. If b 1 = b 2 = … = b m = 0, then the system is called homogeneous. Comment. A homogeneous system is always consistent, because it always has the zero solution.

Elementary transformations of systems.

1. Adding to both sides of one equation the corresponding parts of the other, multiplied by the same number, not equal to zero.

2. Rearranging equations.

3. Removing from the system equations that are identities for all X.

Cramer's formulas.

This method is also applicable only in the case of systems of linear equations, where the number of variables coincides with the number of equations.

Theorem. Consider a system of n equations with n unknowns.

If the determinant of the matrix of the system is not equal to zero, then the system has a unique solution, and this solution is found by the formulas x i = D i / D, where D = det A and D i is the determinant of the matrix obtained from the matrix of the system by replacing its i-th column with the column of free terms b i .

D i =

Example. Find the solution to the system of equations:

D = 5(4 - 9) + (2 - 12) - (3 - 8) = -25 - 10 + 5 = -30;

D 1 = (28 - 48) - (42 - 32) = -20 - 10 = -30;

D 2 = 5(28 - 48) - (16 - 56) = -100 + 40 = -60;

D 3 = 5(32 - 42) + (16 - 56) = -50 - 40 = -90.

Hence x 1 = D 1 /D = 1, x 2 = D 2 /D = 2, x 3 = D 3 /D = 3.

Note 1. If the system is homogeneous, i.e. b i = 0, then for D ≠ 0 the system has only the zero solution x 1 = x 2 = … = x n = 0.

Note 2. For D = 0 such a homogeneous system has an infinite number of solutions.

Inverse matrix method.

The matrix method is applicable to solving systems of equations where the number of equations is equal to the number of unknowns.

Let the system of equations be given: Let's create matrices:

A is the matrix of the coefficients of the variables, or the matrix of the system;

B is the column matrix of free terms;

X is the column matrix of unknowns.

Then the system of equations can be written as A×X = B. Let us multiply both sides of this equality on the left by A -1 : A -1 ×A×X = A -1 ×B. Since A -1 ×A = E and E×X = X, the following formula is valid:

X = A -1 ×B

Thus, to apply this method it is necessary to find inverse matrix.

Example. Solve the system of equations:

X = , B = , A =

Let's find the inverse matrix A -1.

D = det A = 5(4-9) + 1(2 – 12) – 1(3 – 8) = -25 – 10 +5 = -30≠0 ⇒ the inverse matrix exists.

M 11 = ; M 21 = ; M 31 = ;

M 12 = M 22 = M 32 =

M 13 = M 23 = M 33 =

A -1 = ;

Let's check:

A×A -1 =
=E.

Finding the X matrix.

X = A -1 ×B.

We obtain the solution of the system: x = 1; y = 2; z = 3.

4.Gauss method.

Let a system of m linear equations with n unknowns be given:

Assume that the coefficient a 11 of the system is different from zero (if this is not the case, we put first an equation with a nonzero coefficient of x 1 ). We transform the system as follows: we leave the first equation unchanged and eliminate the unknown x 1 from all the other equations using equivalent transformations in the manner described above.

In the resulting system

,

assuming that the coefficient of x 2 in the second equation is nonzero (which can always be achieved by rearranging equations or terms within equations), we leave the first two equations of the system unchanged, and from the remaining equations, using the second equation, we eliminate the unknown x 2 with the help of elementary transformations. In the newly obtained system

provided that the corresponding coefficient of x 3 is nonzero, we leave the first three equations unchanged, and from all the others, using the third equation, we eliminate the unknown x 3 by elementary transformations.

This process continues until one of three possible cases occurs:

1) if as a result we arrive at a system, one of the equations of which has zero coefficients for all unknowns and a nonzero free term, then the original system is inconsistent;

2) if as a result of transformations we obtain a system with a triangular matrix of coefficients, then the system is consistent and definite;

3) if a stepwise system of coefficients is obtained (and the condition of point 1 is not met), then the system is consistent and indefinite.

Consider the square system : (1)

In this system the coefficient a 11 is different from zero. If this condition were not met, then in order to achieve it we would have to rearrange the equations, putting first an equation whose coefficient of x 1 is not equal to zero.

We will carry out the following system transformations:

1) since a 11 ≠ 0, we leave the first equation unchanged;

2) instead of the second equation, we write the equation obtained if we subtract the first multiplied by 4 from the second equation;

3) instead of the third equation, we write the difference between the third and the first, multiplied by 3;

4) instead of the fourth equation, we write the difference between the fourth and the first, multiplied by 5.

The new system obtained is equivalent to the original one; in all equations except the first, the coefficients of x 1 are zero (this was the purpose of transformations 1 - 4): (2)

For the above transformation and for all further transformations, you should not completely rewrite the entire system, as was just done. The original system can be represented as a matrix

. (3)

Matrix (3) is called the extended matrix of the original system of equations. If we remove the column of free terms from the extended matrix, we get the coefficient matrix of the system, which is sometimes called simply the matrix of the system.

System (2) corresponds to the extended matrix

.

Let's transform this matrix as follows:

1) we will leave the first two lines unchanged, since the element a 22 is not zero;

2) instead of the third line, we write the difference between the second line and double the third;

3) replace the fourth line with the difference between the second line doubled and the fourth line multiplied by 5.

The result is a matrix corresponding to a system whose unknown x 1 is excluded from all equations except the first, and the unknown x 2 - from all equations except the first and second:

.

Now let's exclude the unknown x 3 from the fourth equation. To do this, we transform the last matrix as follows:

1) we will leave the first three lines unchanged, since a 33 ≠ 0;

2) replace the fourth line with the difference between the third, multiplied by 39, and the fourth: .

The resulting matrix corresponds to the system

. (4)

From the last equation of this system we obtain x 4 = 2. Substituting this value into the third equation, we get x 3 = 3. Now from the second equation it follows that x 2 = 1, and from the first that x 1 = –1. It is obvious that the resulting solution is unique (since x 4 is determined uniquely, then x 3 , and so on).

Definition: A square matrix that has nonzero numbers on the main diagonal and zeros below the main diagonal will be called a triangular matrix.

The coefficient matrix of system (4) is a triangular matrix.

Comment: If, using elementary transformations, the coefficient matrix of a square system can be reduced to a triangular matrix, then the system is consistent and definite.

Let's look at another example: . (5)

Let us carry out the following transformations of the extended matrix of the system:

1) leave the first line unchanged;

2) instead of the second line, write the difference between the second line and double the first;

3) instead of the third line, we write the difference between the third line and triple the first;

4) replace the fourth line with the difference between the fourth and first;

5) replace the fifth line with the difference of the fifth line and double the first.

As a result of transformations, we obtain the matrix

.

Leaving the first two rows of this matrix unchanged, we reduce it to the following form by elementary transformations:

.

If now, following the Gauss method (which is also called the method of successive elimination of unknowns), we use the third line to reduce to zero the coefficients of x 3 in the fourth and fifth rows, then after dividing all elements of the second row by 5 and dividing all elements of the third row by 2, we obtain the matrix

.

Each of the last two rows of this matrix corresponds to the equation 0x 1 + 0x 2 + 0x 3 + 0x 4 + 0x 5 = 0. This equation is satisfied by any set of numbers x 1 , x 2 , …, x 5 and should be removed from the system. Thus, the system with the just obtained extended matrix is equivalent to a system with an extended matrix of the form

. (6)

The last row of this matrix corresponds to the equation
x 3 – 2x 4 + 3x 5 = –4. If the unknowns x 4 and x 5 are given arbitrary values, x 4 = C 1 , x 5 = C 2 , then from the last equation of the system corresponding to matrix (6) we obtain x 3 = –4 + 2C 1 – 3C 2 . Substituting the expressions for x 3 , x 4 and x 5 into the second equation of the same system, we get x 2 = –3 + 2C 1 – 2C 2 . Now from the first equation we can get x 1 = 4 – C 1 + C 2 . The final solution of the system can be presented in the form x 1 = 4 – C 1 + C 2 , x 2 = –3 + 2C 1 – 2C 2 , x 3 = –4 + 2C 1 – 3C 2 , x 4 = C 1 , x 5 = C 2 .

Consider a rectangular matrix A whose number of columns m is greater than its number of rows n. Such a matrix A will be called a step matrix.

It is obvious that matrix (6) is a step matrix.

If, when applying equivalent transformations to a system of equations, at least one equation is reduced to the form

0x 1 + 0x 2 + … + 0x n = b j (b j ≠ 0),

then the system is inconsistent (contradictory), since no set of numbers x 1 , x 2 , …, x n satisfies this equation.

If, when transforming the extended matrix of the system, the matrix of coefficients is reduced to a stepwise form and the system does not turn out to be inconsistent, then the system is consistent and indefinite, that is, it has infinitely many solutions.

In the latter system, all solutions can be obtained by assigning specific numerical values ​​to the parameters C 1 And C 2.

Definition: Those variables whose coefficients stand on the main diagonal of the step matrix (which means that these coefficients are different from zero) are called main. In the example discussed above these are the unknowns x 1 , x 2 , x 3 . The remaining variables are called non-main. In the example above these are the variables x 4 and x 5 . Non-main variables can be given any values or expressed through parameters, as was done in the last example.

The main variables are uniquely expressed through the non-main variables.

Definition: If the non-main variables are given specific numerical values and the main variables are expressed through them, then the resulting solution is called a particular solution.

Definition: If the non-main variables are expressed in terms of parameters, then the resulting solution is called the general solution.

Definition: If all non-main variables are given zero values, then the resulting solution is called basic.

Comment: The same system can sometimes be reduced to different sets of basic variables. So, for example, you can swap the 3rd and 4th columns in matrix (6). Then the main variables will be x 1 , x 2 ,x 4, and non-main ones - x 3 and x 5 .

Definition: If two different sets of main variables are obtained when the solution of the same system is found in different ways, then these sets necessarily contain the same number of variables, and this number is called the rank of the system.

Let's consider another system that has infinitely many solutions: .

Let us transform the extended matrix of the system using the Gaussian method:

.

As you can see, we did not get a step matrix, but the last matrix can be transformed by swapping the third and fourth columns: .

This matrix is already a step matrix. The corresponding system has two non-main variables, x 3 and x 5 , and three main ones, x 1 , x 2 and x 4 . The solution of the original system is presented in the following form:

Here is an example of a system that has no solution:

.

Let's transform the system matrix using the Gaussian method:

.

The last row of the last matrix corresponds to the unsolvable equation 0x 1 + 0x 2 + 0x 3 = 1. Consequently, the original system is inconsistent.

Lecture No. 3.

Topic: Vectors. Scalar, vector and mixed product of vectors

1. The concept of a vector. Collinearity, orthogonality and coplanarity of vectors.

2. Linear operations on vectors.

3. The scalar product of vectors and its applications.

4. The vector product of vectors and its applications.

5. The mixed product of vectors and its applications.

1. The concept of a vector. Collinearity, orthogonality and coplanarity of vectors.

Definition: A vector is a directed segment with a starting point A and an ending point B.

Designation: , ,

Definition: The length, or modulus, of a vector is the number equal to the length of the segment AB representing the vector.

Definition: A vector is called zero if the beginning and end of the vector coincide.

Definition: A vector of unit length is called a unit vector. Definition: Vectors are called collinear if they lie on the same line or on parallel lines.

Comment:

1.Collinear vectors can be directed identically or oppositely.

2. The zero vector is considered collinear to any vector.

Definition: Two vectors are said to be equal if they are collinear, have the same direction and have the same length.
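A tiny NumPy sketch of these notions (the vectors are made up; collinearity is checked here via the rank of the matrix formed from the two vectors, which is one possible test):

```python
import numpy as np

a = np.array([2.0, -4.0, 6.0])
b = np.array([-1.0, 2.0, -3.0])   # b = -0.5 * a, so a and b are collinear

length_a = np.linalg.norm(a)                                 # length (modulus) of a
unit_a = a / length_a                                        # unit vector along a
collinear = np.linalg.matrix_rank(np.vstack([a, b])) <= 1    # rank 1 (or 0) -> collinear

print(length_a, unit_a, collinear)
```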

When studying a system of linear equations, one usually needs to find a general and a particular solution of the system. The following problems are solved:
  1. whether the system is consistent;
  2. if the system is consistent, whether it is definite or indefinite (the criterion of consistency of the system is given by the theorem);
  3. if the system is definite, how to find its unique solution (Cramer's method, the inverse matrix method or the Jordan-Gauss method are used);
  4. if the system is indefinite, how to describe the set of its solutions.

Classification of systems of linear equations

An arbitrary system of linear equations has the form:
a 1 1 x 1 + a 1 2 x 2 + ... + a 1 n x n = b 1
a 2 1 x 1 + a 2 2 x 2 + ... + a 2 n x n = b 2
...................................................
a m 1 x 1 + a m 2 x 2 + ... + a m n x n = b m
  1. Systems of linear inhomogeneous equations (the number of variables is equal to the number of equations, m = n).
  2. Arbitrary systems of linear inhomogeneous equations (m > n or m< n).
Definition. A solution to a system is any set of numbers c 1 ,c 2 ,...,c n , the substitution of which into the system instead of the corresponding unknowns turns each equation of the system into an identity.

Definition. Two systems are said to be equivalent if the solution of the first is the solution of the second and vice versa.

Definition. A system that has at least one solution is called consistent. A system that does not have a single solution is called inconsistent.

Definition. A system that has a unique solution is called definite, and one having more than one solution is called indefinite.

Algorithm for solving systems of linear equations

  1. Find the ranks of the main and extended matrices. If they are not equal, then according to the Kronecker-Capelli theorem the system is inconsistent and this is where the study ends.
  2. Let rank(A) = rank(B). We select a basis minor. All the unknowns of the system of linear equations are divided into two classes: unknowns whose coefficients are included in the basis minor are called dependent, and unknowns whose coefficients are not included in the basis minor are called free. Note that the choice of dependent and free unknowns is not always unambiguous.
  3. We cross out those equations of the system whose coefficients are not included in the basis minor, since they are consequences of the others (according to the theorem on the basis minor).
  4. We move the terms of the equations containing free unknowns to the right side. As a result, we obtain a system of r equations with r unknowns, equivalent to the given one, the determinant of which is nonzero.
  5. The resulting system is solved in one of the following ways: Cramer's method, the inverse matrix method or the Jordan-Gauss method. Relations are found that express the dependent variables through the free ones (a small code sketch of this procedure is given after the list).
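A rough sketch of steps 1-5 in code (SymPy assumed; the system is made up): the reduced row echelon form directly exposes the dependent (pivot) unknowns and the free unknowns.

```python
import sympy as sp

# made-up system: 3 equations, 4 unknowns
A = sp.Matrix([[1, 1, 1, 1],
               [1, 2, 3, 4],
               [2, 3, 4, 5]])   # third row = first + second: a redundant equation
b = sp.Matrix([10, 20, 30])

assert A.rank() == A.row_join(b).rank()        # step 1: Kronecker-Capelli check

rref, pivots = A.row_join(b).rref()            # steps 2-5 in one transformation
dependent = list(pivots)                       # column indices of the dependent unknowns
free = [j for j in range(A.cols) if j not in pivots]
print(rref)          # each pivot row expresses a dependent unknown through the free ones
print(dependent, free)
```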

We continue to deal with systems of linear equations. So far we have considered systems that have a unique solution. Such systems can be solved in any way: by the substitution (“school”) method, by Cramer's formulas, by the matrix method, or by the Gauss method. However, in practice two more cases are widespread:

1) the system is inconsistent (has no solutions);

2) the system has infinitely many solutions.

For these systems, the most universal of all solution methods is used: the Gauss method. In fact, the “school” method will also lead to the answer, but in higher mathematics it is customary to use the Gaussian method of sequential elimination of unknowns. Those who are not familiar with the Gauss method algorithm should first study the lesson on the Gauss method.

The elementary matrix transformations themselves are exactly the same, the difference will be in the ending of the solution. First, let's look at a couple of examples when the system has no solutions (inconsistent).

Example 1

What immediately catches your eye about this system? The number of equations is less than the number of variables. There is a theorem that states: “If the number of equations in the system is less than the number of variables, then the system is either inconsistent or has infinitely many solutions.” And all that remains is to find out.

The beginning of the solution is completely ordinary - we write down the extended matrix of the system and, using elementary transformations, bring it to a stepwise form:

(1). On the top left step we need to get (+1) or (–1). There are no such numbers in the first column, so rearranging the rows will not give anything. We will have to create the unit ourselves, and this can be done in several ways. This is what we did: to the first line we add the third line, multiplied by (–1).

(2). Now we get two zeros in the first column. To the second line we add the first line, multiplied by 3. To the third line we add the first, multiplied by 5.

(3). After a transformation has been completed, it is always advisable to check whether the resulting lines can be simplified. They can: we divide the second line by 2, at the same time getting the desired (–1) on the second step, and divide the third line by (–3).



(4). Add the second line to the third line. Probably everyone noticed the bad line that resulted from the elementary transformations: a line with zero coefficients and a nonzero free term.

It is clear that this cannot be so.

Indeed, let us rewrite the resulting matrix back as a system of linear equations:

If, as a result of elementary transformations, a line of the form (0 0 … 0 | λ) is obtained, where λ is a number other than zero, then the system is inconsistent (has no solutions).

How to write down the ending of a task? You need to write down the phrase:

“As a result of elementary transformations, a line of the form (0 0 … 0 | λ) was obtained, where λ ≠ 0.” Answer: “The system has no solutions (it is inconsistent).”

Please note that in this case there is no reversal of the Gaussian algorithm, there are no solutions and there is simply nothing to find.

Example 2

Solve a system of linear equations

This is an example for you to solve on your own. Full solution and answer at the end of the lesson.

We remind you again that your solution may differ from our solution; the Gaussian method does not specify an unambiguous algorithm; the order of actions and the actions themselves must be guessed in each case independently.

Another technical feature of the solution: elementary transformations can be stopped at once, as soon as a line of the form (0 0 … 0 | λ) with λ ≠ 0 appears. Let's consider a conditional example: suppose that after the first transformation the following matrix is obtained

.

This matrix has not yet been reduced to echelon form, but there is no need for further elementary transformations, since a line of the form (0 0 … 0 | λ) with λ ≠ 0 has appeared. The answer can be given immediately: the system is inconsistent.

When a system of linear equations has no solutions, it is almost a gift to the student, due to the fact that a short solution is obtained, sometimes literally in 2-3 steps. But everything in this world is balanced, and a problem in which the system has infinitely many solutions is just longer.

Example 3:

Solve a system of linear equations

There are 4 equations and 4 unknowns, so the system can either have a single solution, or have no solutions, or have infinitely many solutions. Be that as it may, the Gaussian method will in any case lead us to the answer. This is its versatility.

The beginning is again standard. Let us write down the extended matrix of the system and, using elementary transformations, bring it to a stepwise form:

That's all, and you were afraid.

(1). Please note that all numbers in the first column are divisible by 2, so we are happy with two on the top left step. To the second line we add the first line multiplied by (–4). To the third line we add the first line multiplied by (–2). To the fourth line we add the first line, multiplied by (–1).

Attention! Many may be tempted to subtract the first line from the fourth line. This can be done, but it is not necessary; experience shows that the probability of a calculation error increases several times. We just add: to the fourth line we add the first line multiplied by (–1), and that's it!

(2). The last three lines are proportional, two of them can be deleted. Here again we need to show increased attention, but are the lines really proportional? To be on the safe side, it would be a good idea to multiply the second line by (–1), and divide the fourth line by 2, resulting in three identical lines. And only after that remove two of them. As a result of elementary transformations, the extended matrix of the system is reduced to a stepwise form:

When writing a task in a notebook, it is advisable to make the same notes in pencil for clarity.

Let us rewrite the corresponding system of equations:

There is no sign of an “ordinary” unique solution of the system here. There is also no bad line of the form (0 0 … 0 | λ) with λ ≠ 0. This means that this is the third remaining case: the system has infinitely many solutions.

An infinite set of solutions to a system is briefly written in the form of the so-called general solution of the system.

We will find the general solution of the system using the reverse pass of the Gauss method. For systems of equations with an infinite set of solutions, new concepts appear: “basic variables” and “free variables”. First let's determine which variables are basic and which are free. It is not necessary to go into the linear algebra terminology in detail; it is enough to remember that there are basic variables and free variables.

Basic variables always “sit” strictly on the steps of the matrix. In this example the basic variables are x 1 and x 3 .

Free variables are all the remaining variables that did not receive a step. In our case there are two of them: x 2 and x 4 are the free variables.

Now we need to express all the basic variables only through the free variables. The reverse pass of the Gaussian algorithm traditionally works from the bottom up. From the second equation of the system we express the basic variable x 3 :

Now look at the first equation. First we substitute the found expression for x 3 into it:

It remains to express the basic variable x 1 through the free variables x 2 and x 4 :

In the end we got what we needed: all the basic variables ( x 1 and x 3 ) are expressed only through the free variables ( x 2 and x 4 ):

Actually, the general solution is ready:

.

How do we write the general solution correctly? First of all, the free variables are written into the general solution “by themselves” and strictly in their places. In this case the free variables x 2 and x 4 should be written in the second and fourth positions:

.

The resulting expressions for the basic variables x 1 and x 3 obviously need to be written in the first and third positions:

From the general solution of the system one can obtain infinitely many particular solutions. It's very simple: the free variables x 2 and x 4 are called free because they can be given any finite values. The most popular values are zeros, since a particular solution is easiest to obtain with them.

Substituting ( x 2 = 0; x 4 = 0) into the general solution, we obtain one of the particular solutions:

This is a particular solution corresponding to the free variables with the values ( x 2 = 0; x 4 = 0).

Another convenient pair of values is ones; let's substitute ( x 2 = 1 and x 4 = 1) into the general solution:

i.e. (-1; 1; 1; 1) is another particular solution.

It is easy to see that the system of equations has infinitely many solutions, since we can give the free variables any values.

Each particular solution must satisfy each equation of the system. This is the basis of a “quick” check of the correctness of the solution. Take, for example, the particular solution (-1; 1; 1; 1) and substitute it into the left-hand side of each equation of the original system:

Everything must come together. And with any particular solution you receive, everything should also agree.

Strictly speaking, checking a particular solution is sometimes deceiving, i.e. some particular solution may satisfy each equation of the system, but the general solution itself is actually found incorrectly. Therefore, first of all, the verification of the general solution is more thorough and reliable.

How to check the resulting general solution ?

It's not difficult, but it requires somewhat lengthy transformations. We need to take the expressions for the basic variables, in this case for x 1 and x 3 , and substitute them into the left-hand side of each equation of the system.

To the left side of the first equation of the system:

The right side of the initial first equation of the system is obtained.

To the left side of the second equation of the system:

The right side of the initial second equation of the system is obtained.

And then into the left-hand sides of the third and fourth equations of the system. This check takes longer, but it guarantees 100% correctness of the general solution. In addition, some tasks require a check of the general solution.
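A sketch of such a check in SymPy (the system and the general solution below are made up for illustration, not taken from the example above): we substitute the expressions for the basic variables into every equation and simplify.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

# a made-up system with infinitely many solutions
eqs = [sp.Eq(x1 - x2 + x3 + x4, 2),
       sp.Eq(2*x1 - 2*x2 + x3 + 3*x4, 3)]

# its general solution: x2 and x4 are free, x1 and x3 are the basic variables
general = {x1: 1 + x2 - 2*x4, x3: 1 + x4}

# substitute the expressions for the basic variables into every equation and simplify
for eq in eqs:
    print(sp.simplify(eq.subs(general)))   # True means the equation turns into an identity
```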

Example 4:

Solve the system using the Gaussian method. Find the general solution and two particular ones. Check the general solution.

This is an example for you to solve on your own. Here, by the way, again the number of equations is less than the number of unknowns, which means it is immediately clear that the system will either be inconsistent or have an infinite number of solutions.

Example 5:

Solve a system of linear equations. If the system has infinitely many solutions, find two particular solutions and check the general solution

Solution: Let's write down the extended matrix of the system and, using elementary transformations, bring it to a stepwise form:

(1). Add the first line to the second line. To the third line we add the first line multiplied by 2. To the fourth line we add the first line multiplied by 3.

(2). To the third line we add the second line, multiplied by (–5). To the fourth line we add the second line, multiplied by (–7).

(3). The third and fourth lines are the same, we delete one of them. This is such a beauty:

The variables that sit on the steps are the basic variables.

There is only one free variable here that did not get a step: x 4 .

(4). Reverse move. Let's express the basic variables through a free variable:

From the third equation:

Let's consider the second equation and substitute the found expression into it:

, , ,

Let's consider the first equation and substitute the found expressions and into it:

Thus, the general solution with one free variable x 4:

Once again, how did it turn out? The free variable x 4 sits alone in its rightful fourth place. The resulting expressions for the basic variables are also in their places.

Let us immediately check the general solution.

We substitute the basic variables , , into the left side of each equation of the system:

The corresponding right-hand sides of the equations are obtained, thus the correct general solution is found.

Now from the found general solution we obtain two particular solutions. All variables are expressed here through a single free variable x 4 . No need to rack your brains.

Let x 4 = 0; then we obtain the first particular solution.

Let x 4 = 1; then we obtain another particular solution.

Answer: General solution: . Particular solutions:

And .

Example 6:

Find the general solution to the system of linear equations.

We have already checked the general solution, so the answer can be trusted. Your solution may differ from ours; the main thing is that the general solutions coincide. Many people have probably noticed an unpleasant moment in these solutions: very often, during the reverse pass of the Gauss method, we had to fiddle with ordinary fractions. In practice this is indeed the case; cases where there are no fractions are much less common. Be prepared mentally and, most importantly, technically.

Let us dwell on the features of the solution that were not found in the solved examples. The general solution of the system may sometimes include a constant (or constants).

For example, in a general solution one of the basic variables may be equal to a constant number, say 5. There is nothing exotic about this; it happens. Obviously, in this case any particular solution will contain a five in the first position.

Rarely, but there are systems in which the number of equations is greater than the number of variables. However, the Gaussian method works in the harshest conditions. You should calmly reduce the extended matrix of the system to a stepwise form using a standard algorithm. Such a system may be inconsistent, may have infinitely many solutions, and, oddly enough, may have a single solution.

Let us repeat our advice - in order to feel comfortable when solving a system using the Gaussian method, you should get good at solving at least a dozen systems.

Solutions and answers:

Example 2:

Solution:Let us write down the extended matrix of the system and, using elementary transformations, bring it to a stepwise form.

Elementary transformations performed:

(1) The first and third lines have been swapped.

(2) The first line was added to the second line, multiplied by (–6). The first line was added to the third line, multiplied by (–7).

(3) The second line was added to the third line, multiplied by (–1).

As a result of elementary transformations, a line of the form (0 0 … 0 | λ) is obtained, where λ ≠ 0. This means the system is inconsistent. Answer: there are no solutions.

Example 4:

Solution:Let us write down the extended matrix of the system and, using elementary transformations, bring it to a stepwise form:

Conversions performed:

(1). The first line, multiplied by 2, was added to the second line. The first line, multiplied by 3, was added to the third line.

There is no unit for the second step , and transformation (2) is aimed at obtaining it.

(2). The third line was added to the second line, multiplied by –3.

(3). The second and third lines were swapped (we moved the resulting –1 to the second step)

(4). The third line was added to the second line, multiplied by 3.

(5). The first two lines had their sign changed (multiplied by –1), the third line was divided by 14.

Reverse:

(1). The basic variables are those that stand on the steps, and the free variables are those that did not get a step.

(2). Let's express the basic variables in terms of free variables:

From the third equation: .

(3). Consider the second equation:, private solutions:

Answer: General solution:

Complex numbers

In this section we will introduce the concept of a complex number and consider the algebraic, trigonometric and exponential forms of a complex number. We will also learn how to perform operations with complex numbers: addition, subtraction, multiplication, division, raising to a power and extracting roots.

To master complex numbers, no special knowledge from a higher mathematics course is required, and the material is accessible even to schoolchildren. It is enough to be able to perform algebraic operations with “ordinary” numbers, and remember trigonometry.

First, let's recall the “ordinary” numbers. In mathematics they are called the set of real numbers and are denoted by the letter R (often written in bold). All real numbers sit on the familiar number line:

The group of real numbers is very diverse - there are whole numbers, fractions, and irrational numbers. In this case, each point on the number axis necessarily corresponds to some real number.