Tuesday, July 20, 2010

ITERATIVE METHODS FOR SOLUTION OF SYSTEMS OF LINEAR EQUATIONS


In general, these methods are based on the fixed-point method: the system is rewritten in the form x = g(x), and the process is iterated by substituting each approximation back into the formula to produce the next one.

Compared with direct methods, iterative methods do not guarantee a better approximation; however, they are more efficient when working with large matrices.

In computational mathematics, an iterative method solves a problem (such as an equation or a system of equations) by successive approximations to the solution, starting from an initial estimate. This approach contrasts with direct methods, which attempt to solve the problem in a single pass (for example, solving a system of equations Ax = b by finding the inverse of the matrix A). Iterative methods are useful for solving problems involving a large number of variables (sometimes in the millions), where direct methods would be prohibitively expensive even with the best computing power available.


  • JACOBI METHOD

The basis of the method is to construct a convergent sequence defined iteratively; the limit of this sequence is precisely the solution of the system. For practical purposes, the algorithm is stopped after a finite number of steps, which yields an approximation of the solution x of the system.
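As a rough sketch of this iteration (the function name `jacobi`, the sample system, and the stopping tolerance are illustrative choices of ours, not from the original), each component of the new iterate is computed from the previous iterate only; for a strictly diagonally dominant matrix the sequence is guaranteed to converge:

```python
def jacobi(A, b, tol=1e-10, max_iter=100):
    """Jacobi iteration for Ax = b; A assumed strictly diagonally dominant."""
    n = len(A)
    x = [0.0] * n  # initial estimate x^0
    for _ in range(max_iter):
        # Every component of the new iterate uses only the previous iterate.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new  # successive iterates agree: stop early
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 16.0]
print(jacobi(A, b))  # approaches the exact solution (29/18, 23/9)
```

Stopping after a finite number of iterations, as the text notes, gives an approximation rather than the exact solution.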


  • GAUSS SEIDEL METHOD


The Gauss and Cholesky methods are direct or finite methods: after a finite number of operations, in the absence of rounding errors, they produce the exact solution x of the system Ax = b. The Gauss-Seidel method belongs to the so-called indirect or iterative methods. These start with x^0 = (x^0_1, x^0_2, ..., x^0_n), an initial approximation of the solution. From x^0 a new approximation of the solution is built, x^1 = (x^1_1, x^1_2, ..., x^1_n).

From x^1 one builds x^2 (here the superscript indicates the iteration, not a power). Continuing in this way, a sequence of vectors {x^k} is constructed, with the aim, not always guaranteed, that lim_{k→∞} x^k = x.
Generally, indirect methods are a good option when the matrix is very large and sparse, that is, when the number of nonzero elements is small compared with n^2, the total number of elements of A.

In these cases an appropriate data structure should be used that stores only the nonzero elements. Each iteration of the Gauss-Seidel method consists of n sub-iterations: in the first sub-iteration only x1 is modified, while the other coordinates x2, x3, ..., xn are left unchanged; in general, the i-th sub-iteration modifies only xi.
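The n sub-iterations per sweep can be sketched as follows (a minimal illustrative version, with names and the tolerance chosen by us); unlike Jacobi, each sub-iteration immediately reuses the components already updated in the current sweep:

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=100):
    """Gauss-Seidel iteration for Ax = b; A assumed strictly diagonally dominant."""
    n = len(A)
    x = [0.0] * n  # initial approximation x^0
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):  # the n sub-iterations: the i-th modifies only x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            change = max(change, abs(new_xi - x[i]))
            x[i] = new_xi  # later sub-iterations see this new value at once
        if change < tol:
            break
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 16.0]
print(gauss_seidel(A, b))  # approaches the exact solution (29/18, 23/9)
```

For a truly sparse matrix, the inner sum would run only over the stored nonzero entries of row i, which is where the sparse data structure mentioned above pays off.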




DIRECT METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS



In this lesson we study the solution of a Cramer system Ax = b, which means that A is invertible (also called regular or nonsingular), that is, det(A) ≠ 0, using a direct method. A direct method is any method that, in the absence of errors, obtains the exact solution after a finite number of steps. In practice this does not happen in general, because of the inevitable rounding errors.


  • Gauss-Jordan Elimination

  • In mathematics, Gaussian elimination and Gauss-Jordan elimination, named after Carl Friedrich Gauss and Wilhelm Jordan, are linear algebra algorithms for determining the solutions of a system of linear equations and for finding matrix inverses. A system of equations is solved by the Gaussian method when its solutions are obtained by reducing the given system to an equivalent one in which each equation has one fewer unknown than the previous one. When this process is applied, the resulting matrix is said to be in "echelon" (staircase) form.

  • ALGORITHM GAUSS JORDAN

  • 1. Go to the leftmost column that is not all zeros
    2. If the first row has a zero in this column, swap it with a row that does not
    3. Then obtain zeros below this leading entry by adding suitable multiples of the top row to the rows below it
    4. Cover the top row and repeat the process with the remaining submatrix; repeat with the remaining rows (at this point the matrix is in echelon form)
    5. Starting with the last nonzero row, work upward: in each row obtain a leading 1 and introduce zeros above it by adding suitable multiples of that row to the corresponding rows

    An interesting variant of Gaussian elimination is what we call Gauss-Jordan elimination (after Gauss and the aforementioned Wilhelm Jordan). It consists of performing steps 1 to 4 (called the forward pass) and then, once these are completed, continuing with step 5 (the backward pass), which leaves the matrix in reduced echelon form.
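    The steps above can be sketched in code (an illustrative version of ours, with partial pivoting for step 2 and the forward and backward passes merged column by column):

```python
def gauss_jordan_solve(A, b):
    """Solve Ax = b by reducing the augmented matrix [A | b] to reduced echelon form."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Step 2: swap in the row with the largest pivot in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Step 5: scale the pivot row so its leading entry is 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Steps 3 and 5: introduce zeros below and above the pivot.
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]  # the last column now holds x

print(gauss_jordan_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # → [1.0, 3.0]
```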

  • LU DECOMPOSITION
Its name is derived from the English words "Lower" and "Upper", which in Spanish translate as "Inferior" and "Superior." Studying the process followed in the LU decomposition makes it possible to understand the reason for this name, considering how the original matrix is decomposed into two triangular matrices.
  • LU decomposition involves only operations on the coefficient matrix [A], providing an efficient means of computing the inverse matrix or solving systems of linear equations.

    First, the matrix [L] and the matrix [U] must be obtained.

    [L] is a lower triangular matrix with ones on the diagonal. [U] is an upper triangular matrix whose diagonal entries do not necessarily have to be ones.

    The first step is to decompose or transform [A] into [L] and [U], that is, to obtain the lower triangular matrix [L] and the upper triangular matrix [U].
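    This decomposition (in the Doolittle form described above, with ones on the diagonal of [L]; the example matrix and pivoting-free layout are illustrative choices of ours) can be sketched as:

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U,
    L lower triangular with unit diagonal, U upper triangular."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):      # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # fill column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
print(L)  # → [[1.0, 0.0], [1.5, 1.0]]
print(U)  # → [[4.0, 3.0], [0.0, -1.5]]
```

Once [L] and [U] are known, Ax = b is solved by forward substitution (Ly = b) followed by back substitution (Ux = y), which is what makes the decomposition efficient when many right-hand sides share the same [A].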



  • INVERSE MATRIX

The transpose of a matrix A, represented by AT, is the matrix obtained by interchanging its rows and columns. The inverse is a different notion, defined as follows.

In mathematics, especially in linear algebra, a square matrix A of order n is said to be invertible, nonsingular, nondegenerate or regular if there exists another square matrix of order n, called the inverse matrix of A and represented as A-1, such that A·A-1 = A-1·A = I, the identity matrix of order n.
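One standard way to compute A-1 is to apply the Gauss-Jordan reduction described earlier to the augmented matrix [A | I]: when the left half becomes the identity, the right half is the inverse. A minimal sketch (function name and example matrix are our own illustrative choices):

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan reduction of [A | I]."""
    n = len(A)
    # Augment A with the identity matrix on the right.
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]  # leading 1 in the pivot row
        for r in range(n):
            if r != col:  # zero out the rest of the column
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is now A^{-1}

A = [[4.0, 7.0], [2.0, 6.0]]
print(inverse(A))  # approximately [[0.6, -0.7], [-0.2, 0.4]]
```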


Source

  • Shen Kangshen et al. (ed.) (1999). Nine Chapters of the Mathematical Art, Companion and Commentary, Oxford University Press. Cited by Otto Bretscher (2005), Linear Algebra and Its Applications, Thomson Brooks/Cole, p. 46.