
p.18

Matrix Operations: Addition and Multiplication

The outer product of two vectors a ∈ R^m and b ∈ R^n, denoted ab⊤, is the m × n matrix with entries (ab⊤)ij = ai bj, i.e., ab⊤ ∈ R^m×n.
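A minimal NumPy sketch of this definition (the vectors are arbitrary examples, not from the text):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # a in R^3
b = np.array([4.0, 5.0])        # b in R^2

# Outer product ab^T: a 3 x 2 matrix with entries a_i * b_j.
M = np.outer(a, b)
print(M.shape)   # (3, 2)
print(M[2, 1])   # 15.0, i.e. a_3 * b_2
```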

p.20

Linear Independence and Dependence

A linear combination of a finite number of vectors x1, ..., xk in a vector space V is an expression of the form v = λ1x1 + ... + λkxk, where λ1, ..., λk are scalars in R.

p.20

Linear Independence and Dependence

Vectors x1, ..., xk in a vector space V are linearly independent if the only solution to the equation 0 = λ1x1 + ... + λkxk is the trivial solution, where all coefficients λi are zero.

p.3

Matrix Operations: Addition and Multiplication

The dot product of two vectors a and b is the scalar obtained by multiplying corresponding entries and summing them up, a⊤b = a1b1 + ... + anbn, commonly denoted by a⊤b or ⟨a, b⟩.
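As an illustrative sketch (example values chosen here, not from the text):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a^T b = a_1*b_1 + a_2*b_2 + a_3*b_3 = 4 + 10 + 18
s = a @ b
print(s)  # 32.0
```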

p.8

Gaussian Elimination for Solving Linear Equations

The general solution is the set of all solutions to a system of linear equations, expressed as a particular solution plus a linear combination of solutions to the homogeneous equation Ax = 0.

p.11

Gaussian Elimination for Solving Linear Equations

An equation system is in reduced row-echelon form if it is in row-echelon form, every pivot is 1, and the pivot is the only nonzero entry in its column.

p.11

Gaussian Elimination for Solving Linear Equations

Gaussian elimination is an algorithm that performs elementary transformations to bring a system of linear equations into reduced row-echelon form.
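As a sketch of this process, SymPy's `Matrix.rref` performs the same reduction; the small system below is made up for illustration:

```python
from sympy import Matrix

# Augmented matrix [A | b] for the system
#    x + 2y = 5
#   3x + 4y = 6
aug = Matrix([[1, 2, 5],
              [3, 4, 6]])

# rref() returns the reduced row-echelon form and the pivot columns.
rref_matrix, pivot_cols = aug.rref()
# rref_matrix rows: [1, 0, -4], [0, 1, 9/2]  ->  x = -4, y = 9/2
```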

p.13

Gaussian Elimination for Solving Linear Equations

Reduced row-echelon form allows us to easily read out solutions or the inverse of a matrix from the augmented matrix representation.

p.15

Algorithms for Solving Linear Systems

The Moore-Penrose pseudo-inverse is a generalization of the matrix inverse that can be used to solve linear equations even when the matrix is not invertible. For a matrix A with linearly independent columns it is given by A⁺ = (A⊤A)⁻¹A⊤, and x = A⁺b is the least-squares solution to the equation Ax = b.
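A hedged NumPy sketch with made-up example data, assuming A has linearly independent columns:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (example data).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

# Least-squares solution via the pseudo-inverse; np.linalg.pinv computes
# A^+, which equals (A^T A)^{-1} A^T here since the columns of A are
# linearly independent.
x = np.linalg.pinv(A) @ b

# Same result from the normal equations A^T A x = A^T b:
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_normal))  # True
```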

p.14

Inverse and Transpose of Matrices

The inverse of a matrix A, denoted A⁻¹, is a matrix that, when multiplied by A, yields the identity matrix I.

p.13

Vector Spaces and Subspaces

The kernel, or null space, is the set of all solutions to the homogeneous equation Ax = 0; it forms a vector subspace, the solution space of the system.
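A small SymPy sketch (example matrix invented for illustration): `Matrix.nullspace` returns a basis of the kernel.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # rank 1, so the kernel of A is 2-dimensional

basis = A.nullspace()      # basis vectors of {x : Ax = 0}
print(len(basis))          # 2
for v in basis:
    assert A * v == Matrix([0, 0])   # each basis vector solves Ax = 0
```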

p.9

Elementary Transformations in Linear Algebra

Swapping rows in a matrix involves exchanging the positions of two rows, which is a common elementary transformation used in solving systems of linear equations.

p.9

Elementary Transformations in Linear Algebra

Elementary transformations are operations applied to the rows of a matrix, including swapping rows, multiplying a row by a non-zero constant, and adding or subtracting rows from one another.

p.18

Vector Spaces and Subspaces

In the vector space V = R m × n, addition is defined elementwise for matrices A and B, resulting in a matrix where each element is the sum of the corresponding elements of A and B.

p.5

Inverse and Transpose of Matrices

The transpose of a matrix A, denoted as A⊤, is formed by swapping its rows and columns, resulting in a matrix B where bij = aji.

p.20

Vector Spaces and Subspaces

A basis is a minimal set of linearly independent vectors in a vector space whose linear combinations represent every vector in that space.

p.20

Vector Spaces and Subspaces

The closure property guarantees that the sum of any two vectors and the product of any vector with a scalar will result in another vector within the same vector space.

p.4

Matrix Operations: Addition and Multiplication

Distributivity refers to the property that (A + B)C = AC + BC and A(C + D) = AC + AD for matrices A, B, C, and D.

p.16

Elementary Transformations in Linear Algebra

Associativity states that for all x, y, z in G, (x ⊗ y) ⊗ z = x ⊗ (y ⊗ z).

p.16

Elementary Transformations in Linear Algebra

(N0, +) is not a group because, although it has a neutral element (0), it lacks inverse elements for all its elements.

p.2

Matrix Representation of Linear Equations

A matrix is an m · n-tuple of elements a_ij, ordered according to a rectangular scheme consisting of m rows and n columns.

p.13

Gaussian Elimination for Solving Linear Equations

The augmented matrix [A | I_n] represents the simultaneous linear equations AX = I_n; bringing it into reduced row-echelon form [I_n | A⁻¹] yields the inverse of matrix A.

p.3

Matrix Operations: Addition and Multiplication

The Hadamard product is an element-wise multiplication of two matrices, where c_ij is defined as a_ij * b_ij, differing from standard matrix multiplication.

p.3

Matrix Representation of Linear Equations

The identity matrix I_n in R^n×n is a square matrix with ones on the diagonal and zeros elsewhere, serving as the multiplicative identity in matrix multiplication.

p.7

Matrix Representation of Linear Equations

The notation Ax = b represents a matrix equation where A is a matrix of coefficients, x is a vector of unknowns, and b is a vector of constants, used to compactly express a system of linear equations.

p.8

Gaussian Elimination for Solving Linear Equations

Gaussian elimination is a constructive algorithmic method used to transform any system of linear equations into a simpler form, facilitating the solution process.

p.12

Elementary Transformations in Linear Algebra

A matrix A is in reduced row-echelon form if each leading entry of a row is 1, all entries in the column above and below a leading 1 are 0, and any rows consisting entirely of zeros are at the bottom of the matrix.

p.12

Gaussian Elimination for Solving Linear Equations

Pivot columns are the columns of a matrix that contain the leading 1s in the reduced row-echelon form, indicating the positions of the basic variables in the system of equations.

p.5

Matrix Operations: Addition and Multiplication

When a matrix A is multiplied by a scalar λ, each element of the matrix is scaled by λ, resulting in a new matrix K where Kij = λaij.

p.1

Matrix Representation of Linear Equations

We collect the coefficients into vectors and then collect these vectors into matrices to write the system in a compact notation.

p.17

Vector Spaces and Subspaces

The neutral element of (V, +) is the zero vector 0 = [0, ..., 0]ᵀ.

p.10

Elementary Transformations in Linear Algebra

The leading coefficient of a row (the first nonzero number from the left) is called the pivot.

p.2

Matrix Operations: Addition and Multiplication

The sum of two matrices A and B is defined as the element-wise sum, resulting in a new matrix where each element is the sum of the corresponding elements of A and B.

p.7

Systems of Linear Equations

A system of linear equations is a collection of one or more linear equations involving the same set of variables, typically expressed in the form Ax = b, where A is a matrix of coefficients, x is a vector of unknowns, and b is a vector of constants.

p.18

Vector Spaces and Subspaces

A vector space V = R^n is defined with operations of addition and scalar multiplication, where addition is performed elementwise and scalar multiplication scales each component of the vector.

p.19

Vector Spaces and Subspaces

For U to be a subspace of V, it must be non-empty (specifically contain the zero vector), and it must be closed under addition and scalar multiplication.

p.15

Algorithms for Solving Linear Systems

Iterative methods are techniques used to solve systems of linear equations indirectly, such as the Richardson method, Jacobi method, Gauss-Seidel method, and Krylov subspace methods. They involve setting up an iteration that reduces the residual error in each step, converging to the solution.

p.14

Matrix Representation of Linear Equations

An augmented matrix is a matrix that includes the coefficients of a system of linear equations along with the constants from the equations.

p.6

Matrix Representation of Linear Equations

A system of linear equations can be compactly represented in matrix form as Ax = b, where A is the coefficient matrix, x is the vector of variables, and b is the vector of constants.

p.1

Geometric Interpretation of Solutions

The solution space can be geometrically interpreted as the intersection of two lines, where each linear equation represents a line.

p.4

Inverse and Transpose of Matrices

The inverse of a 2 × 2 matrix A exists if a11a22 − a12a21 ≠ 0, and is then given by A⁻¹ = 1/(a11a22 − a12a21) · [[a22, −a12], [−a21, a11]].
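A quick numerical check of this formula against NumPy's general inverse (example matrix invented here):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # example matrix

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # 4*6 - 7*2 = 10
assert det != 0   # the formula only applies when the determinant is nonzero

A_inv = (1.0 / det) * np.array([[ A[1, 1], -A[0, 1]],
                                [-A[1, 0],  A[0, 0]]])
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```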

p.17

Vector Spaces and Subspaces

The elements x ∈ V are called vectors.

p.16

Elementary Transformations in Linear Algebra

(Z, +) is an Abelian group because it satisfies all group properties including closure, associativity, a neutral element, and inverse elements.

p.16

Elementary Transformations in Linear Algebra

The identity matrix In is the neutral element with respect to matrix multiplication in (Rn×n, ·).

p.9

Matrix Representation of Linear Equations

An augmented matrix is a matrix that represents a system of linear equations, combining the coefficients of the variables and the constants from the equations into a single matrix, typically in the form [A | b].

p.5

Inverse and Transpose of Matrices

The determinant of a 2 × 2-matrix is a scalar value that can be used to check whether the matrix is invertible.

p.15

Gaussian Elimination for Solving Linear Equations

Gaussian elimination is a method for solving systems of linear equations, computing determinants, checking linear independence, and finding the inverse and rank of matrices. It is an intuitive and constructive approach but can be impractical for very large systems due to its cubic scaling in arithmetic operations.

p.11

Gaussian Elimination for Solving Linear Equations

To find solutions for Ax = 0, one looks at the non-pivot columns and expresses them as a linear combination of the pivot columns.

p.7

Elementary Transformations in Linear Algebra

Each column of the coefficient matrix collects the coefficients of one variable across all the linear equations; solving the system amounts to finding a linear combination of these columns that equals the right-hand side.

p.19

Vector Spaces and Subspaces

For every vector space V, the trivial subspaces are V itself and the set containing only the zero vector, {0}.

p.5

Inverse and Transpose of Matrices

A square matrix is an n × n matrix, meaning it has the same number of rows and columns.

p.4

Inverse and Transpose of Matrices

The inverse of a matrix A is another matrix B such that AB = I_n and BA = I_n, where I_n is the identity matrix.

p.16

Elementary Transformations in Linear Algebra

A group is a set G with an operation ⊗ defined on G such that it satisfies closure, associativity, the existence of a neutral element, and the existence of an inverse element.

p.10

Gaussian Elimination for Solving Linear Equations

A matrix is in row-echelon form if all rows that contain only zeros are at the bottom of the matrix, and in every nonzero row the first nonzero entry from the left (the pivot) lies strictly to the right of the pivot of the row above it.

p.16

Elementary Transformations in Linear Algebra

The neutral element in (Rn, +) is the zero vector (0, ..., 0).

p.18

Matrix Operations: Addition and Multiplication

The inner product, also known as the scalar or dot product, is a multiplication of two vectors, denoted as a⊤b, resulting in a scalar value.

p.7

Particular and General Solution

A particular solution is a specific solution to a system of linear equations that satisfies all the equations in the system, often found by substituting known values into the equations.

p.12

Gaussian Elimination for Solving Linear Equations

A homogeneous system of linear equations is a system of equations of the form Ax = 0, where A is a matrix and x is a vector of variables.

p.4

Matrix Representation of Linear Equations

The identity matrix is an n × n matrix containing 1 on the diagonal and 0 everywhere else.

p.14

Inverse and Transpose of Matrices

A matrix is invertible if there exists another matrix such that their product is the identity matrix; this is only possible if the matrix is square and has full rank.

p.12

Algorithms for Solving Linear Systems

The Minus-1 Trick reads out the solutions of a homogeneous system Ax = 0 from the reduced row-echelon form: the matrix is extended with rows containing −1 on the diagonal in the non-pivot columns, and the columns containing the −1 entries then form a basis of the solution space.

p.1

Geometric Interpretation of Solutions

When the lines are parallel, the solution set is empty, meaning there are no common solutions that satisfy all equations.

p.10

Linear Independence and Dependence

The variables that are not corresponding to the pivots in the row-echelon form are called free variables.

p.11

Gaussian Elimination for Solving Linear Equations

A particular solution is a specific solution to a system of linear equations that satisfies the equation, often expressed using pivot columns.

p.19

Vector Spaces and Subspaces

A vector subspace U of a vector space V is a subset that is itself a vector space under the operations defined in V, satisfying closure under addition and scalar multiplication, and containing the zero vector.

p.5

Inverse and Transpose of Matrices

An inverse matrix A⁻¹ of a matrix A is such that the product AB = I = BA, where I is the identity matrix.

p.20

Linear Independence and Dependence

Vectors x1, ..., xk are linearly dependent if there exists a non-trivial linear combination such that 0 = λ1x1 + ... + λkxk, with at least one λi not equal to zero.
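Linear dependence can be checked numerically via the matrix rank, as in this NumPy sketch (example vectors constructed to be dependent):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 + 2 * x2               # dependent on x1, x2 by construction

# Stack the vectors as columns: they are linearly independent
# iff the rank of the matrix equals the number of vectors.
X = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(X))  # 2 < 3, so the vectors are dependent
```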

p.6

Matrix Operations: Addition and Multiplication

Distributivity is the property that states (λ + ψ)C = λC + ψC, allowing the distribution of scalar addition over matrix multiplication.

p.7

Particular and General Solution

A system with more unknowns than equations has infinitely many solutions if it is consistent, since the free variables can then take arbitrary values, allowing multiple combinations of variable values that satisfy all equations.

p.14

Algorithms for Solving Linear Systems

Linear regression is used to find approximate solutions to systems of linear equations when an exact solution does not exist.

p.4

Inverse and Transpose of Matrices

A matrix is singular if it does not possess an inverse, meaning it is noninvertible.

p.16

Elementary Transformations in Linear Algebra

Closure means that for all x, y in G, the result of the operation x ⊗ y is also in G.

p.10

Linear Independence and Dependence

The variables corresponding to the pivots in the row-echelon form are called basic variables.

p.10

Algorithms for Solving Linear Systems

The general solution captures the set of all possible solutions to the system of equations.

p.8

Elementary Transformations in Linear Algebra

Elementary transformations are operations applied to a system of linear equations that maintain the solution set while transforming the system into a simpler form.

p.14

Gaussian Elimination for Solving Linear Equations

Gaussian elimination is a method for solving systems of linear equations by transforming the augmented matrix into reduced row-echelon form.

p.3

Matrix Operations: Addition and Multiplication

Matrices can only be multiplied if their neighboring dimensions match; specifically, an n×k matrix A can be multiplied by a k×m matrix B.

p.18

Inverse and Transpose of Matrices

The transpose of a vector x, denoted as x⊤, converts a column vector into a row vector.

p.19

Vector Spaces and Subspaces

The solution set of a homogeneous system of linear equations Ax = 0 is a subspace of R^n.

p.1

Geometric Interpretation of Solutions

The solution set is represented as the intersection of the lines defined by each linear equation on the x1x2-plane.

p.17

Vector Spaces and Subspaces

A real-valued vector space V = (V, +, ·) is a set V with two operations + and ·, where (V, +) is an Abelian group and the operations satisfy specific distributive and associative properties.

p.16

Elementary Transformations in Linear Algebra

An Abelian group is a group where the operation ⊗ is commutative, meaning that for all x, y in G, x ⊗ y = y ⊗ x.

p.9

Elementary Transformations in Linear Algebra

The transformation notation '⇝' indicates a transformation of the augmented matrix using elementary transformations, showing the progression from one matrix form to another.

p.19

Vector Spaces and Subspaces

Vector subspaces are significant in machine learning for applications such as dimensionality reduction, allowing for the simplification of data while preserving essential features.

p.15

Vector Spaces and Subspaces

A vector space is a structured space in which vectors reside, characterized by the ability to add vectors together and multiply them by scalars while remaining within the same space. It is defined by a set of elements and operations that maintain the structure of the set.

p.4

Matrix Operations: Addition and Multiplication

Associativity means that for matrices A, B, and C, the equation (AB)C = A(BC) holds true.

p.19

Vector Spaces and Subspaces

The intersection of arbitrarily many subspaces is itself a subspace.

p.17

Vector Spaces and Subspaces

If the inverse exists (A is regular), then A⁻¹ is the inverse element of A ∈ Rⁿˣⁿ, and in this case (Rⁿˣⁿ, ·) is a group called the general linear group.

p.16

Elementary Transformations in Linear Algebra

A neutral element e in G is such that for all x in G, x ⊗ e = x and e ⊗ x = x.

p.16

Elementary Transformations in Linear Algebra

(R, ·) is not a group because the element 0 does not have an inverse under multiplication.

p.2

Matrix Operations: Addition and Multiplication

The product C = AB is computed such that each element c_ij is the sum of the products of the corresponding elements from the rows of A and the columns of B.
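The element-wise definition above can be spelled out with explicit loops and checked against NumPy's built-in product (example matrices chosen here):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # 3 x 2

# c_ij = sum over k of a_ik * b_kj:
C = np.zeros((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(3))

print(np.array_equal(C, A @ B))  # True
```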

p.3

Matrix Operations: Addition and Multiplication

Matrix multiplication is not commutative: AB ≠ BA in general, even when both products are defined and have the same dimensions; for non-square matrices the two products may not even have matching dimensions.

p.18

Vector Spaces and Subspaces

A column vector is denoted as x = [x1, ..., xn]⊤ and is used to simplify notation regarding vector space operations.

p.8

Systems of Linear Equations

A particular solution is a specific solution to the equation Ax = b, which can be found through various methods, including inspection or substitution.

p.15

Vector Spaces and Subspaces

Norms are mathematical functions that measure the size or length of vectors in a vector space; they are essential, for example, for analyzing convergence in iterative methods.

p.12

Vector Spaces and Subspaces

The notation x ∈ R^5 indicates that the vector x is an element of a 5-dimensional real vector space, meaning it has five components that are real numbers.

p.1

Geometric Interpretation of Solutions

Each linear equation defines a plane in three-dimensional space, and the solution set can be a plane, a line, a point, or empty depending on the intersection of these planes.

p.17

Vector Spaces and Subspaces

The elements λ ∈ R are called scalars, and the outer operation · is multiplication by scalars.

p.10

Algorithms for Solving Linear Systems

A particular solution is a specific solution that satisfies the system of equations.

p.2

Matrix Representation of Linear Equations

A (1, n)-matrix is called a row vector, and an (m, 1)-matrix is called a column vector.

p.6

Matrix Operations: Addition and Multiplication

Associativity refers to the property that allows scalar values to be moved around in matrix operations, expressed as (λψ)C = λ(ψC) and λ(BC) = (λB)C = B(λC) for matrices B and C.

p.9

Matrix Representation of Linear Equations

The notation 'Ax = b' represents a system of linear equations, where A is the matrix of coefficients, x is the vector of variables, and b is the vector of constants.

p.5

Inverse and Transpose of Matrices

A symmetric matrix A is one that satisfies the condition A = A⊤, meaning it is equal to its transpose.

p.8

Linear Independence and Dependence

A non-trivial solution refers to a solution of a homogeneous system that is not the zero vector, indicating the existence of infinitely many solutions.

p.1

Systems of Linear Equations

A system having infinitely many solutions means that there are multiple assignments of values to the variables that satisfy all equations in the system simultaneously.

p.17

Vector Spaces and Subspaces

The set of regular (invertible) matrices A ∈ Rⁿˣⁿ is a group with respect to matrix multiplication and is called the general linear group GL(n, R).

p.16

Elementary Transformations in Linear Algebra

An inverse element for x in G is an element y in G such that x ⊗ y = e and y ⊗ x = e, where e is the neutral element.

p.2

Matrix Representation of Linear Equations

R^m×n is the set of all real-valued (m, n)-matrices.

Study Smarter, Not Harder
