
2: Matrix Arithmetic


A fundamental topic of mathematics is arithmetic: adding, subtracting, multiplying and dividing numbers. We are comfortable with expressions such as \[x + 3x - x\cdot x^2 + x^5\cdot x^{-1}\] and know that we can “simplify” this to \[4x - x^3 + x^4.\]

This chapter deals with the idea of doing similar operations, but instead of an unknown number $x$, we will be using a matrix. So what exactly does the expression \[A + 3A - A\cdot A^2 + A^5\cdot A^{-1}\] mean? We will need to define matrix addition, scalar multiplication, matrix multiplication and matrix inversion. We will learn just that, plus some more good stuff, in this chapter.


To perform matrix addition, two matrices must have the same dimensions, that is, the same number of rows and columns. In that case, simply add the corresponding entries, as below.

\[A + B = \begin{bmatrix} 1 & -5 & 4 \\ 2 & 5 & 3 \end{bmatrix} + \begin{bmatrix} 8 & -3 & -4 \\ 4 & -2 & 9 \end{bmatrix} = \begin{bmatrix} 1 + 8 & -5 - 3 & 4 - 4 \\ 2 + 4 & 5 - 2 & 3 + 9 \end{bmatrix} = \begin{bmatrix} 9 & -8 & 0 \\ 6 & 3 & 12 \end{bmatrix}\]

Matrix addition has many of the same properties as "normal" addition: it is commutative ($A + B = B + A$) and associative ($A + (B + C) = (A + B) + C$).

In addition, if one wishes to take the transpose of the sum of two matrices, then \[(A + B)^T = A^T + B^T.\]
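
A quick numerical check of both facts, using NumPy with the matrices from the example above (a minimal sketch):

    import numpy as np

    # Entrywise addition of two matrices with the same dimensions
    A = np.array([[1, -5, 4],
                  [2,  5, 3]])
    B = np.array([[8, -3, -4],
                  [4, -2,  9]])

    print(A + B)   # [[ 9 -8  0]
                   #  [ 6  3 12]]

    # The transpose of a sum equals the sum of the transposes
    print(np.array_equal((A + B).T, A.T + B.T))   # True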


C++ Program to Perform Arithmetic Operations on Matrix

Write a C++ program to perform arithmetic operations on a matrix, with an example. In this C++ matrix arithmetic operations example, we allow users to enter the matrix sizes and matrix items. Next, we use nested C++ for loops to iterate over the matrix from 0 to the number of rows and columns. Within the nested for loop, we perform arithmetic operations such as addition, subtraction, multiplication, division, and modulus on both matrices and assign the results to new matrices. Finally, we use one more nested for loop to print the matrix items.
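
The original C++ listing is not reproduced here; the sketch below shows the same element-wise logic in Python (used for consistency with the later examples), with illustrative matrices:

    # Element-wise arithmetic on two matrices of equal size,
    # mirroring the nested-loop approach described above.
    rows, cols = 2, 3
    a = [[1, -5, 4], [2, 5, 3]]
    b = [[8, -3, -4], [4, -2, 9]]   # no zero entries, so division is safe

    add = [[a[i][j] + b[i][j] for j in range(cols)] for i in range(rows)]
    sub = [[a[i][j] - b[i][j] for j in range(cols)] for i in range(rows)]
    mul = [[a[i][j] * b[i][j] for j in range(cols)] for i in range(rows)]
    div = [[a[i][j] / b[i][j] for j in range(cols)] for i in range(rows)]
    mod = [[a[i][j] % b[i][j] for j in range(cols)] for i in range(rows)]

    # Print the result matrices, one operation per line
    for name, m in [("add", add), ("sub", sub), ("mul", mul),
                    ("div", div), ("mod", mod)]:
        print(name, m)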


In a two by two matrix, the cofactor of an entry is calculated by multiplying the following two factors.

  1. Negative one raised to the power of the sum of the row number and the column number of the entry.
  2. The minor of the entry.

Let us learn how to find the cofactor of every entry for the following example matrix of order two: \[B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\]

Cofactor of an entry in the first row and the first column

$b_{11}$ is the entry in the first row and the first column. Now, find the minor of this entry: deleting the first row and the first column leaves only $b_{22}$, so $M_{11} = b_{22}$.

The cofactor of the entry $b_{11}$ is denoted by $C_{11}$. For the entry $b_{11}$, the row number is $1$ and the column number is $1$.

The cofactor of the entry $b_{11}$ is calculated by multiplying the minor of this entry by negative one raised to the power of the sum of $1$ and $1$: $C_{11} = (-1)^{1+1} M_{11}$.

Therefore, the cofactor of the entry $b_{11}$ in the matrix $B$ is positive $b_{22}$.

Cofactor of an entry in the first row and the second column

$b_{12}$ is the entry in the first row and the second column. Now, let’s find the minor of this entry: deleting the first row and the second column leaves only $b_{21}$, so $M_{12} = b_{21}$.

The cofactor of the entry $b_{12}$ is denoted by $C_{12}$. For the entry $b_{12}$, the row number is $1$ and the column number is $2$.

The cofactor of the entry $b_{12}$ is evaluated by multiplying the minor of this entry by negative one raised to the power of the sum of $1$ and $2$: $C_{12} = (-1)^{1+2} M_{12}$.

Therefore, the cofactor of the entry $b_{12}$ in the matrix $B$ is negative $b_{21}$.

Cofactor of an entry in the second row and the first column

$b_{21}$ is the entry in the second row and the first column. Now, let us find the minor of this entry: deleting the second row and the first column leaves only $b_{12}$, so $M_{21} = b_{12}$.

The cofactor of the entry $b_{21}$ is denoted by $C_{21}$. For the entry $b_{21}$, the row number is $2$ and the column number is $1$.

The cofactor of the entry $b_{21}$ is evaluated by multiplying the minor of this entry by negative one raised to the power of the sum of $2$ and $1$: $C_{21} = (-1)^{2+1} M_{21}$.

Therefore, the cofactor of the entry $b_{21}$ in the matrix $B$ is negative $b_{12}$.

Cofactor of an entry in the second row and the second column

$b_{22}$ is the entry in the second row and the second column. Now, let us find the minor of this entry: deleting the second row and the second column leaves only $b_{11}$, so $M_{22} = b_{11}$.

The cofactor of the entry $b_{22}$ is denoted by $C_{22}$. For the entry $b_{22}$, the row number is $2$ and the column number is $2$.

The cofactor of the entry $b_{22}$ is calculated by multiplying the minor of this entry by negative one raised to the power of the sum of $2$ and $2$: $C_{22} = (-1)^{2+2} M_{22}$.

Therefore, the cofactor of the entry $b_{22}$ in the matrix $B$ is positive $b_{11}$.

Signs

A sign technique can be used as a shortcut while finding the cofactors of the entries of a $2 \times 2$ matrix.

  1. In the first row, write a plus sign over the first entry and a minus sign over the second entry.
  2. In the second row, write a minus sign over the first entry and a plus sign over the second entry.

In matrix form, the sign pattern is \[\begin{bmatrix} + & - \\ - & + \end{bmatrix}\]

Now, let’s find the cofactors of the elements for the above matrix.

  1. $C_{11} \,=\, +M_{11} \,=\, +\begin{vmatrix} b_{22} \end{vmatrix} \,=\, b_{22}$
  2. $C_{12} \,=\, -M_{12} \,=\, -\begin{vmatrix} b_{21} \end{vmatrix} \,=\, -b_{21}$
  3. $C_{21} \,=\, -M_{21} \,=\, -\begin{vmatrix} b_{12} \end{vmatrix} \,=\, -b_{12}$
  4. $C_{22} \,=\, +M_{22} \,=\, +\begin{vmatrix} b_{11} \end{vmatrix} \,=\, b_{11}$

Remember that this shortcut method is useful for verifying the fundamental process and also for getting the result quickly.

Example

Let’s find the cofactors of the entries in the matrix $A$ of order $2$ given by \[A = \begin{bmatrix} 5 & 3 \\ -2 & 6 \end{bmatrix}\]

The cofactor of the entry five is positive six.

The cofactor of the entry three is positive two.

The cofactor of the entry negative two is negative three.

The cofactor of the entry six is positive five.

In this way, the cofactor of every entry in a square matrix of order two can be calculated.
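
As a small sketch, the $(-1)^{i+j} M_{ij}$ rule can be checked in Python for the matrix $A$ above (indices run from 0 in the code, from 1 in the text):

    # Cofactors of a 2x2 matrix: C_ij = (-1)^(i+j) * M_ij, where the
    # minor M_ij is the single entry left after deleting row i and column j.
    A = [[5, 3],
         [-2, 6]]

    def cofactor(m, i, j):
        minor = m[1 - i][1 - j]          # the one remaining entry
        return (-1) ** (i + j) * minor

    C = [[cofactor(A, i, j) for j in range(2)] for i in range(2)]
    print(C)   # [[6, 2], [-3, 5]], matching the cofactors listed above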


If a problem is difficult, don't panic; try these ideas:

  • Find a similar problem. Can you apply its solution to your own problem?
  • Simplify the issue by removing some variables or dimensions.
  • When one approach fails, try the opposite.
  • Dream: fantasize about the situation, for example by imagining that all restrictions have been removed.
  • Establish sub-goals: break the problem into a number of smaller ones.
  • List the assumptions you have made about solving the problem and challenge them.
  • Try working through the situation from the way things are to the way you want them to be.
  • Choose other words to describe the problem. An alternative definition can yield new possibilities.



Finding the Determinant of a 2×2 Matrix

Determinants are useful properties of square matrices, but they can involve a lot of computation. A 2×2 determinant is much easier to compute than the determinant of a larger matrix, such as a 3×3 matrix. To find a 2×2 determinant we use a simple formula: for \[A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad \det A = ad - bc.\] 2×2 determinants can be used to find the area of a parallelogram and to determine the invertibility of a 2×2 matrix.

If the determinant of a matrix is 0, then the matrix is singular and does not have an inverse.

Determinant of a 2×2 Matrix

Before we can find the inverse of a matrix, we first need to learn how to compute its determinant.
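
As a minimal sketch of both formulas (the $ad - bc$ determinant and the standard 2×2 inverse that it enables):

    # Determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    # Inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]],
    # which exists only when the determinant is nonzero.
    def inverse2(m):
        d = det2(m)
        if d == 0:
            raise ValueError("matrix is singular: no inverse")
        return [[ m[1][1] / d, -m[0][1] / d],
                [-m[1][0] / d,  m[0][0] / d]]

    A = [[5, 3], [-2, 6]]
    print(det2(A))      # 36
    print(inverse2(A))  # [[0.1666..., -0.0833...], [0.0555..., 0.1388...]]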



Matrix Arithmetic Calculators

Given below is a collection of matrix arithmetic calculators for performing various arithmetic operations such as matrix multiplication, addition, subtraction, and division.

Matrix: In mathematics, a matrix is an array of numbers, symbols or expressions arranged in rows and columns. The number of rows and columns determines the shape of the matrix, i.e., square or rectangular.

Applications of matrices: A major application of matrices is to represent linear transformations. In physics, they are used in the study of electrical circuits, quantum mechanics and optics.

Matrix arithmetic calculators: Feel free to try all the arithmetic calculators in the section above. It holds a large variety of matrix arithmetic calculators, such as 3x3 and 2x2 matrix multiplication calculators, 4x4 and 5x5 matrix addition calculators, a matrix division calculator, etc.


Matrix Arithmetic under NumPy and Python

  • Matrix addition
  • Matrix subtraction
  • Matrix multiplication
  • Scalar product
  • Cross product
  • and lots of other operations on matrices

Vector Addition and Subtraction

Many people know vector addition and subtraction from physics, specifically from the parallelogram of forces. It is a method for solving (or visualizing) the result of applying two forces to an object.

The addition of two vectors, in our example x and y, may be represented graphically by placing the start of the arrow y at the tip of the arrow x, and then drawing an arrow from the start (tail) of x to the tip (head) of y. The new arrow represents the vector x + y. Subtracting a vector is the same as adding its negative, so the difference of the vectors x and y is equal to the sum of x and -y:
x - y = x + (-y)
Subtraction of two vectors can be defined geometrically as follows: to subtract y from x, place the tails of x and y at the same point, and then draw an arrow from the tip of y to the tip of x. That arrow represents the vector x - y.

Mathematically, we subtract the corresponding components of vector y from the vector x.
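
A minimal NumPy sketch of both operations, with illustrative vectors:

    import numpy as np

    x = np.array([3, 1])
    y = np.array([1, 2])

    print(x + y)       # [4 3]    component-wise addition
    print(x - y)       # [ 2 -1]  component-wise subtraction
    print(x + (-y))    # [ 2 -1]  subtracting y is adding its negative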

Scalar Product / Dot Product

In mathematics, the dot product is an algebraic operation that takes two coordinate vectors of equal size and returns a single number. The result is calculated by multiplying corresponding entries and adding up those products. The name "dot product" stems from the fact that the centered dot "·" is often used to designate this operation. The name "scalar product" emphasizes the scalar nature of the result.

Definition of the scalar product: \[x \cdot y = \sum_{i=1}^{n} x_i\, y_i\]

We can see from the definition of the scalar product that it can be used to calculate the cosine of the angle between two vectors: \[\cos\theta = \frac{x \cdot y}{\|x\|\,\|y\|}\]

Calculation of the scalar product:

Finally, we want to demonstrate how to calculate the scalar product in Python:
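
The original listing is not reproduced here; a minimal sketch with illustrative vectors:

    import numpy as np

    x = np.array([1, 2, 3])
    y = np.array([-1, 0, 4])

    # Multiply corresponding entries and add up the products
    print(np.dot(x, y))                         # 11
    print(sum(a * b for a, b in zip(x, y)))     # 11, computed by hand

    # Cosine of the angle between x and y, from the formula above
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    print(cos_theta)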

Matrix Class

The matrix objects are a subclass of the numpy arrays (ndarray). The matrix objects inherit all the attributes and methods of ndarray. One difference is that numpy matrices are strictly 2-dimensional, while numpy arrays can be of any dimension, i.e. they are n-dimensional.

The most important advantage of matrices is that they provide convenient notation for matrix multiplication. If X and Y are two matrices, then X * Y defines the matrix multiplication. On the other hand, if X and Y are ndarrays, X * Y defines an element-by-element multiplication.

Matrix Product

The matrix product of two matrices can be calculated if the number of columns of the left matrix is equal to the number of rows of the right matrix.
The product of an $(l \times m)$-matrix $A = (a_{ij})_{i=1\ldots l,\ j=1\ldots m}$ and an $(m \times n)$-matrix $B = (b_{ij})_{i=1\ldots m,\ j=1\ldots n}$ is a matrix $C = (c_{ij})_{i=1\ldots l,\ j=1\ldots n}$, whose entries are calculated like this: \[c_{ij} = \sum_{k=1}^{m} a_{ik}\, b_{kj}\]


If we want to perform matrix multiplication with two numpy arrays (ndarray), we have to use the dot product. Alternatively, we can cast them into matrix objects and use the "*" operator:
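
A minimal sketch with illustrative matrices (note that np.matrix is deprecated in current NumPy releases; the @ operator on ndarrays is the modern equivalent):

    import numpy as np

    X = np.array([[1, 2],
                  [3, 4]])
    Y = np.array([[5, 6],
                  [7, 8]])

    print(X * Y)         # element-wise product of the ndarrays
    print(np.dot(X, Y))  # true matrix product: [[19 22] [43 50]]
    print(X @ Y)         # same matrix product via the @ operator

    # Cast to matrix objects, for which "*" means matrix multiplication
    print(np.matrix(X) * np.matrix(Y))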

Simple Practical Application for Matrix Multiplication

In the following practical example, we come to talk about the sweet things of life.
Let's assume there are four people, and we call them Lucas, Mia, Leon and Hannah. Each of them has bought chocolates out of a choice of three. The brands are A, B and C, not very marketable, we have to admit. Lucas bought 100 g of brand A, 175 g of brand B and 210 g of brand C. Mia chose 90 g of A, 160 g of B and 150 g of C. Leon bought 200 g of A, 50 g of B and 100 g of C. Hannah apparently didn't like brand B, because she didn't buy any of it. But she seems to be a real fan of brand C, because she bought 310 g of it. Furthermore she bought 120 g of A.

So, what's the price in euros of these chocolates? Per 100 g, A costs 2.98, B costs 3.90 and C only 1.99.

To calculate how much each of them had to pay, we can use Python, NumPy and matrix multiplication:
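
The original listing is not reproduced here; a minimal NumPy sketch of the computation:

    import numpy as np

    # Rows: Lucas, Mia, Leon, Hannah; columns: grams of brands A, B, C
    amounts = np.array([[100, 175, 210],
                        [ 90, 160, 150],
                        [200,  50, 100],
                        [120,   0, 310]])

    # Price per gram of each brand (Euro per 100 g, divided by 100)
    price_per_gram = np.array([2.98, 3.90, 1.99]) / 100

    # The matrix-vector product gives the total cost per person
    print(np.round(amounts.dot(price_per_gram), 2))   # [13.98 11.91  9.9   9.75]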

This means that Lucas paid 13.98 Euro, Mia 11.91 Euro, Leon 9.90 and Hannah 9.75.

Cross Product

Let's stop consuming delicious chocolates and come back to a more mathematical and less high-calorie topic, i.e. the cross product.

The cross product or vector product is a binary operation on two vectors in three-dimensional space. The result is a vector which is perpendicular to both of the vectors being multiplied, and therefore normal to the plane containing them.

The cross product of two vectors a and b is denoted by a × b. It is defined as

\[\mathbf{a} \times \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\,\sin(\theta)\,\mathbf{n}\]

where θ is the angle between a and b, and n is a unit vector perpendicular to the plane containing a and b in the direction given by the right-hand rule.

If either of the vectors being multiplied is zero, or the vectors are parallel, then their cross product is zero. More generally, the magnitude of the product equals the area of the parallelogram with the vectors as sides. If the vectors are perpendicular, the parallelogram is a rectangle and the magnitude of the product is the product of their lengths.
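
A minimal NumPy sketch of these properties, with illustrative vectors:

    import numpy as np

    a = np.array([1, 0, 0])
    b = np.array([0, 1, 0])

    print(np.cross(a, b))       # [0 0 1], perpendicular to both a and b
    print(np.cross(a, 2 * a))   # [0 0 0], parallel vectors give zero

    # The magnitude equals the area of the parallelogram spanned by a and b;
    # here a and b are perpendicular unit vectors, so the area is 1
    print(np.linalg.norm(np.cross(a, b)))   # 1.0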


Algorithms for Applied Mathematics

Matrices have only been called such since 1850, when the term was coined by James Joseph Sylvester. His explanation for the terminology is succinct:

I have in previous papers defined a Matrix as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.

Though the terminology is relatively recent, some of the uses of matrices have been known in varying parts of the world since as early as the second century BC. Nearly all earliest uses of matrices are for the same purpose: solving systems of simultaneous linear equations.

Example 5.3.1 Solving a system of linear equations by elimination

Consider the following system of simultaneous linear equations in three variables:

We can solve this by the method of elimination, using repeated applications of the following three operations:

The order of equations may be rearranged. If the $i^\text{th}$ and $j^\text{th}$ equations are interchanged, we denote the swap by $E_i : E_j$.

An equation may be multiplied by a nonzero constant. If the $i^\text{th}$ equation is multiplied by the constant $k$, we denote the operation by $k\,E_i$.

A nonzero multiple of one equation may be added to another equation, and the sum stored in the position of the latter equation. For example, if we wished to multiply the $i^\text{th}$ equation by $k$, add it to the $j^\text{th}$ equation, and leave that sum in the position of the $j^\text{th}$ equation, we would denote the operation by $k\,E_i + E_j$.

As these operations preserve the arithmetic properties of the equations, the set of solutions before and after any sequence of these operations must be the same. The goal in the method of elimination is to reduce the system to triangular form, so that the coefficient of $x_j$ in row $i$ is $0$ for all $j < i$. Starting from the example system above, we could begin by adding multiples of the first equation to the second and third equations: $-E_1 + E_2$ and $6E_1 + E_3$.

Next we can use a multiple of the second equation to eliminate the leading coefficient of the third equation, using $11E_2 + E_3$.

So eliminated, we can use back substitution to solve for $x_3$, $x_2$, and $x_1$ in that order:

The importance of this method cannot be overstated: solving systems of linear equations is a perennial problem in applied mathematics, often because underlying systems of nonlinear equations can be nicely linearized by making acceptable sacrifices. We notice that at no step were we required to interchange the order of equations, as we never encountered a situation where the $i^\text{th}$ variable had a coefficient of $0$ in the $i^\text{th}$ row.
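
The book's example system is not reproduced above, so the sketch below runs the same elimination-and-back-substitution procedure on a hypothetical system of three equations; the coefficients are illustrative only:

    import numpy as np

    # Hypothetical system A x = b (not the book's example)
    A = np.array([[ 2.,  1., -1.],
                  [-3., -1.,  2.],
                  [-2.,  1.,  2.]])
    b = np.array([8., -11., -3.])

    n = len(b)
    # Forward elimination: for each pivot row i, apply k*E_i + E_j to the
    # rows below so every coefficient under the diagonal becomes zero.
    for i in range(n):
        for j in range(i + 1, n):
            k = -A[j, i] / A[i, i]
            A[j] += k * A[i]
            b[j] += k * b[i]

    # Back substitution: solve for the last variable first, then work upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

    print(x)   # [ 2.  3. -1.]

As in the example, no row interchanges are needed here; a robust implementation would pivot whenever a zero (or very small) diagonal entry appears.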

The process used in the preceding example has nothing to do with the variables used — in fact, they are used solely as placeholders in the computation. Understanding this, we can recast the problem into a vector algebra problem and dispense with the variables entirely.

Example 5.3.2 Solving a linear matrix equation

Consider the matrix $A$ and vectors $\vec{x}$ and $\vec{b}$ given by

Then the system in the previous example is exactly equivalent to the vector equation $A\vec{x} = \vec{b}$. In order to keep track of the operations performed on both the left and right of the equal sign, it is sufficient to augment the matrix $A$ by the vector $\vec{b}$, like so:

Now the three operations of the method of elimination correspond to the elementary row operations on an augmented matrix, and the operations involved in the previous example correspond to the following sequence of row operations:

This is a row echelon form of the matrix $A \,\vert\, \vec{b}$, and the process of obtaining it is called Gaussian elimination, or informally row reduction. Because rows may be interchanged or scaled, a matrix can have many distinct row echelon forms. If we perform additional row operations until the left-most nonzero entry in each row is a $1$ and is the only nonzero entry in its column, then we have produced a unique representation of the matrix, called its reduced row echelon form. The process of moving from the original matrix to its reduced row echelon form via row operations is called Gauss-Jordan elimination.

Example 5.3.3 Gauss-Jordan elimination

Continuing from the end of the preceding example, here are the final transformations from row echelon form to reduced row echelon form via Gauss-Jordan elimination.
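
Those matrices are likewise not reproduced above; under that caveat, here is a minimal sketch of Gauss-Jordan elimination on a hypothetical augmented matrix:

    import numpy as np

    # Gauss-Jordan: reduce the augmented matrix [A | b] to reduced row
    # echelon form, so the solution can be read off the last column.
    M = np.array([[ 2.,  1., -1.,   8.],
                  [-3., -1.,  2., -11.],
                  [-2.,  1.,  2.,  -3.]])

    n = M.shape[0]
    for i in range(n):
        M[i] /= M[i, i]               # scale the pivot to 1 (k * E_i)
        for j in range(n):
            if j != i:                # clear every other entry in the column
                M[j] -= M[j, i] * M[i]

    print(M)   # the last column now holds the solution [ 2.  3. -1.]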

It should be apparent that there is no difference in result between Gauss-Jordan elimination and regular Gaussian elimination to row echelon form followed by back substitution. However, the computational complexity (essentially a measure of the number of operations performed by an algorithm) of Gauss-Jordan elimination is higher than that of elimination followed by substitution. That being said, Gauss-Jordan elimination is among the best ways to calculate the inverse of an arbitrary nonsingular matrix, a task we will encounter later in the text.