The Weird Laws of Matrices

Matrices. If you have had no experience with them, they can look intimidating and kinda useless. They are sometimes huge “blocks” of numbers arranged in rows and columns. Matrices are used to represent many different things, but what makes them useful is that, just like other equations and numbers, you can apply mathematical laws to them. The easiest is probably addition and subtraction, which follows the basic rules you learned in primary school. If two matrices have the same number of rows and columns (the same dimensions), you can add the first number in Matrix A to the first number in Matrix B, and so on, until you have a new matrix.
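Here is a quick sketch of that entry-by-entry rule in Python, using NumPy (the matrices are just made-up examples):

```python
import numpy as np

# Two matrices with the same dimensions (2 rows x 3 columns)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

# Addition and subtraction work entry by entry
print(A + B)   # [[11 22 33], [44 55 66]]
print(A - B)   # [[-9 -18 -27], [-36 -45 -54]]
```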

That seems intuitive. But what about multiplication? This is where normal intuition starts to fail. Two matrices can only be multiplied if the number of columns in the first matrix matches the number of rows in the second. One consequence is that matrix multiplication is not commutative: A times B and B times A are usually not the same, and one of them may not even be defined. When multiplying matrices, take all of the numbers in the first row of the first matrix, multiply them with the corresponding values in the first column of the second matrix, and add those products together. See the picture by Khan Academy below:
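And here is the same idea as a rough sketch in NumPy (again with made-up matrices):

```python
import numpy as np

# A is 2x3 and B is 3x2, so A @ B is defined (columns of A match rows of B)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7,  8],
              [9, 10],
              [11, 12]])

print(A @ B)   # a 2x2 result: each entry is a row of A "dotted" with a column of B
print(B @ A)   # a 3x3 result: switching the order gives a completely different shape
```

Even when both orders are defined (say, two square matrices of the same size), the two products are usually different matrices.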

You probably noticed that the resulting matrix can have different dimensions than the first two matrices. But why? It is easiest to think of this in terms of vectors. Matrices can be used to represent equations, so 2x + 3y is represented as [2 3]. They can also represent vectors that have both direction and magnitude, which becomes especially useful in physics and computer science. Just like the dot product of two vectors, adding the multiplied values together gives each new entry in the resulting matrix. That seems pretty intuitive when a matrix only has one row, but the same basic principle holds as the dimensions grow.
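In symbols, the one-row case is just a dot product:

\[
\begin{bmatrix} 2 & 3 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
= 2x + 3y
\]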

Matrices can represent statistics and groups such as populations, and they are also used to do algebra with large systems of equations. Khan Academy created this easy-to-follow example:

[Image from Khan Academy: writing a system of equations as a matrix]
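The general setup (not Khan Academy's exact numbers, just a made-up pair of equations) is to collect the coefficients into a matrix and the unknowns and constants into vectors:

\[
\begin{bmatrix} 2 & 3 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 8 \\ -1 \end{bmatrix}
\]

which is just a compact way of writing 2x + 3y = 8 and x - y = -1.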

As more and more variables show up, they continue to be represented in the matrix. Once there are three or more “unknowns” (x, y, z, …), it becomes a lot easier to use matrices to solve the system. Putting all of the constants and coefficients into a matrix allows operations to be done to the entire system at once. It also makes it easier for calculators and computers to process the data. Computers can also use matrices to represent computer graphics. Each value in a matrix represents a specific piece of information that the computer reads to generate the drawing. Even huge, seemingly intimidating matrices are incredibly easy for a computer to read and manipulate.
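As a rough sketch of what "letting the computer do it" looks like, here is a made-up three-unknown system handed to NumPy's solver:

```python
import numpy as np

# Coefficient matrix and constants for a made-up system:
#   x + 2y +  z = 8
#  2x -  y + 3z = 9
#   x +  y -  z = 0
A = np.array([[1,  2,  1],
              [2, -1,  3],
              [1,  1, -1]])
b = np.array([8, 9, 0])

solution = np.linalg.solve(A, b)
print(solution)   # [1. 2. 3.]  ->  x = 1, y = 2, z = 3
```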

What I find the weirdest is the “determinant” of a matrix and, in turn, the inverse. The determinant of a 2 x 2 matrix takes the first number and multiplies it by the number diagonal from it, and then you subtract the product of the other two numbers. If the first row is A, B and the second row is C, D, the determinant is AD - BC. That seems easy enough to visualize, but it gets more convoluted the larger the matrix is. Here is an example of the determinant of a 3 x 3 matrix:

[Hand-drawn image: determinant of a 3 x 3 matrix]
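Whatever specific numbers the drawing uses, the general pattern is the same: each entry in the top row gets multiplied by the 2 x 2 determinant left over when you cross out its row and column, with alternating signs:

\[
\det \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}
= a(ei - fh) - b(di - fg) + c(dh - eg)
\]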

That seems great, but kinda useless. It’s easier if you think of the determinant as a function that takes in a matrix and outputs a single number. That number can then be used to find the inverse of the matrix. The formula for finding the inverse of a matrix is below:

[Hand-drawn image: formula for the inverse of a matrix]
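For the 2 x 2 case, the standard version of that formula is: swap the diagonal entries, flip the signs of the other two, and multiply everything by one over the determinant:

\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}
= \frac{1}{ad - bc}
\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
\]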

Notice how you multiply the reciprocal of the determinant by the newly manipulated matrix. When you look at this, it seems incredibly arbitrary: some values of the matrix get swapped and others get multiplied by -1, which feels very random. Again, finding the inverse of a matrix is useful in computer programming. Because a matrix can represent an entire transformation, it is easy to manipulate a whole matrix to move characters, cameras, or backgrounds in a computer game, especially when the game is 3D. Just like how you would “undo” 5 * 4 = 20 by computing 20 / 4 = 5 or 20 * 1/4 = 5, multiplying by the inverse of a matrix undoes the original matrix. This makes it easy to move back and forth between transformations.

With this said, not every matrix is invertible. A matrix is invertible only when there is another matrix it can be multiplied by to get the “identity” matrix, which happens exactly when its determinant is not zero. Just like multiplying a number by its reciprocal results in 1, multiplying a matrix by its inverse results in the identity matrix. This is a square matrix of any size that has 1’s running down the diagonal, starting in the top-left corner, with 0’s in every other spot.
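Here is a rough sketch of that “undo” idea, using a made-up 2D scaling-and-shearing matrix in NumPy:

```python
import numpy as np

# A made-up 2D transformation: scale and shear a point
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
point = np.array([1.0, 2.0])

moved = T @ point            # apply the transformation
T_inv = np.linalg.inv(T)     # only works because det(T) = 6, not 0
back = T_inv @ moved         # multiplying by the inverse undoes the move

print(moved)        # [4. 6.]
print(back)         # [1. 2.]  -> right back where we started
print(T_inv @ T)    # the 2x2 identity matrix (up to rounding)
```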

Matrices and their properties are a huge part of linear algebra because of their ability to represent equations and real-world situations. There are many more uses and properties of matrices that aren’t mentioned above. The larger a matrix gets, the harder it is for humans to solve and manipulate. Luckily, computers are great at applying rules to inputs, which is exactly why matrices are so useful in the world of computer programming.
