# MAPLE ASSIGNMENT 9
# PROPERTIES OF DETERMINANTS
# The commands below check some of the properties of determinants. You
# should look to see that the new matrices being created are what you
# expect them to be.
> with(linalg):
> M1:=matrix(3,3,[1,6,-1,1,-5,7,3,3,-2]);
> det(M1);
> det(transpose(M1))-det(M1);
> M2:=swaprow(M1,1,3):
> concat(M1,M2);
> det(M2);
> M3:=swapcol(M1,1,2):
> concat(M1,M3);
> det(M3);
> M4:=mulrow(M1,3,0.3):
> concat(M1,M4);
> det(M4);
> det(M1)*0.3;
> M5:=mulcol(M1,2,3):
> concat(M1,M5);
> det(M5);
> det(M1)*3;
> M6:=addrow(M1,1,3,.5):
> concat(M1,M6);
> det(M6);
> M7:=addcol(M1,3,2,1.5):
> concat(M1,M7);
> det(M7);
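# The effects above (transpose, row/column swap, scaling, adding a
# multiple of one row to another) can be cross-checked outside Maple.
# A sketch in Python with NumPy, using a copy of M1 (the variable
# names are my own):

```python
import numpy as np

M1 = np.array([[1, 6, -1],
               [1, -5, 7],
               [3, 3, -2]], dtype=float)
d = np.linalg.det(M1)

# Transposing does not change the determinant.
assert np.isclose(np.linalg.det(M1.T), d)

# Swapping rows 1 and 3 (M2 above) negates the determinant.
M2 = M1[[2, 1, 0], :]
assert np.isclose(np.linalg.det(M2), -d)

# Multiplying row 3 by 0.3 (M4 above) multiplies the determinant by 0.3.
M4 = M1.copy()
M4[2, :] *= 0.3
assert np.isclose(np.linalg.det(M4), 0.3 * d)

# Adding 0.5*(row 1) to row 3 (M6 above) leaves the determinant unchanged.
M6 = M1.copy()
M6[2, :] += 0.5 * M1[0, :]
assert np.isclose(np.linalg.det(M6), d)

print(d)
```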
# A matrix with two identical rows or two identical columns has a zero
# determinant. Fill in the blanks below to create such a matrix and then
# check this property out.
> M8:=matrix(3,3,[ , , , , , , , , ]);
> det(M8);
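# If you want an independent check of the property, here is a NumPy
# illustration with an arbitrarily chosen matrix (not the M8 you are
# asked to build yourself):

```python
import numpy as np

# Two identical rows force a zero determinant.
R = np.array([[1, 2, 3],
              [1, 2, 3],
              [4, 5, 6]], dtype=float)
print(np.linalg.det(R))  # zero, up to rounding
```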
# There is a rule for the determinant of an inverse matrix, which is
# illustrated by the next two commands (their outputs should agree).
> det(M1^(-1));
> 1/det(M1);
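# The same rule, det(A^(-1)) = 1/det(A), checked numerically in NumPy
# with the matrix M1 from the start of the worksheet:

```python
import numpy as np

M1 = np.array([[1, 6, -1],
               [1, -5, 7],
               [3, 3, -2]], dtype=float)
lhs = np.linalg.det(np.linalg.inv(M1))  # det of the inverse
rhs = 1 / np.linalg.det(M1)             # reciprocal of the det
print(lhs, rhs)
```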
# Edit your matrix M8 so that you have a nonsingular matrix. 
# The determinant of a product is the product of the determinants.
# This does not work for sums, however.
> det(M8&*M1);
> det(M8)*det(M1);
> det(M8+M1);
> det(M8)+det(M1);
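# A NumPy sketch of both facts; the matrix B below merely stands in for
# a nonsingular M8 (its entries are my own illustrative choice):

```python
import numpy as np

M1 = np.array([[1, 6, -1],
               [1, -5, 7],
               [3, 3, -2]], dtype=float)
B = np.array([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 4]], dtype=float)

# Products: det(B*M1) = det(B)*det(M1).
prod_rule = np.isclose(np.linalg.det(B @ M1),
                       np.linalg.det(B) * np.linalg.det(M1))
# Sums: det(B+M1) is NOT det(B)+det(M1) in general.
sum_rule = np.isclose(np.linalg.det(B + M1),
                      np.linalg.det(B) + np.linalg.det(M1))
print(prod_rule, sum_rule)
```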
# 
# MATRIX PLOTS
# For fun you can do some plots of matrices. I suggest redrawing the
# plot so the axes are framed and the style is patch + contour.     
> with(plots):
> with(linalg):
> A:=matrix(6,6,[1,0,-1,1,5,7, 3,3,9,9,3,3, 7,5,1,-1,0,1,
> 1,0,-1,3,6,21, 3,3,9,1,0,-1, 1,5,7,3,3,9]);
> matrixplot(A);
> matrixplot(A,heights=histogram);
> H:=hilbert(8);
> T:=toeplitz([1,2,3,4,-4,-3,-2,-1]);
> matrixplot(H+T,heights=histogram);
> matrixplot(H+T,heights=histogram,gap=0.25);
> F := (x,y) -> sin(x*y):
> matrixplot(H+T,heights=histogram,gap=0.25,colour=F);
# EIGENVALUES & EIGENVECTORS
# Maple has commands for computing eigenvalues and eigenvectors as
# illustrated below. If A is not symmetric, some eigenvalues may be
# complex numbers. You might like to edit A and try this out.
> A:=matrix(4,4,[1,-2,3,-4,-2,3,-4,5,3,-4,5,-6,-4,5,-6,7]);
> with(linalg):
> det(A);
> lambda:=evalf(Eigenvals(A));
# Note that two of these numbers are essentially 0; the tiny nonzero
# parts are rounding errors. The next command computes some eigenvectors
# for A and stores them in the matrix vecs. Eigenvectors are not unique,
# so Maple computes the eigenvector q for which transpose(q)&*q=1.
> evalf(Eigenvals(A,vecs));
> evalm(vecs);
# Now to verify that Maple is getting the right answer
> evalm(A&*vecs);
> evalm(lambda[1]*col(vecs,1));
# Note that again there are slight rounding errors. Edit this command to
# verify that the other columns of vecs are also eigenvectors of A.
# Maple will also produce the characteristic polynomial of A, and this
# clearly shows that two of the eigenvalues are zero.
# (lambda was assigned a value above, so use a fresh name here.)
> charpoly(A,x);
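# As a numerical cross-check, NumPy's poly function returns the
# characteristic polynomial coefficients (highest degree first); the
# last two are essentially zero, confirming the double eigenvalue at 0:

```python
import numpy as np

A = np.array([[1, -2, 3, -4],
              [-2, 3, -4, 5],
              [3, -4, 5, -6],
              [-4, 5, -6, 7]], dtype=float)
# Coefficients of the characteristic polynomial of A.
coeffs = np.poly(A)
print(coeffs)  # the constant and linear coefficients are ~0
```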
# If A is symmetric vecs is an orthogonal matrix which means that
# vecs^(-1)=transpose(vecs). You can check this out.
> transpose(vecs)&*vecs;
> evalm(%);
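# The analogous check in NumPy, where eigh is the eigensolver for
# symmetric matrices and returns the eigenvectors as the columns of an
# orthogonal matrix:

```python
import numpy as np

A = np.array([[1, -2, 3, -4],
              [-2, 3, -4, 5],
              [3, -4, 5, -6],
              [-4, 5, -6, 7]], dtype=float)
w, Q = np.linalg.eigh(A)        # eigenvalues w, eigenvector matrix Q
print(np.allclose(Q.T @ Q, np.eye(4)))  # Q^T Q = I: Q is orthogonal
print(np.allclose(A @ Q, Q * w))        # each column satisfies A q = lambda q
```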
# DEFINITENESS
# Maple has commands for checking the definiteness of a symmetric matrix
# without having to compute the signs of the leading principal minors.
> definite(A, 'positive_def');
> definite(A, 'negative_def');
> definite(A, 'positive_semidef');
> definite(A, 'negative_semidef');
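# For comparison, the classical leading-principal-minor test is easy to
# carry out directly in NumPy (a sketch of the hand test, not of what
# definite does internally):

```python
import numpy as np

A = np.array([[1, -2, 3, -4],
              [-2, 3, -4, 5],
              [3, -4, 5, -6],
              [-4, 5, -6, 7]], dtype=float)
# Leading principal minors: det of the top-left k-by-k block, k = 1..4.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
print(minors)
# Positive definite: all minors positive.
pos_def = all(m > 0 for m in minors)
# Negative definite: minors alternate in sign, starting negative.
neg_def = all((-1) ** k * m > 0 for k, m in enumerate(minors, start=1))
print(pos_def, neg_def)
```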
# You already knew these answers because A has both a negative and a
# positive eigenvalue. You can edit the commands above to show that
# B = transpose(A)&*A is positive semidefinite but not positive definite.
> B:=evalm(transpose(A)&*A);
# The following calculations illustrate that A is neither positive nor
# negative semidefinite by computing several values of transpose(X)&*A&*X.
> X1:=vector([-2,-3,0,1]);
> evalm(X1&*A&*X1);
> X2:=vector([-8,3,1,-6]);
> evalm(X2&*A&*X2);
> X3:=vector([0,0,0,1]);
> evalm(X3&*A&*X3);
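# The same quadratic-form computations in NumPy, with the three test
# vectors above; a negative and a positive value together show that A
# is indefinite:

```python
import numpy as np

A = np.array([[1, -2, 3, -4],
              [-2, 3, -4, 5],
              [3, -4, 5, -6],
              [-4, 5, -6, 7]], dtype=float)
# Values of x^T A x for each test vector X1, X2, X3.
vals = [np.array(x, dtype=float) @ A @ np.array(x, dtype=float)
        for x in ([-2, -3, 0, 1], [-8, 3, 1, -6], [0, 0, 0, 1])]
print(vals)
```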
