30 – Theorems about linear independence

Hey there. So this lesson is going to be more abstract. We're not going to do a lot of examples; on the contrary, we're not going to do any examples. We're going to write some theorems. Not hard theorems, but theorems always require proofs, and proofs take a little switch in your mind to thinking abstractly, and sometimes that's a bit challenging. These are the two properties that we defined: the property of a set of vectors being linearly dependent, and the property of a set of vectors being linearly independent. And we already have a feeling for these notions, because we did many, many examples. Now I want to write some theorems.

Theorem 1: any set that includes the zero vector is linearly dependent. That's a theorem. If you have a set of vectors v1, v2, up to vk, and one of them is the zero vector, it's automatically linearly dependent. Let's prove this. Here's the set: our set is of the form {v1, v2, ..., 0, ..., vk}. Somewhere in there is the zero vector; it could be vk, it could be v1, I don't know, but this is a set of vectors that contains the zero vector. Do you agree that in general that's what it looks like? Now, in order for this set to be a set of linearly dependent vectors, I have to show you a linear combination of these vectors, not all of whose coefficients are 0, that gives 0. Here you go: take 0·v1 + 0·v2 + ... + 1·0 + ... + 0·vk. All the vectors in the linear combination get coefficient zero, except the zero vector, which gets coefficient 1. And that's not a "10" on the board; it's 1 times the zero vector, let's emphasize this. Do you agree, first of all, that this gives the zero vector? And it is a linear combination where not all the αi's are zero, because here's an α that's not zero, and the combination gives zero. That's the definition of being a linearly dependent set. Do you agree, everybody? So any set that includes the zero vector is automatically linearly dependent, essentially by definition.

Theorem 2: suppose your set includes only two elements, two vectors. Two vectors are linearly dependent if and only if one is a scalar multiple of the other. So for two vectors, linear dependence reduces to a simpler statement: v1 has to be α times v2, or vice versa. Let's prove this.
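Theorem 1 is easy to sanity-check numerically. Here is a minimal sketch using NumPy, with a hypothetical set of vectors in R^3 (the vectors are my own example, not from the lecture): a set is linearly dependent exactly when the matrix whose columns are those vectors has rank smaller than the number of vectors, and a zero column always forces that.

```python
import numpy as np

# A hypothetical set of vectors in R^3 that includes the zero vector.
vectors = [
    np.array([1.0, 2.0, 3.0]),
    np.array([0.0, 0.0, 0.0]),  # the zero vector
    np.array([4.0, 5.0, 6.0]),
]

# Stack the vectors as columns; the set is linearly dependent exactly
# when the rank of this matrix is less than the number of vectors.
A = np.column_stack(vectors)
is_dependent = np.linalg.matrix_rank(A) < len(vectors)
print(is_dependent)  # True: a set containing the zero vector is dependent

# The explicit witness from the proof: coefficients (0, 1, 0), i.e.
# 0*v1 + 1*(zero vector) + 0*v3 = 0, with a nonzero coefficient.
alpha = np.array([0.0, 1.0, 0.0])
combo = sum(a * v for a, v in zip(alpha, vectors))
print(np.allclose(combo, 0))  # True
```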

These theorems are not just abstract nonsense; they're very, very useful. In fact, we already used the previous one in our justification of method 2: remember when we said that if there's a zero row, it's a linearly dependent set? We're going to use all of these a lot, all over the place. They're very useful little observations. None of these are very deep theorems. There are some very deep theorems in algebra; these are not. These are very basic, and very useful.

So let's prove theorem 2. We need to prove two things here, two directions; it's two statements in one. Let's start with this direction: suppose we have two vectors that are linearly dependent; we have to show that one is a scalar multiple of the other. What does it mean that they're linearly dependent? It means that we can write α1v1 + α2v2 = 0, where not both α1 and α2 are 0 simultaneously; either α1 is not 0, or α2 is not 0, or both. Do you agree that that's what it means to be linearly dependent? So if, let's say, α1 is not zero, then we can divide by it: move α2v2 to the other side and divide by α1, and we get v1 = (-α2/α1)v2. And there you have it: v1 is a scalar multiple of v2. That's if α1 is not 0. And if α2 is not 0, we can write v2 = (-α1/α2)v1; there's an "or" here, and in this case v2 is expressed as a scalar multiple of v1. Good, so this proves one direction: if they're linearly dependent, then one is a scalar multiple of the other.

The other direction: assume that one is a scalar multiple of the other, and show that they're linearly dependent. What does it mean that one is a scalar multiple of the other? It means v1 = αv2. Then v1 - αv2 = 0, and that's it: this is a linear combination giving zero where not all the coefficients are 0, because the coefficient of v1 is 1. And by definition, if you have such a linear combination that gives zero, where not all the coefficients are 0, the vectors are linearly dependent. Of course, if instead v2 = αv1, it works just the same. Good? Clear? Still very basic, very close to the definitions, but a very useful fact. Everybody good on theorem 2?

Theorem 3: any set that contains a linearly dependent set is also linearly dependent. You take a set of vectors v1 to vn, and suppose some of them are already linearly dependent; then the entire set is linearly dependent. Let's prove that. By now, after doing examples and developing a more precise feeling for these definitions, you should feel: yep, that sounds true. If they're linearly dependent, what you should think is that there's somebody extra in the set already.
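Theorem 2, by the way, translates directly into a rank condition that we can run. A sketch with hypothetical vectors (the helper name is mine, not from the lecture): two vectors are scalar multiples of each other exactly when the two-column matrix [v1 v2] has rank at most 1.

```python
import numpy as np

def is_scalar_multiple(v1, v2):
    """True when v1 = alpha*v2 or v2 = alpha*v1 for some scalar alpha."""
    # Two vectors are linearly dependent (theorem 2) exactly when the
    # matrix with columns v1, v2 has rank <= 1.
    return np.linalg.matrix_rank(np.column_stack([v1, v2])) <= 1

v1 = np.array([2.0, 4.0])
v2 = np.array([1.0, 2.0])   # v1 = 2 * v2
w = np.array([1.0, 0.0])    # not a multiple of v1

print(is_scalar_multiple(v1, v2))  # True  -> {v1, v2} is linearly dependent
print(is_scalar_multiple(v1, w))   # False -> {v1, w} is linearly independent
```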

And if you throw in a few more guys and make the set bigger, of course it's still going to be linearly dependent, right? That's essentially already the proof; I'm just giving you the intuition for why this should be true. Okay, so let's write the proof. Let v1, ..., vk be linearly dependent. We're taking a set of linearly dependent vectors, and now let's add some vectors, so that we get a set that contains a linearly dependent set. Consider the set v1, ..., vk, v(k+1), v(k+2), ..., vn; I threw in n - k more vectors. This is a set that contains our linearly dependent set, and I want to prove that it is linearly dependent. Why is it linearly dependent? Because we know there exist α1, ..., αk, not all zero, such that α1v1 + ... + αkvk = 0. We know this holds because the original set is linearly dependent; that's the definition, right? Therefore, if we look at α1v1 + ... + αkvk + 0·v(k+1) + ... + 0·vn, for those exact same α's, this is still going to be 0, because the first part was 0 and I added nothing, just 0 times all the other guys. Do you agree? But there you have it: a linear combination of all the vectors in the bigger set that gives 0, where not all the coefficients are 0, because not all of the original α's are 0. Do you see that? So we have a linear combination that gives 0, and this implies that v1, ..., vk, all the way to vn, is linearly dependent. Once again: very basic, very useful. When you take a set that's already linearly dependent and throw in more guys, you can't help it, it's still going to be linearly dependent. Maybe by removing a few you could make it independent (that's going to be hidden in some of the next theorems; we're only at number 3), but by adding more you can't make a dependent set independent. Good? Everybody?

Theorem 4. (There's an obvious price to taking a fresh new marker, which is that it's harder to erase. But you don't care, you're just sitting there; I'm the one doing the erasing. It's an empirical theorem.) Theorem 4: a set contained in a linearly independent set is also linearly independent. This is, in a sense, a complement to theorem 3: if you start with a linearly independent set and you take a subset of it, of course the subset is going to be linearly independent. Let's write the proof, but it's really just applying plain logic to theorem 3.
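The proof of theorem 3 is itself almost executable: take a witness of dependence for the small set and pad it with zero coefficients. A sketch with hypothetical vectors:

```python
import numpy as np

# A linearly dependent pair: v2 = 2 * v1 (hypothetical data).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([2.0, 0.0, 2.0])

# A witness of dependence: alpha = (2, -1) gives 2*v1 - v2 = 0.
alpha = [2.0, -1.0]
assert np.allclose(alpha[0] * v1 + alpha[1] * v2, 0)

# Enlarge the set and pad the witness with zero coefficients,
# exactly as in the proof of theorem 3.
v3 = np.array([0.0, 1.0, 0.0])
v4 = np.array([5.0, 5.0, 5.0])
bigger_set = [v1, v2, v3, v4]
padded_alpha = alpha + [0.0, 0.0]

combo = sum(a * v for a, v in zip(padded_alpha, bigger_set))
print(np.allclose(combo, 0))  # True: the bigger set is still dependent
```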

We're going to argue by plain logic from theorem 3. So let's name the sets: let U1 be a linearly independent set, and let U2 be a subset of U1; I'm taking only some of the elements of U1. If U2 were linearly dependent, then by theorem 3, U1, which is a set that contains it, would also be linearly dependent. But U1 is linearly independent, so that obviously can't happen. Clear? So: if U2 is linearly dependent, then so is U1, by theorem 3; this is a contradiction, because we started with a linearly independent set. Therefore U2 has to be linearly independent as well. (I probably have my tenses wrong here. It's not really "if U2 is linearly dependent, then so is U1"; I should probably say something like "had U2 been linearly dependent, then U1 would also have been", or something like that. But hey, we can do some informal English here, as long as we're formal in the math; that's what's important.) Is the idea clear? So this is really not a new theorem, it's just a restatement of theorem 3, but I wanted to nevertheless write it in its own right, because again, it's very useful. And you can feel theorem 4 intuitively too: if you have a set where nobody is extra, you need everybody, and you throw some guys out, you're left with only a partial set; of course you're still going to need everybody. Nobody is suddenly going to become extra, because if somebody were extra in the subset, they would have been extra in the bigger set as well. Good.

Numbers 5 and 6. Let's write theorems 5 and 6 together, because their proofs are going to be related, and let's set up some common notation for both. Let S be the set {v1, v2, ..., vk}, so this is a set of vectors, and let's assume that not all the vi's are zero. (If all the vi's are zero, then obviously it's a dependent set and there's not much to say about it; it's boring. Do you agree?) So assume we have a set of vectors that are not all zero, and in particular there is one vector that is definitely not zero; we may as well call it v1, so assume v1 ≠ 0. And then there are two theorems.
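Theorem 4 can also be checked by brute force over subsets. A sketch, using the standard basis of R^3 as a hypothetical independent set (the helper name is mine):

```python
import numpy as np
from itertools import combinations

def is_independent(vecs):
    """A set is independent when the matrix of its columns has full column rank."""
    return np.linalg.matrix_rank(np.column_stack(vecs)) == len(vecs)

# A linearly independent set U1: the standard basis of R^3.
U1 = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]
assert is_independent(U1)

# Every nonempty subset U2 of U1 is also independent (theorem 4).
all_independent = all(
    is_independent(list(U2))
    for r in range(1, len(U1) + 1)
    for U2 in combinations(U1, r)
)
print(all_independent)  # True
```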

Theorem 5 is going to be: S, this set of vectors, is linearly dependent if and only if one of the vi's is a linear combination of the others. This should already feel very intuitive to you: we know that being linearly dependent means somebody is extra, at least one vector is extra, and we can write one as a linear combination of the others. But that's not the definition; the definition was slightly different, so this requires proof. And theorem 6 is going to be: S is linearly dependent if and only if one of the vi's is a linear combination, not just of the others, but of its predecessors, the ones that came before it. So if it's v4, then we can write v4 as a linear combination of v1 to v3. (Is "predecessors" spelled right? Double s? Looks double-s-ish to me. Good; then you can pick up any spelling mistakes that I have. I always get confused on that one.) And you might ask why theorem 6 is even necessary: why do we care that it's a linear combination of the guys in front of it? Because it's useful, in proofs and in arguments.

Let's prove 5 and 6. We in fact have to prove four things, because 5 is an if-and-only-if statement, so there are two directions to prove there, and 6 is an if-and-only-if statement, so there are two directions to prove there as well. But some of these directions follow from the others as special cases, so let's see what follows from what.

First, let's prove this direction of 5: if one of the vectors is a linear combination of the others, then the set is linearly dependent. So suppose one of the vectors, say vi, is a linear combination of the others:

vi = α1v1 + ... + α(i-1)v(i-1) + α(i+1)v(i+1) + ... + αkvk.

This is a completely general linear combination of the others; I skipped vi itself, because vi is supposed to be a combination of the others. Do you agree? So if vi is a linear combination of the others, then moving vi over, I get a linear combination of all of them that gives zero, where not all the coefficients are 0, because the i-th coefficient is -1:

0 = α1v1 + ... + α(i-1)v(i-1) + (-1)vi + α(i+1)v(i+1) + ... + αkvk.

(Minus 1; you're right, because I know you're right.) And this is a linear combination of all the vectors, where not all the coefficients are zero, that gives zero, which implies that S is linearly dependent. Do you agree? Good.

Now, do you agree that this direction of 6 follows from this for free? Because what did 6 say? Look at the board again: we proved that if one of the vi's is a linear combination of the others, then the set is linearly dependent. In 6 we have not just any linear combination, but a more restrictive one; still, the same exact reasoning applies. If one of the vi's is a linear combination of its predecessors, then obviously it's a linear combination of the others, and therefore the set is linearly dependent. So this direction of 6 follows from this direction of 5. Let's write that down. Good? Everybody?

Now let's do the other direction, and let's do this direction of 6: I want to show that if the set is linearly dependent, then one of the vi's is a linear combination of its predecessors. This is the only slightly non-basic idea hidden in everything we've done so far in this lesson. So what do we know? We know that the set is linearly dependent; that's the given this time. So S is linearly dependent, and therefore there exist αi's, not all zero, giving a linear combination which is zero:

α1v1 + α2v2 + ... + αkvk = 0, where not all the α's are 0.

That's what it means for S to be a linearly dependent set. Now, all we know is that not all the α's are 0; some of them could be 0. So let's take the biggest one that's not 0: let j be the greatest index such that αj in this linear combination is not 0. Maybe there are 15,000 vectors here, but only the first 3 coefficients are nonzero and all the rest are zeros; then j would be 3. So let j be the greatest index such that αj ≠ 0. Then we can write

α1v1 + α2v2 + ... + αjvj = 0,

because all the rest have zero coefficients, so leaving them there or removing them makes no difference. Do you agree? And now, since we know αj is not zero, we can move αjvj to the other side and divide by -αj, just like we did in the case of two vectors. Therefore vj can be written as

vj = (-α1/αj)v1 + (-α2/αj)v2 + ... + (-α(j-1)/αj)v(j-1). Good. That's it: we wrote vj as a linear combination of its predecessors, of only vectors with smaller indices. So, to recap: first, from the fact that S is linearly dependent, we have such a combination. Not all the α's in it are zero, but maybe αk is zero; all we know is that at least one is not, but maybe some of them are. So I want to remove the ones that are zero. Because suppose we tried this trick directly and said "vk is a linear combination of its predecessors, which are all the rest": that would mean moving everything to the other side and dividing by αk. But maybe αk is zero, and you can't divide by it. So vk is not necessarily going to do it; throw it out, and keep throwing guys out from the right until you reach one that has a nonzero coefficient. And why is α1v1 + ... + αjvj equal to zero? Because all the α's with index bigger than j were zero: αj was the nonzero one with the greatest index. Do you agree that this is true? And now, since I know αj is not zero, I can move αjvj to the right-hand side, divide by it, add a minus, and I get the formula above. Good.

So what remains is to prove this direction of 5. Do you agree that we just proved this direction of 6, and in fact completed the proof of 6? Exactly. So once we know that if a set is linearly dependent, then one of the vi's is a linear combination of its predecessors, well, in particular it's a linear combination of the others: just take the coefficients of all the rest to be zero. So the left-to-right direction of 5 follows from the left-to-right direction of 6. And this completes the proof of 5 and 6.
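The proof of this direction of 6 is constructive, so we can actually run it. A sketch with hypothetical data: start from a dependence witness whose last coefficient happens to be zero, find the greatest index j with αj ≠ 0, and solve for vj in terms of its predecessors.

```python
import numpy as np

# A dependent set in R^2 with witness alpha = (1, 2, -1, 0):
# 1*v1 + 2*v2 - 1*v3 + 0*v4 = 0, i.e. v3 = v1 + 2*v2 (hypothetical data).
vectors = [np.array([1.0, 0.0]),
           np.array([0.0, 1.0]),
           np.array([1.0, 2.0]),
           np.array([3.0, 3.0])]
alpha = [1.0, 2.0, -1.0, 0.0]
assert np.allclose(sum(a * v for a, v in zip(alpha, vectors)), 0)

# j = greatest index with alpha_j != 0 (0-based; here j = 2). We skip v4
# because its coefficient is zero and we cannot divide by it.
j = max(i for i, a in enumerate(alpha) if a != 0)

# Solve for v_j: v_j = sum over i < j of (-alpha_i / alpha_j) * v_i.
coeffs = [-alpha[i] / alpha[j] for i in range(j)]
v_j_rebuilt = sum(c * v for c, v in zip(coeffs, vectors[:j]))
print(np.allclose(v_j_rebuilt, vectors[j]))  # True: v_j from its predecessors
```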
Whenever you do such things as prove two theorems simultaneously, you have to be very careful not to make any cyclic arguments: things of the form "prove 6, use it to prove 5, and then use 5 to prove the other direction of 6", or anything like that. But here we didn't. We proved one direction of 5 from scratch, then one direction of 6 from it; then we proved the other direction of 6 from scratch, and then the other direction of 5 from that. We never proved something based on something else and then proved that something else based on the first. This was completely kosher. Good.

So that's 5 and 6. I want to state one more theorem. It doesn't quite belong to this specific lesson, because it's not a general statement about vectors; it's related specifically to matrices. But it is a theorem about linear independence, so I might as well state it here. Theorem 7: a homogeneous system of equations Ax = 0 has a non-trivial solution if and only if the

columns of A are linearly dependent. So this is another situation where there's nothing deep going on; we just need to translate things correctly, and the proof will be right there in front of our eyes. Recall that, in general, a system Ax = b can be written in a different way, and in particular Ax = 0 can be written, as we discussed previously, as

x1·a1 + x2·a2 + ... + xn·an = b (or 0 for a homogeneous system),

where a1 is the first column of A, a2 is the second column of A, and so on: these are the columns of A. Remember this? We discussed it previously. So what's the statement here? That the homogeneous system has a non-trivial solution. Well, that happens if and only if there's a non-trivial solution to this equation. But what is a non-trivial solution to this? It is nothing but a linear combination of the columns of A, where not all the coefficients are 0 (because we're assuming the solution is non-trivial), that gives 0. Do you see that? So that's it; there's nothing more to see. Having a non-trivial solution to the system means having xi's, not all 0 (the xi's are the components of x), that satisfy this equation, and that is precisely saying that a1 through an are linearly dependent. So Ax = 0 has a non-trivial solution if and only if a1, ..., an, which are precisely the columns of A, are linearly dependent. Do you agree? This is a very characteristic example of a situation where the theorem may look like there's something big to prove. Some theorems look very naive but in fact require non-trivial and sophisticated technology to prove; that's not the case here. This is just rephrasing the statement in a way that makes it obvious; it's just a restatement of the same fact. Good.

So we're going to stop here. We're almost done with this broader topic of dependence and independence. "Almost" means we have a few more things to say, and then we're going to continue to the concept of the dimension of a vector space and the concept of a basis for a vector space. Those things are going to be related, for example, to the phenomenon we noticed that 2×2 matrices are essentially the same vector space as R^4, which is essentially the same vector space as polynomials of degree 3 over R. Okay, so that's it for today.
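As a closing illustration, theorem 7 translates directly into a rank check: Ax = 0 has a non-trivial solution exactly when rank(A) is smaller than the number of columns, i.e. exactly when the columns are dependent. A sketch with a hypothetical matrix whose third column is the sum of the first two:

```python
import numpy as np

# Hypothetical matrix: third column = first column + second column.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# Theorem 7: Ax = 0 has a non-trivial solution iff the columns of A
# are linearly dependent, i.e. iff rank(A) < number of columns.
n_cols = A.shape[1]
has_nontrivial = np.linalg.matrix_rank(A) < n_cols
print(has_nontrivial)  # True

# An explicit non-trivial solution: x = (1, 1, -1), since a1 + a2 - a3 = 0.
x = np.array([1.0, 1.0, -1.0])
print(np.allclose(A @ x, 0))  # True
```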