
Finding eigenvectors and eigenspaces example

Finding the eigenvectors and eigenspaces of a 2x2 matrix. Created by Sal Khan.


  • Chris Bennett:
    What is a null space?
    • Alexandru Draghici:
      A null space is commonly referred to as the "kernel" of a function: the set of all inputs the function sends to zero, i.e., all x such that f(x) = 0.

      In terms of linear algebra/linear transformation/matrix algebra you can think of a null space (or kernel) as the set of all vectors v such that

      Av = 0, where A is an m×n matrix and 0 is the zero vector.
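
      A quick way to see a null space concretely (not from the video; a minimal sketch assuming Python with SymPy installed):

      ```python
      from sympy import Matrix

      # A singular 2x2 matrix: its second column is twice its first,
      # so it sends some nonzero vectors to the zero vector.
      A = Matrix([[1, 2],
                  [2, 4]])

      # nullspace() returns a basis for {v : A*v = 0}, i.e. the kernel.
      print(A.nullspace())   # [Matrix([[-2], [1]])] -- the span of (-2, 1)
      ```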
  • SoraFBF:
    Why does the eigenspace equal span(1/2, 1) and not span(1, -1/2)?
  • Logan Schelly:
    I understand most of the process, but I feel like the initial question was unanswered. Could someone help me?

    So, if I understand correctly, our initial question was "What are the Eigenvalues (lambda) and Eigenvectors (v) that satisfy the equation T(v) = A*v = lambda*v?"

    We found the values of lambda that are possible in the previous video (link at bottom).

    We then used each distinct possible value of lambda, and plugged it back in to the equation [A-(lambda*I)]v = 0 to determine all possible vectors v that would make that work (the null space).

    This is where I get confused. Toward the end of the video, Sal talks about the span of these vectors, or the Eigenspace. He acts as if the Eigenspace IS the answer.

    Well, is it?

    The initial question was "What are the Eigenvalues (lambda) and Eigenvectors (v) that satisfy the equation T(v) = A*v = lambda*v?"

    I think that the Eigenspaces would accommodate all combinations of possible Eigenvalues and Eigenvectors, but am I wrong in assuming that? Would we have to specify what the Eigenvalues are? I feel comfortable listing a span as an answer to the set of all possible Eigenvectors, but I feel like I'm not accounting for the 2 distinct Eigenvalues.

    Or am I just wrong in what the initial question was?

    Previous video link: https://www.khanacademy.org/math/linear-algebra/alternate_bases/eigen_everything/v/linear-algebra--example-solving-for-the-eigenvalues-of-a-2x2-matrix
    • Bob Fred:
      One point of finding eigenvectors is to find a matrix "similar" to the original that can be written diagonally (only the diagonal has nonzeroes), based on a different basis.
      T(v) = A*v = lambda*v is the right relation. The eigenvalues are all the lambdas you find, the eigenvectors are all the v's you find that satisfy T(v) = lambda*v, and the eigenspace FOR ONE eigenvalue is the span of the eigenvectors corresponding to that eigenvalue.
      Suppose for an eigenvalue L1 you have T(v) = L1*v; then the eigenvectors FOR L1 would be all the v's for which this is true. The eigenspace of L1 would be the span of the eigenvectors OF L1; in this case it would just be the set of all those v's, because of how linear transformations map one dimension into another. The (entire) eigenspace would be the span of all the eigenvectors from all the eigenvalues.
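      If it helps to see this numerically, here is a minimal sketch (my addition, assuming Python with NumPy) for the matrix from this video:

      ```python
      import numpy as np

      A = np.array([[1., 2.],
                    [4., 3.]])        # the matrix from the video

      vals, vecs = np.linalg.eig(A)   # eigenvalues; eigenvectors as columns
      print(vals)                     # [ 5. -1.]  (order may vary)

      # Any nonzero multiple of an eigenvector is again an eigenvector,
      # which is why each eigenvalue comes with a whole eigenSPACE (a line).
      v = 7 * vecs[:, 0]
      print(np.allclose(A @ v, vals[0] * v))   # True
      ```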
  • Suraj Rao:
    Is it always necessary to have a row of zeros?
  • Craig Mayhew:
    I feel I may have missed a video, as I don't know what the "reduced row echelon form" of a matrix is or how to find one.
  • Anastasia Valentino:
    Is there a general rule for the relationships of eigenvalues graphically? E.g., are the eigenvectors of the corresponding eigenvalues perpendicular?
    • InnocentRealist:
      The eigenvalues don't have any direction because they're scalars. For some 2×2 matrices the eigenspaces for different eigenvalues are orthogonal; for others, not.

      An n×n matrix always has n eigenvalues, but some come in complex pairs, and these don't have eigenspaces in R^n, and some eigenvalues are duplicated; so there aren't always n eigenspaces in R^n for an n×n matrix. Some eigenspaces have more than one dimension.
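
      For a concrete instance of a complex pair (a sketch I'm adding, assuming NumPy): a 90-degree rotation of the plane moves every nonzero vector off its own line, so it has no real eigenvectors at all:

      ```python
      import numpy as np

      R = np.array([[0., -1.],
                    [1.,  0.]])        # rotation by 90 degrees
      print(np.linalg.eig(R)[0])       # [0.+1.j 0.-1.j] -- a complex pair
      ```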
  • Natalie Zeidan:
    For E_5, shouldn't it be t[2; 1], since v1 = 0.5*v2, which would mean 2*v1 = 1*v2?
    • InnocentRealist:
      Yes, v1 = v2/2. What's the solution space (all the vectors v = (v1, v2) such that (lambda*I - A)v = 0)? First, what's the free variable in the rref?

      (v2, because it's not a pivot variable: it's not constrained by the solution and can be any real number.) What is v1 in terms of v2?

      (v1 is constrained by the solution to be v2/2.) What's {v = (v1, v2)} (the entire solution space) in terms of the free variable(s)?

      ({(v1, v2)} = {(v2/2, v2) | v2 is a real number}.) How do you express that as a span of basis vectors?

      ({(v2/2, v2)} = {v2(1/2, 1)} = span(1/2, 1) = span(1, 2) = {t(1, 2) | t is real}.)
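
      You can check each of these steps with SymPy (a sketch of mine, not part of the original answer):

      ```python
      from sympy import Matrix, eye

      A = Matrix([[1, 2], [4, 3]])
      M = 5 * eye(2) - A     # the matrix whose null space is the eigenspace for 5

      print(M.rref()[0])     # Matrix([[1, -1/2], [0, 0]])  ->  v1 = v2/2
      print(M.nullspace())   # [Matrix([[1/2], [1]])], the same line as span(1, 2)
      ```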
  • nhm:
    So, once we know what the eigenvectors and eigenvalues are, what practical purpose does that serve? I mean, where would you use this?
  • Vincent Oostelbos:
    I have a bit of an in-depth question about a practical implication of the use of "eigen-everything". It requires a bit of an introduction, sorry about that.

    So, I came here to try to figure out why eigenvalues and eigenvectors are used in the course I'm assisting at university, called Theoretical ecology. Most of that course uses differential equations to describe population dynamics between populations of species that influence one another (predator–prey systems etc.). You plot the populations on different axes in a graph, and you get one or more "nullclines" for each equation, which are the lines of all points where the equation equals zero; in other words, the population does not change given those population densities for all the populations involved. On either side of such a nullcline, the population either grows or declines, and this changes (in other words, the sign of the value of the differential equation flips) across the nullcline. Such a graph is called the phase space, and you can often see whether some equilibrium (which is where the nullclines intersect; after all, that's when the growth of all populations equals zero) is stable (perturbations die out) or unstable (perturbations grow) by looking at the sign of the derivatives on all sides of the equilibrium in the phase space.

    However, sometimes it's not immediately obvious, and you actually need to calculate the so-called Jacobian matrix, which is the matrix of partial derivatives of the differential equations. E.g., how does the change in population X (which is zero in the equilibrium) change with a small positive step in population density X? How about with a small positive step in population density Y? How about the change in population Y with each of these steps? (That's all of them for a two-dimensional system, giving you a two-by-two Jacobian matrix.)

    Now, as it turns out, the equilibrium will be stable if both eigenvalues of this matrix are negative ('both' if it's two-by-two, anyway; I have yet to look at the next few videos to see if there will be more for larger matrices). But I find it very difficult to understand intuitively why this is the case. Can anybody help me with this? What would the eigenvalues or eigenvectors represent in biological terms in this case?

    Hopefully this is all a bit understandable for someone who's never considered differential equations for population dynamics before; or, alternatively, there's someone out there who has and who can answer this question. Thanks in advance.
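
    (A sketch of the standard linear-stability argument, added here for anyone puzzling over the same question. Near the equilibrium, a small perturbation delta(t) approximately obeys the linearized system d(delta)/dt = J*delta, and for a Jacobian J with distinct eigenvalues the general solution splits along the eigenvectors:

    ```latex
    \delta(t) = c_1 e^{\lambda_1 t}\,\mathbf{v}_1 + c_2 e^{\lambda_2 t}\,\mathbf{v}_2
    ```

    Biologically, each eigenvector is a direction in population space, a particular mix of the two species, along which a perturbation evolves independently, growing or shrinking at the pure exponential rate given by its eigenvalue. If both eigenvalues are negative, or have negative real parts in the complex case, every component of the perturbation decays and the equilibrium is stable.)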
  • Michael Arcilla:
    Sal writes t[-1, 1], but in the video v2 = -1 and v1 = 1; shouldn't it be t[1, -1]?
    • InnocentRealist:
      What equation does the null space of the rref, N(rref), represent?

      (v1 + v2 = 0). What does this equation mean about v1 and v2?

      (They can be any real numbers, positive or negative, as long as their sum is 0.)

      Which is the pivot variable, v1 or v2?

      (Since by convention v1 comes first, its coefficient is the first nonzero entry in the rref, so it's the pivot variable, and v2 is the free variable.)

      What does it mean, that v2 is a free variable?

      (v2 can be any real number.)

      What's the solution set {(v1, v2) | v1 + v2 = 0}? How should it be expressed?

      (Since the value of v1 depends on the value of v2, let's express (v1, v2) in terms of v2: {(v1, v2)} = {(-v2, v2)}.) What is this set as a span of fixed vectors?

      ({(v1, v2)} = {(-v2, v2)} = {v2(-1, 1)} = span(-1, 1) = {t(-1, 1) | t is real}.)
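
      A one-line numerical check of that result (my addition, assuming NumPy):

      ```python
      import numpy as np

      A = np.array([[1., 2.], [4., 3.]])
      v = np.array([-1., 1.])        # the spanning vector found above
      print(A @ v)                   # [ 1. -1.], i.e. exactly -1 * v
      ```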

Video transcript

In the last video, we started with the 2 by 2 matrix A is equal to 1, 2, 4, 3. And we used the fact that lambda is an eigenvalue of A, if and only if, the determinant of lambda times the identity matrix-- in this case it's a 2 by 2 identity matrix-- minus A is equal to 0. This gave us a characteristic polynomial, and we solved for that and we said, well, the eigenvalues for A are lambda is equal to 5 and lambda is equal to negative 1. That's what we saw in the last video. We said that if you were trying to solve A times some eigenvector is equal to lambda times that eigenvector, the two lambdas for which this equation can be solved are the lambdas 5 and minus 1, assuming nonzero eigenvectors. So we have our eigenvalues, but I don't even call that half the battle. What we really want is our eigenvectors and our eigenvalues. So let's see if we can do that.

So let's manipulate this equation a little bit-- we've manipulated it in the past. Actually, we've even come up with this statement over here. We can rewrite this over here as the 0 vector is equal to lambda times my eigenvector minus A times my eigenvector. I just subtracted Av from both sides. We know lambda times some eigenvector is the same thing as lambda times the identity matrix times that eigenvector. So all I'm doing is rewriting this like that. You multiply the identity matrix times an eigenvector, or times any vector, you're just going to get that vector. So these two things are equivalent. Minus Av. That's still going to be equal to the 0 vector. So far all I've done is manipulated this thing. This is really how we got to that thing up there. You factor out the v, so to speak, because we know that matrix-vector products exhibit the distributive property. And we get lambda times the identity matrix minus A, times my eigenvector, has got to be equal to 0.

Or another way to say it is, for any eigenvalue lambda, the eigenvectors that correspond to that lambda-- we can call that the eigenspace for lambda. So that's a new word, eigenspace. Eigenspace just means all of the eigenvectors that correspond to some eigenvalue. The eigenspace for some particular eigenvalue is going to be equal to the set of vectors that satisfy this equation. Well, the set of vectors that satisfy this equation is just the null space of that right there. So it's equal to the null space of this matrix right there: the null space of lambda times the identity matrix minus A. And so everything I've done here, this is true-- this is the general case.

But now we can apply this notion to this matrix A right here. So we know that 5 is an eigenvalue. Let's say for lambda is equal to 5, the eigenspace that corresponds to 5 is equal to the null space of? Well, what is 5 times the identity matrix? It's going to be the 2 by 2 identity matrix. 5 times the identity matrix is just 5, 0, 0, 5, minus A. That's just 1, 2, 4, 3. So that is equal to the null space of the matrix: 5 minus 1 is 4. 0 minus 2 is minus 2. 0 minus 4 is minus 4. And then, 5 minus 3 is 2. So the null space of this matrix right here-- and this matrix is just an actual numerical representation of this matrix right here. The null space of this matrix is the set of all of the vectors that satisfy this, or all of the eigenvectors that correspond to this eigenvalue. Or, the eigenspace that corresponds to the eigenvalue 5. These are all equivalent statements.
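(For reference, the chain of manipulations Sal just narrated, condensed into one line; here N denotes the null space and E_lambda the eigenspace:)

```latex
A\mathbf{v} = \lambda\mathbf{v}
  \;\iff\; (\lambda I - A)\mathbf{v} = \mathbf{0}
  \;\iff\; \mathbf{v} \in N(\lambda I - A),
\qquad\text{so}\qquad
E_{5} = N\!\left(\begin{bmatrix} 4 & -2 \\ -4 & 2 \end{bmatrix}\right).
```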
So we just need to figure out the null space of this guy, which is all of the vectors that satisfy the equation 4, minus 2, minus 4, 2 times some eigenvector is equal to the 0 vector. And the null space of a matrix is equal to the null space of the reduced row echelon form of that matrix. So what's the reduced row echelon form of this guy? Well, I guess a good starting point-- let me keep my first row the same, 4, minus 2. And let me replace my second row with my second row plus my first row. So minus 4 plus 4 is 0. 2 plus minus 2 is 0. Now, let me divide my first row by 4 and I get 1, minus 1/2. And then I get 0, 0.

So what's the null space of this? This corresponds to v. This times v1, v2-- that's just another way of writing my eigenvector v-- has got to be equal to the 0 vector. Or another way to say it is that my first entry v1, which corresponds to this pivot column, plus minus 1/2 times my second entry, has got to be equal to that 0 right there. Or, v1 is equal to 1/2 v2. And so if I wanted to write all of the eigenvectors that satisfy this, I could write it this way. My eigenspace that corresponds to lambda equals 5-- that corresponds to the eigenvalue 5-- is equal to the set of all of the vectors, v1, v2, that are equal to some scaling factor. Let's say it's equal to t times what? If we say that v2 is equal to t, then v2 is going to be equal to t times 1. And then, v1 is going to be equal to 1/2 times v2, or 1/2 times t. Just like that. For any t that is a member of the real numbers. If we wanted to, we could scale this up. We could say any real number times 1, 2. That would also be the span. Let me do that actually. It'll make it a little bit cleaner. Actually, I don't have to do that. So we could write that the eigenspace for the eigenvalue 5 is equal to the span of the vector 1/2 and 1. So it's a line in R2. Those are all of the eigenvectors that satisfy-- that work for-- the equation where the eigenvalue is equal to 5.

Now what about when the eigenvalue is equal to minus 1? So let's do that case. When lambda is equal to minus 1, the eigenspace for lambda is equal to minus 1 is going to be the null space of lambda times our identity matrix, which is going to be minus 1 and 0, 0, minus 1-- it's going to be minus 1 times 1, 0, 0, 1, which is just minus 1 there-- minus A. So minus 1, 2, 4, 3. And this is equal to the null space of: minus 1 minus 1 is minus 2. 0 minus 2 is minus 2. 0 minus 4 is minus 4, and minus 1 minus 3 is minus 4. And that's going to be equal to the null space of the reduced row echelon form of that guy. So we can perform some row operations right here. Let me just put it in reduced row echelon form. I'll keep the first row the same: minus 2, minus 2. And then I'll replace my second row with it plus minus 2 times the first. So minus 4 plus 4 is 0, and minus 4 plus 4 is 0. And then if I divide the top row by minus 2, the reduced row echelon form of this matrix right here is going to be 1, 1, 0, 0. So the eigenspace that corresponds to the eigenvalue minus 1 is equal to the null space of this guy right here. It's the set of vectors that satisfy this equation: 1, 1, 0, 0, and then you have v1, v2 is equal to 0. Or you get v1 plus-- these aren't vectors, these are just values-- v1 plus v2 is equal to 0. Because 0 is just equal to that thing right there.
So 1 times v1 plus 1 times v2 is going to be equal to that 0 right there. Or I could write v1 is equal to minus v2. Or, if we say that v2 is equal to t, we could say v1 is equal to minus t. Or we could say that the eigenspace for the eigenvalue minus 1 is equal to all of the vectors, v1, v2, that are equal to some scalar t times the vector minus 1, 1-- so v1 is minus t and v2 is plus t. Or you could say this is equal to the span of the vector minus 1 and 1.

So let's just graph this a little bit, just to understand what we just did. We were able to find two eigenvalues for this, 5 and minus 1. And we were able to find the set of vectors that are the eigenvectors that correspond to each of these eigenvalues. So let's graph them. So if we go to R2, let me draw my axes; this is my vertical axis, that's my horizontal axis. So all of the vectors that correspond to lambda equal 5 are along the line 1/2, 1, or the span of 1/2, 1. So that is 1, that is 1. So you go 1/2 and 1, just like that. So that's that spanning vector. But anything along the span of this, all the multiples of this, are going to be valid eigenvectors. So anything along that line-- any vector that, when you draw it in standard position, points to a point on that line-- is going to be a valid eigenvector, and the corresponding eigenvalue is going to be equal to 5. So you give me this guy right here. When you apply the transformation, it's going to be five times this guy. If this guy is x, T of x is going to be five times this guy. Whatever vector you give along this line, the transformation of that guy-- and the transformation is literally multiplying it by the matrix A, the matrix A right up there-- is essentially just scaling this guy by 5, in either direction. This is for lambda equal 5.

And for lambda equals minus 1, it's the span of this vector, which is minus 1, 1. Which looks like this. So this vector looks like that. We care about the span of it. Any vector that, when you draw it in standard position, lies on or points to points on this line will be an eigenvector for the eigenvalue minus 1. So lambda equals minus 1. Let's say you take the spanning vector here. You apply the transformation, you're going to get minus 1 times it. So if this is x, the transformation of x is going to be that right there. Same length, just in the opposite direction. If you have this guy right here, you apply the transformation, it's going to be in the same spanning line, just like that.

So the two eigenspaces for the matrix-- where did I write it? I think it was the matrix 1, 2, 4, 3. The two eigenvalues were 5 and minus 1. And then it has an infinite number of eigenvectors, so they actually create two eigenspaces. Each of them corresponds to one of the eigenvalues. And these lines represent those two eigenspaces. You give me any vector in either of these sets, and it will be an eigenvector for our matrix A. And then, depending on which line it is, we know what its transformation is going to be. If it's on this guy, we take the transformation and the resulting vector is going to be five times the vector. If you take one of these eigenvectors and you transform it, the resulting transformation of the vector is going to be minus 1 times that vector.
Anyway, we now know what eigenvalues, eigenvectors, and eigenspaces are. And even better, we know how to actually find them.
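
For anyone who wants to confirm the whole computation, here is a minimal sketch (my addition, assuming Python with SymPy installed) that reproduces both eigenspaces exactly:

```python
from sympy import Matrix

A = Matrix([[1, 2], [4, 3]])

# eigenvects() returns (eigenvalue, multiplicity, basis of the eigenspace);
# the order of the eigenvalues may vary.
for val, mult, basis in A.eigenvects():
    print(val, [list(b) for b in basis])
# -1 [[-1, 1]]      -- the eigenspace span(-1, 1)
#  5 [[1/2, 1]]     -- the eigenspace span(1/2, 1) = span(1, 2)
```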