
Prerequisite Skill Sets for Artificial Intelligence Development – Part I

There is hardly any theory which is more elementary than linear algebra, in spite of the fact that generations of

data scientists and mathematicians have obscured its simplicity by preposterous calculations with matrices.

            – Jean Dieudonné, French mathematician

Mathematics requires a small dose, not of genius, but of an imaginative freedom which in a larger dose would be insanity.

               – Angus K. Rodgers

This series will be about linear algebra, which, as a lot of you know, is one of those subjects that is required knowledge

for just about any technical education, but which is also, I have noticed, generally poorly understood by IT professionals

meeting it for the first time. A data scientist might go through a class and learn how to compute lots of things, like determinants,

matrix multiplication, the cross product (which uses the determinant), or eigenvalues, but they might come out

without really understanding why matrix multiplication is defined the way that it is, why the cross product has

anything to do with the determinant, or what an eigenvalue really represents.

In this blog I have tried to help data scientists take a fresh look at linear algebra from an altogether different

perspective, in terms of analytic geometry or three-dimensional spatial geometry, and to understand how

it can easily be put to use in computer vision, image processing, speech recognition, and robotics.

So let's start with some basics, viz. vector spaces.

 

Fields and Vector Spaces

As has been indicated in the preface, our first aim is to re-work the linear algebra presented in our AAS course,

generalising from R to an arbitrary field as the domain from which coefficients are to be taken.

Fields are treated in the companion course, Rings and Arithmetic, but to make this course and these notes

self-contained we begin with a definition of what we mean by a field.

Axioms for Fields 

A field is a set F with distinguished elements 0, 1, with a unary operation −, and with two binary operations +

and × satisfying the axioms below (the ‘axioms of arithmetic’).

Conventionally, for the image of (a, b) under the function + : F × F → F we write a + b; for the image of (a, b)

under the function × : F × F → F we write a b; and x − y means x + (−y), x + y z means x + (y z).

The axioms of arithmetic

(1) a + (b + c) = (a + b) + c        [+ is associative]

(2) a + b = b + a                    [+ is commutative]

(3) a + 0 = a

(4) a + (−a) = 0

(5) a (b c) = (a b) c                [× is associative]

(6) a b = b a                        [× is commutative]

(7) a 1 = a

(8) a ≠ 0 ⇒ ∃ x ∈ F : a x = 1

(9) a (b + c) = a b + a c            [× distributes over +]

(10) 0 ≠ 1

 

Note 1: All axioms are understood to have ∀ a, b, . . . ∈ F in front.

Note 2: See my Notes on Rings and Arithmetic for more discussion.

Note 3: Examples are Q, R, C, Z₂ or, more generally, Zₚ, the field of integers modulo p for any prime number p.
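Note 3 is easy to experiment with. The sketch below (an illustration of mine, not part of the notes) samples the field axioms in Zₚ, computing the inverse required by axiom (8) via Fermat's little theorem:

```python
# A quick numerical sanity check of the field axioms in Z_p (p prime).
# Illustration only; the choice p = 7 is arbitrary.
p = 7

def add(a, b):
    return (a + b) % p

def mul(a, b):
    return (a * b) % p

def inv(a):
    # Fermat's little theorem: a^(p-1) = 1 (mod p), so a^(p-2) is a's inverse.
    assert a % p != 0, "0 has no multiplicative inverse"
    return pow(a, p - 2, p)

for a in range(p):
    for b in range(p):
        assert add(a, b) == add(b, a)      # axiom (2): + commutes
        assert mul(a, b) == mul(b, a)      # axiom (6): x commutes
    if a != 0:
        assert mul(a, inv(a)) == 1         # axiom (8): inverses exist
print("Z_%d passes the sampled field axioms" % p)
```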

Vector Spaces

Let F be a field. A vector space over F is a set V with distinguished element 0, with a unary operation −,

with a binary operation +, and with a function × : F × V → V satisfying the axioms below.

Conventionally: for the image of (a, b) under the function + : V × V → V

we write a + b; for a + (−b) we write a − b; and for the image of (α, v)

under the function × : F × V → V we write α v .

Vector space axioms

(1) u + (v + w) = (u + v) + w
(2) u + v = v + u
(3) u + 0 = u
(4) u + (−u) = 0
(5) α (β v) = (α β) v
(6) α (u + v) = α u + α v
(7) (α + β) v = α v + β v
(8) 1 v = v

Note: All axioms are understood to have appropriate quantifiers ∀ u, v, . . . ∈ V
and/or ∀ α, β, . . . ∈ F in front.

EXAMPLES:

  1. Fⁿ is a vector space over F;
  2. the polynomial ring F[x] is a vector space over F;
  3. the matrix space M m×n (F) is a vector space over F;
  4. if K is a field and F a sub-field, then K is a vector space over F.

Exercise 1. Let X be any set, F any field. Define

F^X to be the set of all functions X → F, with the usual pointwise addition and multiplication by scalars (elements of F). Show that F^X is a vector space over F.
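For readers who like to see the pointwise operations concretely, here is a small sketch of mine for Exercise 1 with F = R (floats) and a three-element X; both choices are illustrative only:

```python
# Functions X -> F with pointwise operations form a vector space (Exercise 1).
# Here F = R (floats) and X = {0, 1, 2}.
X = [0, 1, 2]

def f(x):
    return float(x * x)

def g(x):
    return float(3 - x)

def pointwise_sum(f1, f2):
    # (f1 + f2)(x) := f1(x) + f2(x)
    return lambda x: f1(x) + f2(x)

def scalar_mul(lam, f1):
    # (lam f1)(x) := lam * f1(x)
    return lambda x: lam * f1(x)

h = pointwise_sum(f, scalar_mul(2.0, g))   # h = f + 2g, again a function X -> R
print([h(x) for x in X])                   # [6.0, 5.0, 6.0]
```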

Exactly as for rings, or for vector spaces over R, one can prove important “trivialities”.

Of course, if we could not prove them then we would add further axioms until we

had captured all properties that we require, or at least expect, of the algebra of vectors.

But the fact is that this set of axioms, feeble though it seems, is enough. For example:

PROPOSITION: Let V be a vector space over the field F . For any v ∈ V and any α ∈ F we have

(i) 0 v = 0;
(ii) α 0 = 0;
(iii) if α v = 0 then α = 0 or v = 0;
(iv) α (−v) = −(α v) = (−α) v.

Proof. For v ∈ V , from Field Axiom (3) and Vector Space Axiom (7) we have
0 v = (0 + 0) v = 0 v + 0 v.

Then adding −(0 v) to both sides of this equation and using Vector Space Axioms (4)
on the left and (1), (4), (3) on the right, we get that 0 = 0 v , as required for (i). The
reader is invited to give a proof of (ii).

For (iii), suppose that α v = 0 and α ≠ 0: our task is to show that v = 0. By Field
Axioms (8) and (6) there exists β ∈ F such that β α = 1. Then

v = 1 v = (β α) v = β (α v) = β 0

by Vector Space Axioms (8) and (5). But β 0 = 0 by (ii), and so v = 0, as required.
Clause (iv), like Clause (ii), is offered as an exercise.

Subspaces

Let V be a vector space over a field F. A subspace of V is a subset U such that
(1) 0 ∈ U, u + v ∈ U whenever u, v ∈ U, and −u ∈ U whenever u ∈ U;
(2) if u ∈ U and α ∈ F then α u ∈ U.

Note 1: Condition (1) says that U is an additive subgroup of V . Condition (2) is closure under

multiplication by scalars.

Note 2: We write U ≤ V to mean that U is a subspace of V.

Note 3: Always {0} is a subspace; if U ≠ {0} then we say that U is non-zero or non-trivial. Likewise, V is itself a subspace; if U ≠ V then we say that U is a proper subspace.

Note 4: A subset U of V is a subspace if and only if U ≠ ∅ and U is closed under + (that is, u, v ∈ U ⇒ u + v ∈ U) and under multiplication by scalars.

EXAMPLES:

(1) Let L1, . . . , Lm be homogeneous linear expressions ∑j cij xj with coefficients cij ∈ F, and let U := {(x1, . . . , xn) ∈ Fⁿ | L1 = 0, . . . , Lm = 0}. Then U ≤ Fⁿ.

(2) Let F[n][x] := {f ∈ F[x] | f = 0 or deg f ≤ n}. Then F[n][x] ≤ F[x].

(3) Upper triangular matrices form a subspace of M n×n (F).
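Example (1) can be probed numerically. Below is a small sketch of mine (over F = R, with an arbitrary coefficient matrix) checking the closure conditions for the solution set of a homogeneous system:

```python
import numpy as np

# U = {x in R^4 | Cx = 0}: the solution set of a homogeneous linear system.
# The coefficient matrix C is arbitrary, chosen for illustration.
C = np.array([[1., 2., 0., -1.],
              [0., 1., 1.,  1.]])

# Two solutions found from the null space (via the SVD).
_, _, Vt = np.linalg.svd(C)
u, v = Vt[-1], Vt[-2]            # rows spanning the 2-dimensional null space

for x in (u + v, 3.5 * u, u - 2.0 * v):
    assert np.allclose(C @ x, 0)  # closure under + and scalar multiplication
print("U <= F^n: sums and scalar multiples of solutions are again solutions")
```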

QUOTIENT SPACES

Suppose that U ≤ V, where V is a vector space over a field F. Define the quotient
space V/U as follows:

set := {x + U | x ∈ V}   [additive cosets]

0 := U

additive inverse: −(x + U) := (−x) + U

addition: (x + U) + (y + U) := (x + y) + U

multiplication by scalars: α (x + U) := (α x) + U

Note: The notion of quotient space is closely analogous with the notion of quotient
of a group by a normal subgroup or of a ring by an ideal. It is not in the Part A syllabus,
nor will it play a large part in this course. Nevertheless, it is an important and useful
construct which is well worth becoming familiar with.

Dimension

Although technically new, the following ideas and results translate so simply from
the case of vector spaces over R to vector spaces over an arbitrary field F that I propose
simply to list headers for revision:
(1) spanning sets; linear dependence and independence; bases;
(2) dimension;
(3) dim V = d ⇔ V ≅ Fᵈ;
(4) any linearly independent set may be extended (usually in many ways) to a basis;
(5) intersection U ∩ W of subspaces; sum U + W;
(6) dim (U + W) = dim U + dim W − dim (U ∩ W).
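Header (6), the dimension formula, lends itself to a numerical check: representing U and W as column spans, every dimension involved is a matrix rank. The sketch below is mine, over R, with randomly chosen generators:

```python
import numpy as np

rank = np.linalg.matrix_rank

# U and W given as column spans in R^4; generators are random, for illustration.
U = np.random.randn(4, 3)
W = np.random.randn(4, 2)

# dim (U n W): solve U a = W b, i.e. find the null space of [U | -W].
M = np.hstack([U, -W])
null_dim = M.shape[1] - rank(M)   # = dim (U n W) when U, W have independent columns

# Header (6): dim (U + W) = dim U + dim W - dim (U n W)
assert rank(np.hstack([U, W])) == rank(U) + rank(W) - null_dim
print("dim(U+W) =", rank(np.hstack([U, W])), " dim(U n W) =", null_dim)
```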

LINEAR TRANSFORMATION

Let F be a field and let V1, V2 be vector spaces over F.

A map T : V1 → V2 is said to be linear if

T 0 = 0,
T(−x) = −T x,
T(x + y) = T x + T y,
and T(λ x) = λ (T x)
for all x, y ∈ V1 and all λ ∈ F.

Note 1: The definition is couched in terms which are intended to emphasise that
what should be required of a linear transformation  is
that it preserves all the ingredients, 0, +, − and multiplication by scalars, that go into

the making of a vector space. What we use in practice is the fact that T : V1 → V2 is

linear if and only if T(α x + β y) = α T x + β T y for all x, y ∈ V1 and all α, β ∈ F.

Note 2:

The identity I : V → V is linear; if T : V1 → V2 and S : V2 → V3 are linear then S ∘ T : V1 → V3 is linear.

For a linear transformation T : V → W we define the kernel or null-space by Ker T := {x ∈ V | T x = 0}, and the image or range by Im T := {T x | x ∈ V}.

We prove that Ker T ≤ V and Im T ≤ W, and we define nullity T := dim Ker T, rank T := dim Im T.

RANK-NULLITY THEOREM:

nullity T + rank T = dim V

Note: The Rank-Nullity Theorem is a version of the First Isomorphism Theorem
for vector spaces, which states that

Im T ≅ V / Ker T.
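For a matrix viewed as a linear map T : Rⁿ → Rᵐ, the Rank-Nullity Theorem can be verified directly; the sketch below is mine, with an arbitrary matrix:

```python
import numpy as np

# T : R^5 -> R^3 given by a 3x5 matrix; the entries are arbitrary.
T = np.random.randn(3, 5)
n = T.shape[1]                        # n = dim V

rank_T = np.linalg.matrix_rank(T)     # rank T = dim Im T

# Ker T from the SVD: right-singular vectors with zero singular value.
_, s, Vt = np.linalg.svd(T)
null_basis = Vt[(s > 1e-10).sum():]   # rows spanning Ker T
nullity_T = null_basis.shape[0]       # nullity T = dim Ker T

assert np.allclose(T @ null_basis.T, 0)   # these really lie in Ker T
assert nullity_T + rank_T == n            # Rank-Nullity Theorem
print("rank =", rank_T, ", nullity =", nullity_T, ", dim V =", n)
```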

DIRECT SUMS AND PROJECTION OPERATORS 

The vector space V is said to be the direct sum of its subspaces U and W , and we
write V = U ⊕ W , if V = U + W and U ∩ W = {0}.

Lemma: Let U , W be subspaces of V . Then V = U ⊕ W if and only if for every
v ∈ V there exist unique vectors u ∈ U and w ∈ W such that v = u + w.

Proof. Suppose first that for every v ∈ V there exist unique vectors u ∈ U and

w ∈ W such that v = u + w . Certainly then V = U + W and what we need to prove

is that U ∩ W = {0}. So let x ∈ U ∩ W . Then x = x + 0 with x ∈ U and 0 ∈ W .

Equally, x = 0 + x with 0 ∈ U and x ∈ W . But the expression x = u + w with u ∈ U

and w ∈ W is, by assumption, unique, and it follows that x = 0. Thus U ∩ W = {0}, as required.

Now suppose that V = U ⊕ W . If v ∈ V then since V = U + W there certainly exist vectors u ∈ U

and w ∈ W such that v = u + w . The point at issue therefore is: are u, w uniquely determined by v?

Suppose that u + w = u ′ + w ′ , where u, u ′ ∈ U and w, w ′ ∈ W . Then u − u ′ = w ′ − w .

This vector lies both in U and in W .

By assumption, U ∩ W = {0} and so u − u ′ = w ′ − w = 0. Thus u = u ′ and w = w ′ ,

so the decomposition of a vector v as u + w with u ∈ U and w ∈ W is unique, as required.

Note: What we are discussing is sometimes (but rarely) called the “internal” direct

sum to distinguish it from the natural construction which starts from two vector spaces

V1, V2 over the same field F and constructs a new vector space whose set is the product set V1 × V2 and in which the vector space structure is defined componentwise; compare

the direct product of groups or of rings. This is (equally rarely) called the “external”

direct sum. These are two sides of the same coin: the external direct sum of V1 and V2 is the internal direct sum of its subspaces V1 × {0} and {0} × V2; while if V = U ⊕ W

then V is naturally isomorphic with the external direct sum of U and W.

We come now to projection operators. Suppose that V = U ⊕ W .

Define P : V → V as follows.

For v ∈ V write v = u + w where u ∈ U and w ∈ W and then define

P v := u. Strictly P depends on the ordered pair (U, W ) of summands of V , but to

keep things simple we will not build this dependence into the notation.

OBSERVATIONS:

(1) P is well-defined;
(2) P is linear;
(3) Im P = U , Ker P = W ;

(4) P² = P.

Proofs. That P is well-defined is an immediate consequence of the existence and
uniqueness of the decomposition v = u + w with u ∈ U, w ∈ W.

To see that P is linear, let v1, v2 ∈ V and α1, α2 ∈ F (where, as always, F is the field of scalars). Let v1 = u1 + w1 and v2 = u2 + w2 be the decompositions of v1 and v2. Then P v1 = u1 and P v2 = u2. What about P(α1 v1 + α2 v2)? Well,

α1 v1 + α2 v2 = α1 (u1 + w1) + α2 (u2 + w2) = (α1 u1 + α2 u2) + (α1 w1 + α2 w2).

Since α1 u1 + α2 u2 ∈ U and α1 w1 + α2 w2 ∈ W it follows that P(α1 v1 + α2 v2) = α1 u1 + α2 u2. Therefore P(α1 v1 + α2 v2) = α1 P(v1) + α2 P(v2), that is, P is linear.

For (3) it is clear from the definition that Im P ⊆ U ; but if u ∈ U then u = P u,

and therefore Im P = U . Similarly, it is clear that W ⊆ Ker P ;

but if v ∈ Ker P then

v = 0 + w for some w ∈ W , and therefore Ker P = W .

Finally, if v ∈ V and we write v = u + w with u ∈ U and w ∈ W then

P² v = P(P v) = P u = u = P v, and this shows that P² = P, as required.

Terminology: the operator (linear transformation) P is called the projection of V onto U along W.

Note 1. Suppose that V is finite-dimensional. Choose a basis u1, . . . , ur for U and a basis w1, . . . , wm for W. Then the matrix of P with respect to the basis u1, . . . , ur, w1, . . . , wm of V is the block matrix

( Ir 0 )
( 0  0 )

where Ir is the r × r identity matrix.
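Note 1 translates directly into code: put the basis u1, . . . , ur, w1, . . . , wm into the columns of a matrix B; then the standard-basis matrix of P is B · diag(Ir, 0) · B⁻¹. A sketch of mine in R³, with an arbitrarily chosen splitting:

```python
import numpy as np

# V = R^3 = U (+) W, with U spanned by u1, u2 and W by w1 (all chosen arbitrarily).
U = np.array([[1., 0.], [0., 1.], [0., 0.]])   # u1, u2 as columns
W = np.array([[1.], [1.], [1.]])               # w1 as a column
B = np.hstack([U, W])                          # basis u1, u2, w1 of V

# Matrix of P in this basis is diag(I_r, 0); change basis back to the standard one.
D = np.diag([1., 1., 0.])
P = B @ D @ np.linalg.inv(B)

assert np.allclose(P @ P, P)    # Observation (4): P^2 = P
assert np.allclose(P @ U, U)    # P fixes U (Im P = U)
assert np.allclose(P @ W, 0)    # P kills W (Ker P = W)
print(P.round(3))
```

Flipping the 1s and 0s on the diagonal gives I − P, matching Note 2 below.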

 

Note 2. If P is the projection onto U along W then I − P , where I : V → V is the identity transformation,

is the projection onto W along U .

Note 3. If P is the projection onto U along W then u ∈ U if and only if P u = u. The fact that if u ∈ U then

P u = u is immediate from the definition of P , while if P u = u then obviously u ∈ Im P = U .

Our next aim is to characterise projection operators algebraically. It turns out that Observation (4) above is the key:

TERMINOLOGY :

An operator T such that T² = T is said to be idempotent.

 

THEOREM : A linear transformation is a projection operator if and only if it is idempotent.

Proof: We have seen already that every projection is idempotent, so the problem is to prove that an idempotent

linear transformation is a projection operator. Suppose that P : V → V is linear and idempotent. Define

U := Im P, W := Ker P. Our first task is to prove that V = U ⊕ W. Let v ∈ V. Then v = P v + (v − P v). Now P v ∈ U (obviously), and P(v − P v) = P v − P² v = 0, so v − P v ∈ W. Thus V = U + W. Now let x ∈ U ∩ W. Since x ∈ U there exists y ∈ V such that x = P y, and since x ∈ W also P x = 0. Then x = P y = P² y = P x = 0. Thus U ∩ W = {0}, and so V = U ⊕ W.

To finish the proof we need to convince ourselves that P is the projection onto U along W .

For v ∈ V write v = u + w where u ∈ U and w ∈ W . Since u ∈ U there must exist x ∈ V such that
u = P x. Then

 

P v = P(u + w) = P u + P w = P² x + 0 = P x = u,

 

and therefore P is indeed the projection onto U along W , as we predicted.
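As a numerical illustration of the theorem (mine, not from the notes), take an idempotent matrix that was not constructed from an explicit direct sum and watch it behave as the projection onto its image along its kernel:

```python
import numpy as np

# An idempotent matrix not built from an explicit splitting.
P = np.array([[1., 1.],
              [0., 0.]])
assert np.allclose(P @ P, P)     # P is idempotent

v = np.array([2., 5.])           # arbitrary test vector
u = P @ v                        # u := P v       lies in U = Im P
w = v - u                        # w := v - P v   lies in W = Ker P

assert np.allclose(P @ u, u)     # P fixes Im P
assert np.allclose(P @ w, 0)     # P kills Ker P
assert np.allclose(u + w, v)     # v = u + w: V = Im P (+) Ker P
print("u =", u, " w =", w)
```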

The next result is a theorem which turns out to be very useful in many situations.
It is particularly important in applications of linear algebra in Quantum Mechanics.

 

THEOREM :

Let P : V → V be the projection onto U along W and let T : V → V be any linear transformation. Then P T = T P if and only if U and W are T-invariant (that is, T U ⊆ U and T W ⊆ W).

Proof: Suppose first that P T = T P. If u ∈ U, so that P u = u, then T u = T P u = P T u ∈ Im P = U, so T U ⊆ U.

And if w ∈ W then P(T w) = T P w = T 0 = 0, and therefore T W ⊆ W.

       Now suppose conversely that U and W are T -invariant. Let v ∈ V and write

v = u + w with u ∈ U and w ∈ W . Then

P T v = P T (u + w) = P (T u + T w) = T u ,

since T u ∈ U and T w ∈ W . Also,

T P v = T P (u + w) = T u .

Thus P T v = T P v for all v ∈ V and therefore P T = T P , as asserted.
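Here is a small numerical illustration of mine: a map that preserves both summands commutes with P, while one that mixes them does not. P and B are self-contained here, chosen to match the earlier sketch:

```python
import numpy as np

# P projects onto U = span{(1,0)} along W = span{(1,-1)}.
P = np.array([[1., 1.],
              [0., 0.]])
B = np.array([[1.,  1.],
              [0., -1.]])               # columns: a basis of U, then of W
Binv = np.linalg.inv(B)

# T1 scales U by 2 and W by 3, so both subspaces are T1-invariant.
T1 = B @ np.diag([2., 3.]) @ Binv
assert np.allclose(P @ T1, T1 @ P)      # invariant summands => P T = T P

# T2 is a rotation, which mixes U and W.
T2 = np.array([[0., -1.],
               [1.,  0.]])
assert not np.allclose(P @ T2, T2 @ P)  # no invariance => no commuting
print("T1 commutes with P; T2 does not")
```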

We turn now to direct sums of more than two subspaces. The vector space V is said to be the direct sum of subspaces U1, . . . , Uk if for every v ∈ V there exist unique vectors ui ∈ Ui for 1 ≤ i ≤ k such that v = u1 + · · · + uk. We write V = U1 ⊕ · · · ⊕ Uk.

Note 1: If k = 2 then this reduces to exactly the same concept as we have just been
studying.

Moreover, if k > 2 then U1 ⊕ U2 ⊕ · · · ⊕ Uk = (· · · ((U1 ⊕ U2) ⊕ U3) ⊕ · · ·) ⊕ Uk.

Note 2: If Ui ≤ V for 1 ≤ i ≤ k then V = U1 ⊕ · · · ⊕ Uk if and only if V = U1 + U2 + · · · + Uk and Ur ∩ (∑i≠r Ui) = {0} for 1 ≤ r ≤ k. It is NOT sufficient that Ui ∩ Uj = {0} whenever i ≠ j. Consider, for example, the 2-dimensional space F² of pairs (x1, x2) with x1, x2 ∈ F. Its three subspaces

U1 := {(x, 0) | x ∈ F}, U2 := {(0, x) | x ∈ F}, U3 := {(x, x) | x ∈ F}

satisfy

U1 ∩ U2 = U1 ∩ U3 = U2 ∩ U3 = {0}

and yet it is clearly not true that F² is their direct sum.

 

Note 3: If V = U1 ⊕ U2 ⊕ · · · ⊕ Uk and Bi is a basis of Ui then B1 ∪ B2 ∪ · · · ∪ Bk is a basis of V. In particular, dim V = dim U1 + · · · + dim Uk.

Let P1, . . . , Pk be linear mappings V → V such that Pi² = Pi for all i and Pi Pj = 0 whenever i ≠ j. If P1 + · · · + Pk = I then {P1, . . . , Pk} is known as a partition of the identity on V.

 

EXAMPLE : If P is a projection then {P, I − P } is a partition of the identity.

THEOREM

If V = U1 ⊕ · · · ⊕ Uk and Pi is the projection of V onto Ui along ⊕j≠i Uj, then {P1, . . . , Pk} is a partition of the identity on V. Conversely, if {P1, . . . , Pk} is a partition of the identity on V and Ui := Im Pi, then V = U1 ⊕ · · · ⊕ Uk.

 

PROOF: 

Suppose first that V = U1 ⊕ · · · ⊕ Uk. Let Pi be the projection of V onto Ui along ⊕j≠i Uj. Certainly Pi² = Pi, and if i ≠ j then Im Pj ⊆ Ker Pi, so Pi Pj = 0. If v ∈ V then there are uniquely determined vectors ui ∈ Ui for 1 ≤ i ≤ k such that v = u1 + · · · + uk. Then Pi v = ui, by definition of what it means for Pi to be the projection of V onto Ui along ⊕j≠i Uj. Therefore

I v = v = P1 v + · · · + Pk v = (P1 + · · · + Pk) v.

Since this equation holds for all v ∈ V we have I = P1 + · · · + Pk. Thus {P1, . . . , Pk} is a partition of the identity.

For the converse, let {P1, . . . , Pk} be a partition of the identity on V and let Ui := Im Pi. For v ∈ V, defining ui := Pi v we have

v = I v = (P1 + · · · + Pk) v = P1 v + · · · + Pk v = u1 + · · · + uk.

 

Suppose that v = w1 + · · · + wk where wi ∈ Ui for 1 ≤ i ≤ k. Then Pi wi = wi, since Pi is a projection onto Ui. And if j ≠ i then Pj wi = Pj (Pi wi) = (Pj Pi) wi, so Pj wi = 0 since Pj Pi = 0. Therefore

Pi v = Pi (w1 + · · · + wk) = Pi w1 + · · · + Pi wk = wi,

that is, wi = ui. This shows the uniqueness of the decomposition v = u1 + · · · + uk with ui ∈ Ui, and so V = U1 ⊕ · · · ⊕ Uk. The proof is complete.
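The proof is constructive, and the construction runs as written. The sketch below (mine; the basis B is arbitrary) splits R³ into three lines and checks the defining identities of a partition of the identity:

```python
import numpy as np

# R^3 = U1 (+) U2 (+) U3, where Ui is spanned by the i-th column of B.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
Binv = np.linalg.inv(B)

def proj(i):
    # Pi projects onto Ui along the sum of the other summands.
    E = np.zeros((3, 3))
    E[i, i] = 1.0                  # selects the i-th coordinate in basis B
    return B @ E @ Binv

P = [proj(i) for i in range(3)]

assert np.allclose(sum(P), np.eye(3))           # P1 + P2 + P3 = I
for i in range(3):
    assert np.allclose(P[i] @ P[i], P[i])       # Pi^2 = Pi
    for j in range(3):
        if i != j:
            assert np.allclose(P[i] @ P[j], 0)  # Pi Pj = 0 for i != j
print("{P1, P2, P3} is a partition of the identity")
```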

 

LINEAR FUNCTIONALS AND DUAL SPACES

A linear functional on the vector space V over the field F is a function f : V → F such that

f(α1 v1 + α2 v2) = α1 f(v1) + α2 f(v2)

for all α1, α2 ∈ F and all v1, v2 ∈ V.

Note: A linear functional, then, is a linear transformation V → F, where F is construed as a 1-dimensional vector space over itself.

Example: If V = Fⁿ (column vectors) and y is a 1 × n row vector then the map v ↦ y v is a linear functional on V.

 

The dual space V′ of V is defined as follows:

set := set of linear functionals on V
0 := zero function   [v ↦ 0 for all v ∈ V]
(−f)(v) := −(f(v))
(f1 + f2)(v) := f1(v) + f2(v)   [pointwise addition]
(λ f)(v) := λ f(v)              [pointwise multiplication by scalars]

 

Note: It is a matter of important routine to check that the vector space axioms are

satisfied. It is also important that, when invited to define the dual space of a vector

space, you specify not only the set, but also the operations which make that set into a vector space.

 

THEOREM :

Let V be a finite-dimensional vector space over a field F. For every basis v1, v2, . . . , vn of V there is a basis f1, f2, . . . , fn of V′ such that fi(vj) = 1 if i = j, and 0 if i ≠ j. In particular, dim V′ = dim V.

Proof. Define fi as follows. For v ∈ V we set fi(v) := αi, where α1, . . . , αn ∈ F are such that v = α1 v1 + · · · + αn vn. This definition is acceptable because v1, . . . , vn span V and so such scalars α1, . . . , αn certainly exist; moreover, since v1, . . . , vn are linearly independent, the coefficients α1, . . . , αn are uniquely determined by v. If w ∈ V, say w = β1 v1 + · · · + βn vn, and λ, μ ∈ F, then

fi(λ v + μ w) = fi( ∑j (λ αj + μ βj) vj ) = λ αi + μ βi = λ fi(v) + μ fi(w),

and so fi ∈ V′. We have thus found elements f1, . . . , fn of V′ such that fi(vj) = 1 if i = j, and 0 if i ≠ j.

To finish the proof we must show that they form a basis of V ′ .

To see that they are independent suppose that

∑j μj fj = 0, where μ1, . . . , μn ∈ F. Evaluating at vi:

0 = (∑j μj fj)(vi) = ∑j μj fj(vi) = μi.

Thus μ1 = · · · = μn = 0 and so f1, . . . , fn are linearly independent.

To see that they span V′, let f ∈ V′ and define g := ∑j f(vj) fj. Then g ∈ V′, and for 1 ≤ i ≤ n we have

g(vi) = ∑j f(vj) fj(vi) = f(vi).

Since f and g are linear and agree on a basis of V we have f = g, that is, f = ∑j f(vj) fj. Thus f1, . . . , fn is indeed a basis of V′, as the theorem states.

 

Note. The basis f1, f2, . . . , fn is known as the dual basis of v1, v2, . . . , vn. Clearly, it is uniquely determined by this basis of V.

EXAMPLE: If V = Fⁿ (n × 1 column vectors) then we may identify V′ with the space of 1 × n row vectors. The canonical basis e1, e2, . . . , en then has dual basis e1′, e2′, . . . , en′, the canonical basis of the space of row vectors.
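In coordinates, the construction in the theorem is just matrix inversion: if v1, . . . , vn are the columns of an invertible matrix B, then (identifying functionals with row vectors as in the example above) the dual basis f1, . . . , fn consists of the rows of B⁻¹, because (B⁻¹ B)ij = δij. A sketch of mine:

```python
import numpy as np

# Basis v1, v2, v3 of R^3 as the columns of B (an arbitrary invertible matrix).
B = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Dual basis: fi is the i-th row of B^{-1}, acting on v by fi(v) = row . v
F = np.linalg.inv(B)

# fi(vj) = 1 if i = j, 0 if i != j:
assert np.allclose(F @ B, np.eye(3))

# fi reads off the i-th coordinate of v with respect to v1, v2, v3.
v = 2.0 * B[:, 0] - 3.0 * B[:, 1] + 0.5 * B[:, 2]
print(F @ v)    # [ 2.  -3.   0.5]
```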

Let V be a vector space over the field F . For a subset X of V the annihilator is defined by

X⁰ := {f ∈ V′ | f(x) = 0 for all x ∈ X}.

Note: For any subset X the annihilator X⁰ is a subspace of V′. For, if f1, f2 ∈ X⁰ and α1, α2 ∈ F then for any x ∈ X,

(α1 f1 + α2 f2)(x) = α1 f1(x) + α2 f2(x) = 0 + 0 = 0,

and so α1 f1 + α2 f2 ∈ X⁰.

Note: X⁰ = {f ∈ V′ | X ⊆ Ker f}.

THEOREM

Let V be a finite-dimensional vector space over a field F and let U be
a subspace. Then

dim U + dim U⁰ = dim V.

Proof:  

Let u1, . . . , um be a basis for U and extend it to a basis u1, . . . , um, um+1, . . . , un for V. Thus dim U = m and dim V = n. Let f1, . . . , fn be the dual basis of V′. We'll prove that fm+1, . . . , fn is a basis of U⁰.

Certainly, if m + 1 ≤ j ≤ n then fj(ui) = 0 for i ≤ m, and so fj ∈ U⁰ since u1, . . . , um span U. Now let f ∈ U⁰. There exist α1, . . . , αn ∈ F such that f = ∑j αj fj. Then

f(ui) = ∑j αj fj(ui) = αi,

and so αi = 0 for 1 ≤ i ≤ m, that is, f is a linear combination of fm+1, . . . , fn. Thus fm+1, . . . , fn span U⁰ and so they form a basis of it. Therefore dim U⁰ = n − m, that is, dim U + dim U⁰ = dim V.

For example: Let V be a finite-dimensional vector space over a field F. Show that if U1, U2 are subspaces then (U1 + U2)⁰ = U1⁰ ∩ U2⁰ and (U1 ∩ U2)⁰ = U1⁰ + U2⁰.

 

Response. If f ∈ U1⁰ ∩ U2⁰ then f(u1 + u2) = f(u1) + f(u2) = 0 + 0 = 0 for any u1 ∈ U1 and any u2 ∈ U2. Therefore U1⁰ ∩ U2⁰ ⊆ (U1 + U2)⁰. On the other hand, U1 ⊆ U1 + U2 and U2 ⊆ U1 + U2, and so if f ∈ (U1 + U2)⁰ then f ∈ U1⁰ ∩ U2⁰, that is, (U1 + U2)⁰ ⊆ U1⁰ ∩ U2⁰. Therefore in fact (U1 + U2)⁰ = U1⁰ ∩ U2⁰. Note that the assumption that V is finite-dimensional is not needed here.

For the second part, clearly U1⁰ ⊆ (U1 ∩ U2)⁰ and U2⁰ ⊆ (U1 ∩ U2)⁰, and so U1⁰ + U2⁰ ⊆ (U1 ∩ U2)⁰. Now we compare dimensions:

dim (U1⁰ + U2⁰) = dim U1⁰ + dim U2⁰ − dim (U1⁰ ∩ U2⁰)
              = dim U1⁰ + dim U2⁰ − dim (U1 + U2)⁰   [by the above]
              = (dim V − dim U1) + (dim V − dim U2) − (dim V − dim (U1 + U2))
              = dim V − (dim U1 + dim U2 − dim (U1 + U2))
              = dim V − dim (U1 ∩ U2)
              = dim (U1 ∩ U2)⁰.

Therefore U1⁰ + U2⁰ = (U1 ∩ U2)⁰, as required.
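The dimension formula dim U + dim U⁰ = dim V (used twice in the response above) can be checked numerically by identifying functionals with row vectors, so that U⁰ becomes a null space. A sketch of mine:

```python
import numpy as np

# V = R^5; U is spanned by the columns of M (chosen at random, so independent).
M = np.random.randn(5, 2)

# U0 = {row vectors y | y M = 0}: the null space of M^T, via the SVD.
_, s, Vt = np.linalg.svd(M.T)
U0 = Vt[(s > 1e-10).sum():]           # rows spanning the annihilator

assert np.allclose(U0 @ M, 0)         # each functional kills all of U
dim_U, dim_U0 = np.linalg.matrix_rank(M), U0.shape[0]
assert dim_U + dim_U0 == 5            # dim U + dim U0 = dim V
print("dim U =", dim_U, ", dim U0 =", dim_U0)
```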

 

To finish this study of dual spaces we examine the second dual, that is, the dual of
the dual. It turns out that if V is a finite-dimensional vector space then the second dual
V ′′ can be naturally identified with V itself.

THEOREM

Let V be a vector space over a field F. Define Φ : V → V′′ by (Φ v)(f) := f(v) for all v ∈ V and all f ∈ V′. Then Φ is linear and one-one [injective]. If V is finite-dimensional then Φ is an isomorphism.

Proof. We check linearity as follows. For v1, v2 ∈ V and α1, α2 ∈ F, and for any f ∈ V′,

Φ(α1 v1 + α2 v2)(f) = f(α1 v1 + α2 v2)
                    = α1 f(v1) + α2 f(v2)
                    = α1 (Φ v1)(f) + α2 (Φ v2)(f)
                    = (α1 (Φ v1) + α2 (Φ v2))(f),

and so Φ(α1 v1 + α2 v2) = α1 (Φ v1) + α2 (Φ v2). Now

Ker Φ = {v ∈ V | Φ v = 0} = {v ∈ V | f(v) = 0 for all f ∈ V′} = {0}

(if v ≠ 0 then v can be extended to a basis of V, and the dual basis construction yields f ∈ V′ with f(v) = 1), and therefore Φ is injective. If V is finite-dimensional then dim Im Φ = dim V = dim V′ = dim V′′, and so Φ is also surjective, that is, it is an isomorphism.
In the next article I will be writing about the three-dimensional interpretation of dual spaces, and about eigenvalues and eigenvectors.

To be continued.