The post Predictive Analytics appeared first on Anagha Agile Systems.

Nowadays, the volume of data generated in organizations is increasing drastically. This flood of data has given rise to several new areas in artificial intelligence and data science.

By 2020, businesses that don’t effectively use their data are predicted to lose $1.2 trillion per year to their competitors. By that time, experts predict a 4,300% increase in annual data production.

In order to stay competitive, companies need to find a way to leverage data into actionable strategies. Data science and artificial intelligence are the key to maximizing data utilization.

Predictive Analytics is among the most useful applications of data science.

Using it allows executives to predict upcoming challenges, identify opportunities for growth, and optimize their internal operations.

To understand thoroughly the important role analytics plays in technology, let us start with the basics.

What is Predictive Analytics?

According to Wikipedia, “Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events.”

Predictive analytics is the branch of advanced analytics used to make predictions about unknown future events. It draws on techniques from data mining, statistics, modelling, machine learning, and artificial intelligence to analyse current data and make predictions about the future, bringing together management, information technology, and business-process modelling. Patterns found in historical and transactional data can be used to identify future risks and opportunities. Predictive models capture relationships among many factors to assess the risk associated with a particular set of conditions and assign a score, or weightage. By applying predictive analytics successfully, businesses can effectively turn big data into opportunities.

Predictive analytics can also be defined as the set of techniques, tools, and technologies that use data to build models: models that can anticipate outcomes with a significant probability of accuracy.

Data scientists explore data, formulate hypotheses, and use algorithms to find predictive models.

The six steps of Predictive Analytics are:

1. Understand and collect data

2. Prepare and clean data

3. Create a model using statistical and machine-learning algorithms

4. Evaluate the model to make sure it will work

5. Deploy the model and use it in applications

6. Monitor the model and measure its effectiveness in the real world
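As a sketch, the six steps above can be strung together with scikit-learn; the synthetic dataset, model choice, and parameters here are illustrative assumptions, not a prescription:

```python
# A minimal sketch of the six steps using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Steps 1-2: collect and prepare data (synthetic stand-in for business data).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 3: create the model (scaling followed by logistic regression).
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Step 4: evaluate the model on held-out data.
accuracy = accuracy_score(y_test, model.predict(X_test))

# Steps 5-6: deploy and monitor, i.e. score new observations as they arrive
# and keep comparing predictions against real outcomes.
new_prediction = model.predict(X_test[:1])
```

In practice, step 6 means re-checking metrics like `accuracy` on fresh data over time and retraining when performance drifts.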

Predictive analytics can be further categorized as:

- Predictive modelling: what will happen next, if…?
- Root cause analysis: why did this actually happen?
- Data mining: identifying correlated data.
- Forecasting: what if the existing trends continue?
- Monte Carlo simulation: what could happen?
- Pattern identification and alerts: when should an action be invoked to correct a process?

Organizations are turning to predictive analytics to help solve difficult problems and uncover new opportunities. Some common uses include:

**Detecting fraud.** Combining multiple analytics methods can improve pattern detection and prevent criminal behavior. As cybersecurity becomes a growing concern, high-performance behavioral analytics examines all actions on a network in real time to spot abnormalities that may indicate fraud, zero-day vulnerabilities and advanced persistent threats.

**Optimizing marketing campaigns.** Predictive analytics is used to determine customer responses or purchases, as well as to promote cross-sell opportunities. Predictive models help businesses attract, retain and grow their most profitable customers.

**Improving operations.** Many companies use predictive models to forecast inventory and manage resources. Airlines use predictive analytics to set ticket prices. Hotels try to predict the number of guests for any given night to maximize occupancy and increase revenue. Predictive analytics enables organizations to function more efficiently.

**Reducing risk.** Credit scores are used to assess a buyer’s likelihood of default for purchases and are a well-known example of predictive analytics. A credit score is a number generated by a predictive model that incorporates all data relevant to a person’s creditworthiness. Other risk-related uses include insurance claims and collections.

The types of Modeling techniques used in Predictive Analytics

1. Classifiers: an algorithm that maps input data to a specific category. Classifiers are used in the predictive data-classification process, which has two stages: the learning stage and the prediction stage.

This supervised algorithm categorises structured or unstructured data into different sections by tagging each record with a classification tag attribute. Classification happens after the learning process: the goal is to teach your model to extract and discover hidden relationships and rules, the classification rules, from training data. The model does so by employing a classification algorithm.

The prediction stage that follows the learning stage consists of having the model predict class tags or numerical values for data it has not seen before (that is, test data). The main goal of a classification problem is to identify the category/class under which new data will fall.

The following are the steps involved in building a classification model:

i. Initialise the classifier to be used.

ii. Train the classifier: all classifiers in scikit-learn use a fit(X, y) method to fit (train) the model on the given training data X and training labels y.

iii. Predict the target: given an unlabelled observation X, predict(X) returns the predicted label y.

iv. Evaluate the classifier model.
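The four steps above can be sketched with scikit-learn; the choice of a decision tree and the iris dataset here is illustrative, and any classifier from the list below would fit the same mould:

```python
# Steps i-iv of building a classification model, with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = DecisionTreeClassifier(random_state=42)  # i. initialise the classifier
clf.fit(X_train, y_train)                      # ii. train on data X, labels y
y_pred = clf.predict(X_test)                   # iii. predict labels for unseen data
score = accuracy_score(y_test, y_pred)         # iv. evaluate the classifier model
```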

Some examples of classification algorithms:

1. Logistic Regression

2. Naive Bayes

3. Stochastic Gradient Descent

4. K-Nearest Neighbours

5. Decision Tree

6. Random Forest

7. Support Vector Machine

2. Recommenders:

A recommendation system (or recommender) works in well-defined, logical phases: data collection, ratings, and filtering. These phases are described below.

• A recommender system helps match users with items.

• It relies on implicit or explicit user feedback to suggest items.

• Different recommender-system designs are based on the availability of data and on the content/context of the data.

Recommendation allows you to make recommendations (similar to association rules or market-basket analysis) by generating rules, for example: purchasing product A leads to purchasing product B. Recommendation uses the link-analysis technique, which is optimized to work on large volumes of transactions. It triggers all the existing rules in a projected graph whose antecedent is a neighbor of the given user in the bipartite graph, and it provides a specialized workflow that makes it easy to obtain a set of recommendations for a given customer.

Data Collection:

How does the recommendation system capture these details? If the user has logged in, the details are extracted either from an HTTP session or from cookies. If the system depends on cookies, the data is available only as long as the user is using the same terminal. Events are fired in almost every case: a user liking a product, adding it to a cart, or purchasing it. That is how user details are stored, but it is just one part of what recommenders do.

Ratings

Ratings are important in the sense that they tell you how a user feels about a product. A user’s feelings about a product are reflected, to an extent, in the actions he or she takes, such as liking, adding to a shopping cart, purchasing, or just clicking. Recommendation systems can assign implicit ratings based on these actions.

Filtering

Filtering means narrowing down products based on ratings and other user data. Recommendation systems use three types of filtering: collaborative, user-based, and a hybrid approach. In collaborative filtering, users’ choices are compared and recommendations given. For example, if user X likes products A, B, C, and D, and user Y likes products A, B, C, D, and E, then user X will likely be recommended product E, because users X and Y have very similar tastes as far as choice of products is concerned.
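The user X / user Y example above can be sketched as a toy collaborative filter; the ratings matrix and the cosine similarity measure are illustrative choices, not part of any particular product:

```python
# Toy user-based collaborative filtering: users are rows, products are
# columns, 1 = liked / purchased, 0 = not seen.
import numpy as np

products = ["A", "B", "C", "D", "E"]
ratings = np.array([
    [1, 1, 1, 1, 0],   # user X
    [1, 1, 1, 1, 1],   # user Y
])

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# X and Y are highly similar, so X inherits Y's extra likes.
similarity = cosine(ratings[0], ratings[1])
recommendations = [p for p, seen, liked in zip(products, ratings[0], ratings[1])
                   if seen == 0 and liked == 1]
print(recommendations)  # ['E']
```

Real systems do the same comparison across millions of users and weight each neighbour's likes by the similarity score.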

Several large platforms, such as social-media ecosystems, use this model to provide effective and relevant recommendations to their users. In user-based filtering, the user’s browsing history, likes, and ratings are taken into account before recommendations are made. Many companies use a hybrid approach; Netflix is known to do so.

While big data and recommendation engines have already proved an extremely useful combination for big corporations, this raises the question of whether companies with smaller budgets can afford such investments. Powerful recommendation engines can be built for anything from movies and videos to music, books, and products: think Netflix, Pandora, or Amazon.

3. Clusters

Using unsupervised techniques like clustering, we can seek to understand relationships between variables or between observations by determining whether the observations fall into relatively distinct groups. In clustering, data is categorised using unsupervised learning, so we have no response variable telling us, say, whether a customer is a frequent shopper. Instead, we attempt to cluster the customers on the basis of the available variables in order to identify distinct customer groups. Common unsupervised statistical-learning methods include k-means clustering, hierarchical clustering, and principal component analysis.

Clustering is an unsupervised data mining technique where the records in a data set are organised into different logical groupings. The groupings are in such a way that records inside the same group are more similar than records outside the group. Clustering has a wide variety of applications ranging from market segmentation to customer segmentation, electoral grouping, web analytics, and outlier detection.

Clustering is also used as a data compression technique and data preprocessing technique for supervised data mining tasks. Many different data mining approaches are available to cluster the data and are developed based on proximity between the records, density in the data set, or novel application of neural networks.

Clustering can help us explore the dataset and separate cases into groups with similar traits or characteristics; each group could be a potential candidate for a category/class. Clustering is used for exploratory data analytics, i.e., as unsupervised learning, rather than for confirmatory analytics or for predicting specific outcomes. Several interesting case studies, including Divorce and Consequences on Young Adults, Paediatric Trauma, and Youth Development, demonstrate hierarchical clustering.
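As an illustrative sketch, k-means can segment customers into distinct groups; the features (spend, visits) and the group locations here are invented for the example:

```python
# K-means clustering on made-up customer data: spend vs. visits per month.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic customer segments: low-spend/low-visit and high-spend/high-visit.
low = rng.normal(loc=[20, 2], scale=1.0, size=(50, 2))
high = rng.normal(loc=[80, 10], scale=1.0, size=(50, 2))
customers = np.vstack([low, high])

# Ask k-means for two clusters; each customer gets a cluster label.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
labels = kmeans.labels_
```

Because the segments are well separated, the recovered labels split the low-spend and high-spend customers cleanly; on real data, choosing the number of clusters is itself part of the exploration.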

4. Numerical and time-series forecasting: a time series is a sequence of measurements over time, usually obtained at equally spaced intervals.

Any metric measured over regular time intervals forms a time series. Time-series analysis is commercially important because of industrial needs, especially with respect to forecasting (demand, sales, supply, etc.).

A time series is an ordered sequence of observations of a variable captured at equally spaced time intervals: anything observed sequentially over time at a regular interval such as hourly, daily, weekly, monthly, or quarterly. Time-series data matters when you are predicting something that changes over time using past data; in time-series analysis the goal is to estimate future values from the behaviour of the past data.

There are many statistical techniques available for time-series forecasting; a few effective ones are listed below.

Techniques of Forecasting:

1. Simple Moving Average (SMA)

2. Simple Exponential Smoothing (SES)

3. Autoregressive Integrated Moving Average (ARIMA)

4. Neural Networks (NN)

5. Croston’s method
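As a rough sketch, the first two techniques in the list can be implemented in a few lines of plain Python; the demand series below is made up for illustration:

```python
# Minimal implementations of SMA and SES forecasting smoothers.

def simple_moving_average(series, window):
    """Average of the last `window` observations at each step."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def simple_exponential_smoothing(series, alpha):
    """Each smoothed value blends the new observation with the old estimate."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [10, 12, 13, 12, 15, 16, 18]
sma = simple_moving_average(demand, 3)          # 3-period moving average
ses = simple_exponential_smoothing(demand, 0.5) # alpha = 0.5 smoothing
```

The last smoothed SES value serves as the one-step-ahead forecast; larger `alpha` reacts faster to recent demand, smaller `alpha` smooths harder.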

Components of a Time Series

• Secular trend

– Linear

– Nonlinear

• Cyclical variation

– Rises and falls over periods longer than one year

• Seasonal variation

– Patterns of change within a year, typically repeating themselves

• Residual variation

$Y_t = T_t + C_t + S_t + R_t$

Time-series data mining combines traditional data mining and forecasting techniques. Data mining techniques such as sampling, clustering and decision trees are applied to data collected over time with the goal of improving predictions.
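The additive decomposition above can be illustrated with a toy example: construct a series from known components, then recover the trend with a moving average over one full season; the series, noise level, and window are invented for the sketch:

```python
# Build Y_t = T_t + S_t + R_t from known parts, then recover the trend.
import numpy as np

t = np.arange(48)
trend = 0.5 * t                              # T_t: secular trend
seasonal = 3 * np.sin(2 * np.pi * t / 12)    # S_t: 12-period seasonality
rng = np.random.default_rng(1)
residual = rng.normal(0, 0.2, size=t.size)   # R_t: irregular variation
y = trend + seasonal + residual              # observed series Y_t

# Averaging over one full season cancels the seasonal component,
# leaving a (shifted) estimate of the trend.
window = 12
est_trend = np.convolve(y, np.ones(window) / window, mode="valid")
```

The recovered `est_trend` climbs at roughly 0.5 per step, matching the slope we built in; the cyclical component is omitted here for simplicity.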

To dig deeper into AI and machine learning, we will explore the algorithms used in predictive analytics. Some of the popular predictive algorithms are given below:

1. K-means Clustering

2. Association rules

3. Boosting trees

4. CHAID

5. Cluster Analysis

6. Feature Selection

7. Independent Component Analysis

8. Kohonen Networks (SOFM)

9. Neural Networks

10. Social network analysis (SNA)

11. Random Forest (Decision Trees)

12. MARS (Multivariate Adaptive Regression Splines)

13. Linear and logistic regression

14. Naive Bayesian classifiers

15. Optimal binning

16. Partial Least Squares

17. Response Optimisation

18. Root cause analysis.

In my next blog I will explain each of the predictive-analytics algorithms mentioned above, starting with k-means clustering. I hope you liked the article.

Please let me know in the comments what more you would like to see in upcoming articles on Predictive Analytics. Happy reading!


The post Prerequisite Skill sets for Artificial Intelligence Development. PART I appeared first on Anagha Agile Systems.

*“There is hardly any theory which is more elementary than linear algebra, in spite of the fact that generations of data scientists and mathematicians have obscured its simplicity by preposterous calculations with matrices.”*

*– Jean Dieudonné, French mathematician*

*“Mathematics requires a small dose, not of genius, but of an imaginative freedom which in a larger dose would be insanity.”*

*– Angus K. Rodgers*

This post is about linear algebra, which, as many of you know, is required knowledge for just about any technical education, but which, I have noticed, is generally poorly understood by IT professionals taking it for the first time. A data scientist might go through a class and learn how to compute lots of things, such as determinants, matrix multiplication, the cross product (which uses the determinant), or eigenvalues, but might come out without really understanding why matrix multiplication is defined the way it is, why the cross product has anything to do with the determinant, or what eigenvalues really represent.

In this blog I have tried to help data scientists look at linear algebra from an altogether different perspective, in terms of analytical geometry (three-dimensional spatial geometry), and to understand how it can be put to use in computer vision, image processing, speech recognition, and robotics.

So Let’s start with some basics viz; Vector Space.

We develop here the linear algebra presented in our AAS course, generalising from R to an arbitrary field as the domain from which coefficients are taken. Fields are treated in the companion course, Rings and Arithmetic, but to make this course and these notes self-contained we begin with a definition of what we mean by a field.

A field is a set F with distinguished elements 0, 1, with a unary operation −, and with two binary operations + and × satisfying the axioms below (the ‘axioms of arithmetic’).

Conventionally, for the image of (a, b) under the function + : F × F → F we write a + b; for the image of (a, b) under the function × : F × F → F we write a b; and x − y means x + (−y), while x + y z means x + (y z).

The axioms of arithmetic

**(1) a + (b + c) = (a + b) + c [+ is associative]**

**(2) a + b = b + a [+ is commutative]**

**(3) a + 0 = a**

**(4) a + (−a) = 0**

**(5) a (b c) = (a b) c [× is associative]**

**(6) a b = b a [× is commutative]**

**(7) a 1 = a**

**(8) a ≠ 0 ⇒ ∃ x ∈ F : a x = 1**

**(9) a (b + c) = a b + a c [× distributes over +]**

**(10) 0 ≠ 1**

Note 1: All axioms are understood to have ∀ a, b, . . . ∈ F in front.

Note 2: See my Notes on Rings and Arithmetic for more discussion.

Note 3: Examples are Q, R, C, and Z_2, or more generally Z_p, the field of integers modulo p for any prime number p.
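Axiom (8), the existence of multiplicative inverses, can be checked computationally for a small example such as Z_5; the brute-force search below is a sketch for illustration, not an efficient method:

```python
# Verify field axiom (8) for Z_5: every nonzero element has an inverse mod 5.
p = 5

def mul_inverse(a, p):
    """Find x with a*x = 1 (mod p) by exhaustive search."""
    for x in range(1, p):
        if (a * x) % p == 1:
            return x
    raise ValueError(f"{a} has no inverse mod {p}")

inverses = {a: mul_inverse(a, p) for a in range(1, p)}
print(inverses)  # {1: 1, 2: 3, 3: 2, 4: 4}
```

For composite moduli the search fails for some elements (e.g. 2 has no inverse mod 4), which is exactly why Z_p is a field only when p is prime.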

Let F be a field. A vector space over F is a set V with distinguished element 0, with a unary operation −, with a binary operation +, and with a function × : F × V → V satisfying the axioms below.

Conventionally: for the image of (a, b) under the function + : V × V → V we write a + b; for a + (−b) we write a − b; and for the image of (α, v) under the function × : F × V → V we write α v.

Note: All axioms are understood to have appropriate quantifiers *∀ u, v, . . . ∈ V* and/or *∀ α, β, . . . ∈ F* in front.

- $F^n$ is a vector space over F;
- the polynomial ring F[x] is a vector space over F;
- $M_{m \times n}(F)$, the set of m × n matrices over F, is a vector space over F;
- if K is a field and F a sub-field then K is a vector space over F.

Exercise 1. Let X be any set, F any field. Define $F^X$ to be the set of all functions X → F with the usual point-wise addition and multiplication by scalars (elements of F). Show that $F^X$ is a vector space over F.

Exactly as for rings, or for vector spaces over R, one can prove important “trivialities”.

Of course, if we could not prove them then we would add further axioms until we

had captured all properties that we require, or at least expect, of the algebra of vectors.

But the fact is that this set of axioms, feeble though it seems, is enough. For example:

**PROPOSITION**: Let V be a vector space over the field F. For any v ∈ V and any α ∈ F we have: (i) 0 v = 0; (ii) α 0 = 0; (iii) if α v = 0 and α ≠ 0 then v = 0; (iv) (−1) v = −v.

Proof. For v ∈ V, from Field Axiom (3) and Vector Space Axiom (7) we have

0 v = (0 + 0) v = 0 v + 0 v.

Then adding −(0 v) to both sides of this equation and using Vector Space Axiom (4) on the left and (1), (4), (3) on the right, we get that 0 = 0 v, as required for (i). The reader is invited to give a proof of (ii).

For (iii), suppose that α v = 0 and α ≠ 0: our task is to show that v = 0. By Field Axioms (8) and (6) there exists β ∈ F such that β α = 1. Then

v = 1 v = (β α) v = β (α v) = β 0

by Vector Space Axioms (8) and (5). But β 0 = 0 by (ii), and so v = 0, as required.

Clause (iv), like Clause (ii), is offered as an exercise.

Let V be a vector space over a field F. A subspace of V is a subset U such that

(1) 0 ∈ U, u + v ∈ U whenever u, v ∈ U, and −u ∈ U whenever u ∈ U;

(2) if u ∈ U and α ∈ F then α u ∈ U.

Note 1: Condition (1) says that U is an additive subgroup of V. Condition (2) is closure under multiplication by scalars.

Note 2: We write U ⩽ V to mean that U is a subspace of V.

Note 3: Always {0} is a subspace; if U ≠ {0} then we say that U is non-zero or non-trivial. Likewise, V is a subspace; if U ≠ V then we say that U is a proper subspace.

Note 4: A subset U of V is a subspace if and only if U ≠ ∅ and U is closed under + (that is, u, v ∈ U ⇒ u + v ∈ U) and under multiplication by scalars.

Suppose that U ⩽ V where V is a vector space over a field F. Define the quotient

space V /U as follows:

set := {x + U | x ∈ V } [additive cosets]

0 := U

additive inverse: −(x + U ) := (−x) + U

addition: (x + U ) + (y + U ) := (x + y) + U

multiplication by scalars: α (x + U) := α x + U

Note: The notion of quotient space is closely analogous with the notion of the quotient

of a group by a normal subgroup or of a ring by an ideal. It is not in the Part A syllabus,

nor will it play a large part in this course. Nevertheless, it is an important and useful

construct which is well worth becoming familiar with.

Although technically new, the following ideas and results translate so simply from

the case of vector spaces over R to vector spaces over an arbitrary field F that I propose

simply to list headers for revision:

(1) spanning sets;

(2) linear dependence and independence;

(3) bases;

(4) dimension;

(5) any linearly independent set may be extended (usually in many ways) to a basis;

(6) intersection U ∩ W of subspaces; sum U + W;

(7) dim (U + W) = dim U + dim W − dim (U ∩ W).

Let F be a field and V₁, V₂ vector spaces over F. A map T : V₁ → V₂ is said to be linear if

T 0 = 0,

T (−x) = −T x,

T (x + y) = T x + T y,

and T (λ x) = λ (T x)

for all x, y ∈ V₁ and all λ ∈ F.

**Note 1:** The definition is couched in terms which are intended to emphasise that what should be required of a linear transformation is that it preserves all the ingredients, 0, +, − and multiplication by scalars, that go into the making of a vector space. In fact the conditions are not independent: T is linear if and only if T (α x + β y) = α T x + β T y for all x, y ∈ V₁ and all α, β ∈ F.

**Note 2:** For a linear transformation T : V → W we define the **kernel** Ker T := {v ∈ V | T v = 0} and the **image** Im T := {T v | v ∈ V}. The **nullity** of T is dim Ker T and the **rank** of T is dim Im T, and the Rank-Nullity Theorem states that

nullity T + rank T = dim V.

Note: The Rank-Nullity Theorem is a version of the First Isomorphism Theorem for vector spaces, which states that V / Ker T ≅ Im T.

The vector space V is said to be the direct sum of its subspaces U and W, and we write V = U ⊕ W, if V = U + W and U ∩ W = {0}.

**PROPOSITION**: V = U ⊕ W if and only if for every v ∈ V there exist unique vectors u ∈ U and w ∈ W such that v = u + w.

Proof. Suppose first that for every v ∈ V there exist unique vectors u ∈ U and w ∈ W such that v = u + w. Certainly then V = U + W, and what we need to prove is that U ∩ W = {0}. So let x ∈ U ∩ W. Then x = x + 0 with x ∈ U and 0 ∈ W. Equally, x = 0 + x with 0 ∈ U and x ∈ W. But the expression x = u + w with u ∈ U and w ∈ W is, by assumption, unique, and it follows that x = 0. Thus U ∩ W = {0}, as required.

Now suppose that V = U ⊕ W. If v ∈ V then, since V = U + W, there certainly exist vectors u ∈ U and w ∈ W such that v = u + w. The point at issue therefore is: are u, w uniquely determined by v? Suppose that u + w = u′ + w′, where u, u′ ∈ U and w, w′ ∈ W. Then u − u′ = w′ − w. This vector lies both in U and in W. By assumption, U ∩ W = {0}, and so u − u′ = w′ − w = 0. Thus u = u′ and w = w′, so the decomposition of a vector v as u + w with u ∈ U and w ∈ W is unique, as required.

**Note**: What we are discussing is sometimes (but rarely) called the “internal” direct sum, to distinguish it from the natural construction which starts from two vector spaces V₁, V₂ over the same field F and constructs a new vector space whose set is the product set V₁ × V₂ and in which the vector-space structure is defined componentwise; compare the direct product of groups or of rings. This is (equally rarely) called the “external” direct sum. These are two sides of the same coin: the external direct sum of V₁ and V₂ is the internal direct sum of its subspaces V₁ × {0} and {0} × V₂; while if V = U ⊕ W then V is naturally isomorphic with the external direct sum of U and W.

We come now to *projection operators*. Suppose that V = U ⊕ W. Define P : V → V as follows. For v ∈ V write v = u + w, where u ∈ U and w ∈ W, and then define P v := u. Strictly, P depends on the ordered pair (U, W) of summands of V, but to keep things simple we will not build this dependence into the notation.

(1) P is well-defined;

(2) P is linear;

(3) Im P = U, Ker P = W;

(4) P² = P, i.e. P is idempotent.

*Proofs*. That P is well-defined is an immediate consequence of the existence and uniqueness of the decomposition v = u + w with u ∈ U, w ∈ W.

*For (3)* it is clear from the definition that Im P ⊆ U; but if u ∈ U then u = P u, and therefore Im P = U. Similarly, it is clear that W ⊆ Ker P; but if v ∈ Ker P then v = 0 + w for some w ∈ W, and therefore Ker P = W.

Finally, if v ∈ V and we write v = u + w with u ∈ U and w ∈ W, then

P² v = P (P v) = P u = u = P v,

and this shows that P² = P, as required.

*Note 2*. If P is the projection onto U along W then I − P, where I : V → V is the identity transformation, is the projection onto W along U.

*Note 3*. If P is the projection onto U along W then u ∈ U if and only if P u = u. The fact that if u ∈ U then P u = u is immediate from the definition of P, while if P u = u then obviously u ∈ Im P = U.
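The projection facts just established (P² = P, v = u + w, I − P projecting onto W along U) can be checked numerically in a small hypothetical example: take V = R³, U the plane z = 0, and W the line spanned by (1, 1, 1); this particular choice of subspaces is an illustration, nothing more:

```python
import numpy as np

# V = R^3, U = the plane z = 0, W = the line spanned by (1, 1, 1).
# For v = u + w with w = c*(1, 1, 1), the z-coordinate forces c = v[2],
# so the projection onto U along W is P v = v - v[2]*(1, 1, 1).
P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  0.0]])
I = np.eye(3)

v = np.array([2.0, 3.0, 4.0])
u = P @ v          # component in U: its z-coordinate is 0
w = (I - P) @ v    # component in W: a multiple of (1, 1, 1)

assert np.allclose(P @ P, P)   # Observation (4): P is idempotent
assert np.allclose(u + w, v)   # the decomposition v = u + w
```

Here u = (−2, −1, 0) lies in the plane and w = (4, 4, 4) on the line, and I − P is itself idempotent, projecting onto W along U as Note 2 asserts.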

Our next aim is to characterise projection operators algebraically. It turns out that Observation (4) above is the key:

*Proof*: We have seen already that every projection is idempotent, so the problem is to prove that an idempotent linear transformation is a projection operator. Suppose that P : V → V is linear and idempotent. Define U := Im P and W := Ker P.

To finish the proof we need to convince ourselves that P is the projection onto U along W. For v ∈ V write v = u + w, where u ∈ U and w ∈ W. Since u ∈ U there must exist x ∈ V such that u = P x. Then

P v = P (u + w) = P u + P w = P (P x) + 0 = P² x = P x = u,

and therefore P is indeed the projection onto U along W, as we predicted.

**The next result is a theorem which turns out to be very useful in many situations. It is particularly important in applications of linear algebra in Quantum Mechanics.** Let V = U ⊕ W, let P be the projection onto U along W, and let T : V → V be linear; the theorem asserts that P T = T P if and only if U and W are both T-invariant.

*Proof*: Suppose first that P T = T P. If u ∈ U, so that P u = u, then T u = T P u = P T u ∈ Im P = U, so T U ⊆ U; a similar argument with I − P shows that T W ⊆ W.

Now suppose conversely that U and W are T-invariant. Let v ∈ V and write

v = u + w with u ∈ U and w ∈ W . Then

P T v = P T (u + w) = P (T u + T w) = T u ,

since T u ∈ U and T w ∈ W . Also,

T P v = T P (u + w) = T u .

Thus P T v = T P v for all v ∈ V and therefore P T = T P , as asserted.

We turn now to direct sums of more than two subspaces. The vector space V is said to be the direct sum of subspaces U₁, …, U_k if for every v ∈ V there exist unique vectors u_i ∈ U_i for 1 ⩽ i ⩽ k such that v = u₁ + ··· + u_k. We write V = U₁ ⊕ ··· ⊕ U_k.

*Note 1:* If k = 2 then this reduces to exactly the same concept as we have just been studying.

*Note 2:* For k ⩾ 3, pairwise-zero intersections are not enough. In F² the subspaces

U₁ := {(x, 0) | x ∈ F}, U₂ := {(0, x) | x ∈ F}, U₃ := {(x, x) | x ∈ F}

satisfy

U₁ ∩ U₂ = U₁ ∩ U₃ = U₂ ∩ U₃ = {0}

and yet it is clearly not true that F² is their direct sum.


EXAMPLE : If P is a projection then {P, I − P } is a partition of the identity.

*PROOF:* Suppose that v = w₁ + ··· + w_k, where w_i ∈ U_i for 1 ⩽ i ⩽ k. Then P_i w_i = w_i, since P_i is a projection onto U_i. And if j ≠ i then P_j w_i = P_j (P_i w_i) = (P_j P_i) w_i, so P_j w_i = 0, since P_j P_i = 0. Therefore

P_i v = P_i (w₁ + ··· + w_k) = P_i w₁ + ··· + P_i w_k = w_i,

that is, w_i = u_i. This shows the uniqueness of the decomposition v = u₁ + ··· + u_k with u_i ∈ U_i, and so V = U₁ ⊕ ··· ⊕ U_k, and the proof is complete.

The dual space V′ of V is defined as follows:

Set := set of linear functionals on V

0 := zero function [v ↦ 0 for all v ∈ V]

(−f)(v) := −(f(v))

(f₁ + f₂)(v) := f₁(v) + f₂(v) [pointwise addition]

(λ f)(v) := λ f(v) [pointwise multiplication by scalars]

*Note:* It is a matter of important routine to check that the vector space axioms are satisfied. It is also important that, when invited to define the dual space of a vector space, you specify not only the set, but also the operations which make that set into a vector space.

To finish the proof we must show that they form a basis of V′. To see that they are linearly independent, suppose that

Σ_j μ_j f_j = 0, where μ₁, …, μ_n ∈ F. Evaluate at v_i:

0 = (Σ_j μ_j f_j)(v_i) = Σ_j μ_j f_j(v_i) = μ_i.

Thus μ₁ = ··· = μ_n = 0, and so f₁, …, f_n are linearly independent.

To see that they span V′, let f ∈ V′ and define g := Σ_j f(v_j) f_j. Then also g ∈ V′, and for 1 ⩽ i ⩽ n we have

g(v_i) = (Σ_j f(v_j) f_j)(v_i) = Σ_j f(v_j) f_j(v_i) = f(v_i).

Since f and g are linear and agree on a basis of V, we have f = g, that is, f = Σ_j f(v_j) f_j. Thus f₁, …, f_n is indeed a basis of V′, as the theorem states.

*Note.* The basis f₁, f₂, …, f_n is known as the dual basis of v₁, v₂, …, v_n. Clearly, it is uniquely determined by this basis of V.

EXAMPLE: If V = Fⁿ (n × 1 column vectors) then we may identify V′ with the space of 1 × n row vectors. The canonical basis e₁, e₂, …, e_n then has dual basis e′₁, e′₂, …, e′_n, the canonical basis of the space of row vectors.
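For V = Rⁿ the dual basis can be computed concretely: if the basis vectors form the columns of an invertible matrix B, the rows of B⁻¹ act as the dual-basis functionals, since row i of B⁻¹ applied to column j of B gives δ_ij. The matrix below is an arbitrary illustrative choice:

```python
import numpy as np

# Basis of R^3 written as the columns of an invertible matrix B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Rows of B^{-1} are the dual-basis functionals: f_i(v_j) = delta_ij.
dual = np.linalg.inv(B)
gram = dual @ B   # entry (i, j) is f_i applied to v_j

assert np.allclose(gram, np.eye(3))
```

This is the row-vector identification from the example above: each functional f_i is a 1 × n row vector acting on column vectors by matrix multiplication.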

Let V be a vector space over the field F . For a subset X of V the annihilator is defined by

X° := {f ∈ V′ | f(x) = 0 for all x ∈ X}.

Note: For any subset X the annihilator X° is a subspace. For, if f₁, f₂ ∈ X° and α₁, α₂ ∈ F, then for any x ∈ X, (α₁ f₁ + α₂ f₂)(x) = α₁ f₁(x) + α₂ f₂(x) = 0 + 0 = 0, and so α₁ f₁ + α₂ f₂ ∈ X°.

Note: X° = {f ∈ V′ | X ⊆ Ker f}.

PROPOSITION: Let U₁ and U₂ be subspaces of a finite-dimensional vector space V. Then (U₁ + U₂)° = U₁° ∩ U₂° and (U₁ ∩ U₂)° = U₁° + U₂°.

*Proof:* If f ∈ U₁° ∩ U₂° then f(u₁ + u₂) = f(u₁) + f(u₂) = 0 + 0 = 0 for any u₁ ∈ U₁ and any u₂ ∈ U₂. Therefore U₁° ∩ U₂° ⊆ (U₁ + U₂)°. On the other hand, U₁ ⊆ U₁ + U₂ and U₂ ⊆ U₁ + U₂, and so if f ∈ (U₁ + U₂)° then f ∈ U₁° ∩ U₂°, that is, (U₁ + U₂)° ⊆ U₁° ∩ U₂°. Therefore in fact (U₁ + U₂)° = U₁° ∩ U₂°. Note that the assumption that V is finite-dimensional is not needed here.

For the second part, clearly U₁° ⊆ (U₁ ∩ U₂)° and U₂° ⊆ (U₁ ∩ U₂)°, and so U₁° + U₂° ⊆ (U₁ ∩ U₂)°. Now we compare dimensions:

dim (U₁° + U₂°) = dim U₁° + dim U₂° − dim (U₁° ∩ U₂°)

= dim U₁° + dim U₂° − dim (U₁ + U₂)° [by the above]

= (dim V − dim U₁) + (dim V − dim U₂) − (dim V − dim (U₁ + U₂))

= dim V − (dim U₁ + dim U₂ − dim (U₁ + U₂))

= dim V − dim (U₁ ∩ U₂)

= dim (U₁ ∩ U₂)°.

Therefore U₁° + U₂° = (U₁ ∩ U₂)°, as required.

To finish this study of dual spaces we examine the second dual, that is, the dual of the dual. It turns out that if $V$ is a finite-dimensional vector space then the second dual $V''$ can be naturally identified with $V$ itself.

The post Prerequisite Skill sets for Artificial Intelligence Development. PART I appeared first on Anagha Agile Systems.

The post QUANTUM COMPUTING BOOSTS DEEP LEARNING TO TOP LEVEL appeared first on Anagha Agile Systems.


“This is the beginning of the quantum age of computing and the latest advancement is towards building a universal quantum computer.

“A universal quantum computer, once built, will represent one of the greatest milestones in the history of information technology and has the potential to solve certain problems we couldn’t solve, and will never be able to solve, with today’s classical computers.”

A quantum computer has quantum bits, or qubits, which work in a particularly intriguing way. Where a bit can store either a zero or a one, a qubit can store a zero, a one, both zero and one, or an infinite number of values in between — and be in multiple states (store multiple values) at the same time! If that sounds confusing, think back to light being a particle and a wave at the same time, Schrödinger’s cat being alive and dead, or a car being a bicycle and a bus.

A quantum computer exponentially expands the vocabulary of binary code words by using two spooky principles of quantum physics, namely ‘entanglement’ and ‘superposition’. Qubits can store a 0, a 1, or an arbitrary combination of 0 and 1 at the same time. Multiple qubits can be made in superposition states that have strictly no classical analogue, but constitute legitimate digital code in a quantum computer.

In a quantum computer, each quantum bit of information–or qubit–can therefore be unfixed, a mere probability; this in turn means that in some mysterious way, a qubit can have the value of one or zero simultaneously, a phenomenon called superposition.

And just as a quantum computer can store multiple values at once, so it can process them simultaneously. Instead of working in series, it can work in parallel, doing multiple operations at once.

**In theory, a quantum computer could solve in less than a minute problems that it would take a classical computer millennia to solve.**

To date, most quantum computers have been more or less successful science experiments. None have harnessed more than 12 qubits, and the problems the machines have solved have been trivial. Quantum computers have been complicated, finicky machines, employing delicate lasers, vacuum pumps, and other exotic machinery to shepherd their qubits.

The world’s fastest supercomputer, China’s Sunway TaihuLight, runs at 93 petaflops (93 quadrillion flops, or around 10^17) – but it relies on 10 million processing cores and uses massive amounts of energy.

In comparison, even a small 30-qubit universal quantum computer could, theoretically, run at the equivalent of a classical computer operating at 10 teraflops (10 trillion flops, or 10^13).

IBM says, “With a quantum computer built of just 50 qubits, none of today’s TOP500 supercomputers could successfully emulate it, reflecting the tremendous potential of this technology.”

A screenshot of the author attempting quantum computing on IBM’s platform. (Courtesy: Silicon)

QUANTUM COMPUTING TIMELINE

The significant advance, by a team at the University of New South Wales (UNSW) in Sydney appears today in the international journal Nature.

“What we have is a game changer,” said team leader Andrew Dzurak, Scientia Professor and Director of the Australian National Fabrication Facility at UNSW.

“We’ve demonstrated a two-qubit logic gate – the central building block of a quantum computer – and, significantly, done it in silicon. Because we use essentially the same device technology as existing computer chips, we believe it will be much easier to manufacture a full-scale processor chip than for any of the leading designs, which rely on more exotic technologies.

“This makes the building of a quantum computer much more feasible, since it is based on the same manufacturing technology as today’s computer industry,” he added.

Quantum computers could be used for discovering new drugs, securing the internet, modeling the economy, or potentially even building far more powerful artificial intelligence systems—all sorts of exceedingly complicated tasks.

A functional quantum computer will provide much faster computation in a number of key areas, including: searching large databases, solving complicated sets of equations, and modelling atomic systems such as biological molecules and drugs. This means they’ll be enormously useful for finance and healthcare industries, and for government, security and defence organisations.

For example, they could be used to identify and develop new medicines by greatly accelerating the computer-aided design of pharmaceutical compounds (and minimizing lengthy trial and error testing); develop new, lighter and stronger materials spanning consumer electronics to aircraft; and achieve much faster information searching through large databases.

Functional quantum computers will also open the door for new types of computational applications and solutions that are probably too premature to even conceive.

The fusion of quantum computing and machine learning has become a booming research area. Can it possibly live up to its high expectations?

“These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

*A recent Fortune essay states, “Companies like Microsoft, Google and IBM are making rapid breakthroughs that could make quantum computing viable in less than 10 years.”*

Very soon you will find hardware and software with fundamentally new integrated circuits that store and process quantum information. Many important computational problems will only be solved by building quantum computers. Quantum computing has been picking up momentum, and many startups and scholars are discussing quantum machine learning. A basic knowledge of quantum two-level computation ought to be acquired.

*I am sure that Quantum Information and Quantum Computation will play a crucial role in defining where the course of our technology is headed.*

*“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”*


The post Algorithms FAQs Part 1. appeared first on Anagha Agile Systems.

Every programmer has to know how to implement algorithms and data structures in an application; these coding skills are fundamental.

**1. ****What is an algorithm?**

An algorithm is a sequence of steps that provides the solution to a problem.

**2. ****What are the properties of an algorithm? **

An algorithm must possess the following properties:

a. It should get input data.

b. It should produce output.

c. The algorithm should terminate.

d. Algorithm should be clear to understand.

e. It should be easy to perform.

**3.**** ****What are the types of algorithms?**

Algorithms are of two types: iterative and recursive algorithms.

**4.**** ****Define iterative algorithms.**

Algorithms which use loops and conditions to solve the problem are called iterative algorithms.

**5.**** ****Define recursive algorithms.**

Algorithms which perform the recursive steps in a divide and conquer method are called recursive algorithms.
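The contrast between Q4 and Q5 can be sketched with factorial computed both ways. This is an illustrative C++ example; the function names are ours, not from the FAQ:

```cpp
#include <cassert>

// Iterative style: a loop and a condition accumulate the product.
long long factorialIterative(int n)
{
    long long result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}

// Recursive style: the problem is reduced to a smaller instance of itself.
long long factorialRecursive(int n)
{
    if (n <= 1)                            // base case
        return 1;
    return n * factorialRecursive(n - 1);  // recursive case
}
```

Both return the same values; the iterative version uses constant stack space, while the recursive one mirrors the mathematical definition.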

**6.**** ****What is an array?**

An array is a sequential collection of same kind of data elements.

**7. ****What is a data structure?**

A data structure is a way of organizing data that considers not only the items stored, but also their relationship to each other. Advance knowledge about the relationship between data items allows designing of efficient algorithms for the manipulation of data.

**8.**** ****Name the areas of application of data structures.**

The following are the areas of application of data structures:

a. Compiler design

b. Operating system

c. Statistical analysis package

d. DBMS

e. Numerical analysis

f. Simulation

g. Artificial Intelligence

h. Graphics

**9.**** ****What are the major data structures used in the following areas: Network data model, Hierarchical data model, and RDBMS?**

The major data structures used in the above areas are:

Network data model – Graph

Hierarchical data model – Trees

RDBMS – Array

**10.**** ****What are the types of data structures?**

Data structures are of two types: linear and non-linear data structures.

**11.**** ****What is a linear data structure?**

If the elements of a data structure are stored sequentially, then it is a linear data structure.

**12.**** ****What is a non-linear data structure?**

If the elements of a data structure are not stored in sequential order, then it is a non-linear data structure.

**13.**** ****What are the types of array operations?**

The following are the operations which can be performed on an array:

Insertion, Deletion, Search, Sorting, Merging, Reversing and Traversal.

**14.**** ****What is a matrix?**

An array of two dimensions is called a matrix.

**15.**** ****What are the types of matrix operations?**

The following are the types of matrix operations:

Addition, multiplication, transposition, finding determinant of square matrix, subtraction, checking whether it is singular matrix or not etc.

**16. ****What is the condition to be checked for the multiplication of two matrices?**

If matrices are to be multiplied, the number of columns of first matrix should be equal to the number of rows of second matrix.

**17. ****Write the syntax for multiplication of matrices?**

Syntax:

for (var1 = 0; var1 < rows1; var1++)

{

for (var2 = 0; var2 < cols2; var2++)

{

for (var3 = 0; var3 < cols1; var3++)

{

arr[var1][var2] += arr1[var1][var3] * arr2[var3][var2];

}

}

}
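As a runnable complement to the loop skeleton above, here is a minimal C++ sketch of matrix multiplication using `std::vector`; the `multiply` helper and `Matrix` alias are ours for illustration:

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<int>>;

// Multiplies a (r x n) matrix by a (n x c) matrix: the number of columns
// of the first must equal the number of rows of the second (Q16).
Matrix multiply(const Matrix& a, const Matrix& b)
{
    std::size_t rows = a.size(), cols = b[0].size(), inner = b.size();
    Matrix out(rows, std::vector<int>(cols, 0));   // zero-initialised result
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            for (std::size_t k = 0; k < inner; ++k)
                out[i][j] += a[i][k] * b[k][j];
    return out;
}
```

For example, `multiply({{1,2},{3,4}}, {{5,6},{7,8}})` yields `{{19,22},{43,50}}`.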

**18. ****What is a string?**

A sequential array of characters is called a string.

**19. ****What is the use of the terminating null character?**

The null character is used to mark the end of a string.

**20. ****What is an empty string?**

A string with zero characters is called an empty string.

**21. ****What are the operations that can be performed on a string?**

The following are the operations that can be performed on a string:

finding the length of string, copying string, string comparison, string concatenation, finding substrings etc.

**22. ****What is Brute Force algorithm?**

An algorithm that searches for content by comparing it against each element of the array is called a brute force algorithm.

**23. ****What are the limitations of arrays?**

The following are the limitations of arrays:

Arrays are of fixed size.

Data elements are stored in continuous memory locations which may not be available always.

Adding and removing of elements is problematic because of shifting the locations.

**24. ****How can you overcome the limitations of arrays?**

Limitations of arrays can be solved by using the linked list.

**25. ****What is a linked list?**

A linked list is a data structure which stores the same kind of data elements, but not in continuous memory locations, and its size is not fixed. The nodes of a linked list are related logically.

**26. ****What is the difference between an array and a linked list?**

The size of an array is fixed whereas the size of a linked list is variable. In an array the data elements are stored in continuous memory locations, but in a linked list they are stored in non-continuous memory locations. Addition and removal of data is easy in a linked list, whereas in arrays it is complicated.

**27. ****What is a node?**

The data element of a linked list is called a node.

**28. ****What does node consist of?**

Node consists of two fields:

data field to store the element and link field to store the address of the next node.

**29. ****Write the syntax of node creation?**

Syntax:

struct node

{

data_type data; // data field (data_type is a placeholder for the element type)

node *ptr; // pointer for link node

} temp;


**30. ****Write the syntax for pointing to next node?**

Syntax:

node->ptr = node1;
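The node declaration and link assignment above can be exercised with a short C++ sketch; `listLength` and `demoLength` are illustrative names of ours, and `int` stands in for the placeholder data type:

```cpp
// A minimal singly linked node, matching the shape in Q29.
struct node
{
    int data;    // data field
    node *ptr;   // link field: address of the next node (Q28)
};

// Walks the links until the end of the list, counting nodes.
int listLength(const node* head)
{
    int len = 0;
    for (; head != nullptr; head = head->ptr)
        ++len;
    return len;
}

// Three stack-allocated nodes linked as 1 -> 2 -> 3 for demonstration.
int demoLength()
{
    node third  = {3, nullptr};
    node second = {2, &third};
    node first  = {1, &second};
    return listLength(&first);
}
```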

**32. ****What is a data structure? What are the types of data structures? Briefly explain them**

The scheme of organizing related information is known as ‘data structure’. The types of data structure are:

**Lists:** A group of similar items with connectivity to the previous or/and next data items.

**Arrays:** A set of homogeneous values

**Records**: A set of fields, where each field contains data belonging to one data type.

**Trees:** A data structure where the data is organized in a hierarchical structure. This type of data structure follows the sorted order of insertion, deletion and modification of data items.

**Tables:** Data is persisted in the form of rows and columns. These are similar to records, where the result or manipulation of data is reflected for the whole table.

**33. ****Define a linear and non linear data structure.**

**Linear data structure:** A linear data structure traverses the data elements sequentially, in which only one data element can directly be reached. Ex: Arrays, Linked Lists

**Non-Linear data structure:** Every data item is attached to several other data items in a way that is specific for reflecting relationships. The data items are not arranged in a sequential structure. Ex: Trees, Graphs

**34. ****Define in brief an array. What are the types of array operations?**

An array is a set of homogeneous elements. Every element is referred by an index.

Arrays store data in the main memory of the computer system for the lifetime of the application, so that the elements can be accessed at any time. The operations are:

– Adding elements

– Sorting elements

– Searching elements

– Re-arranging the elements

– Performing matrix operations

– Pre-fix and post-fix operations

**35. ****What is a matrix? Explain its uses with an example**

A matrix is a representation of certain rows and columns, to persist homogeneous data. It can also be called a two-dimensional array.

Uses:

– To represent class hierarchy using Boolean square matrix

– For data encryption and decryption

– To represent traffic flow and plumbing in a network

– To implement graph theory of node representation

**36. ****What is an algorithm? What are the properties of an algorithm?**

**Algorithm:** A step by step process to get the solution for a well defined problem.

**Properties of an algorithm:**

– Should be unambiguous, precise and lucid

– Should provide the correct solutions

– Should have an end point

– The output statements should follow input, process instructions

– The initial statements should be of input statements

– Should have finite number of steps

– Every statement should be definitive

**37. ****What are the different types of algorithm techniques?**

– Simple recursive algorithms. Ex: Searching an element in a list

– Backtracking algorithms Ex: Depth-first recursive search in a tree

– Divide and conquer algorithms. Ex: Quick sort and merge sort

– Dynamic programming algorithms. Ex: Generation of Fibonacci series

– Greedy algorithms Ex: Counting currency

– Branch and bound algorithms. Ex: Travelling salesman (visiting each city once and minimize the total distance travelled)

– Brute force algorithms. Ex: Finding the best path for a travelling salesman

– Randomized algorithms. Ex: Using a random number to choose a pivot in quick sort.

**38. ****What is an iterative algorithm?**

An iterative algorithm attempts to solve a problem by finding successive approximations to the solution, starting from an initial guess. The result of the repeated calculations is a sequence of approximate values for the quantities of interest.

**39. ****What is a recursive algorithm?**

Recursive algorithm is a method of simplification that divides the problem into sub-problems of the same nature. The result of one recursion is the input for the next recursion. The repetition is in the self-similar fashion. The algorithm calls itself with smaller input values and obtains the results by simply performing the operations on these smaller values. Generation of factorial, Fibonacci number series are the examples of recursive algorithms.

**40. ****Explain quick sort and merge sort algorithms.**

Quick sort employs the ‘divide and conquer’ concept by dividing the list of elements into two sub lists.

The process is as follows:

1. Select an element, pivot, from the list.

2. Rearrange the elements in the list, so that all elements those are less than the pivot are arranged before the pivot and all elements those are greater than the pivot are arranged after the pivot. Now the pivot is in its position.

3. Sort both sub lists – the sub list of elements less than the pivot and the sub list of elements greater than the pivot – recursively.
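The three steps above can be sketched in C++ using the common last-element pivot choice (a Lomuto-style partition); the names `quickSort` and `quickSorted` are ours for illustration:

```cpp
#include <vector>
#include <algorithm>

// Quick sort: pick a pivot, partition around it, then recurse on both sides.
void quickSort(std::vector<int>& a, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = a[hi];                 // step 1: choose the last element as pivot
    int i = lo;
    for (int j = lo; j < hi; ++j)      // step 2: smaller elements go before the pivot
        if (a[j] < pivot)
            std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);            // pivot lands in its final position
    quickSort(a, lo, i - 1);           // step 3: sort both sub lists recursively
    quickSort(a, i + 1, hi);
}

// Convenience wrapper returning a sorted copy.
std::vector<int> quickSorted(std::vector<int> a)
{
    if (!a.empty())
        quickSort(a, 0, static_cast<int>(a.size()) - 1);
    return a;
}
```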

**41. **Merge Sort: A comparison based sorting algorithm. The input order is preserved in the sorted output.

Merge Sort algorithm is as follows:

1. If the length of the list is 0 or 1, it is considered sorted.

2. Otherwise, divide the unsorted list into two lists, each about half the size.

3. Sort each sub list recursively, repeating step 2 until both sub lists are sorted.

4. As a final step, merge both the lists back into one sorted list.
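A minimal C++ sketch of the merge sort steps above; the name `mergeSorted` is ours for illustration:

```cpp
#include <vector>
#include <cstddef>

// Merge sort: split, sort each half recursively, then merge the sorted halves.
std::vector<int> mergeSorted(std::vector<int> a)
{
    if (a.size() <= 1) return a;                       // step 1: already sorted
    std::size_t mid = a.size() / 2;                    // step 2: split in half
    std::vector<int> left(a.begin(), a.begin() + mid);
    std::vector<int> right(a.begin() + mid, a.end());
    left = mergeSorted(left);                          // step 3: sort each half
    right = mergeSorted(right);

    std::vector<int> out;                              // step 4: merge back together
    std::size_t i = 0, j = 0;
    while (i < left.size() && j < right.size())
        out.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
    while (i < left.size())  out.push_back(left[i++]);
    while (j < right.size()) out.push_back(right[j++]);
    return out;
}
```

Taking from the left half on ties (`<=`) is what preserves the input order of equal elements, the stability property mentioned above.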

**42. ****What is Bubble Sort and Quick sort?**

**Bubble Sort:** The simplest sorting algorithm. It sorts the list in a repetitive fashion: it compares two adjacent elements in the list, and swaps them if they are not in the designated order. It continues until no swaps are needed, which signals that the list is sorted. It is also called a comparison sort as it uses comparisons.

**Quick Sort:** A fast sorting algorithm which implements the ‘divide and conquer’ concept. It first divides the list into two parts by picking an element called the ‘pivot’. It then arranges the elements that are smaller than the pivot into one sub list and the elements that are greater than the pivot into another sub list, placing the pivot in its final position.

**43. ****What are the difference between a stack and a Queue?**

**Stack** – Represents the collection of elements in Last In First Out order.

Operations include testing for a null stack, finding the top element in the stack, removing the topmost element, and adding elements on the top of the stack.

**Queue** – Represents the collection of elements in First In First Out order.

Operations include testing for a null queue, finding the next element, removal of elements, and insertion of elements.

Insertion of elements is at the end of the queue

Deletion of elements is from the beginning of the queue.

**44. ****Can a stack be described as a pointer? Explain.**

A stack is represented as a pointer. The reason is that, it has a head pointer which points to the top of the stack. The stack operations are performed using the head pointer. Hence, the stack can be described as a pointer.

**45. ****Explain the terms Base case, Recursive case, Binding Time, Run-Time Stack and Tail Recursion.**

**Base case:** The case in recursion for which the answer is known directly; when it is reached, the recursion terminates and begins to unwind.

**Recursive Case:** The case that moves the computation closer to the answer by calling the function on a smaller input.

**Run-time Stack:** The stack used for saving the stack frame of a function at every call, including every recursive call.

**Tail Recursion:** It is a situation where a single recursive call is consisted by a function, and it is the final statement to be executed. It can be replaced by iteration.

**46. ****Is it possible to insert different type of elements in a stack? How?**

Different elements can be inserted into a stack. This is possible by implementing union / structure data type. It is efficient to use union rather than structure, as only one item’s memory is used at a time.

**47. ****Explain in brief a linked list.**

A linked list is a dynamic data structure. It consists of a sequence of data elements, each holding a reference to the next element in the sequence. Linked lists are used to implement stacks, queues, hash tables, and prefix and postfix operations. The order of linked items is different from that of arrays, and insertion or deletion operations take constant time.

**48. ****Explain the types of linked lists.**

The types of linked lists are:

**Singly linked list:** It has only head part and corresponding references to the next nodes.

**Doubly linked list:** A linked list which has both head and tail parts, in which each node (except the first) also refers to the previous node, thus allowing traversal in a bi-directional fashion.

**Circular linked list:** A linked list whose last node has reference to the first node.

**49. ****How would you sort a linked list?**

**Step 1:** Compare the current node in the unsorted list with every element in the rest of the list. If the current element is more than any other element go to step 2 otherwise go to step 3.

**Step 2:** Position the element with higher value after the position of the current element. Compare the next element. Go to step1 if an element exists, else stop the process.

**Step 3:** If the list is already in sorted order, insert the current node at the end of the list. Compare the next element, if any and go to step 1 or quit.

**50. ****What is sequential search?**

**Sequential search:** Searching for an element in an array, starting from the first element and proceeding to the last.

The average number of comparisons in a sequential search is (N+1)/2 where N is the size of the array. If the element is in the 1st position, the number of comparisons will be 1 and if the element is in the last position, the number of comparisons will be N.
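A sequential search takes only a few lines of C++; on average it makes (N+1)/2 comparisons for an element that is present, as stated above. The name `sequentialSearch` is ours for illustration:

```cpp
#include <vector>
#include <cstddef>

// Sequential (linear) search: scan from the first element to the last.
// Returns the index of key, or -1 if the key is absent.
int sequentialSearch(const std::vector<int>& a, int key)
{
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] == key)
            return static_cast<int>(i);
    return -1;
}
```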

**51. ****What are binary search and Fibonacci search?**

**Binary Search:** Binary search is the process of locating an element in a sorted list. The search starts by dividing the list into two parts and comparing the key with the median value. If the search element is less than the median value, only the top half is searched, after finding the middle element of that half; the same process is applied to the bottom half when the search element is greater. The process continues until the element is found or the search of the relevant half is completed. If the search element equals the median value, it has been found.

**52. ****Define Fibonacci Search:** Fibonacci search is a process of searching a sorted array by utilising divide and conquer algorithm. Fibonacci search examines locations whose addresses have lower dispersion. When the search element has non-uniform access memory storage, the Fibonacci search algorithm reduces the average time needed for accessing a storage location.


The post Algorithms FAQs Part 2. appeared first on Anagha Agile Systems.

**What is linear search?**

**Answer: **Linear search is the simplest form of search. It searches for the element sequentially, starting from the first element. This search has a disadvantage if the element is located at the end; the advantage lies in its simplicity. It is most useful when the elements are arranged in a random order.

**What is binary search? **

**Answer: **Binary search is most useful when the list is sorted. In binary search, the element present in the middle of the list is determined. If the key (number to search) is smaller than the middle element, the binary search is done on the first (left) half; if the key is greater than the middle element, the binary search is done on the second (right) half. The chosen half is again divided into two by determining its middle element, and the process repeats.
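The halving process described above can be sketched in C++; the name `binarySearch` is ours, and the list must already be sorted:

```cpp
#include <vector>

// Binary search on a sorted list: repeatedly halve the search range
// by comparing the key with the middle element.
// Returns the index of key, or -1 if the key is absent.
int binarySearch(const std::vector<int>& a, int key)
{
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;      // middle element of the current range
        if (a[mid] == key) return mid;
        if (key < a[mid]) hi = mid - 1;    // search the first (left) half
        else              lo = mid + 1;    // search the second (right) half
    }
    return -1;                             // key not present
}
```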

**Explain the bubble sort algorithm.**

**Answer: **Bubble sort algorithm is used for sorting a list. It makes use of a temporary variable for swapping. It compares two numbers at a time and swaps them if they are in wrong order. This process is repeated until no swapping is needed. The algorithm is very inefficient if the list is long.

E.g. List: – 7 4 5 3

1. 7 and 4 are compared.

2. Since 4 < 7, 4 is stored in a temporary variable.

3. 7 is stored in the position that was holding 4.

4. Finally, the content of the temporary variable is placed in the position previously holding 7, completing the swap.
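The compare-and-swap passes can be sketched in C++ on the example list 7 4 5 3; the name `bubbleSorted` is ours for illustration:

```cpp
#include <vector>
#include <algorithm>

// Bubble sort: repeatedly compare adjacent pairs and swap those that are
// out of order, until a full pass needs no swap.
std::vector<int> bubbleSorted(std::vector<int> a)
{
    bool swapped = true;
    while (swapped)
    {
        swapped = false;
        for (std::size_t i = 1; i < a.size(); ++i)
            if (a[i - 1] > a[i])
            {
                std::swap(a[i - 1], a[i]);  // the temporary variable lives inside swap
                swapped = true;
            }
    }
    return a;
}
```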

**What is quick sort? **

**Answer: **Quick sort is one of the fastest sorting algorithms used for sorting a list. A pivot point is chosen. The remaining elements are partitioned or divided such that elements less than the pivot point are on the left and those greater than the pivot are on the right. Now, the elements on the left and right can be recursively sorted by repeating the algorithm.

**Describe Trees using C++ with an example.**

A tree is a structure that is similar to a linked list. A tree node has two links that point to the left part of the tree and the right part of the tree, and both links point to nodes of the same type.

The following code snippet describes the declaration of trees. The advantage of trees is that the data is placed in nodes in sorted order.

struct TreeNode

{

int item; // The data in this node.

TreeNode *left; // Pointer to the left subtree.

TreeNode *right; // Pointer to the right subtree.

};

The following code snippet illustrates the display of tree data.

void showTree( TreeNode *root )

{

if ( root != NULL ) { // (Otherwise, there’s nothing to print.)

showTree(root->left); // Print items in left sub tree.

cout << root->item << " "; // Print the root item.

showTree(root->right ); // Print items in right sub tree.

}

} // end showTree()

*Data Structure trees – posted on August 03, 2008 at 22:30 PM by Amit Satpute*

**Question – What is a B tree?**

**Answer: **A B-tree of order m (the maximum number of children for each node) is a tree which satisfies the following properties :

1. Every node has <= m children.

2. Every node (except root and leaves) has >= m/2 children.

3. The root has at least 2 children.

4. All leaves appear in the same level, and carry no information.

5. A non-leaf node with k children contains k – 1 keys

**Question – What are splay trees?**

**Answer: **A splay tree is a self-balancing binary search tree in which recently accessed elements are quick to access again.

It is useful for implementing caches and garbage collection algorithms.

When we move left going down this path, it’s called a zig, and when we move right, it’s a zag.

Following are the splaying steps. There are six different splaying steps.

1. Zig Rotation (Right Rotation)

2. Zag Rotation (Left Rotation)

3. Zig-Zag (Zig followed by Zag)

4. Zag-Zig (Zag followed by Zig)

5. Zig-Zig

6. Zag-Zag

**Question – What are red-black trees?**

**Answer :**A red-black tree is a type of self-balancing binary search tree.

In red-black trees, the leaf nodes are not relevant and do not contain data.

Red-black trees, like all binary search trees, allow efficient in-order traversal of elements.

Each node has a color attribute, the value of which is either red or black.

Characteristics:

The root and leaves are black

Both children of every red node are black.

Every simple path from a node to a descendant leaf contains the same number of black nodes, either counting or not counting the null black nodes

**Question – What are threaded binary trees?**

**Answer: **In a threaded binary tree, if a node ‘A’ has a right child ‘B’ then B’s left pointer must be either a child, or a thread back to A.

In the case of a left child, that left child must also have a left child or a thread back to A, and so we can follow B’s left children until we find a thread, pointing back to A.

This data structure is useful when stack memory is scarce, and it makes traversal around the tree faster.

**Question – What is a B+ tree?**

**Answer: **It is a dynamic, multilevel index, with maximum and minimum bounds on the number of keys in each index segment. All records are stored at the lowest level of the tree; only keys are stored in interior blocks.

**Describe Tree database. Explain its common uses.**

A tree is a data structure which resembles a hierarchical tree structure. Every element in the structure is a node. Every node is linked with the next node, either to its left or to its right. Each node has zero or more child nodes. The length of the longest downward path to a leaf from that node is known as the height of the node and the length of the path to its root is known as the depth of a node.

Common Uses:

– To manipulate hierarchical data

– Makes the information search, called tree traversal, easier.

– To manipulate data that is in the form of sorted list.

**What is binary tree? Explain its uses.**

A binary tree is a tree structure, in which each node has only two child nodes. The first node is known as root node. The parent has two nodes namely left child and right child.

Uses of binary tree:

– To create sorting routine.

– Persisting data items for the purpose of fast lookup later.

– For inserting a new item faster

**How do you find the depth of a binary tree?**

The depth of a binary tree is found using the following process:

1. Send the root node of a binary tree to a function

2. If the tree node is null, then return zero.

3. Calculate the depth of the left tree; call it ‘d1’ by traversing every node. Increment the counter by 1, as the traversing progresses until it reaches the leaf node. These operations are done recursively.

4. Repeat the 3rd step for the right subtree. Name the counter variable ‘d2’.

5. Find the maximum value between d1 and d2. Add 1 to the max value. Let us call it ‘depth’.

6. The variable ‘depth’ is the depth of the binary tree.
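The six steps above reduce to a short recursive function. A C++ sketch, reusing the `TreeNode` shape from the earlier snippet (`depth` and `demoDepth` are illustrative names of ours):

```cpp
#include <algorithm>

// TreeNode mirrors the declaration used earlier in this post.
struct TreeNode
{
    int item;
    TreeNode *left;
    TreeNode *right;
};

// Depth of a binary tree: a null tree has depth 0; otherwise take the
// larger of the two subtree depths (d1 and d2 above) and add 1.
int depth(const TreeNode* root)
{
    if (root == nullptr) return 0;
    return 1 + std::max(depth(root->left), depth(root->right));
}

// Small fixed tree for demonstration: a root whose left child itself
// has a left child, giving depth 3.
int demoDepth()
{
    TreeNode leaf  = {3, nullptr, nullptr};
    TreeNode child = {2, &leaf, nullptr};
    TreeNode root  = {1, &child, nullptr};
    return depth(&root);
}
```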

**Explain pre-order and in-order tree traversal.**

A non-empty binary tree is traversed in 3 types, namely pre-order, in-order and post-order in a recursive fashion.

**Pre-order:
**Pre-order process is as follows:

– Visit the root node

– Traverse the left sub tree

– Traverse the right sub tree

**In-Order:
**In order process is as follows:

– Traverse the left sub tree

– Visit the root node

– Traverse the right sub tree

**What is a B+ tree? Explain its uses.**

B+ tree represents the way of insertion, retrieval and removal of the nodes in a sorted fashion. Every operation is identified by a ‘key’. B+ tree has maximum and minimum bounds on the number of keys in each index segment, dynamically. All the records in a B+ tree are persisted at the last level, i.e., leaf level in the order of keys.

B+ tree is used to visit the nodes starting from the root to the left or / and right sub tree. Or starting from the first node of the leaf level. A bi directional tree traversal is possible in the B+ tree.

**Define threaded binary tree. Explain its common uses**

A threaded binary tree is structured in an order that, all right child pointers would normally be null and points to the ‘in-order successor’ of the node. Similarly, all the left child pointers would normally be null and points to the ‘in-order predecessor’ of the node.

Uses of Threaded binary tree:

– Traversal is faster than unthreaded binary trees

– More subtle, by enabling the determination of predecessor and successor nodes that starts from any node, in an efficient manner.

– Traversal with threads avoids the use of a stack, so there is no risk of stack overflow.

– Accessibility of any node from any other node

– Insertion into and deletion from a threaded tree are easy to implement.

**Explain implementation of traversal of a binary tree.**

Binary tree traversal is a process of visiting each and every node of the tree. The two fundamental binary tree traversals are ‘depth-first’ and ‘breadth-first’.

The depth-first traversal are classified into 3 types, namely, pre-order, in-order and post-order.

**Pre-order:** Pre-order traversal involves visiting the root node first, then traversing the left sub tree and finally the right sub tree.

**In-order:** In-order traversal involves visiting the left sub tree first, then visiting the root node and finally the right sub tree.

**Post-order:** Post-order traversal involves visiting the left sub tree first, then visiting the right sub tree and finally visiting the root node.

The breadth-first traversal is the ‘level-order traversal’. Level-order traversal does not follow the branches of the tree; instead, a first-in first-out queue is needed to perform it.
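As a concrete sketch, the four traversals can be written as follows. The `Node` struct here is a hypothetical minimal type introduced for illustration, not one defined elsewhere in this FAQ.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Hypothetical minimal binary tree node for illustration.
struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int d, Node* l = nullptr, Node* r = nullptr)
        : data(d), left(l), right(r) {}
};

// Depth-first traversals: only the position of the "visit" changes.
void preorder(Node* n, std::vector<int>& out) {
    if (!n) return;
    out.push_back(n->data);      // visit root first
    preorder(n->left, out);
    preorder(n->right, out);
}

void inorder(Node* n, std::vector<int>& out) {
    if (!n) return;
    inorder(n->left, out);
    out.push_back(n->data);      // visit root between the subtrees
    inorder(n->right, out);
}

void postorder(Node* n, std::vector<int>& out) {
    if (!n) return;
    postorder(n->left, out);
    postorder(n->right, out);
    out.push_back(n->data);      // visit root last
}

// Breadth-first (level-order) traversal uses a FIFO queue.
std::vector<int> levelorder(Node* root) {
    std::vector<int> out;
    std::queue<Node*> q;
    if (root) q.push(root);
    while (!q.empty()) {
        Node* n = q.front(); q.pop();
        out.push_back(n->data);
        if (n->left)  q.push(n->left);
        if (n->right) q.push(n->right);
    }
    return out;
}
```

For the three-node tree with root 2, left child 1 and right child 3, pre-order yields 2 1 3, in-order 1 2 3, post-order 1 3 2 and level-order 2 1 3.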

**Explain implementation of deletion from a binary tree.**

To implement the deletion from a binary tree, there is a need to consider the possibilities of deleting the nodes. They are:

– Node is a terminal node: In case the node is the left child node of its parent, then the left pointer of its parent is set to NULL. In all other cases, if the node is right child node of its parent, then the right pointer of its parent is set to NULL.

– Node has only one child: In this scenario, the appropriate pointer of its parent is set to child node.

– Node has two children: the node’s value is replaced by the value of its in-order predecessor, and then the predecessor node is deleted.
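The three cases can be sketched as follows for a binary search tree; the `Node` struct and `insert` helper are minimal assumptions made for this example, and replacing with the in-order predecessor is one of several valid choices.

```cpp
// Minimal BST node, assumed for this sketch.
struct Node {
    int data;
    Node* left = nullptr;
    Node* right = nullptr;
    explicit Node(int d) : data(d) {}
};

// Helper to build a BST for testing the deletion cases.
Node* insert(Node* root, int key) {
    if (!root) return new Node(key);
    if (key < root->data) root->left = insert(root->left, key);
    else                  root->right = insert(root->right, key);
    return root;
}

// Returns the new root of the subtree after deleting key.
Node* erase(Node* root, int key) {
    if (!root) return nullptr;
    if (key < root->data)      root->left  = erase(root->left, key);
    else if (key > root->data) root->right = erase(root->right, key);
    else if (!root->left && !root->right) {   // case 1: terminal node
        delete root;
        return nullptr;
    } else if (!root->left || !root->right) { // case 2: one child
        Node* child = root->left ? root->left : root->right;
        delete root;
        return child;                         // parent now points to child
    } else {                                  // case 3: two children
        Node* pred = root->left;              // find in-order predecessor
        while (pred->right) pred = pred->right;
        root->data = pred->data;              // replace the node's value...
        root->left = erase(root->left, pred->data); // ...then delete predecessor
    }
    return root;
}
```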

**Describe stack operations.**

Stack is a data structure that follows the Last In First Out (LIFO) strategy.

Stack operations:

– Push – pushes (inserts) an element onto the stack. The location is specified by the stack pointer.

– Pop – pulls (removes) the element off the top of the stack. The location is specified by the stack pointer.

– Swap – swaps the two topmost elements of the stack.

– Peek – returns the top element of the stack without removing it.

– Rotate – moves the topmost n items on the stack in a rotating fashion.

A stack has a fixed location in memory. When a data element is pushed onto the stack, the pointer points to the current top element.
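A minimal fixed-capacity sketch of these operations is below; in production C++, `std::stack` is the idiomatic choice, and the capacity of 100 is an arbitrary assumption.

```cpp
#include <stdexcept>
#include <utility>

// Fixed-capacity stack sketch illustrating push, pop, peek and swap.
class Stack {
    int data_[100];
    int top_ = -1;                     // index of the current top element
public:
    void push(int x) {                 // insert at the top
        if (top_ == 99) throw std::overflow_error("stack full");
        data_[++top_] = x;
    }
    int pop() {                        // remove and return the top element
        if (top_ < 0) throw std::underflow_error("stack empty");
        return data_[top_--];
    }
    int peek() const {                 // return the top without removing it
        if (top_ < 0) throw std::underflow_error("stack empty");
        return data_[top_];
    }
    void swapTop() {                   // exchange the two topmost elements
        if (top_ < 1) throw std::underflow_error("need two elements");
        std::swap(data_[top_], data_[top_ - 1]);
    }
    bool empty() const { return top_ < 0; }
};
```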

**Describe queue operations.**

Queue is a data structure that follows the First In First Out (FIFO) strategy.

Queue operations:

– Enqueue – inserts an element at the rear of the queue.

– Dequeue – removes an element from the front of the queue.

– Size – returns the number of elements in the queue.

– Front – returns the first element of the queue.

– Empty – checks whether the queue is empty.
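These operations map directly onto `std::queue` (a container adapter over `std::deque` by default); the sketch below wraps a short demonstration in a function so its result can be checked.

```cpp
#include <queue>

// Exercises the queue operations listed above on std::queue.
// Returns true if the observed FIFO behaviour matches expectations.
bool queueOperationsDemo() {
    std::queue<int> q;
    q.push(1);                       // enqueue at the rear
    q.push(2);
    q.push(3);
    if (q.size() != 3) return false; // size
    if (q.front() != 1) return false;// front: first element in, FIFO order
    q.pop();                         // dequeue from the front (removes 1)
    if (q.front() != 2) return false;
    return !q.empty();               // empty check
}
```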

**Discuss how to implement a queue using stacks.**

A queue can be implemented using two stacks:

1. An element is inserted in the queue by pushing it onto stack 1.

2. An element is extracted from the queue by popping it from stack 2.

3. If stack 2 is empty, all elements currently in stack 1 are transferred to stack 2, which reverses their order.

4. If stack 2 is not empty, just pop the value from stack 2.
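The four steps above can be sketched directly with two `std::stack` objects; the class and member names here are illustrative choices, not a standard interface.

```cpp
#include <stack>

// Queue built from two stacks: in_ receives enqueued items, out_
// serves dequeues and is refilled (in reverse order) from in_
// whenever it runs dry.
class TwoStackQueue {
    std::stack<int> in_, out_;
public:
    void enqueue(int x) { in_.push(x); }   // step 1: push onto stack 1
    int dequeue() {
        if (out_.empty()) {                // step 3: transfer, reversing order
            while (!in_.empty()) {
                out_.push(in_.top());
                in_.pop();
            }
        }
        int x = out_.top();                // steps 2/4: pop from stack 2
        out_.pop();
        return x;
    }
    bool empty() const { return in_.empty() && out_.empty(); }
};
```

Each element is moved between the stacks at most once, so a sequence of n operations costs O(n) overall even though an individual dequeue may be slow.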

**Explain stacks and queues in detail.**

A stack is a data structure based on the principle Last In First Out. Stack is container to hold nodes and has two operations – push and pop. Push operation is to add nodes into the stack and pop operation is to delete nodes from the stack and returns the top most node.

A queue is a data structure based on the principle First In First Out. The nodes are kept in order: a node is inserted at the rear of the queue and deleted from the front, so the first element inserted is the first element deleted.

**Question – What are priority queues?**

A priority queue is essentially a list of items in which each item is associated with a priority.

Items are inserted into a priority queue in any arbitrary order. However, items are withdrawn from a priority queue in order of their priorities, starting with the highest-priority item first.

Priority queues are often used in the implementation of algorithms such as shortest-path search and job scheduling.
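The behaviour described above is what `std::priority_queue` provides out of the box (a max-heap by default); the demonstration function below is a small sketch for illustration.

```cpp
#include <queue>

// Items go in arbitrary order and come out highest-priority first.
// Returns true if elements are withdrawn in priority order.
bool priorityQueueDemo() {
    std::priority_queue<int> pq;       // max-heap by default
    pq.push(3);
    pq.push(10);
    pq.push(1);                        // arbitrary insertion order
    if (pq.top() != 10) return false;  // highest priority withdrawn first
    pq.pop();
    if (pq.top() != 3) return false;
    pq.pop();
    return pq.top() == 1;
}
```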

**Question – What is a circular singly linked list?**

In a circular singly linked list, the last node of the list is made to point to the first node. This eases traversal of the list.

**Describe the steps to insert data into a singly linked list.**

Steps to insert data into a singly linked list:-

**Insert in middle**

Data: 1, 2, 3, 5

1. Locate the node after which a new node (say 4) needs to be inserted (say after 3).

2. Create the new node, i.e. 4.

3. Point the new node 4 to 5 (new_node->next = node->next).

4. Point node 3 to node 4 (node->next = new_node).

**Insert in the beginning**

Data: 2, 3, 5

1. Locate the first node in the list (2)

2. Point the new node (say 1) to the first node (new_node->next = first) and make it the new head of the list.

**Insert in the end**

Data: 2, 3, 5

1. Locate the last node in the list (5)

2. Point the last node (5) to the new node (say 6). (node->next=new_node)
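The three insertion cases above can be sketched with a minimal node type; the `Node` struct and helper names are assumptions made for this example.

```cpp
// Hypothetical minimal singly linked list node.
struct Node {
    int data;
    Node* next;
    Node(int d, Node* n = nullptr) : data(d), next(n) {}
};

// Insert in the middle: link a new node after 'pos' (steps 3 and 4).
void insertAfter(Node* pos, int value) {
    pos->next = new Node(value, pos->next);
}

// Insert at the beginning: the new node becomes the head.
Node* insertFront(Node* head, int value) {
    return new Node(value, head);
}

// Insert at the end: walk to the last node, then link the new one.
void insertBack(Node* head, int value) {
    Node* p = head;
    while (p->next) p = p->next;
    p->next = new Node(value, nullptr);
}
```

Starting from the list 2 → 3 → 5, inserting 4 after 3, 1 at the front and 6 at the back yields 1 → 2 → 3 → 4 → 5 → 6.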

**Explain how to reverse singly link list.**

**Answer: **Reversing a singly linked list using iteration:

1. Set a pointer (*current) to point to the first node, i.e. current = base, and a pointer (*result) to NULL.

2. Repeat steps 3–6 while current != NULL (i.e. until the end of the list).

3. Set another pointer (*next) to point to the next node, i.e. next = current->next.

4. Point current back at the portion already reversed, i.e. current->next = result.

5. Advance result to current, i.e. result = current.

6. Advance current to next, i.e. current = next.

7. When the loop finishes, result points to the head of the reversed list.

A linked list can also be reversed using recursion, which eliminates the need for the temporary pointers.

**Define circular linked list.**

**Answer: **In a circular linked list the first and last nodes are linked: the last node points to the first node in the list. A circular linked list allows quick access to the first and last records using a single pointer.

**Describe linked list using C++ with an example.**

A linked list is a chain of nodes. Each node has at least two fields: one holds the data and the other is a link to the next node. Such lists are known as singly linked lists, as each node has only one pointer, to the next node. If each node has two pointers, one to the previous node and one to the next node, the list is known as a doubly linked list.

To use a linked list in C++, the following structure is declared:

struct List
{
    long Data;
    List* Next;
    List()
    {
        Next = NULL;
        Data = 0;
    }
};

typedef List* ListPtr;

The following code snippet is used to add a node at the end, given a pointer to the current last node:

void AddANode(ListPtr& tail)
{
    tail->Next = new List;
    tail = tail->Next;
}

The following code snippet is used to traverse the list:

void showList(ListPtr listPtr)
{
    while (listPtr != NULL)
    {
        cout << listPtr->Data;
        listPtr = listPtr->Next;
    }
}

// LListItr class; maintains "current position".
//
// CONSTRUCTION: with no parameters. The LList class may
// construct an LListItr with a pointer to an LListNode.
//
// ******************PUBLIC OPERATIONS*********************
// bool isValid( )          --> True if not NULL
// void advance( )          --> Advance (if not already NULL)
// Object retrieve( )       --> Return item in current position
// ******************ERRORS********************************
// Throws BadIterator for illegal retrieve.

template <class Object>
class LListItr
{
  public:
    LListItr( ) : current( NULL ) { }

    bool isValid( ) const
      { return current != NULL; }

    void advance( )
      { if( isValid( ) ) current = current->next; }

    const Object & retrieve( ) const
      { if( !isValid( ) ) throw BadIterator( );
        return current->element; }

  private:
    LListNode<Object> *current;    // Current position

    LListItr( LListNode<Object> *theNode )
      : current( theNode ) { }

    friend class LList<Object>;    // Grant access to constructor
};

// LList class.
//
// CONSTRUCTION: with no initializer.
// Access is via LListItr class.
//
// ******************PUBLIC OPERATIONS*********************
// bool isEmpty( )       --> Return true if empty; else false
// void makeEmpty( )     --> Remove all items
// LListItr zeroth( )    --> Return position prior to first
// LListItr first( )     --> Return first position
// void insert( x, p )   --> Insert x after position p
// void remove( x )      --> Remove x
// LListItr find( x )    --> Return position that views x
// LListItr findPrevious( x )
//                       --> Return position prior to x
// ******************ERRORS********************************
// No special errors.

template <class Object>
class LList
{
  public:
    LList( );
    LList( const LList & rhs );
    ~LList( );

    bool isEmpty( ) const;
    void makeEmpty( );
    LListItr<Object> zeroth( ) const;
    LListItr<Object> first( ) const;
    void insert( const Object & x, const LListItr<Object> & p );
    LListItr<Object> find( const Object & x ) const;
    LListItr<Object> findPrevious( const Object & x ) const;
    void remove( const Object & x );

    const LList & operator=( const LList & rhs );

  private:
    LListNode<Object> *header;
};

// Construct the list.
template <class Object>
LList<Object>::LList( )
{
    header = new LListNode<Object>;
}

// Test if the list is logically empty.
// Return true if empty, false otherwise.
template <class Object>
bool LList<Object>::isEmpty( ) const
{
    return header->next == NULL;
}

// Return an iterator representing the header node.
template <class Object>
LListItr<Object> LList<Object>::zeroth( ) const
{
    return LListItr<Object>( header );
}

// Return an iterator representing the first node in the list.
// This operation is valid for empty lists.
template <class Object>
LListItr<Object> LList<Object>::first( ) const
{
    return LListItr<Object>( header->next );
}

// Simple print function.
template <class Object>
void printList( const LList<Object> & theList )
{
    if( theList.isEmpty( ) )
        cout << "Empty list" << endl;
    else
    {
        LListItr<Object> itr = theList.first( );
        for( ; itr.isValid( ); itr.advance( ) )
            cout << itr.retrieve( ) << " ";
    }
    cout << endl;
}

// Return iterator corresponding to the first node matching x.
// Iterator is not valid if item is not found.
template <class Object>
LListItr<Object> LList<Object>::find( const Object & x ) const
{
    LListNode<Object> *p = header->next;

    while( p != NULL && p->element != x )
        p = p->next;

    return LListItr<Object>( p );
}

// Remove the first occurrence of an item x.
template <class Object>
void LList<Object>::remove( const Object & x )
{
    LListNode<Object> *p = findPrevious( x ).current;

    if( p->next != NULL )
    {
        LListNode<Object> *oldNode = p->next;
        p->next = p->next->next;    // Bypass
        delete oldNode;
    }
}

**Figure 17.13 **The remove routine for the LList class.

// Return iterator prior to the first node containing item x.
template <class Object>
LListItr<Object>
LList<Object>::findPrevious( const Object & x ) const
{
    LListNode<Object> *p = header;

    while( p->next != NULL && p->next->element != x )
        p = p->next;

    return LListItr<Object>( p );
}

**Figure 17.14 **The findPrevious routine, similar to the find routine, for use with remove.

// Insert item x after p.
template <class Object>
void LList<Object>::
insert( const Object & x, const LListItr<Object> & p )
{
    if( p.current != NULL )
        p.current->next = new LListNode<Object>( x, p.current->next );
}

**Figure 17.15 **The insertion routine for the LList class.

// Make the list logically empty.
template <class Object>
void LList<Object>::makeEmpty( )
{
    while( !isEmpty( ) )
        remove( first( ).retrieve( ) );
}

// Destructor.
template <class Object>
LList<Object>::~LList( )
{
    makeEmpty( );
    delete header;
}

**Figure 17.16 **The makeEmpty method and the LList destructor.

// Copy constructor.
template <class Object>
LList<Object>::LList( const LList<Object> & rhs )
{
    header = new LListNode<Object>;
    *this = rhs;
}

// Deep copy of linked lists.
template <class Object>
const LList<Object> &
LList<Object>::operator=( const LList<Object> & rhs )
{
    if( this != &rhs )
    {
        makeEmpty( );

        LListItr<Object> ritr = rhs.first( );
        LListItr<Object> itr = zeroth( );
        for( ; ritr.isValid( ); ritr.advance( ), itr.advance( ) )
            insert( ritr.retrieve( ), itr );
    }
    return *this;
}

**Figure 17.17 **Two LList copy routines: operator= and the copy constructor.

**Q: **Implement an algorithm to check if a linked list is in ascending order.

**A: **

// The template header was garbled in the original;
// a templated linklist<T> class is assumed here.
template <class T>
bool linklist<T>::isAscending() const
{
    nodeptr ptr = head;
    while (ptr && ptr->_next)   // guard against an empty list
    {
        if (ptr->_data > ptr->_next->_data)
            return false;
        ptr = ptr->_next;
    }
    return true;
}

**Q: **Write an algorithm to reverse a linked list.

**A: **

template <class T>
void linklist<T>::reverselist()
{
    if (!head)                      // empty list: nothing to reverse
        return;
    nodeptr ptr = head;
    nodeptr nextptr = ptr->_next;
    while (nextptr)
    {
        nodeptr temp = nextptr->_next;
        nextptr->_next = ptr;       // reverse the link
        ptr = nextptr;
        nextptr = temp;
    }
    head->_next = 0;                // old head becomes the tail
    head = ptr;                     // new head is the old tail
}

The post Algorithms FAQs Part 2. appeared first on Anagha Agile Systems.

The post HYPER PERFORMANCE SCRUM TEAMS. appeared first on Anagha Agile Systems.

Scrum is an Agile methodology modelled and designed for hyper-productive, high-quality teams: such teams operate at 5-10 times the velocity, and with far higher quality, than waterfall teams. Scrum is also designed so that it can be scaled across the globe to any team size.

In 2001, Sutherland, Schwaber, and fifteen colleagues got together in Snowbird, Colorado, and drafted the Agile Manifesto, which became a clarion call to software developers around the globe to pursue this radically different type of management.

Since then, Sutherland, Schwaber, and their colleagues have gone on to generate thousands of high-performance teams in hundreds of companies all around the world under the labels of Scrum and Agile.

Scrum coach Mike Cohn reports in his classic book, Succeeding with Agile:

“During the first year of making the switch, Salesforce.com released 94 percent more features, delivered 38 percent more features per developer, and delivered over 500 percent more value to their customers compared to the previous year. . . . Fifteen months after adopting Scrum, Salesforce.com surveyed its employees and found that 86 percent were having a ‘good time’ or the ‘best time’ working at the company. Prior to adopting Scrum, only 40 percent said the same thing. Further, 92 percent of employees said they would recommend an agile approach to others.”

High performance and productivity depend directly on the self-organizing capability of teams. Understanding this self-organizing capability and continuously improving on it is a challenge.

The great strength of complex adaptive systems, their ability to adapt to new systems and environments, leads to certain guiding principles, namely:

- Shock Therapy.
- Choice uncertainty principle.
- Punctuated Equilibrium.

There are teams that follow waterfall but use the terms and jargon of “Agile”, thinking that is enough to be an agile team. Then there are teams in which only some Scrum principles are followed, and not by every person in the team and management; these teams are known as “ScrumButt” teams.

It is believed that the majority of teams are not true Scrum teams; only 20%-25% are pure Scrum teams capable of performing above the level of a waterfall team.

The best way to check whether a Scrum team is truly following the practices of Scrum and the Agile principles is to ask the team members to take the Nokia test.

Following are the questions asked during the Nokia test:

– Do you know who the product owner is?

– Is there a product backlog prioritized by business value, with estimates created by the team?

– Does the team generate burndown charts and know their velocity?

– Is the team free from project managers (or anyone else) disrupting its work?

Only 10% of teams worldwide meet these criteria.

If the average score of the team is more than 6, then Scrum is being properly followed; otherwise it is a ScrumButt team.

Another way of checking whether an organization implements Scrum well and has hyper-productive Scrum teams is to measure the organization’s revenue or return on investment per annum.


BEST SCRUM PROJECTS:

There have been a few exceptional, top-performing Scrum teams that successfully met their commitments with overwhelming results. In some cases, management and the sales and marketing teams were so overwhelmed that they could not keep up with the pace of the output, found excuses, and dropped the idea of marketing the product to clients. A few of the examples given by Jeff Sutherland are below:

- The most productive team ever recorded at Borland produced a failed product.
- The most productive distributed team (SirsiDynix) had quality problems, management problems, and internal company conflicts that caused the product to be killed.
- The second most productive team in the world (Motorola – David Anderson data) was overwhelmed with bureaucracy, completely demotivated, their product was killed, and the team died a painful death

The best way to understand Scrum is to study failed Scrum projects that performed exceptionally well but were killed by other factors in the organization. It is similar to the story of Thomas Alva Edison, who failed thousands of times before finally inventing a practical light bulb. So we need to study the failed projects, understand the impediments, and try to overcome them in our own Scrum projects.

One of the best-established processes for hyper-productive teams is the “Toyota Way”, the development and manufacturing process of the world’s No. 1 car maker. The following are guidelines practised at Toyota:

- There are many things one doesn’t understand, and therefore we ask them why don’t you just go ahead and take action; try to do something? Agile Principle #3 #11
- You realize how little you know and you face your own failures and redo it again and at the second trial you realize another mistake…so you can redo it once again. Agile Principle #11#12
- So by constant improvement…one can rise to the higher level of practice and knowledge. Agile Principle #3

In 2007, Toyota had a very bad week: it faced a lot of criticism and feedback from the owners of Toyota cars. Since Toyota follows very strict quality and agile policies, it recalled all the defective cars in Japan. Toyota then strove to rectify the defects, improved the cars to a great extent, and regained its market-leading share. Earlier that week its market share had fallen below its competitors’, almost to fourth or fifth place, but it soon returned to first place in the market.

The well-known technologist Alan Kay, who helped pioneer the personal workstation, the mouse, Ethernet, the windowed interface, the laser printer and Smalltalk, followed a strategy of selecting extreme data points for research and innovation.

Each time, Alan Kay looked at the extreme data points of the IT and technology field when inventing something in that domain. He did not look for data points that were merely incremental to existing products, nor for cross-discipline products. By following this strategy, he succeeded in inventing very successful, high-impact products.

Similarly, Jeff and Ken used extreme data points to identify the projects that informed the invention of Scrum. They chose the IBM surgical team, the Takeuchi and Nonaka research paper on highly productive, high-quality teams, and the Borland Quattro project as their research data, or extreme data, for inventing Scrum.

Anyone who plans to use Scrum and be an industry leader has to be a highly motivated person first; this motivation helps the team perform better. So if you are planning on Scrum, you need to be motivated and ready to strive for great goals, such as being the industry leader with skyrocketing project revenue. Anyone can aspire to be great, and that aspiration can be your starting point for Scrum implementation.

Once you have planned for Scrum and started using it, how do you verify you are doing it right? For that, there is the Nokia test. You and your team need to take the Nokia test and make sure you score more than 6. If your team scores less than 6, you are not doing Scrum.

The core strength of successful Scrum has been combining the best engineering practices with good communication. The first Scrum, implemented by Jeff Sutherland at Easel Corporation, incorporated all of Extreme Programming’s engineering practices. Most high-performance teams use Scrum and XP together: it is hard to get a Scrum team to extreme velocity without XP engineering practices, and you cannot scale XP without Scrum.

Example 1: Anatomy of a failed project – SirsiDynix – ScrumButt

Example 2: Xebia ProRail project – Pretty Good Scrum

As agile became popular in Japan and the US and slowly spread around the globe, venture capitalists started preferring companies that invest in agile processes and practices. Some VCs began using Scrum and agile methodologies in their own management, and their portfolios came to include companies practising Scrum and XP. Feeling doubly safe with agile companies in their portfolios, VCs started the following:

- Invest only in agile projects.
- 1 hyper-productive company out of 10 is good enough to meet investment goals; investing in Scrum training could yield 2.
- Invest only in market-leading, industry-standard processes, which means Scrum and XP.
- Ensure teams implement basic Scrum practices.
- Everyone must pass the Nokia test.
- Management is held accountable at board level for impediments.

OpenView Venture Partners’ strategy is to invest in software companies that practise Scrum. They also incorporated Scrum in their own company, and were the first non-software company to:

- Investment partners practice Scrum
- Invest only in Agile projects
- Use only market leading, Industry standard process – this means Scrum and XP
- Ensure teams implement best Scrum practices

- Drive Scrum implementation at Board level. Ensure management is totally involved and understands Lean Product Development
- Many portfolio companies run senior management team, sales, marketing, client services, and support with Scrum.

All the investors started searching for the secret ingredient for their portfolio recipe, asking: what is the secret recipe for building hyper-productive teams? They summarized what Jeff had advised OpenView, i.e.

- First, implement basic Scrum practices and pass the Nokia test.
- Second, management needs to get totally involved, understand team velocity, and remove impediments.
- Third, basic XP engineering practices need to be implemented, namely test-first development (possibly pair programming and TDD) and continuous integration.
- Then they are ready for the fun stuff.

According to Scrum, hyper-productivity is defined as at least the Toyota level of performance. It used to take a non-agile company at least two years to reach a 200-240% improvement; a Scrum company can now reach a 300% improvement within three two-week sprints. The challenge is to bring teams to that optimum level quickly and then keep them consistently in the hyper-productive state. This is the essential truth of self-organized teams remaining hyper-productive.

(Courtesy InfoQ)

Courtesy (Jeff Sutherland Agile Seminar)

Following are successful examples of Shock Therapy:

1. MySpace in Beverly Hills.

2. JayWay in Sweden.

3. Pivotal Labs in San Francisco.

MySpace has been a very good example of successful agile projects in most studies done on Agile.

MySpace has several hundred developers:

– About 1/3 waterfall

– About 1/3 Scrum Butt with Project Managers

– About 1/3 pure Scrum

Scott Downey (owner at Rapid Scrum LLC), when coaching Agile practices and Scrum at MySpace, reportedly took teams to a hyper-productive state in a few weeks.

It has been recorded that the average time to reach 240% of the velocity of a waterfall team is 2.9 days per team member, where the team includes the Scrum Master and the Product Owner.

When a new team is sent for Scrum training at MySpace, Scott Downey runs a very strict and rigorous programme, and sets all the rules for effectively running the training and the projects. His rules remain in effect until the team has met three criteria:

- They are Hyper-Productive(>240% higher targeted value contribution)
- They have completed three successful Sprints consecutively
- They have identified a good business reason to change the rule

– The rules are roughly these:

- Everyone on the team will attend scrum training session. I conduct an extremely condensed Scrum at MySpace course in about four hours, and the entire team comes together to a session. Until everyone has been trained, we won’t begin our first Sprint.
- Sprints will be one week long.

The biggest and most crucial factor in delivering a successful product from a Scrum development team is commitment to the definition of done.

The definition of “Done” is this:

1. Feature complete

2. Code complete

3. No known defects

4. Approved by the Product Owner

5. Production ready

Initially the Scrum Master guides and helps the new team run the Daily Scrum meeting, guiding every move of the team until it becomes familiar with the best practices of a Scrum team.

Initially the Scrum Master also guides and monitors the estimation and discussion in the Sprint Planning meeting; as the team matures, it starts handling the sprint-planning activities in its own meetings. The Product Owner participates as a visitor and advisor, while the Scrum Master guides the development team in creating user stories. Stories are made to comply with the INVEST principles. In the Sprint Backlog meeting, the team’s commitment to the Product Backlog is read aloud and the team’s consent is taken again, after clearly explaining what “commit” does and does not mean, so there is no ambiguity about the product requirements and the definition of done. Once the team commits to the sprint work, the meeting ends.

The work should be completed in the right order, with regression testing and no defects. A lot of emphasis is placed on priority order rather than multi-tasking; this helps in completing commitments early rather than leaving incomplete items on the list. Every member of the Scrum team follows standard layouts for all the Scrum artifacts, namely sprint planning boards, user stories, story cards, burndown charts and velocity tracking.

Once the master Scrum Master feels the team is mature enough, he moves on to another new team, and a new Scrum Master is assigned to monitor the team’s Scrum development process. The main aim is to make the team a hyper-productive team.

As the team matures and becomes completely self-organized, team members start correcting each other, which generates a lot of positive energy in the team. Soon the team becomes very agile, active and focused.

The Cosmic Stopping Problem, otherwise known as the choice uncertainty principle.

Jeff Sutherland, in his paper “Agile Development: Lessons Learned from the First Scrum”, suggests that the most interesting effect of Scrum on Easel’s development environment was an observed “punctuated equilibrium” effect. This occurs in biological evolution when a species is stable for long periods of time and then undergoes a sudden jump in capability. During the long period of apparent stability, many internal changes in the organism are reconfigured that cannot be observed externally. When all the pieces are in place to allow a significant jump in functionality, external change occurs suddenly. A fully integrated component design environment leads to unexpected, rapid evolution of a software system with emergent, adaptive properties resembling the process of punctuated equilibrium observed in biological species. Sudden leaps in functionality resulted in earlier-than-expected delivery of software in the first Scrum.

A punctuated equilibrium – the equilibrium being the “Safety Zone” of working in a stable system for a while (e.g. during a Scrum Sprint when the Sprint Backlog does not shift within the sprint) punctuated by events that allow the chaos/shifting world outside to affect the system, and then return to the “Safety Zone” to have an opportunity for behavior that fits the new reality to emerge. This has been observed in nature as well as contributing to effective evolution.

This punctuated equilibrium is accompanied by the Choice Uncertainty Principle, also known as the “Cosmic Stopping Problem”. It is found everywhere, at every level of the world. A very good Scrum Master is able to handle this at every step of development; a good Scrum team does the following to avoid the problem:

- Don’t accept backlog that is not ready
- Minimize work in progress
- Stop the line when bad things happen and fix them so they can never happen again
- If it is not “Done”, put it back on the product backlog

Another major factor in Scrum’s advantage over waterfall is that collocation affects velocity, which affects the cost of the project. In a study done at PatientKeeper, when a project moved from a Scrum team to a waterfall team, the company incurred extra expense instead of a 30% saving: $2M of Scrum development at PatientKeeper cost $6M when outsourced to waterfall teams. So never outsource to waterfall teams; outsource only to Scrum teams.

A success path to Hyperproductivity is based on the Complex Adaptive System theory.

The essential architectural design concept encourages refactoring at a fine granularity, with regression testing and rigorous system testing, to build a product with zero defects. The team develops the software components in the right hierarchical order, optimizing the speed of delivering new product features. It works only on well-defined stories and removes requirements that are not ready, which would otherwise lead to the Cosmic Stopping Problem; it minimizes work in progress, avoids obstacles through self-organization, and finally applies the shock therapy that decreases the sprint count.

Finally, Scrum recommends that those who can do “pretty good Scrum” should go on to do great Scrum.

There are 3 types of companies, namely:

- Those that don’t want to see or hear impediments, suppress the Scrum Master, and prevent the process implementation. These companies always suffer and lose out.
- Those that talk about impediments but don’t fix them, while internal struggles go on. These companies suffer for a long time before getting better.
- Those that look for impediments and fix them immediately, avoiding internal conflicts. These are the successful companies.

The success of SCRUM-driven work and its positive impact on employees has pushed almost all non-software firms to move to SCRUM as well, as clearly stated in Forbes:

“The success of software development at firms like Salesforce.com [CRM], along similar customer-driven iterative methods in auto manufacture at firms like Toyota, has led to the spread of this different way of managing to related fields.

- The Quality Software Engineering group at IBM [IBM] is responsible for software development processes and practices across the company. As part of the effort to promulgate Scrum in developing software, an iterative process of working was adopted for doing change management.
- At the Chicago software firm Total Attorneys, iterative work patterns were so successful that they spread to the staff of call centers: small cross-functional teams work in cycles of three weeks.
- At the Danish software firm, Systematic, iterative methods have been spreading from software development to other parts of the firm.
- At the Swedish software firm Trifork, iterative methods have spread from software development to conference management.
- And OpenView Venture Partners, a Boston-based venture capital firm, has expanded client-driven iterations into consulting and finance."

The power of SCRUM is now enthusing other firms to join the mainstream of this new methodology of managing work.

The post HYPER PERFORMANCE SCRUM TEAMS. appeared first on Anagha Agile Systems.

]]>The post Algorithms FAQs Part 3. appeared first on Anagha Agile Systems.

]]>**A: **BinaryHeap.h

------------

#ifndef BINARY_HEAP_H_
#define BINARY_HEAP_H_

#include "dsexceptions.h"
#include "vector.h"

// BinaryHeap class
//
// CONSTRUCTION: with an optional capacity (that defaults to 100)
//
// ******************PUBLIC OPERATIONS*********************
// void insert( x )       --> Insert x
// deleteMin( minItem )   --> Remove (and optionally return) smallest item
// Comparable findMin( )  --> Return smallest item
// bool isEmpty( )        --> Return true if empty; else false
// bool isFull( )         --> Return true if full; else false
// void makeEmpty( )      --> Remove all items
// ******************ERRORS********************************
// Throws Underflow and Overflow as warranted

template <class Comparable>
class BinaryHeap
{
  public:
    explicit BinaryHeap( int capacity = 100 );

    bool isEmpty( ) const;
    bool isFull( ) const;
    const Comparable & findMin( ) const;

    void insert( const Comparable & x );
    void deleteMin( );
    void deleteMin( Comparable & minItem );
    void makeEmpty( );

  private:
    int currentSize;            // Number of elements in heap
    vector<Comparable> array;   // The heap array

    void buildHeap( );
    void percolateDown( int hole );
};

#endif

BinaryHeap.cpp

--------------

#include "BinaryHeap.h"

/**
 * Construct the binary heap.
 * capacity is the capacity of the binary heap.
 */
template <class Comparable>
BinaryHeap<Comparable>::BinaryHeap( int capacity )
  : array( capacity + 1 ), currentSize( 0 )
{
}

/**
 * Insert item x into the priority queue, maintaining heap order.
 * Duplicates are allowed.
 * Throw Overflow if container is full.
 */
template <class Comparable>
void BinaryHeap<Comparable>::insert( const Comparable & x )
{
    if( isFull( ) )
        throw Overflow( );

    // Percolate up
    int hole = ++currentSize;
    for( ; hole > 1 && x < array[ hole / 2 ]; hole /= 2 )
        array[ hole ] = array[ hole / 2 ];
    array[ hole ] = x;
}

/**
 * Find the smallest item in the priority queue.
 * Return the smallest item, or throw Underflow if empty.
 */
template <class Comparable>
const Comparable & BinaryHeap<Comparable>::findMin( ) const
{
    if( isEmpty( ) )
        throw Underflow( );
    return array[ 1 ];
}

/**
 * Remove the smallest item from the priority queue.
 * Throw Underflow if empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::deleteMin( )
{
    if( isEmpty( ) )
        throw Underflow( );

    array[ 1 ] = array[ currentSize-- ];
    percolateDown( 1 );
}

/**
 * Remove the smallest item from the priority queue
 * and place it in minItem. Throw Underflow if empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::deleteMin( Comparable & minItem )
{
    if( isEmpty( ) )
        throw Underflow( );

    minItem = array[ 1 ];
    array[ 1 ] = array[ currentSize-- ];
    percolateDown( 1 );
}

/**
 * Establish heap order property from an arbitrary
 * arrangement of items. Runs in linear time.
 */
template <class Comparable>
void BinaryHeap<Comparable>::buildHeap( )
{
    for( int i = currentSize / 2; i > 0; i-- )
        percolateDown( i );
}

/**
 * Test if the priority queue is logically empty.
 * Return true if empty, false otherwise.
 */
template <class Comparable>
bool BinaryHeap<Comparable>::isEmpty( ) const
{
    return currentSize == 0;
}

/**
 * Test if the priority queue is logically full.
 * Return true if full, false otherwise.
 */
template <class Comparable>
bool BinaryHeap<Comparable>::isFull( ) const
{
    return currentSize == array.size( ) - 1;
}

/**
 * Make the priority queue logically empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::makeEmpty( )
{
    currentSize = 0;
}

/**
 * Internal method to percolate down in the heap.
 * hole is the index at which the percolate begins.
 */
template <class Comparable>
void BinaryHeap<Comparable>::percolateDown( int hole )
{
    int child;
    Comparable tmp = array[ hole ];

    for( ; hole * 2 <= currentSize; hole = child )
    {
        child = hole * 2;
        if( child != currentSize && array[ child + 1 ] < array[ child ] )
            child++;
        if( array[ child ] < tmp )
            array[ hole ] = array[ child ];
        else
            break;
    }
    array[ hole ] = tmp;
}

TestBinaryHeap.cpp

------------------

#include <iostream>
#include "BinaryHeap.h"
#include "dsexceptions.h"

using namespace std;

// Test program
int main( )
{
    int numItems = 10000;
    BinaryHeap<int> h( numItems );
    int i = 37;
    int x;

    try
    {
        for( i = 37; i != 0; i = ( i + 37 ) % numItems )
            h.insert( i );
        for( i = 1; i < numItems; i++ )
        {
            h.deleteMin( x );
            if( x != i )
                cout << "Oops! " << i << endl;
        }
        for( i = 37; i != 0; i = ( i + 37 ) % numItems )
            h.insert( i );
        h.insert( 0 );
        h.insert( i = 999999 );  // Should overflow
    }
    catch( Overflow )
    {
        cout << "Overflow (expected)! " << i << endl;
    }

    return 0;
}

**Q: **Implement Binary Search Tree in C++?

**A: **BinarySearchTree.h

----------------------

#ifndef BINARY_SEARCH_TREE_H_
#define BINARY_SEARCH_TREE_H_

#include "dsexceptions.h"
#include <cstdlib>   // For NULL

// Binary node and forward declaration because g++ does
// not understand nested classes.
template <class Comparable>
class BinarySearchTree;

template <class Comparable>
class BinaryNode
{
    Comparable element;
    BinaryNode *left;
    BinaryNode *right;

    BinaryNode( const Comparable & theElement, BinaryNode *lt, BinaryNode *rt )
      : element( theElement ), left( lt ), right( rt ) { }

    friend class BinarySearchTree<Comparable>;
};

// BinarySearchTree class
//
// CONSTRUCTION: with ITEM_NOT_FOUND object used to signal failed finds
//
// ******************PUBLIC OPERATIONS*********************
// void insert( x )       --> Insert x
// void remove( x )       --> Remove x
// Comparable find( x )   --> Return item that matches x
// Comparable findMin( )  --> Return smallest item
// Comparable findMax( )  --> Return largest item
// boolean isEmpty( )     --> Return true if empty; else false
// void makeEmpty( )      --> Remove all items
// void printTree( )      --> Print tree in sorted order

template <class Comparable>
class BinarySearchTree
{
  public:
    explicit BinarySearchTree( const Comparable & notFound );
    BinarySearchTree( const BinarySearchTree & rhs );
    ~BinarySearchTree( );

    const Comparable & findMin( ) const;
    const Comparable & findMax( ) const;
    const Comparable & find( const Comparable & x ) const;
    bool isEmpty( ) const;
    void printTree( ) const;

    void makeEmpty( );
    void insert( const Comparable & x );
    void remove( const Comparable & x );

    const BinarySearchTree & operator=( const BinarySearchTree & rhs );

  private:
    BinaryNode<Comparable> *root;
    const Comparable ITEM_NOT_FOUND;

    const Comparable & elementAt( BinaryNode<Comparable> *t ) const;

    void insert( const Comparable & x, BinaryNode<Comparable> * & t ) const;
    void remove( const Comparable & x, BinaryNode<Comparable> * & t ) const;
    BinaryNode<Comparable> * findMin( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * findMax( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * find( const Comparable & x, BinaryNode<Comparable> *t ) const;
    void makeEmpty( BinaryNode<Comparable> * & t ) const;
    void printTree( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * clone( BinaryNode<Comparable> *t ) const;
};

#endif

BinarySearchTree.cpp

--------------------

#include "BinarySearchTree.h"
#include <iostream>

using namespace std;

/**
 * Implements an unbalanced binary search tree.
 * Note that all "matching" is based on the < method.
 */

/**
 * Construct the tree.
 */
template <class Comparable>
BinarySearchTree<Comparable>::BinarySearchTree( const Comparable & notFound ) :
  root( NULL ), ITEM_NOT_FOUND( notFound )
{
}

/**
 * Copy constructor.
 */
template <class Comparable>
BinarySearchTree<Comparable>::BinarySearchTree( const BinarySearchTree<Comparable> & rhs ) :
  root( NULL ), ITEM_NOT_FOUND( rhs.ITEM_NOT_FOUND )
{
    *this = rhs;
}

/**
 * Destructor for the tree.
 */
template <class Comparable>
BinarySearchTree<Comparable>::~BinarySearchTree( )
{
    makeEmpty( );
}

/**
 * Insert x into the tree; duplicates are ignored.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::insert( const Comparable & x )
{
    insert( x, root );
}

/**
 * Remove x from the tree. Nothing is done if x is not found.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::remove( const Comparable & x )
{
    remove( x, root );
}

/**
 * Find the smallest item in the tree.
 * Return smallest item or ITEM_NOT_FOUND if empty.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::findMin( ) const
{
    return elementAt( findMin( root ) );
}

/**
 * Find the largest item in the tree.
 * Return the largest item or ITEM_NOT_FOUND if empty.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::findMax( ) const
{
    return elementAt( findMax( root ) );
}

/**
 * Find item x in the tree.
 * Return the matching item or ITEM_NOT_FOUND if not found.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::find( const Comparable & x ) const
{
    return elementAt( find( x, root ) );
}

/**
 * Make the tree logically empty.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::makeEmpty( )
{
    makeEmpty( root );
}

/**
 * Test if the tree is logically empty.
 * Return true if empty, false otherwise.
 */
template <class Comparable>
bool BinarySearchTree<Comparable>::isEmpty( ) const
{
    return root == NULL;
}

/**
 * Print the tree contents in sorted order.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::printTree( ) const
{
    if( isEmpty( ) )
        cout << "Empty tree" << endl;
    else
        printTree( root );
}

/**
 * Deep copy.
 */
template <class Comparable>
const BinarySearchTree<Comparable> &
BinarySearchTree<Comparable>::operator=( const BinarySearchTree<Comparable> & rhs )
{
    if( this != &rhs )
    {
        makeEmpty( );
        root = clone( rhs.root );
    }
    return *this;
}

/**
 * Internal method to get element field in node t.
 * Return the element field or ITEM_NOT_FOUND if t is NULL.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::elementAt( BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return ITEM_NOT_FOUND;
    else
        return t->element;
}

/**
 * Internal method to insert into a subtree.
 * x is the item to insert.
 * t is the node that roots the tree.
 * Set the new root.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::insert( const Comparable & x, BinaryNode<Comparable> * & t ) const
{
    if( t == NULL )
        t = new BinaryNode<Comparable>( x, NULL, NULL );
    else if( x < t->element )
        insert( x, t->left );
    else if( t->element < x )
        insert( x, t->right );
    else
        ;  // Duplicate; do nothing
}

/**
 * Internal method to remove from a subtree.
 * x is the item to remove.
 * t is the node that roots the tree.
 * Set the new root.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::remove( const Comparable & x, BinaryNode<Comparable> * & t ) const
{
    if( t == NULL )
        return;  // Item not found; do nothing
    if( x < t->element )
        remove( x, t->left );
    else if( t->element < x )
        remove( x, t->right );
    else if( t->left != NULL && t->right != NULL )  // Two children
    {
        t->element = findMin( t->right )->element;
        remove( t->element, t->right );
    }
    else
    {
        BinaryNode<Comparable> *oldNode = t;
        t = ( t->left != NULL ) ? t->left : t->right;
        delete oldNode;
    }
}

/**
 * Internal method to find the smallest item in a subtree t.
 * Return node containing the smallest item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::findMin( BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return NULL;
    if( t->left == NULL )
        return t;
    return findMin( t->left );
}

/**
 * Internal method to find the largest item in a subtree t.
 * Return node containing the largest item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::findMax( BinaryNode<Comparable> *t ) const
{
    if( t != NULL )
        while( t->right != NULL )
            t = t->right;
    return t;
}

/**
 * Internal method to find an item in a subtree.
 * x is item to search for.
 * t is the node that roots the tree.
 * Return node containing the matched item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::find( const Comparable & x, BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return NULL;
    else if( x < t->element )
        return find( x, t->left );
    else if( t->element < x )
        return find( x, t->right );
    else
        return t;  // Match
}

/****** NONRECURSIVE VERSION*************************
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::find( const Comparable & x, BinaryNode<Comparable> *t ) const
{
    while( t != NULL )
        if( x < t->element )
            t = t->left;
        else if( t->element < x )
            t = t->right;
        else
            return t;  // Match
    return NULL;       // No match
}
*****************************************************/

/**
 * Internal method to make subtree empty.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::makeEmpty( BinaryNode<Comparable> * & t ) const
{
    if( t != NULL )
    {
        makeEmpty( t->left );
        makeEmpty( t->right );
        delete t;
    }
    t = NULL;
}

/**
 * Internal method to print a subtree rooted at t in sorted order.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::printTree( BinaryNode<Comparable> *t ) const
{
    if( t != NULL )
    {
        printTree( t->left );
        cout << t->element << endl;
        printTree( t->right );
    }
}

/**
 * Internal method to clone subtree.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::clone( BinaryNode<Comparable> * t ) const
{
    if( t == NULL )
        return NULL;
    else
        return new BinaryNode<Comparable>( t->element, clone( t->left ), clone( t->right ) );
}

TestBinarySearchTree.cpp

------------------------

#include <iostream>
#include "BinarySearchTree.h"

using namespace std;

// Test program
int main( )
{
    const int ITEM_NOT_FOUND = -9999;
    BinarySearchTree<int> t( ITEM_NOT_FOUND );
    int NUMS = 4000;
    const int GAP = 37;
    int i;

    cout << "Checking... (no more output means success)" << endl;

    for( i = GAP; i != 0; i = ( i + GAP ) % NUMS )
        t.insert( i );

    for( i = 1; i < NUMS; i += 2 )
        t.remove( i );

    if( NUMS < 40 )
        t.printTree( );
    if( t.findMin( ) != 2 || t.findMax( ) != NUMS - 2 )
        cout << "FindMin or FindMax error!" << endl;

    for( i = 2; i < NUMS; i += 2 )
        if( t.find( i ) != i )
            cout << "Find error1!" << endl;
    for( i = 1; i < NUMS; i += 2 )
        if( t.find( i ) != ITEM_NOT_FOUND )
            cout << "Find error2!" << endl;

    BinarySearchTree<int> t2( ITEM_NOT_FOUND );
    t2 = t;

    for( i = 2; i < NUMS; i += 2 )
        if( t2.find( i ) != i )
            cout << "Find error1!" << endl;
    for( i = 1; i < NUMS; i += 2 )
        if( t2.find( i ) != ITEM_NOT_FOUND )
            cout << "Find error2!" << endl;

    return 0;
}

Vector: The Vector ADT extends the notion of an array by storing a sequence of arbitrary objects. An element can be accessed, inserted or removed by specifying its rank (number of elements preceding it). An exception is thrown if an incorrect rank is specified (e.g., a negative rank).

The Stack ADT stores arbitrary objects Insertions and deletions follow the last-in first-out scheme.

The Queue ADT stores arbitrary objects. Insertions and deletions follow the first-in first-out scheme: insertions are at the rear of the queue and removals are at the front. Main queue operations:

enqueue(object): inserts an element at the end of the queue object

dequeue(): removes and returns the element at the front of the queue
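As a rough illustration of the queue operations just listed, here is a minimal TypeScript sketch; the ArrayQueue class name and its error message are my own, not from the text:

```typescript
// Minimal sketch of the Queue ADT described above (illustrative names).
class ArrayQueue<T> {
  private items: T[] = [];

  // enqueue(object): inserts an element at the rear of the queue
  enqueue(obj: T): void {
    this.items.push(obj);
  }

  // dequeue(): removes and returns the element at the front of the queue
  dequeue(): T {
    if (this.items.length === 0) {
      throw new Error("EmptyQueueException");
    }
    return this.items.shift() as T;
  }

  size(): number {
    return this.items.length;
  }
}

const q = new ArrayQueue<number>();
q.enqueue(1);
q.enqueue(2);
console.log(q.dequeue()); // prints 1: first in, first out
```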

A singly linked list is a concrete data structure consisting of a sequence of nodes

Each node stores

element

link to the next node

A doubly linked list provides a natural implementation of the List ADT Nodes implement Position and store:

element

link to the previous node

link to the next node
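The node structure described above (an element plus a link to the next node) can be sketched in TypeScript; ListNode and toArray are illustrative names, not from the text:

```typescript
// A singly linked list node: stores an element and a link to the next node.
class ListNode<T> {
  constructor(public element: T, public next: ListNode<T> | null = null) {}
}

// Walk the links from the head and collect the elements in order.
function toArray<T>(head: ListNode<T> | null): T[] {
  const out: T[] = [];
  for (let cur = head; cur !== null; cur = cur.next) {
    out.push(cur.element);
  }
  return out;
}

// Build the list 1 -> 2 -> 3.
const head = new ListNode(1, new ListNode(2, new ListNode(3)));
console.log(toArray(head)); // the elements in list order
```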

A tree is an abstract model of a hierarchical structure A tree consists of nodes with a parent-child relation

Root: node without parent (A)

Internal node: node with at least one child (A, B, C, F)

External node (a.k.a. leaf): node without children (E, I, J, K, G, H, D)

Ancestors of a node: parent, grandparent, grand-grandparent, etc.

Depth of a node: number of ancestors

Height of a tree: maximum depth of any node (3)

Descendant of a node: child, grandchild, grand-grandchild, etc.

Subtree: tree consisting of a node and its descendants

A binary tree is a tree with the following properties:

Each internal node has two children

The children of a node are an ordered pair

We call the children of an internal node left child and right child

Alternative recursive definition: a binary tree is either

a tree consisting of a single node,

or

a tree whose root has an ordered pair of children, each of which is a

binary tree

Notation

n  number of nodes

e  number of external nodes

i  number of internal nodes

h  height

Properties (of a proper binary tree):

e = i + 1

n = 2e − 1

h ≤ i

h ≤ (n − 1)/2

e ≤ 2^h

h ≥ log2 e

h ≥ log2 (n + 1) − 1

A priority queue stores a collection of items

An item is a pair (key, element)

Main methods of the Priority Queue ADT

insertItem(k, o) inserts an item with key k and element o

removeMin() removes the item with smallest key and returns its element.

A heap is a binary tree storing keys at its internal nodes and satisfying the following properties:

Heap-Order: for every internal node v other than the root, key(v) ≥ key(parent(v))

Complete Binary Tree: let h be the height of the heap

for i = 0, …, h − 1, there are 2^i nodes of depth i

at depth h − 1, the internal nodes are to the left of the external nodes

The last node of a heap is the rightmost internal node of depth h − 1

A hash function h maps keys of a given type to integers in a fixed interval [0, N − 1].

Example: h(x) = x mod N is a hash function for integer keys.

The integer h(x) is called the hash value of key x.

A hash table for a given key type consists of

Hash function h

Array (called table) of size N

When implementing a dictionary with a hash table, the goal is to store item (k, o) at index i = h(k).

A hash function is usually specified as the composition of two functions:

Hash code map: h1: keys → integers

Compression map: h2: integers → [0, N − 1]

The hash code map is applied first, and the compression map is applied next on the result, i.e., h(x) = h2(h1(x)).

The goal of the hash function is to "disperse" the keys in an apparently random way.
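The composition h(x) = h2(h1(x)) can be sketched in TypeScript for string keys; the polynomial constant 31 and the function names are illustrative assumptions, not from the text:

```typescript
// Hash code map h1: keys -> integers (a common polynomial accumulation).
function hashCodeMap(key: string): number {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) | 0; // "| 0" keeps it a 32-bit integer
  }
  return h;
}

// Compression map h2: integers -> [0, N - 1].
function compressionMap(code: number, N: number): number {
  return ((code % N) + N) % N; // handles negative codes
}

// h(x) = h2(h1(x)): hash code map first, compression map on the result.
function h(key: string, N: number): number {
  return compressionMap(hashCodeMap(key), N);
}
```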

The dictionary ADT models a searchable collection of key element items

The main operations of a dictionary are searching, inserting, and deleting items

Multiple items with the same key are allowed

Merge-sort on an input sequence S with n elements consists of three steps:

Divide: partition S into two sequences S1 and S2 of about n/2 elements each

Recur: recursively sort S1 and S2

Conquer: merge S1 and S2 into a unique sorted sequence

**Algorithm** mergeSort(S, C)

**Input** sequence S with n elements, comparator C

**Output** sequence S sorted according to C

if S.size() > 1

(S1, S2) ← partition(S, n/2)

mergeSort(S1, C)

mergeSort(S2, C)

S ← merge(S1, S2)
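The three merge-sort steps above can be transcribed into a short TypeScript sketch (numbers only, with a fixed ascending comparison instead of a comparator parameter):

```typescript
// Divide / Recur / Conquer, following the pseudocode above.
function mergeSort(s: number[]): number[] {
  if (s.length <= 1) return s;
  // Divide: partition S into two halves of about n/2 elements each
  const mid = Math.floor(s.length / 2);
  // Recur: recursively sort S1 and S2
  const s1 = mergeSort(s.slice(0, mid));
  const s2 = mergeSort(s.slice(mid));
  // Conquer: merge S1 and S2 into a unique sorted sequence
  const merged: number[] = [];
  let i = 0, j = 0;
  while (i < s1.length && j < s2.length) {
    merged.push(s1[i] <= s2[j] ? s1[i++] : s2[j++]);
  }
  return merged.concat(s1.slice(i), s2.slice(j));
}

console.log(mergeSort([5, 2, 4, 6, 1, 3])); // logs the sorted sequence
```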

Quick-sort is a randomized sorting algorithm based on the divide-and-conquer paradigm:

Divide: pick a random element x (called pivot) and partition S into

L, elements less than x

E, elements equal to x

G, elements greater than x

Recur: sort L and G

Conquer: join L, E and G
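A minimal TypeScript sketch of the L/E/G partition scheme described above (illustrative, not tuned for performance):

```typescript
// Quick-sort via Divide / Recur / Conquer with the L, E, G partition.
function quickSort(s: number[]): number[] {
  if (s.length <= 1) return s;
  // Divide: pick a random pivot x and partition S into L, E, G
  const x = s[Math.floor(Math.random() * s.length)];
  const L = s.filter(v => v < x);   // elements less than x
  const E = s.filter(v => v === x); // elements equal to x
  const G = s.filter(v => v > x);   // elements greater than x
  // Recur on L and G, then Conquer: join L, E and G
  return quickSort(L).concat(E, quickSort(G));
}
```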

We represent a set by the sorted sequence of its elements.

By specializing the auxiliary methods, the generic merge algorithm can be used to perform basic set operations:

union

intersection

subtraction

The running time of an operation on sets A and B should be at most O(nA + nB).

Quick-select is a **randomized **selection algorithm based on the prune-and-search paradigm:

Prune: pick a random element x (called pivot) and partition S into

L, elements less than x

E, elements equal to x

G, elements greater than x

Search: depending on k, either the answer is in E, or we need to recurse in either L or G
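The prune-and-search recursion can be sketched in TypeScript, assuming a 1-based k for "k-th smallest" (names and that convention are mine, not from the text):

```typescript
// Quick-select: return the k-th smallest element of s (1-based k).
function quickSelect(s: number[], k: number): number {
  // Prune: pick a random pivot x and partition S into L, E, G
  const x = s[Math.floor(Math.random() * s.length)];
  const L = s.filter(v => v < x);
  const E = s.filter(v => v === x);
  const G = s.filter(v => v > x);
  // Search: depending on k, the answer is in E, or we recurse in L or G
  if (k <= L.length) return quickSelect(L, k);
  if (k <= L.length + E.length) return x; // answer is in E
  return quickSelect(G, k - L.length - E.length);
}
```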

*Big O:*

*f(n) ∈ O(g(n)) if there exist c > 0 and an integer n0 > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.*

*Big Omega:*

*f(n) ∈ Ω(g(n)) if there exist c > 0 and an integer n0 > 0 such that f(n) ≥ c·g(n) for all n ≥ n0.*

*Big Theta:*

*f(n) ∈ Θ(g(n)) if there exist c1 > 0, c2 > 0 and an integer n0 > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.*

The post Algorithms FAQs Part 3. appeared first on Anagha Agile Systems.

]]>The post Typescript – A new web technology for WEB Creators. appeared first on Anagha Agile Systems.

]]>TypeScript is a superset of JavaScript that compiles to idiomatic JavaScript code; it is used for developing industrial-strength, scalable applications. All JavaScript code is TypeScript code, i.e. you can simply copy and paste JS code into a TS file and execute it without any errors.

TypeScript works on any browser, any host, any OS. TypeScript is aligned with emerging standards: classes, modules, arrow functions and the 'this' keyword implementation all align with ECMAScript 6 proposals.

TypeScript supports most of the module systems implemented in popular JS libraries. CommonJS and AMD are used in TypeScript; these module systems are compatible in any ECMA environment, and the developer can specify which ECMAScript version the TypeScript should be compiled to, i.e. ECMAScript 3, ECMAScript 5 or ECMAScript 6. All major JavaScript libraries work with TypeScript (Node, underscore, jQuery etc.) using declarations of the required type definitions.

TypeScript supports OOP concepts like private, public, static and inheritance. TypeScript enables scalable application development and excellent tooling using all popular IDEs like Visual Studio, WebStorm, Atom, Sublime Text etc.

TypeScript adds zero overhead in performance and execution, since static types completely disappear at runtime.

TypeScript has awesome language features: Interfaces, Classes and Modules enable clear contracts between components.

Now I briefly try to highlight and explain the special features of TypeScript:

In TypeScript, you get much the same types as you would expect in JavaScript, with a convenient enumeration type added to help things along. The basic types of TypeScript are:

**Boolean**

var isDone: boolean = false;

**Number**

var height: number = 6;

**String**

var name: string = "bob";

name = 'smith';

**Array**

var list:number[] = [1, 2, 3];

The second way uses a generic array type, Array<elemType>:

var list:Array<number> = [1, 2, 3];

**Enum**

Enum is a new addition in TypeScript, not available in JS.

An enum is a way of giving friendly names to sets of numeric values.

enum Color {Red, Green, Blue};

var c: Color = Color.Green;

**Any**

The ‘any’ type is a powerful way to work with existing JavaScript, allowing you

to gradually opt-in and opt-out of type-checking during compilation.

var notSure: any = 4;

notSure = "maybe a string instead";

notSure = false; // okay, definitely a boolean

Also you can use ‘any’ during mixed types, For example, you may have an array but the array has a mix of different types:

var list:any[] = [1, true, "free"];

list[1] = 100;

**Void**

'void' is the opposite of 'any', i.e., the absence of having any type at all. You may commonly see this as the return type of functions that do not return a value:

function warnUser(): void {

alert("This is my warning message");

}

TypeScript provides static typing through type annotations to enable type checking at compile-time. This is optional and

can be ignored to use the regular dynamic typing of JavaScript.

Example of static Type checking and emitting errors

Optionally static typed variables

class Greeter {

greeting: string;

constructor (message: string) {

this.greeting = message;

}

greet() : string {

return "Hello, " + this.greeting;

}

}

var greeter = new Greeter("Hi");

var result = greeter.greet();

If you modify the above code snippet, replacing string with number in the greet() method, you'll see red squiggles in the playground editor if you try this:

greet() : number {

return "Hello, " + this.greeting;

}

Type inference flows implicitly in TypeScript code

In TypeScript, there are several places where type inference is used to provide type information when there is no explicit type annotation. For example, in this code

var x = 3;

If you want TypeScript to determine the type of your variables correctly, do not do:

1st Code snippet:

var localVar;

// Initialization code

localVar = new MyClass();

The type of localVar will be interpreted as 'any' instead of MyClass. TypeScript will not complain about it, but you will not get any static type checking.

Instead, do:

2nd Code snippet:

var localVar : MyClass;

localVar = new MyClass();

TypeScript introduces the --noImplicitAny flag to disallow such programs. With it, the 1st code snippet will not compile.

Type inference also works in the opposite direction, known as 'contextual typing'.

Contextual typing applies in many cases. Common cases include arguments to function calls, right hand sides of assignments, type assertions, members of object and array literals, and return statements. The contextual type also acts as a candidate

type in best common type. For example:

function createZoo(): Animal[] {

return [new Rhino(), new Elephant(), new Snake()];

}

In this example, best common type has a set of four candidates: Animal, Rhino, Elephant, and Snake. Of these, Animal can be chosen by the best common type algorithm.

TypeScript declaration files have the extension .d.ts, where d stands for definition. Type definition files make it possible to enjoy the benefits of type checking, autocompletion, and member documentation. Any file that ends in .d.ts instead of .ts will never generate a corresponding compiled module, so this file extension can also be useful for normal TypeScript modules that contain only interface definitions.

Type information in TypeScript is automatically inferred since lib.d.ts, the main TypeScript definition file, is loaded implicitly. For detailed information on TypeScript definition files, visit http://definitelytyped.org/guides.html

TypeScript code works with existing JS libraries; TypeScript declaration files (*.d.ts) for most of the common JS libraries are maintained separately on DefinitelyTyped.org. TypeScript declaration files make it easy to work with existing libraries using the repository available on DefinitelyTyped.org.

TypeScript declaration files (*.d.ts) can be used for debugging and source mapping of TypeScript and JS files, and also for type referencing. If you want to document your function, provide the documentation in TypeScript declaration files. If you want to reference d.ts files, use the reference comment syntax.

You can create a function on an instance member of the class, on the prototype, or as a static function

Creating a function on the prototype is easy in TypeScript, which is great since you don't even have to know you are using the prototype.

// TypeScript

class Bike {

engine: string;

constructor (engine: string) {

this.engine = engine;

}

kickstart() {

return "Running " + this.engine;

}

}

Notice the kickstart function in the TypeScript code. Now look at the emitted JavaScript below, which defines that kickstart function on the prototype.

// JavaScript

var Bike = (function () {

function Bike(engine) {

this.engine = engine;

}

Bike.prototype.kickstart = function () {

return "Running " + this.engine;

};

return Bike;

})();

One of the coolest parts of TypeScript is how it allows you to define complex type definitions in the form of interfaces.

Interfaces are used to implement duck typing. Duck typing is a style of typing in which an object's methods and properties determine the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.

Here are 2 interfaces with the same shape but completely unrelated semantics:

interface Chicken {

id: number;

name: string;

}

interface JetPlane {

id: number;

name: string;

}

then doing the following is completely fine in TypeScript:

var chicken : Chicken = { id: 1, name: 'Thomas' };

var plane: JetPlane = { id: 2, name: 'F 35' };

chicken = plane;

TypeScript Interface uses ‘Duck typing’ or ‘Structural subtyping’


TypeScript classes are the basic unit of abstraction, very similar to C#/Java classes. In TypeScript a class is defined with the keyword "class" followed by the class name. TypeScript classes can contain constructors, fields, properties and functions.

TypeScript allows developers to define the scope of members inside classes as "public" or "private". It's important to note that the "public"/"private" keywords are only available in TypeScript.

When using the class keyword in TypeScript, you are actually creating two things with the same identifier:

A TypeScript interface containing all the instance methods and properties of the class; and

A JavaScript variable with a different (anonymous) constructor function type

You can create a class and even add fields, properties, constructors, and functions (static, prototype, instance based). The basic syntax for a class is as follows:

// TypeScript

class Car {

// Property (public by default)

engine: string;

// Constructor

// (accepts a value so you can initialize engine)

constructor(engine: string) {

this.engine = engine;

}

}

The property could be made private by prefixing the definition with the keyword private. Inside the constructor the engine property is referred to using the this keyword.

TypeScript extends keyword provides a simple and convenient way to inherit functionality from a base class (or extend an interface)

CLASS INHERITANCE:

// TypeScript

class Vehicle {

engine: string;

constructor(engine: string) {

this.engine = engine;

}

}

class Truck extends Vehicle {

bigTires: boolean;

constructor(engine: string, bigTires: boolean) {

super(engine);

this.bigTires = bigTires;

}

}

When inheritance is implemented, the compiler injects extra (__extends) code, beyond what the developer wrote, to implement the inheritance.

TypeScript emits JavaScript that helps extend the class definitions, using the __extends variable. This helps take care of some of the heavy lifting on the JavaScript side.

var __extends = this.__extends || function (d, b) {

function __() { this.constructor = d; }

__.prototype = b.prototype;

d.prototype = new __();

};

var Vehicle = (function () {

function Vehicle(engine) {

this.engine = engine;

}

return Vehicle;

})();

var Truck = (function (_super) {

__extends(Truck, _super);

function Truck(engine, bigTires) {

_super.call(this, engine);

this.bigTires = bigTires;

}

return Truck;

})(Vehicle);

One easy way to help maintain code re-use and organize your code is with modules. There are patterns such as the Revealing Module Pattern (RMP) in JavaScript that make

this quite simple, but the good news is that in TypeScript modules become even easier with the module keyword (from the proposed ECMAScript 6 spec).

However, it is important to know how your code will be treated if you ignore modules: you end up back with spaghetti.

Modules can provide functionality that is only visible inside the module, and they can provide functionality that is visible from the outside using the export keyword.

TypeScript categorizes modules into internal and external modules.

TypeScript has the ability to take advantage of a pair of JavaScript modularization standards – CommonJS and Asynchronous Module Definition (AMD).

These capabilities allow for projects to be organized in a manner similar to what a “mature,” traditional server-side OO language provides.

This is particularly useful for large, scalable web applications.

TypeScript Internal modules are TypeScript’s own approach to modularize your code.

TypeScript Internal modules can span across multiple files, effectively creating a namespace.

There is no runtime module loading mechanism, you have to load the modules using <script/> tags in your code.

Alternatively, you can compile all TypeScript files into one big JavaScript file that you include using a single <script/> tag.

External modules leverage a runtime module loading mechanism. You have the choice between CommonJS and AMD.

CommonJS is used by node.js, whereas RequireJS is a prominent implementation of AMD often used in browser environments.

When using external modules, files become modules. The modules can be structured using folders and sub folders.

Benefits of Modules:

Scoping of variables (out of global scope)

Code re-use

AMD or CommonJS support

Encapsulation

Don’t Repeat Yourself (DRY)

Easier for testing

You can make internal aspects of the module accessible outside of the module using the export keyword.

You can also extend internal modules, share them across files, and reference them using the triple-slash syntax:

///<reference path="shapes.ts"/>
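As a minimal sketch of an internal module (the Shapes module and its members are illustrative, not from the original post), a helper function stays private while an exported class is visible to consumers:

```typescript
module Shapes {
    // not exported: visible only inside the module
    function areaOfCircle(radius: number): number {
        return Math.PI * radius * radius;
    }

    // exported: visible outside the module as Shapes.Circle
    export class Circle {
        constructor(public radius: number) { }
        area(): number {
            return areaOfCircle(this.radius);
        }
    }
}

var circle = new Shapes.Circle(2);
console.log(circle.area()); // ~12.566
```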

TypeScript introduces lambda expressions, which is great in itself, and to make them work it also automates the that-equals-this pattern.

The TypeScript code:

var myFunction = f => { this.x = "x"; };

Is compiled into this piece of JavaScript, automatically creating the that-equals-this pattern:

var _this = this;
var myFunction = function (f) {
    _this.x = "x";
};

Arrow function expressions are a compact form of function expressions that lexically capture the 'this' of their enclosing scope.

You define an arrow function expression by omitting the function keyword and using the lambda syntax =>.

Consider a simple TypeScript function that calculates the interest earned on deposited funds:

var calculateInterest = function (amount, interestRate, duration) {
    return amount * interestRate * duration / 12;
}

Using an arrow function expression, we can define this function alternatively as follows:

var calculateInterest2 = (amount, interestRate, duration) => {
    return amount * interestRate * duration / 12;
}
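For example (the figures are my own), depositing 1000 at a 5% annual rate for 6 months yields 25:

```typescript
// arrow-function version of the interest calculation
var calculateInterest = (amount: number, interestRate: number, duration: number) =>
    amount * interestRate * duration / 12;

console.log(calculateInterest(1000, 0.05, 6)); // 25
```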

Standard JavaScript functions dynamically bind this depending on the execution context, whereas arrow functions preserve the this of the enclosing context.

This is a conscious design decision, as arrow functions in ECMAScript 6 are meant to address some of the problems associated with dynamically bound this (e.g. with the function invocation pattern).
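A small self-contained sketch (the Counter class is my own illustration) shows an arrow function preserving the enclosing this:

```typescript
class Counter {
    count: number = 0;

    start() {
        // arrow function: 'this' still refers to the Counter instance,
        // even though tick is invoked as a plain function
        var tick = () => { this.count++; };
        tick();
        tick();
    }
}

var counter = new Counter();
counter.start();
console.log(counter.count); // 2
```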

Being primarily an OOP developer, I find lambda expressions in TypeScript an extremely useful and compact way to express anonymous methods.

Bringing this syntax to JavaScript through TypeScript is definitely a win for me.

There are lots of awesome features in TypeScript; I will cover the rest of them in the next post.

Hope this post sparks a lot of interest in developing apps using TypeScript.

The post Typescript – A new web technology for WEB Creators. appeared first on Anagha Agile Systems.


SCRUM is an excellent framework for developing complex software products. Broadly speaking, SCRUM is a collection of project management ideas, time-boxed to deliver high quality working software. Specific to IT, SCRUM is a simple framework for effective team collaboration on complex projects. SCRUM provides a small set of rules that creates just enough structure for teams to be able to focus their innovation on solving the product requirements, which might otherwise be an insurmountable challenge.

The SCRUM framework is an iterative, incremental product development approach that allows interaction with the environment and accepts changes to project scope, technology, functionality, cost, and schedule whenever required. Controls are used to measure and manage the impact of change. SCRUM accepts that requirements are changing and unpredictable; the working product developed using SCRUM is the BEST possible software, factoring in cost, functionality, timing, and quality.

In 1986, Hirotaka Takeuchi and Ikujiro Nonaka published a paper, "The New New Product Development Game", in HBR, suggesting that Waterfall doesn't work and that a more dynamic development model is needed.

Later, Ken Schwaber and Jeff Sutherland, the co-creators of SCRUM, formulated the initial versions of the SCRUM development process and presented SCRUM as a formal process at OOPSLA '95.

The SCRUM framework consists of SCRUM Teams and their associated roles, events, artifacts, and rules. Each component within the framework serves a specific purpose and is essential to SCRUM’s success and usage.

SCRUM employs an iterative, incremental approach to optimize predictability and control risk. The three pillars that uphold every implementation of empirical process control are transparency, inspection, and adaptation:

Transparency: Significant aspects of the process must be visible to those responsible for the outcome.

Inspection: SCRUM users must frequently inspect SCRUM artifacts and progress toward a goal to detect undesirable variances.

Adaptation: If an inspector determines that one or more aspects of a process deviate outside acceptable limits, and that the resulting product will be unacceptable, the process or the material being processed must be adjusted. An adjustment must be made as soon as possible to minimize further deviation.

SCRUM prescribes four formal events for inspection and adaptation: Sprint Planning, the Daily SCRUM, the Sprint Review, and the Sprint Retrospective. More broadly, SCRUM is characterized by the following:

1. Self-organizing teams

2. Product progresses in a series of month-long sprints

3. Requirements are captured as items in a list called the product backlog

4. No specific engineering practices are prescribed

5. Generative rules are used to create an agile environment for delivering projects

6. It is one of the "agile processes"

- The team is committed to achieving its goal: high quality working software.
- The team organizes itself to meet its commitment.
- The team delivers, at each iterative cycle, the most valuable features to the product owner.
- The team adapts to changing needs suggested by feedback from the sprint review & retrospective meetings.
- The team's performance is transparent and can be measured in terms of the progress being made.
- The team and management communicate honestly about progress and risks.

The SCRUM way of working is based on the values of commitment, team spirit, self-respect, respect for others, trust, and courage. SCRUM never prescribes any methodology or engineering practice to teams for doing their work, but it expects the team to fulfill its commitment: a high quality product.

The SCRUM FRAMEWORK consists of three roles: the Team, the SCRUM Master, and the Product Owner.

The TEAM is collectively responsible for meeting the commitment of each sprint goal and of the project as a whole.

Team members are self-organizing, self-managing, motivated, and cross-functional. The team is fully dedicated to innovating and creating a working product from product backlog items, incrementally, within each sprint.

The SCRUM MASTER is a philosopher and guide to the team, helping the team implement the SCRUM process for product development. He removes any impediments or hurdles for the team, makes sure the process is moving, and institutionalizes the SCRUM process across the whole organization.

The PRODUCT OWNER is responsible for delivering the vision in a way that maximizes ROI and minimizes risk; he formulates the plan and converts it into the product backlog. He is responsible for communicating the progress and changes of the working product to all stakeholders of the product. The PO is also responsible for prioritizing the product functionality that needs to be worked on by the team.

All work is done in sprints; each sprint is an iteration of 2-4 consecutive weeks. Each sprint begins with a Sprint Planning meeting where the Product Owner and the Team collaborate on which product backlog items will be worked on in the coming sprint.

All the backlog items the team commits to are put into the sprint backlog, where each product backlog item is divided into multiple tasks; the tasks in the sprint backlog emerge as the sprint evolves. With sprint planning done, the sprint starts and the clock starts ticking towards the sprint time-box.

The team members need to attend the DAILY SCRUM meeting and keep the sprint backlog up to date. Everyone answers three questions in the DAILY SCRUM meeting:

- What did you do yesterday?
- What will you do today?
- Is anything in your way?

The SCRUM Master updates the Task board based on the briefing in the Daily SCRUM meeting. These are not status updates but commitments in front of peers.

At the end of the sprint, a Sprint Review meeting is held. The purpose of the Sprint Review is to demo the working software to the Product Owner. The Product Owner discusses with the stakeholders and the team a potential rearrangement of the Product Backlog based on the feedback. Stakeholders give feedback, identify any new functionality, and request additions to the Product Backlog for prioritization.

After this, the SCRUM Master holds a Sprint Retrospective meeting with the team. At this time-boxed meeting the SCRUM Master encourages the team to review the SCRUM process and make it more effective for the next sprint. To track remaining work, a Sprint Burndown chart is used; it reports the remaining estimated workload over the course of the sprint.

To summarize, SCRUM is the most popular framework among AGILE project management methodologies. It has **ROLES (Product Owner, Scrum Master, Team), CEREMONIES/EVENTS (Sprint Planning, Sprint Review, Sprint Retrospective, Daily Scrum meeting) and ARTIFACTS (Product Backlog, Sprint Backlog, Burndown charts).** In upcoming articles I will explain in detail all the ROLES, CEREMONIES/EVENTS and ARTIFACTS of the SCRUM FRAMEWORK. Until then, happy reading, and drop me any feedback or comments below.

The post AGILE Framework and Methodologies : Introduction to SCRUM appeared first on Anagha Agile Systems.


Agile empowers teams to continuously plan their releases to optimize value throughout the development life-cycle, so teams stay as competitive as possible in the marketplace. Development using an agile methodology preserves a product's critical market relevance and ensures a team's work doesn't wind up on a shelf, never released.

A small group of people got together in 2001 to discuss their thoughts about the failure of the traditional approach to the software development life-cycle and whether there is a better way. They came up with the Agile Manifesto, which describes four important values that are still relevant today; the use of the word agile in this context derives from the Agile Manifesto. It says, "we value:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.” Ever since then, the use of methods that support these values has become increasingly popular.

The twelve Agile principles are derived from the four key values in the Agile Manifesto.

1. To meet the customer's product expectations through iterative delivery of high quality, customer-friendly software.

2. To accept changes as they come from the customer, to increase the customer's competitive advantage in the market.

3. Deliver incremental working software to the customer in agreed time-boxes or periods.

4. The team is a mix of cross-functional professionals, i.e. both technical and business domain experts.

5. The team works in a highly motivated, supportive environment and enjoys full support & trust during the project life-cycle.

6. The most effective way for team members to convey information is regular, i.e. daily, face-to-face communication.

7. Working product is the only measure of progress.

8. Agile believes in constant iterative development; all the team members & sponsors need to sustain this constant development pace.

9. Continuous focus on quality & design enhancements improves the effectiveness & usability of the product being developed.

10. Simplicity, the art of maximizing the amount of work not done, is essential for continuing product development.

11. Only the most motivated & highly disciplined self-organizing teams can innovate the best designs and specifications for the product.

12. The team effectively adapts itself to the ever-changing needs of the project & product requirements.

The real goal of any business is quality working software, and the way to get there is by doing all the things the Agile principles ask us to do, through a continual process of learning.

In the next article of the Agile Framework and Methodologies series I will discuss the finer details of the SCRUM methodology.

The post Agile Framework and Methodologies: Principles and Values of Agile. appeared first on Anagha Agile Systems.
