Category: Programming

AI

Predictive Analytics

2017 has seen the role of big data analytics grow and its applications diversify across business domains, ranging from retail to governance. Analytics has made it possible to turn unstructured data into usable insights. As businesses continue to take leaps of growth with revolutionary data analytics, this growing and expanding vertical is expected to leave an indelible mark in 2018 too.

Nowadays, the amount of data being generated and created in different organizations is increasing drastically. This flood of data has given rise to several areas within artificial intelligence and data science.

Businesses that don’t effectively use their data are predicted to lose $1.2 trillion to their competitors every year by 2020. By that time, experts predict a 4,300% increase in annual data production.

In order to stay competitive, companies need to find a way to leverage data into actionable strategies. Data science and artificial intelligence are the key to maximizing data utilization.

Predictive Analytics is among the most useful applications of data science.

Using it allows executives to predict upcoming challenges, identify opportunities for growth, and optimize their internal operations.

To understand thoroughly the important role analytics plays in technology, we will start with the basics.

What is Predictive Analytics?

According to Wikipedia, “Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events.”

 

Predictive analytics is the branch of advanced analytics that is used to make predictions about unknown future events. It draws on techniques from data mining, statistics, modelling, machine learning, and artificial intelligence to analyse current data and make predictions about the future. It brings together management, information technology, and business-process modelling through a range of data mining and analytical techniques. The patterns found in historical and transactional data can be used to identify risks and opportunities for the future. Predictive analytics models capture relationships among many factors to assess the risk associated with a particular set of conditions and assign a score, or weightage. By successfully applying predictive analytics, businesses can effectively interpret big data and turn it into opportunities.

Predictive analytics can also be defined as the use of techniques, tools and technologies that use data to find models – models that can anticipate outcomes with a significant probability of accuracy.

Data scientists explore data, formulate hypotheses, and use algorithms to find predictive models.

The six steps of predictive analytics are:
1. Understand and collect data
2. Prepare and clean data
3. Create a model using statistical and machine learning algorithms
4. Evaluate the model to make sure it will work
5. Deploy the model and use it in applications
6. Monitor the model and measure its effectiveness in the real world

Predictive analytics can be further categorized as –

  1. Predictive Modelling – What will happen next, if…?
  2. Root Cause Analysis – Why did this actually happen?
  3. Data Mining – Identifying correlated data.
  4. Forecasting – What if the existing trends continue?
  5. Monte Carlo Simulation – What could happen?
  6. Pattern Identification and Alerts – When should an action be invoked to correct a process?

Below are some of the important reasons organisations use predictive analytics. Organizations are turning to predictive analytics to help solve difficult problems and uncover new opportunities. Common uses include:

Detecting fraud. Combining multiple analytics methods can improve pattern detection and prevent criminal behavior. As cybersecurity becomes a growing concern, high-performance behavioral analytics examines all actions on a network in real time to spot abnormalities that may indicate fraud, zero-day vulnerabilities and advanced persistent threats.

Optimizing marketing campaigns. Predictive analytics are used to determine customer responses or purchases, as well as promote cross-sell opportunities. Predictive models help businesses attract, retain and grow their most profitable customers.

Improving operations. Many companies use predictive models to forecast inventory and manage resources. Airlines use predictive analytics to set ticket prices. Hotels try to predict the number of guests for any given night to maximize occupancy and increase revenue. Predictive analytics enables organizations to function more efficiently.

Reducing risk. Credit scores are used to assess a buyer’s likelihood of default for purchases and are a well-known example of predictive analytics. A credit score is a number generated by a predictive model that incorporates all data relevant to a person’s creditworthiness. Other risk-related uses include insurance claims and collections.

 

The types of Modeling techniques used in Predictive Analytics
1. Classifiers: An algorithm that maps the input data to a specific category. Classifiers are used in the predictive data classification process, which has two stages: the learning stage and the prediction stage.

This supervised algorithm categorises structured or unstructured data into different sections by tagging each record with a unique classification tag attribute. Classification is performed after the learning process. The goal is to teach your model to extract and discover hidden relationships and rules — the classification rules — from training data. The model does so by employing a classification algorithm.

The prediction stage that follows the learning stage consists of having the model predict class tag attributes or numerical values for data it has not seen before (that is, test data). The main goal of a classification problem is to identify the category/class under which new data will fall.

The following are the steps involved in building a classification model (a short scikit-learn sketch follows the steps):
i. Initialise the classifier to be used.
ii. Train the classifier: all classifiers in scikit-learn use a fit(X, y) method to fit the model (training) for the given train data X and train labels y.
iii. Predict the target: given an unlabelled observation X, predict(X) returns the predicted label y.
iv. Evaluate the classifier model.
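
As a concrete illustration of steps i–iv, here is a minimal scikit-learn sketch. The Iris dataset and the choice of LogisticRegression are assumptions made purely for illustration; any classifier from the list below would follow the same fit/predict pattern.

# A minimal sketch of the four steps above, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                     # example labelled data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = LogisticRegression(max_iter=1000)               # i.  initialise the classifier
clf.fit(X_train, y_train)                             # ii. train: fit(X, y)
y_pred = clf.predict(X_test)                          # iii. predict the target labels
print("Accuracy:", accuracy_score(y_test, y_pred))    # iv. evaluate the model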

Some of the examples of Classification Algorithms :
1. Logistic Regression
2. Naive Bayes
3. Stochastic Gradient Descent
4. K-Nearest Neighbours
5. Decision Tree
6. Random Forest
7. Support Vector Machine

2. Recommenders:
A recommendation system (or recommender) works in well-defined, logical phases, viz. data collection, ratings, and filtering. These phases are described below.
• A recommender system helps match users with items.
• It relies on implicit or explicit user feedback for item suggestions.
• Different recommender system designs are based on the availability of data or the content/context of the data.

Recommendation allows you to make recommendations (similarly to Association Rules or Market Basket Analysis) by generating rules (for example, purchasing Product A leads to purchasing Product B). Recommendation uses the link analysis technique. This technique is optimized to work on large volumes of transactions. Recommendation triggers all the existing rules in a projected graph whose antecedent is a neighbor of the given user in the bipartite graph. Recommendation provides a specialized workflow to make it easy to obtain a set of recommendations for a given customer.

Data Collection:
How does the recommendation system capture the details? If the user has logged in, the details are extracted either from an HTTP session or from cookies. If the recommendation system depends only on cookies, the data is available only while the user is using the same terminal. Events are fired in almost every case — a user liking a product, adding it to a cart, or purchasing it — and that is how user details are stored. But that is just one part of what recommenders do.

Ratings
Ratings are important in the sense that they tell you what a user feels about a product. User’s feelings about a product can be reflected to an extent in the actions he or she takes such as likes, adding to shopping cart, purchasing or just clicking. Recommendation systems can assign implicit ratings based on user actions.

Filtering
Filtering means filtering products based on ratings and other user data. Recommendation systems use three types of filtering: collaborative, user-based and a hybrid approach. In collaborative filtering, a comparison of users’ choices is done and recommendations given. For example, if user X likes products A, B, C, and D and user Y likes products A, B, C, D and E, then it is likely that user X will be recommended product E because there are a lot of similarities between users X and Y as far as choice of products is concerned.
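
The user X / user Y example above can be captured in a few lines of code. Below is a minimal collaborative-filtering sketch; the user–item data, the Jaccard similarity measure and the function names are all assumptions made for illustration.

# Recommend items liked by the most similar user but not yet seen by the target user.
likes = {
    "X": {"A", "B", "C", "D"},
    "Y": {"A", "B", "C", "D", "E"},
    "Z": {"F", "G"},
}

def jaccard(a, b):
    # Similarity between two users' liked-item sets.
    return len(a & b) / len(a | b)

def recommend(user):
    others = [u for u in likes if u != user]
    most_similar = max(others, key=lambda u: jaccard(likes[user], likes[u]))
    return likes[most_similar] - likes[user]

print(recommend("X"))   # {'E'} -- Y is the most similar user, so E is recommended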

Several reputed brands, such as the large social media ecosystems, use this model to provide effective and relevant recommendations to the users consuming their services. In user-based filtering, the user’s browsing history, likes and ratings are taken into account before providing recommendations. Many companies also use a hybrid approach; Netflix is known to use one.

While big data and Recommendation engines have already proved an extremely useful combination for big corporations, it raises a question of whether companies with smaller budgets can afford such investments. Powerful media recommendation engines can be built for anything from movies and videos to music, books, and products – think Netflix, Pandora, or Amazon.

3. Clusters
Using unsupervised techniques like clustering, we can seek to understand the relationships between the variables, or between the observations, by determining whether observations fall into relatively distinct groups. In the clustering case, the data is categorised using unsupervised learning — we don’t have response variables telling us, for example, whether a customer is a frequent shopper or not. Hence, we can attempt to cluster the customers on the basis of the variables in order to identify distinct customer groups. Common types of unsupervised statistical learning include k-means clustering, hierarchical clustering, and principal component analysis.

Clustering is an unsupervised data mining technique where the records in a data set are organised into different logical groupings. The groupings are in such a way that records inside the same group are more similar than records outside the group. Clustering has a wide variety of applications ranging from market segmentation to customer segmentation, electoral grouping, web analytics, and outlier detection.

Clustering is also used as a data compression technique and data preprocessing technique for supervised data mining tasks. Many different data mining approaches are available to cluster the data and are developed based on proximity between the records, density in the data set, or novel application of neural networks.

Clustering can help us explore the dataset and separate cases into groups representing similar traits or characteristics. Each group could be a potential candidate for a category/class. Clustering is used for exploratory data analytics, i.e., as unsupervised learning, rather than for confirmatory analytics or for predicting specific outcomes. Several interesting case studies, including Divorce and Consequences on Young Adults, Paediatric Trauma, and Youth Development, demonstrate hierarchical clustering.
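
As a small, concrete illustration of the customer-grouping idea above, here is a minimal k-means sketch with scikit-learn. The toy feature matrix and the choice of three clusters are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

# Toy "customer" features: [monthly visits, average basket value]
X = np.array([[2, 10], [3, 12], [2, 11],      # occasional shoppers, small baskets
              [20, 15], [22, 14], [19, 16],   # frequent shoppers, small baskets
              [5, 90], [6, 85], [4, 95]])     # rare shoppers, large baskets

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)        # cluster assignment for each customer
print(labels)                         # e.g. [0 0 0 1 1 1 2 2 2] (label numbering may vary)
print(kmeans.cluster_centers_)        # the three group centroids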

4. Numerical / time series forecasting: A time series is a sequence of measurements over time, usually obtained at equally spaced intervals.

Any metric that is measured over regular time intervals forms a time series. Analysis of time series is commercially important because of industrial need and relevance, especially with respect to forecasting (demand, sales, supply, etc.).

A time series is an ordered sequence of observations of a variable or captured object at equally spaced time intervals — anything observed sequentially at a regular interval such as hourly, daily, weekly, monthly or quarterly. Time series data is important when you are predicting something that changes over time using past data. In time series analysis, the goal is to estimate the future value using the behaviour of the past data.

There are many statistical techniques available for time series forecasting; however, we have found a few effective ones, listed below (a small sketch of the first two follows the list):
Techniques of Forecasting:
1. Simple Moving Average (SMA)
2. Simple Exponential Smoothing (SES)
3. Autoregressive Integrated Moving Average (ARIMA)
4. Neural Networks (NN)
5. Croston's method
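
Below is a minimal sketch of the first two techniques, applied to an invented monthly series. The data values, the three-month window and the smoothing factor alpha are assumptions for illustration; SES uses the standard recursion s_t = alpha * y_t + (1 - alpha) * s_(t-1).

import pandas as pd

# Toy monthly demand series (illustrative values only)
y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119],
              index=pd.date_range("2017-01-01", periods=10, freq="MS"))

# 1. Simple Moving Average: mean of the last 3 observations
sma = y.rolling(window=3).mean()

# 2. Simple Exponential Smoothing: s_t = alpha*y_t + (1 - alpha)*s_(t-1)
def ses(series, alpha=0.3):
    smoothed = [series.iloc[0]]
    for value in series.iloc[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return pd.Series(smoothed, index=series.index)

print(sma.tail(3))       # smoothed level of the most recent months
print(ses(y).tail(3))    # the last smoothed value serves as the one-step-ahead forecast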

Components of a Time Series
• Secular Trend
  – Linear
  – Nonlinear
• Cyclical Variation
  – Rises and falls over periods longer than one year
• Seasonal Variation
  – Patterns of change within a year, typically repeating themselves
• Residual Variation

In the additive decomposition, a time series Yt is the sum of these components:

Yt = Tt + Ct + St + Rt

where Tt is the trend, Ct the cyclical component, St the seasonal component and Rt the residual.

Time series data mining combines traditional data mining and forecasting techniques. Data mining techniques such as sampling, clustering and decision trees are applied to data collected over time with the goal of improving predictions.

Before going deeper into AI and machine learning, let us look at the algorithms most commonly used in predictive analytics.

Some of the popular Predictive Algorithms are given below:

Predictive Algorithms:
1. K-means Clustering
2. Association rules
3. Boosting trees
4. CHAID

5. Cluster Analysis
6. Feature Selection
7. Independent Component Analysis
8. Kohonen Networks (SOFM)
9. Neural Networks

10. Social network analysis (SNA)
11. Random Forest (Decision Trees)
12. MARS (multivariate adaptive regression splines)
13. Linear and logistic regression
14. Naive Bayesian classifiers

15. Optimal binning
16. Partial Least Squares
17. Response Optimisation
18. Root cause analysis.

In my next blog I will explain each of the predictive analytics algorithms mentioned above, starting with K-means clustering. I hope you liked the article.

Please let me know in the comments what more you would like to see in the upcoming articles on Predictive Analytics. Happy reading!

 

 

AI

QUANTUM COMPUTING BOOSTS DEEP LEARNING TO TOP LEVEL

 

“This is the beginning of the quantum age of computing and the latest advancement is towards building a universal quantum computer.

“A universal quantum computer, once built, will represent one of the greatest milestones in the history of information technology and has the potential to solve certain problems we couldn’t solve, and will never be able to solve, with today’s classical computers.”

Quantum computing is likely to have the biggest impact on industries that are data-rich and time-sensitive, and machine learning is a major application for it.

A quantum computer has quantum bits, or qubits, which work in a particularly intriguing way. Where a bit can store either a zero or a one, a qubit can store a zero, a one, both zero and one, or an infinite number of values in between — and be in multiple states (store multiple values) at the same time! If that sounds confusing, think back to light being a particle and a wave at the same time, Schrödinger’s cat being alive and dead, or a car being a bicycle and a bus.

A quantum computer exponentially expands the vocabulary of binary code words by using two spooky principles of quantum physics, namely ‘entanglement’ and ‘superposition’. Qubits can store a 0, a 1, or an arbitrary combination of 0 and 1 at the same time. Multiple qubits can be made in superposition states that have strictly no classical analogue, but constitute legitimate digital code in a quantum computer.

In a quantum computer, each quantum bit of information–or qubit–can therefore be unfixed, a mere probability; this in turn means that in some mysterious way, a qubit can have the value of one or zero simultaneously, a phenomenon called superposition.

And just as a quantum computer can store multiple values at once, so it can process them simultaneously. Instead of working in series, it can work in parallel, doing multiple operations at once.

In theory, a quantum computer could solve in less than a minute problems that it would take a classical computer millennia to solve.

To date, most quantum computers have been more or less successful science experiments. None have harnessed more than 12 qubits, and the problems the machines have solved have been trivial. Quantum computers have been complicated, finicky machines, employing delicate lasers, vacuum pumps, and other exotic machinery to shepherd their qubits.

The world’s fastest supercomputer, China’s Sunway TaihuLight, runs at 93 petaflops (93 quadrillion flops, or around 10^17) – but it relies on 10 million processing cores and uses massive amounts of energy.

In comparison, even a small 30-qubit universal quantum computer could, theoretically, run at the equivalent of a classical computer operating at 10 teraflops (10 trillion flops, or around 10^13).

A QUANTUM COMPUTER IS “NOT JUST A ‘FASTER’ COMPUTER, BUT THEY ARE THE EQUIVALENT OF A JET PLANE TO A BICYCLE.”

IBM says, “With a quantum computer built of just 50 qubits, none of today’s TOP500 supercomputers could successfully emulate it, reflecting the tremendous potential of this technology.”

A screenshot of the author attempting quantum computation on IBM’s platform. (Courtesy: Silicon)

 

 

 

 

QUANTUM COMPUTING TIMELINE

The significant advance, by a team at the University of New South Wales (UNSW) in Sydney, appears today in the international journal Nature.

“What we have is a game changer,” said team leader Andrew Dzurak, Scientia Professor and Director of the Australian National Fabrication Facility at UNSW.

“We’ve demonstrated a two-qubit logic gate – the central building block of a quantum computer – and, significantly, done it in silicon. Because we use essentially the same device technology as existing computer chips, we believe it will be much easier to manufacture a full-scale processor chip than for any of the leading designs, which rely on more exotic technologies.

“This makes the building of a quantum computer much more feasible, since it is based on the same manufacturing technology as today’s computer industry,” he added.

 

The benefits of quantum computing

Quantum computers could be used for discovering new drugs, securing the internet, modeling the economy, or potentially even building far more powerful artificial intelligence systems—all sorts of exceedingly complicated tasks.

A functional quantum computer will provide much faster computation in a number of key areas, including: searching large databases, solving complicated sets of equations, and modelling atomic systems such as biological molecules and drugs. This means they’ll be enormously useful for finance and healthcare industries, and for government, security and defence organisations.

For example, they could be used to identify and develop new medicines by greatly accelerating the computer-aided design of pharmaceutical compounds (and minimizing lengthy trial and error testing); develop new, lighter and stronger materials spanning consumer electronics to aircraft; and achieve much faster information searching through large databases.

Functional quantum computers will also open the door for new types of computational applications and solutions that are probably too premature to even conceive.

The fusion of quantum computing and machine learning has become a booming research area. Can it possibly live up to its high expectations?

“These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

A recent Fortune essay states, “Companies like Microsoft, Google and IBM are making rapid breakthroughs that could make quantum computing viable in less than 10 years.”

Very soon you will find hardware and software with fundamentally new integrated circuits that store and process quantum information. Many important computational problems will only be solved by building quantum computers. Quantum computing has been picking up momentum, and many startups and scholars are discussing quantum machine learning. A basic knowledge of quantum two-level computation ought to be acquired.

I am sure that Quantum Information and Quantum Computation will play a crucial role in defining where the course of our technology is headed.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

The Era of Quantum Computing Is Here.

Generics

Algorithms FAQs Part 1.

The following is a list of the most important FAQs on the algorithms and data structures used in most programming languages. To improve their coding skills, every programmer should know how to implement algorithms and data structures in their applications.

1. What is an algorithm?
A sequence of steps that provides the solution to a problem is called an algorithm.

 

2. What are the properties of an algorithm?  

An algorithm must possess the following properties:
a. It should get input data.
b. It should produce output.
c. The algorithm should terminate.
d. Algorithm should be clear to understand.
e. It should be easy to perform.

3. What are the types of algorithms?
Algorithms are of two types: iterative and recursive algorithms.

4. Define iterative algorithms.
Algorithms which use loops and conditions to solve the problem are called iterative algorithms.

5. Define recursive algorithms.
Algorithms which perform the recursive steps in a divide and conquer method are called recursive algorithms.

6. What is an array?
An array is a sequential collection of data elements of the same kind.

7. What is a data structure?

A data structure is a way of organizing data that considers not only the items stored, but also their relationships to each other. Advance knowledge of the relationships between data items allows the design of efficient algorithms for the manipulation of data.

8. Name the areas of application of data structures.

The following are the areas of application of data structures:

a. Compiler design
b. Operating system
c. Statistical analysis package
d. DBMS
e. Numerical analysis
f. Simulation
g. Artificial Intelligence
h. Graphics

9. What are the major data structures used in the following areas: Network data model, Hierarchical data model, and RDBMS?

The major data structures used in the above areas are:
Network data model – Graph
Hierarchical data model – Trees
RDBMS – Array

10. What are the types of data structures?
Data structures are of two types: Linear and non linear data structures.

11. What is a linear data structure?
If the elements of a data structure are stored sequentially, then it is a linear data structure.

12. What is non linear data structure?
If the elements of a data structure are not stored in sequential order, then it is a non linear data structure.

13. What are the types of array operations?

The following are the operations which can be performed on an array:
Insertion, Deletion, Search, Sorting, Merging, Reversing and Traversal.

14. What is a matrix?
An array of two dimensions is called a matrix.

15. What are the types of matrix operations?

The following are the types of matrix operations:
Addition, multiplication, transposition, finding determinant of square matrix, subtraction, checking whether it is singular matrix or not etc.

16.  What is the condition to be checked for the multiplication of two matrices?
If two matrices are to be multiplied, the number of columns of the first matrix should be equal to the number of rows of the second matrix.

 

17.  Write the syntax for multiplication of matrices.
Syntax:
/* result (r1 x c2) accumulates the product of a (r1 x c1) and b (c1 x c2) */
for (var1 = 0; var1 < r1; var1++)
{
    for (var2 = 0; var2 < c2; var2++)
    {
        for (var3 = 0; var3 < c1; var3++)
        {
            result[var1][var2] += a[var1][var3] * b[var3][var2];
        }
    }
}

 

18.  What is a string?
A sequential array of characters is called a string.

19.  What is the use of the terminating null character?
The null character is used to mark the end of a string.

20.  What is an empty string?
A string with zero characters is called an empty string.

21.  What are the operations that can be performed on a string?
The following are the operations that can be performed on a string:
finding the length of string, copying string, string comparison, string concatenation, finding substrings etc.

22.  What is Brute Force algorithm?
An algorithm that searches the contents of an array by comparing each element in turn is called a brute force algorithm.

23.  What are the limitations of arrays?
The following are the limitations of arrays:
Arrays are of fixed size.
Data elements are stored in continuous memory locations which may not be available always.
Adding and removing of elements is problematic because of shifting the locations.

24.  How can you overcome the limitations of arrays?
Limitations of arrays can be solved by using the linked list.

25.  What is a linked list?
A linked list is a data structure which stores data elements of the same kind, but not in contiguous memory locations, and whose size is not fixed. The nodes of a linked list are related logically.

26.  What is the difference between an array and a linked list?
The size of an array is fixed, whereas the size of a linked list is variable. In an array the data elements are stored in contiguous memory locations, but in a linked list they are stored in non-contiguous locations. Addition and removal of data is easy in a linked list, whereas in arrays it is complicated.

27.  What is a node?
The data element of a linked list is called a node.

28.  What does node consist of?
A node consists of two fields:
a data field to store the element, and a link field to store the address of the next node.

29.  Write the syntax of node creation?

Syntax:
struct node
{
    int data;            // data field (any data type may be used here)
    struct node *ptr;    // pointer to the next (link) node
} temp;

 

30.  Write the syntax for pointing to the next node?
Syntax:
node->ptr = node1;

32.  What is a data structure? What are the types of data structures? Briefly explain them

The scheme of organizing related information is known as ‘data structure’. The types of data structure are:

Lists: A group of similar items with connectivity to the previous or/and next data items.
Arrays: A set of homogeneous values
Records: A set of fields, where each field consists of data belonging to one data type.
Trees: A data structure where the data is organized in a hierarchical structure. This type of data structure follows the sorted order of insertion, deletion and modification of data items.
Tables: Data is persisted in the form of rows and columns. These are similar to records, where the result or manipulation of data is reflected for the whole table.

33.  Define a linear and non linear data structure.

Linear data structure: A linear data structure traverses the data elements sequentially, in which only one data element can directly be reached. Ex: Arrays, Linked Lists

Non-Linear data structure: Every data item is attached to several other data items in a way that is specific for reflecting relationships. The data items are not arranged in a sequential structure. Ex: Trees, Graphs

34.  Define in brief an array. What are the types of array operations?

An array is a set of homogeneous elements. Every element is referred to by an index.

Arrays store data in the main memory of the computer system for the lifetime of the application, so that the elements can be accessed at any time. The operations are:

– Adding elements
– Sorting elements
– Searching elements
– Re-arranging the elements
– Performing matrix operations
– Pre-fix and post-fix operations

35.  What is a matrix? Explain its uses with an example

A matrix is a representation of rows and columns used to persist homogeneous data. It can also be called a two-dimensional array.

Uses:
– To represent class hierarchy using Boolean square matrix
– For data encryption and decryption
– To represent  traffic flow and plumbing in a network
– To implement graph theory of node representation

36.  Define an algorithm. What are the properties of an algorithm? What are the types of algorithms?

 

Algorithm: A step-by-step process to get the solution to a well-defined problem.
Properties of an algorithm:

– Should be unambiguous, precise and lucid
– Should provide the correct solutions
– Should have an end point
– The output statements should follow input, process instructions
– The initial statements should be of input statements
– Should have finite number of steps
– Every statement should be definitive

37.  Types of algorithms:

– Simple recursive algorithms. Ex: Searching an element in a list
– Backtracking algorithms Ex: Depth-first recursive search in a tree
– Divide and conquer algorithms. Ex: Quick sort and merge sort
– Dynamic programming algorithms. Ex: Generation of Fibonacci series
– Greedy algorithms Ex: Counting currency
– Branch and bound algorithms. Ex: Travelling salesman (visiting each city once and minimize the total distance travelled)
– Brute force algorithms. Ex: Finding the best path for a travelling salesman
– Randomized algorithms. Ex: Using a random number to choose a pivot in quick sort.

38.  What is an iterative algorithm?

An iterative algorithm solves a problem by computing successive approximations to the solution, starting from an initial guess. The result of the repeated calculations is a sequence of approximate values for the quantities of interest.

39.  What is a recursive algorithm?

A recursive algorithm is a method of simplification that divides the problem into sub-problems of the same nature. The result of one recursion is the input for the next recursion, and the repetition proceeds in a self-similar fashion. The algorithm calls itself with smaller input values and obtains the results by simply performing the operations on these smaller values. Generation of factorials and of the Fibonacci number series are examples of recursive algorithms.

40.  Explain quick sort and merge sort algorithms.

Quick sort employs the ‘divide and conquer’ concept by dividing the list of elements into two sub-lists.

The process is as follows (a minimal sketch follows the steps):

1. Select an element, the pivot, from the list.
2. Rearrange the elements in the list so that all elements that are less than the pivot are arranged before the pivot and all elements that are greater than the pivot are arranged after it. Now the pivot is in its final position.
3. Sort both sub-lists recursively – the sub-list of elements less than the pivot and the sub-list of elements greater than the pivot.
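
A minimal quick sort sketch following the three steps; the choice of the first element as pivot and the example list are illustrative only.

def quick_sort(items):
    # Divide and conquer: partition around a pivot, then sort each part.
    if len(items) <= 1:
        return items                                      # base case: already sorted
    pivot = items[0]                                      # step 1: select a pivot
    smaller = [x for x in items[1:] if x <= pivot]        # step 2: partition
    larger = [x for x in items[1:] if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)   # step 3: recurse

print(quick_sort([7, 4, 5, 3, 9, 1]))   # [1, 3, 4, 5, 7, 9]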

41.  Merge Sort: A comparison-based sorting algorithm. The relative order of equal elements is preserved in the sorted output (the sort is stable).

The Merge Sort algorithm is as follows (a minimal sketch follows the steps):

1. If the length of the list is 0 or 1, it is considered sorted.
2. Otherwise, divide the unsorted list into two lists, each about half the size.
3. Sort each sub-list recursively by repeating steps 1 and 2.
4. As a final step, merge both sorted sub-lists back into one sorted list.
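
A minimal merge sort sketch of the four steps above; the example list is illustrative only.

def merge_sort(items):
    if len(items) <= 1:                    # step 1: lists of length 0 or 1 are sorted
        return items
    mid = len(items) // 2                  # step 2: split into two halves
    left = merge_sort(items[:mid])         # step 3: sort each half recursively
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0                # step 4: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # <= keeps equal elements in input order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]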

42.  What is Bubble Sort and Quick sort?

Bubble Sort: The simplest sorting algorithm. It sorts the list in a repetitive fashion: it compares two adjacent elements in the list and swaps them if they are not in the designated order, and it continues until no more swaps are needed, which signals that the list is sorted. It is also called a comparison sort, as it relies on comparisons.

Quick Sort: One of the fastest sorting algorithms; it implements the ‘divide and conquer’ concept. It first divides the list into two parts by picking an element called the ‘pivot’. It then arranges the elements that are smaller than the pivot into one sub-list and the elements that are greater than the pivot into another, keeping the pivot in its final place.

43.  What are the difference between a stack and a Queue?

Stack – Represents a collection of elements in Last In First Out (LIFO) order.
Operations include testing for an empty stack, finding the top element, removing the topmost element, and adding elements on the top of the stack.

Queue – Represents a collection of elements in First In First Out (FIFO) order.

Operations include testing for an empty queue, finding the next element, and removing and inserting elements.

Insertion of elements is at the end of the queue

Deletion of elements is from the beginning of the queue.

44.  Can a stack be described as a pointer? Explain.

A stack can be represented as a pointer. The reason is that it has a head pointer which points to the top of the stack, and the stack operations are performed using that head pointer. Hence, the stack can be described as a pointer.

45.  Explain the terms Base case, Recursive case, Binding Time, Run-Time Stack and Tail Recursion.

Base case: The case in recursion for which the answer is known directly; it is where the recursion terminates and begins to unwind.

Recursive case: A case in which the function calls itself on smaller input, moving closer to the base case.

Binding time: The time at which a name is bound to its value or storage, e.g., at compile time (static binding) or at run time (dynamic binding).

Run-time stack: The stack used to save a function's stack frame on every call, including every recursive call.

Tail recursion: A situation where a function contains a single recursive call and it is the final statement to be executed. Tail recursion can be replaced by iteration.

46.  Is it possible to insert different type of elements in a stack? How?

Different elements can be inserted into a stack. This is possible by implementing union / structure data type. It is efficient to use union rather than structure, as only one item’s memory is used at a time.

47.  Explain in brief a linked list.

A linked list is a dynamic data structure. It consists of a sequence of nodes, each holding a data element and a reference to the next element in the sequence. Linked lists are used to implement stacks, queues and hash tables, and to evaluate prefix and postfix expressions. The ordering of linked items in memory is different from that of arrays, and insertion or deletion at a known position takes a constant number of operations.

48.  Explain the types of linked lists.

The types of linked lists are:

Singly linked list: It has only head part and corresponding references to the next nodes.

Doubly linked list: A linked list which has both head and tail parts, thus allowing traversal in a bi-directional fashion. Except for the first node, every node also refers to its previous node.

Circular linked list: A linked list whose last node has reference to the first node.

49.  How would you sort a linked list?

One simple approach is an insertion sort (a minimal sketch follows the steps):

Step 1: Take the next node from the unsorted list.

Step 2: Walk the sorted result list from its head until a node with a larger value is found, or the end is reached.

Step 3: Insert the node at that position. Repeat from step 1 until the unsorted list is empty.
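
A minimal sketch of this insertion sort on a singly linked list; the tiny Node class and the example values are assumptions for illustration.

class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def sorted_insert(head, node):
    # Insert node into the already-sorted list starting at head.
    if head is None or node.data <= head.data:
        node.next = head
        return node
    current = head
    while current.next and current.next.data < node.data:
        current = current.next
    node.next, current.next = current.next, node
    return head

def sort_list(head):
    new_head = None
    while head:                        # detach each node and insert it in order
        rest = head.next
        new_head = sorted_insert(new_head, head)
        head = rest
    return new_head

head = Node(7, Node(3, Node(5, Node(1))))
head = sort_list(head)
while head:                            # prints 1 3 5 7
    print(head.data, end=" ")
    head = head.next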

 

50.  What is sequential search? What is the average number of comparisons in a sequential search?

 

Sequential search: To search for an element in an array, the search starts from the first element and continues until the element is found or the last element is reached.

The average number of comparisons in a sequential search is (N+1)/2 where N is the size of the array. If the element is in the 1st position, the number of comparisons will be 1 and if the element is in the last position, the number of comparisons will be N.

51.  What are binary search and Fibonacci search?

Binary Search: Binary search is the process of locating an element in a sorted list. The search starts by finding the middle (median) element of the list. If the search element equals the median value, it has been found. If the search element is less than the median value, the same process is repeated on the top (lower) half of the list; if it is greater, on the bottom (upper) half. The process continues until the element is found or the remaining sub-list is empty.

52.  Define Fibonacci Search: Fibonacci search is a technique for searching a sorted array using a divide and conquer approach, with the array divided at positions derived from Fibonacci numbers. Because it examines locations whose addresses have lower dispersion, Fibonacci search reduces the average time needed to access a storage location when the elements being searched reside on storage with non-uniform access times.

Generics

Algorithms FAQs Part 2.

What is linear search?

Answer: Linear search is the simplest form of search. It searches for the element sequentially, starting from the first element. Its disadvantage is that it is slow if the element is located near the end; its advantage lies in its simplicity. It is also the most useful approach when the elements are arranged in random (unsorted) order.

What is binary search?  

Answer: Binary search is most useful when the list is sorted. In binary search, the element in the middle of the list is determined. If the key (the number to search for) is smaller than the middle element, the binary search is done on the first (left) half; if the key is greater than the middle element, the binary search is done on the second (right) half. The chosen half is again divided in two by determining its middle element, and the process repeats.
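
A minimal iterative binary search sketch; the sorted list and the key are illustrative only.

def binary_search(sorted_list, key):
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2           # middle element of the current range
        if sorted_list[mid] == key:
            return mid                    # found: return its index
        elif key < sorted_list[mid]:
            high = mid - 1                # continue in the first (left) half
        else:
            low = mid + 1                 # continue in the second (right) half
    return -1                             # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))   # 4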

Explain the bubble sort algorithm.

Answer: Bubble sort algorithm is used for sorting a list. It makes use of a temporary variable for swapping. It compares two numbers at a time and swaps them if they are in wrong order. This process is repeated until no swapping is needed. The algorithm is very inefficient if the list is long.

E.g. List: 7 4 5 3

1. 7 and 4 are compared.

2. Since 4 < 7, they must be swapped: 4 is stored in a temporary variable, 7 is moved into the position that held 4, and the value in the temporary variable is placed where 7 was. The list becomes 4 7 5 3.

3. The same comparison and swap is applied to the next adjacent pairs (7 and 5, then 7 and 3), giving 4 5 3 7 after the first pass.

4. The passes are repeated until no swap is needed: 3 4 5 7. A minimal sketch follows.
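
A minimal bubble sort sketch of the process above; note that Python's tuple assignment stands in for the explicit temporary variable used in the description.

def bubble_sort(items):
    # Repeatedly swap adjacent out-of-order pairs until a full pass needs no swap.
    items = list(items)                  # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([7, 4, 5, 3]))   # [3, 4, 5, 7]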

What is quick sort?  

Answer: Quick sort is one of the fastest sorting algorithms used for sorting a list. A pivot element is chosen, and the remaining elements are partitioned so that elements less than the pivot are on the left and those greater than the pivot are on the right. The left and right parts are then sorted recursively by repeating the algorithm.

Describe Trees using C++ with an example.

A tree is a structure similar to a linked list. Each node of a (binary) tree has two pointers, one to the left part of the tree and one to the right part. These two pointers must be of the same type as the node itself.

The following code snippet describes the declaration of trees. The advantage of trees is that the data is placed in nodes in sorted order.

struct TreeNode
{
    int item;          // The data in this node.
    TreeNode *left;    // Pointer to the left subtree.
    TreeNode *right;   // Pointer to the right subtree.
};

The following code snippet illustrates the display of tree data.

void showTree( TreeNode *root )
{
    if ( root != NULL ) {            // (Otherwise, there's nothing to print.)
        showTree( root->left );      // Print items in the left sub tree.
        cout << root->item << " ";   // Print the root item.
        showTree( root->right );     // Print items in the right sub tree.
    }
}  // end showTree()


Question – What is a B tree?

Answer: A B-tree of order m (the maximum number of children for each node) is a tree which satisfies the following properties :

1. Every node has <= m children.
2. Every node (except root and leaves) has >= m/2 children.
3. The root has at least 2 children.
4. All leaves appear in the same level, and carry no information.
5. A non-leaf node with k children contains k – 1 keys

Question – What are splay trees?

Answer: A splay tree is a self-balancing binary search tree in which recently accessed elements are quick to access again.

It is useful for implementing caches and garbage collection algorithms.

When we move left going down an access path, it is called a zig, and when we move right, it is a zag.

Following are the splaying steps. There are six different splaying steps.
1. Zig Rotation (Right Rotation)
2. Zag Rotation (Left Rotation)
3. Zig-Zag (Zig followed by Zag)
4. Zag-Zig (Zag followed by Zig)
5. Zig-Zig
6. Zag-Zag

Question – What are red-black trees?

Answer :A red-black tree is a type of self-balancing binary search tree.
In red-black trees, the leaf nodes are not relevant and do not contain data.
Red-black trees, like all binary search trees, allow efficient in-order traversal of elements.
Each node has a color attribute, the value of which is either red or black.

Characteristics:
The root and leaves are black
Both children of every red node are black.
Every simple path from a node to a descendant leaf contains the same number of black nodes, either counting or not counting the null black nodes

Question – What are threaded binary trees?

Answer: In a threaded binary tree, if a node ‘A’ has a right child ‘B’ then B’s left pointer must be either a child, or a thread back to A.

In the case of a left child, that left child must also have a left child or a thread back to A, and so we can follow B’s left children until we find a thread, pointing back to A.

This data structure is useful when stack memory is scarce, and it makes traversal around the tree faster.

Question – What is a B+ tree?

Answer: It is a dynamic, multilevel index with maximum and minimum bounds on the number of keys in each index segment. All records are stored at the lowest level of the tree; only keys are stored in the interior blocks.

Describe Tree database. Explain its common uses.

A tree is a data structure which resembles a hierarchical tree structure. Every element in the structure is a node. Every node is linked with the next node, either to its left or to its right. Each node has zero or more child nodes. The length of the longest downward path to a leaf from that node is known as the height of the node and the length of the path to its root is known as the depth of a node.

Common Uses:

– To manipulate hierarchical data
– Makes the information search, called tree traversal, easier.
– To manipulate data that is in the form of sorted list.

What is binary tree? Explain its uses.

A binary tree is a tree structure in which each node has at most two child nodes. The first node is known as the root node; a parent node has two children, namely the left child and the right child.

Uses of binary tree:

– To create sorting routine.
– Persisting data items for the purpose of fast lookup later.
– For inserting a new item faster

How do you find the depth of a binary tree?

The depth of a binary tree is found using the following process (a minimal sketch follows the steps):

1. Send the root node of the binary tree to a function.
2. If the node is null, return 0.
3. Calculate the depth of the left subtree; call it ‘d1’. This is done recursively, incrementing a counter by 1 at each level until a leaf node is reached.
4. Repeat step 3 for the right subtree; call the result ‘d2’.
5. Find the maximum of d1 and d2 and add 1 to it. Let us call the result ‘depth’.
6. The variable ‘depth’ is the depth of the binary tree.
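
A minimal recursive sketch of the same procedure; the small TreeNode class and the example tree are illustrative only.

class TreeNode:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def depth(node):
    if node is None:                 # step 2: an empty tree has depth 0
        return 0
    d1 = depth(node.left)            # step 3: depth of the left subtree
    d2 = depth(node.right)           # step 4: depth of the right subtree
    return max(d1, d2) + 1           # steps 5-6: larger of the two, plus one

root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(depth(root))   # 3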

Explain pre-order and in-order tree traversal.

A non-empty binary tree is traversed in three ways, namely pre-order, in-order and post-order, in a recursive fashion.

Pre-order:
Pre-order process is as follows:

– Visit the root node
– Traverse the left sub tree
– Traverse the right sub tree

In-Order:
In order process is as follows:
– Traverse the left sub tree
– Visit the root node
– Traverse the right sub tree

What is a B+ tree? Explain its uses.

B+ tree represents the way of insertion, retrieval and removal of the nodes in a sorted fashion. Every operation is identified by a ‘key’. B+ tree has maximum and minimum bounds on the number of keys in each index segment, dynamically. All the records in a B+ tree are persisted at the last level, i.e., leaf level in the order of keys.

A B+ tree can be traversed starting from the root down into the left and/or right sub-trees, or starting from the first node of the leaf level. Bi-directional traversal is possible in a B+ tree.

Define threaded binary tree. Explain its common uses

A threaded binary tree is structured so that every right child pointer that would normally be null instead points to the node’s ‘in-order successor’, and every left child pointer that would normally be null instead points to the node’s ‘in-order predecessor’.

Uses of Threaded binary tree:

– Traversal is faster than in unthreaded binary trees.
– More subtly, it enables efficient determination of the predecessor and successor of any node, starting from that node.
– Traversal can be done without an auxiliary stack, by following the threads.
– Any node is accessible from any other node.
– Insertion into and deletion from a threaded tree are easy to implement.

Explain implementation of traversal of a binary tree.

Binary tree traversal is a process of visiting each and every node of the tree. The two fundamental binary tree traversals are ‘depth-first’ and ‘breadth-first’.

The depth-first traversals are classified into three types, namely pre-order, in-order and post-order.

Pre-order: Pre-order traversal involves visiting the root node first, then traversing the left sub tree and finally the right sub tree.

In-order: In-order traversal involves visiting the left sub tree first, then visiting the root node and finally the right sub tree.

Post-order: Post-order traversal involves visiting the left sub tree first, then visiting the right sub tree and finally visiting the root node.

The breadth-first traversal is the ‘level-order traversal’. The level-order traversal does not follow the branches of the tree; a first-in first-out queue is needed to traverse the tree in level order. A minimal sketch of the traversals follows.
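
A minimal sketch of pre-order, in-order and level-order traversal; the tiny TreeNode class and the example tree are assumptions for illustration.

from collections import deque

class TreeNode:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def preorder(node):      # root, then left sub tree, then right sub tree
    return [] if node is None else [node.item] + preorder(node.left) + preorder(node.right)

def inorder(node):       # left sub tree, then root, then right sub tree
    return [] if node is None else inorder(node.left) + [node.item] + inorder(node.right)

def level_order(node):   # visit nodes level by level using a FIFO queue
    result, queue = [], deque([node])
    while queue:
        current = queue.popleft()
        if current is not None:
            result.append(current.item)
            queue.append(current.left)
            queue.append(current.right)
    return result

root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(preorder(root))      # [1, 2, 4, 3]
print(inorder(root))       # [4, 2, 1, 3]
print(level_order(root))   # [1, 2, 3, 4]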

Explain implementation of deletion from a binary tree.

To implement the deletion from a binary tree, there is a need to consider the possibilities of deleting the nodes. They are:

– Node is a terminal node: In case the node is the left child node of its parent, then the left pointer of its parent is set to NULL. In all other cases, if the node is right child node of its parent, then the right pointer of its parent is set to NULL.

– Node has only one child: In this scenario, the appropriate pointer of its parent is set to child node.

– Node has two children: The node’s value is replaced by the value of its in-order predecessor, and then the predecessor node is deleted.

 

Describe stack operation. 

Stack is a data structure that follows Last in First out strategy.

Stack Operations:

- Push: pushes (inserts) an element onto the stack. The location is specified by the pointer.

- Pop: pulls (removes) an element out of the stack. The location is specified by the pointer.

- Swap: the two topmost elements of the stack can be swapped.

- Peek: returns the top element of the stack but does not remove it from the stack.

- Rotate: the topmost (n) items can be moved on the stack in a rotating fashion.

A stack has a fixed location in memory. When a data element is pushed onto the stack, the pointer points to the current (top) element.

Describe queue operation.

Queue is a data structure that follows First in First out strategy.

Queue Operations:

- Enqueue: inserts an element at the end of the queue.

- Dequeue: removes an element from the front of the queue.

- Size: returns the size of the queue.

- Front: returns the first element of the queue.

- Empty: determines whether the queue is empty.

Discuss how to implement queue using stack.
A queue can be implemented by using two stacks (a minimal sketch follows):

1. An element is inserted into the queue by pushing it onto stack 1.

2. An element is extracted from the queue by popping it from stack 2.

3. If stack 2 is empty, all elements currently in stack 1 are transferred to stack 2, but in reverse order.

4. If stack 2 is not empty, just pop the value from stack 2.
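
A minimal sketch of the two-stack queue described above; Python lists serve as the stacks, and the class and method names are illustrative.

class QueueWithTwoStacks:
    def __init__(self):
        self.stack1 = []          # receives newly enqueued elements
        self.stack2 = []          # serves dequeues in reversed (FIFO) order

    def enqueue(self, item):
        self.stack1.append(item)                         # step 1: push onto stack 1

    def dequeue(self):
        if not self.stack2:                              # step 3: refill only when stack 2 is empty
            while self.stack1:
                self.stack2.append(self.stack1.pop())    # transfer in reverse order
        return self.stack2.pop()                         # steps 2 and 4: pop from stack 2

q = QueueWithTwoStacks()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())   # 1 2 3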

Explain stacks and queues in detail.

A stack is a data structure based on the principle Last In First Out. Stack is container to hold nodes and has two operations – push and pop. Push operation is to add nodes into the stack and pop operation is to delete nodes from the stack and returns the top most node.

A queue is a data structure based on the principle First In First Out. The nodes are kept in order: a node is inserted at the rear of the queue and a node is deleted from the front. The first element inserted is the first element to be deleted.

Question – What are priority queues?
A priority queue is essentially a list of items in which each item is associated with a priority.
Items are inserted into a priority queue in any arbitrary order; however, items are withdrawn from a priority queue in order of their priorities, starting with the highest-priority item first.

Priority queues are often used in the implementation of algorithms such as heap sort and Dijkstra's shortest-path algorithm.

Question – What is a circular singly linked list?
In a circular singly linked list, the last node of the list is made to point to the first node. This eases traversal through the list.

Describe the steps to insert data into a singly linked list.
Steps to insert data into a singly linked list:-

Insert in middle
Data: 1, 2, 3, 5
1. Locate the node after which the new node (say 4) needs to be inserted (say, after 3).

2. Create the new node, i.e. 4.

3. Point the new node 4 to 5 (new_node->next = node->next, which is 5).

4. Point node 3 to node 4 (node->next = new_node).

Insert in the beginning

Data: 2, 3, 5

1. Locate the first node in the list (2)

2. Point the new node (say 1) to the first node. (new_node->next=first)

Insert in the end

Data: 2, 3, 5

1. Locate the last node in the list (5)

2. Point the last node (5) to the new node (say 6). (node->next=new_node)

Explain how to reverse singly link list.

Answer: Reverse a singly linked list using iteration:

1. Set a pointer (*result) to NULL; it will point to the already-reversed part of the list.

2. Set a pointer (*current) to the first node, i.e. current = head.

3. While current != NULL, repeat steps 4 to 7.

4. Save the next node: next = current->next.

5. Reverse the link of the current node: current->next = result.

6. Advance result: result = current.

7. Advance current: current = next.

8. When the loop ends, result points to the head of the reversed list.

A linked list can also be reversed using recursion which eliminates the use of a temporary variable.

Define circular linked list.

Answer: In a circular linked list the first and the last nodes are linked: the last node points to the first node in the list. A circular linked list allows quick access to the first and last records using a single pointer.

Describe linked list using C++ with an example.

A linked list is a chain of nodes. Each node has at least two fields: one holds the data and the other is a link to the next node. Such lists are known as singly linked lists, as each node has only one pointer, to the next node. If each node has two pointers, one to the previous node and one to the next node, then the list is known as a doubly linked list.

To use a linked list in C++ the following structure is to be declared:

 

struct List
{
    long Data;
    List* Next;
    List()
    {
        Next = NULL;
        Data = 0;
    }
};
typedef List* ListPtr;

The following code snippet is used to add a node after the current one.

void List::AddANode()
{
    Next = new List;    // allocate a new node and link it after this one
}

The following code snippet is used to traverse the list.

void showList(ListPtr listPtr)
{
    while (listPtr != NULL)
    {
        cout << listPtr->Data << " ";
        listPtr = listPtr->Next;    // advance to the next node
    }
}

 

 

/ / LListItr class; maintains “current position”.

/ I

/ / CONSTRUCTION: With no parameters. The LList class may

/ / construct a LListItr with a pointer to a LListNode.

/ /

/ / ******************PUBLOIPCE RATIONS*********************

/ / boo1 isValid( j –> True if not NULL

/ / void advance ( j –> Advance (if not already NULL)

/ / Object retrieve( ) –> Return item in current position

/ / ******************ERRORS********************************

/ / Throws BadIterator for illegal retrieve.

 

template <class Object>

class LListItr

{

public:

LListItr( ) : current( NULL ) { 1

 

boo1 isValid( j const

{ return current ! = NULL; )

 

void advance ( j

{ if( isValid( j j current = current->next; 1

 

const Object & retrieve( ) const

{ if( !isValid( j j throw BadIterator( ) ;

return current->element; }

 

private:

LListNode<Object> *current; / / Current position

 

LListItr( LListNode<Object> *theNode )

: current ( theNode ) {

 

friend class LList<Object>; / / Grant access to constructor

};

 

 

 

/ / LList class.

/ /

/ / CONSTRUCTION: with no initializer.

/ / Access is via LListItr class.

/ /

/ / ******************PUBLIoC~ ERATIoNs******************X**

/ / boo1 isEmpty( ) –> Return true if empty; else false

/ / void makeEmpty( ) –> Remove all items

/ / LListItr zeroth( ) –> Return position to prior to first

10 / / LListItr first( ) –> Return first position

11 / / void insert( x, p ) –> Insert x after position p

12 / / void remove( x ) –> Remove x

13 / / LListItr find( x ) –> Return position that views x

14 / / LListItr findprevious( x )

15 / / –> Return position prior to x

16 / / ******************ERR~R~*****~****************************

17 / / No special errors.

18

19 template <class Object>

20 class LList

21 I

22 public:

23 LList( ) ;

24 LList( const LList & rhs ) ;

25 -LList( ) ;

26

27 boo1 isEmpty( const;

28 void makeEmpty( ) ;

29 LListItr<Object> zeroth( ) const;

30 LListItr<Object> first ( ) const;

31 void insert( const Object & x, const ~~istItr<Object&> p ) ;

32 LListItr<Object> find( const Object & x ) const;

33 LListItr<Object> findprevious( const Object & x ) const;

34 void remove( const Object & x ) ;

35

36 const LList & operator=( const LList & rhs ) ;

37

38 private:

39 LListNode<Object> *header;

};

 

/ / Construct the list.

template <class Object>

LList<Object>: :LList ( )

{

header = new LListNode<Object>;

1

7

/ / Test if the list is logically empty.

/ / Return true if empty, false, otherwise.

10 template <class Object>

11 boo1 LList<Object>: :isEmpty( ) const

12 {

13 return header->next == NULL;

14 1

15

16 / / Return an iterator representing the header node.

17 template <class Object>

18 LListItr<Object> LList<Object>::zeroth( ) const

19 I

20 return LListItr<Object>( header ) ;

21 1

22

23 / / Return an iterator representing the first node in the list.

24 / / This operation is valid for empty lists.

25 template <class Object>

26 LListItr<Object> LList<Object>::first( ) const

27 {

28 return LListItr<Object>( header->next ) ;

29 };

 

/ / Simple print function.

template <class Object>

void printlist( const LList<Object> & theList )

(

if( theList.isEmptyi ) )

6 cout << “Empty list” << endl;

else

8 }

LListItr<Object> itr = theList.first( ) ;

10 for( ; itr.isValid( ) ; itreadvance( ) )

11 cout << itr.retrieve( ) << ” “;

12 {

13 cout << endl;

14 }

 

/ / Return iterator corresponding to the first node matching x.

/ / Iterator is not valid if item is not found.

template <class Object>

LListItr<Object> LList<Object>::find( const Object & x ) const

{

6 LListNode<Object> *p = header->next;

7

8 while( p ! = NULL && p->element ! = x )

9 p = p->next;

10

11 return LListItr<Object>( p ) ;

12 }

/ / Remove the first occurrence of an item X .

template <class Object>

void LList<Object>::remove( const Object & x

{

LListNode<Object> *p = findprevious( x ) .current;

6

if( p->next ! = NULL )

8 I

LListNode<Object> *oldNode = p->next;

10 p->next = p->next->next; / / Bypass

11 delete oldNode;

12 }

13 1

Figure 17.13 The remove routine for the LList class.

/ / Return iterator prior to the first node containing item x.

template <class Object>

LListItr<Object>

LList<Object>: : f indprevious ( const Object & x ) const

{

LListNode<Object> *p = header;

7

8 while( p-znext ! = NULL && p->next->element ! = x )

p = p->next;

10

11 return LListItr<Object>( p ) ;

12 }

Figure 17.14 The f indprevious routine-similar to the find routine-for use

with remove.

// Insert item x after p.
template <class Object>
void LList<Object>::
insert( const Object & x, const LListItr<Object> & p )
{
    if( p.current != NULL )
        p.current->next = new LListNode<Object>( x, p.current->next );
}

Figure 17.15 The insertion routine for the LList class.

 

// Make the list logically empty.
template <class Object>
void LList<Object>::makeEmpty( )
{
    while( !isEmpty( ) )
        remove( first( ).retrieve( ) );
}

// Destructor.
template <class Object>
LList<Object>::~LList( )
{
    makeEmpty( );
    delete header;
}

Figure 17.16 The makeEmpty method and the LList destructor.

// Copy constructor.
template <class Object>
LList<Object>::LList( const LList<Object> & rhs )
{
    header = new LListNode<Object>;
    *this = rhs;
}

// Deep copy of linked lists.
template <class Object>
const LList<Object> &
LList<Object>::operator=( const LList<Object> & rhs )
{
    if( this != &rhs )
    {
        makeEmpty( );

        LListItr<Object> ritr = rhs.first( );
        LListItr<Object> itr = zeroth( );
        for( ; ritr.isValid( ); ritr.advance( ), itr.advance( ) )
            insert( ritr.retrieve( ), itr );
    }
    return *this;
}

Figure 17.17 Two LList copy routines: operator= and copy constructor.

 

Q: Implement an algorithm to check if a linked list is in ascending order.

A: // Assumes a linklist class whose nodes expose _data and _next;
// the template parameter name is assumed (it was lost in the original post).
template <class Type>
bool linklist<Type>::isAscending() const
{
    nodeptr ptr = head;
    while( ptr && ptr->_next )        // guard against an empty list
    {
        if( ptr->_data > ptr->_next->_data )
            return false;
        ptr = ptr->_next;
    }
    return true;                      // empty and single-node lists count as ascending
}

 

Q: Write an algorithm to reverse a linked list.

A: // Template parameter name assumed, as in the previous answer.
template <class Type>
void linklist<Type>::reverselist()
{
    if( head == 0 )                   // nothing to reverse
        return;
    nodeptr ptr = head;
    nodeptr nextptr = ptr->_next;
    while( nextptr )
    {
        nodeptr temp = nextptr->_next;
        nextptr->_next = ptr;         // flip the link
        ptr = nextptr;
        nextptr = temp;
    }
    head->_next = 0;                  // the old head becomes the tail
    head = ptr;                       // the last node becomes the new head
}

Generics

Algorithms FAQs Part 3.

Q: Implement Binary Heap in C++?

A: BinaryHeap.h

————

#ifndef BINARY_HEAP_H_
#define BINARY_HEAP_H_

#include "dsexceptions.h"
#include "vector.h"

// BinaryHeap class
//
// CONSTRUCTION: with an optional capacity (that defaults to 100)
//
// ******************PUBLIC OPERATIONS*********************
// void insert( x )       --> Insert x
// deleteMin( minItem )   --> Remove (and optionally return) smallest item
// Comparable findMin( )  --> Return smallest item
// bool isEmpty( )        --> Return true if empty; else false
// bool isFull( )         --> Return true if full; else false
// void makeEmpty( )      --> Remove all items
// ******************ERRORS********************************
// Throws Underflow and Overflow as warranted

template <class Comparable>
class BinaryHeap
{
  public:
    explicit BinaryHeap( int capacity = 100 );

    bool isEmpty( ) const;
    bool isFull( ) const;
    const Comparable & findMin( ) const;

    void insert( const Comparable & x );
    void deleteMin( );
    void deleteMin( Comparable & minItem );
    void makeEmpty( );

  private:
    int currentSize;            // Number of elements in heap
    vector<Comparable> array;   // The heap array

    void buildHeap( );
    void percolateDown( int hole );
};

#endif

BinaryHeap.cpp

————–

#include "BinaryHeap.h"

/**
 * Construct the binary heap.
 * capacity is the capacity of the binary heap.
 */
template <class Comparable>
BinaryHeap<Comparable>::BinaryHeap( int capacity )
  : array( capacity + 1 ), currentSize( 0 )
{
}

/**
 * Insert item x into the priority queue, maintaining heap order.
 * Duplicates are allowed.
 * Throw Overflow if container is full.
 */
template <class Comparable>
void BinaryHeap<Comparable>::insert( const Comparable & x )
{
    if( isFull( ) )
        throw Overflow( );

    // Percolate up
    int hole = ++currentSize;
    for( ; hole > 1 && x < array[ hole / 2 ]; hole /= 2 )
        array[ hole ] = array[ hole / 2 ];
    array[ hole ] = x;
}

/**
 * Find the smallest item in the priority queue.
 * Return the smallest item, or throw Underflow if empty.
 */
template <class Comparable>
const Comparable & BinaryHeap<Comparable>::findMin( ) const
{
    if( isEmpty( ) )
        throw Underflow( );
    return array[ 1 ];
}

/**
 * Remove the smallest item from the priority queue.
 * Throw Underflow if empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::deleteMin( )
{
    if( isEmpty( ) )
        throw Underflow( );

    array[ 1 ] = array[ currentSize-- ];
    percolateDown( 1 );
}

/**
 * Remove the smallest item from the priority queue
 * and place it in minItem. Throw Underflow if empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::deleteMin( Comparable & minItem )
{
    if( isEmpty( ) )
        throw Underflow( );

    minItem = array[ 1 ];
    array[ 1 ] = array[ currentSize-- ];
    percolateDown( 1 );
}

/**
 * Establish heap order property from an arbitrary
 * arrangement of items. Runs in linear time.
 */
template <class Comparable>
void BinaryHeap<Comparable>::buildHeap( )
{
    for( int i = currentSize / 2; i > 0; i-- )
        percolateDown( i );
}

/**
 * Test if the priority queue is logically empty.
 * Return true if empty, false otherwise.
 */
template <class Comparable>
bool BinaryHeap<Comparable>::isEmpty( ) const
{
    return currentSize == 0;
}

/**
 * Test if the priority queue is logically full.
 * Return true if full, false otherwise.
 */
template <class Comparable>
bool BinaryHeap<Comparable>::isFull( ) const
{
    return currentSize == array.size( ) - 1;
}

/**
 * Make the priority queue logically empty.
 */
template <class Comparable>
void BinaryHeap<Comparable>::makeEmpty( )
{
    currentSize = 0;
}

/**
 * Internal method to percolate down in the heap.
 * hole is the index at which the percolate begins.
 */
template <class Comparable>
void BinaryHeap<Comparable>::percolateDown( int hole )
{
    int child;
    Comparable tmp = array[ hole ];

    for( ; hole * 2 <= currentSize; hole = child )
    {
        child = hole * 2;
        if( child != currentSize && array[ child + 1 ] < array[ child ] )
            child++;
        if( array[ child ] < tmp )
            array[ hole ] = array[ child ];
        else
            break;
    }
    array[ hole ] = tmp;
}

TestBinaryHeap.cpp

——————

#include <iostream>
#include "BinaryHeap.h"
#include "dsexceptions.h"

using namespace std;

// Test program
int main( )
{
    int numItems = 10000;
    BinaryHeap<int> h( numItems );
    int i = 37;
    int x;

    try
    {
        for( i = 37; i != 0; i = ( i + 37 ) % numItems )
            h.insert( i );
        for( i = 1; i < numItems; i++ )
        {
            h.deleteMin( x );
            if( x != i )
                cout << "Oops! " << i << endl;
        }

        for( i = 37; i != 0; i = ( i + 37 ) % numItems )
            h.insert( i );
        h.insert( 0 );
        h.insert( i = 999999 );   // Should overflow
    }
    catch( Overflow )
    {
        cout << "Overflow (expected)! " << i << endl;
    }

    return 0;
}

Q: Implement Binary Search Tree in C++?

A: BinarySearchTree.h

———————-

#ifndef BINARY_SEARCH_TREE_H_
#define BINARY_SEARCH_TREE_H_

#include "dsexceptions.h"
#include <cstddef>   // For NULL

// Binary node and forward declaration because g++ does
// not understand nested classes.
template <class Comparable>
class BinarySearchTree;

template <class Comparable>
class BinaryNode
{
    Comparable element;
    BinaryNode *left;
    BinaryNode *right;

    BinaryNode( const Comparable & theElement, BinaryNode *lt, BinaryNode *rt )
      : element( theElement ), left( lt ), right( rt ) { }

    friend class BinarySearchTree<Comparable>;
};

// BinarySearchTree class
//
// CONSTRUCTION: with ITEM_NOT_FOUND object used to signal failed finds
//
// ******************PUBLIC OPERATIONS*********************
// void insert( x )       --> Insert x
// void remove( x )       --> Remove x
// Comparable find( x )   --> Return item that matches x
// Comparable findMin( )  --> Return smallest item
// Comparable findMax( )  --> Return largest item
// boolean isEmpty( )     --> Return true if empty; else false
// void makeEmpty( )      --> Remove all items
// void printTree( )      --> Print tree in sorted order

template <class Comparable>
class BinarySearchTree
{
  public:
    explicit BinarySearchTree( const Comparable & notFound );
    BinarySearchTree( const BinarySearchTree & rhs );
    ~BinarySearchTree( );

    const Comparable & findMin( ) const;
    const Comparable & findMax( ) const;
    const Comparable & find( const Comparable & x ) const;
    bool isEmpty( ) const;
    void printTree( ) const;

    void makeEmpty( );
    void insert( const Comparable & x );
    void remove( const Comparable & x );

    const BinarySearchTree & operator=( const BinarySearchTree & rhs );

  private:
    BinaryNode<Comparable> *root;
    const Comparable ITEM_NOT_FOUND;

    const Comparable & elementAt( BinaryNode<Comparable> *t ) const;

    void insert( const Comparable & x, BinaryNode<Comparable> * & t ) const;
    void remove( const Comparable & x, BinaryNode<Comparable> * & t ) const;
    BinaryNode<Comparable> * findMin( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * findMax( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * find( const Comparable & x, BinaryNode<Comparable> *t ) const;
    void makeEmpty( BinaryNode<Comparable> * & t ) const;
    void printTree( BinaryNode<Comparable> *t ) const;
    BinaryNode<Comparable> * clone( BinaryNode<Comparable> *t ) const;
};

#endif

BinarySearchTree.cpp

——————–

#include "BinarySearchTree.h"
#include <iostream>

using namespace std;

/**
 * Implements an unbalanced binary search tree.
 * Note that all "matching" is based on the < method.
 */

/**
 * Construct the tree.
 */
template <class Comparable>
BinarySearchTree<Comparable>::BinarySearchTree( const Comparable & notFound ) :
  root( NULL ), ITEM_NOT_FOUND( notFound )
{
}

/**
 * Copy constructor.
 */
template <class Comparable>
BinarySearchTree<Comparable>::
BinarySearchTree( const BinarySearchTree<Comparable> & rhs ) :
  root( NULL ), ITEM_NOT_FOUND( rhs.ITEM_NOT_FOUND )
{
    *this = rhs;
}

/**
 * Destructor for the tree.
 */
template <class Comparable>
BinarySearchTree<Comparable>::~BinarySearchTree( )
{
    makeEmpty( );
}

/**
 * Insert x into the tree; duplicates are ignored.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::insert( const Comparable & x )
{
    insert( x, root );
}

/**
 * Remove x from the tree. Nothing is done if x is not found.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::remove( const Comparable & x )
{
    remove( x, root );
}

/**
 * Find the smallest item in the tree.
 * Return smallest item or ITEM_NOT_FOUND if empty.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::findMin( ) const
{
    return elementAt( findMin( root ) );
}

/**
 * Find the largest item in the tree.
 * Return the largest item or ITEM_NOT_FOUND if empty.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::findMax( ) const
{
    return elementAt( findMax( root ) );
}

/**
 * Find item x in the tree.
 * Return the matching item or ITEM_NOT_FOUND if not found.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::
find( const Comparable & x ) const
{
    return elementAt( find( x, root ) );
}

/**
 * Make the tree logically empty.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::makeEmpty( )
{
    makeEmpty( root );
}

/**
 * Test if the tree is logically empty.
 * Return true if empty, false otherwise.
 */
template <class Comparable>
bool BinarySearchTree<Comparable>::isEmpty( ) const
{
    return root == NULL;
}

/**
 * Print the tree contents in sorted order.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::printTree( ) const
{
    if( isEmpty( ) )
        cout << "Empty tree" << endl;
    else
        printTree( root );
}

/**
 * Deep copy.
 */
template <class Comparable>
const BinarySearchTree<Comparable> &
BinarySearchTree<Comparable>::
operator=( const BinarySearchTree<Comparable> & rhs )
{
    if( this != &rhs )
    {
        makeEmpty( );
        root = clone( rhs.root );
    }
    return *this;
}

/**
 * Internal method to get element field in node t.
 * Return the element field or ITEM_NOT_FOUND if t is NULL.
 */
template <class Comparable>
const Comparable & BinarySearchTree<Comparable>::
elementAt( BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return ITEM_NOT_FOUND;
    else
        return t->element;
}

/**
 * Internal method to insert into a subtree.
 * x is the item to insert.
 * t is the node that roots the tree.
 * Set the new root.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::
insert( const Comparable & x, BinaryNode<Comparable> * & t ) const
{
    if( t == NULL )
        t = new BinaryNode<Comparable>( x, NULL, NULL );
    else if( x < t->element )
        insert( x, t->left );
    else if( t->element < x )
        insert( x, t->right );
    else
        ;  // Duplicate; do nothing
}

/**
 * Internal method to remove from a subtree.
 * x is the item to remove.
 * t is the node that roots the tree.
 * Set the new root.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::
remove( const Comparable & x, BinaryNode<Comparable> * & t ) const
{
    if( t == NULL )
        return;   // Item not found; do nothing
    if( x < t->element )
        remove( x, t->left );
    else if( t->element < x )
        remove( x, t->right );
    else if( t->left != NULL && t->right != NULL ) // Two children
    {
        t->element = findMin( t->right )->element;
        remove( t->element, t->right );
    }
    else
    {
        BinaryNode<Comparable> *oldNode = t;
        t = ( t->left != NULL ) ? t->left : t->right;
        delete oldNode;
    }
}

/**
 * Internal method to find the smallest item in a subtree t.
 * Return node containing the smallest item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::findMin( BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return NULL;
    if( t->left == NULL )
        return t;
    return findMin( t->left );
}

/**
 * Internal method to find the largest item in a subtree t.
 * Return node containing the largest item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::findMax( BinaryNode<Comparable> *t ) const
{
    if( t != NULL )
        while( t->right != NULL )
            t = t->right;
    return t;
}

/**
 * Internal method to find an item in a subtree.
 * x is item to search for.
 * t is the node that roots the tree.
 * Return node containing the matched item.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::
find( const Comparable & x, BinaryNode<Comparable> *t ) const
{
    if( t == NULL )
        return NULL;
    else if( x < t->element )
        return find( x, t->left );
    else if( t->element < x )
        return find( x, t->right );
    else
        return t;    // Match
}

/****** NONRECURSIVE VERSION*************************
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::
find( const Comparable & x, BinaryNode<Comparable> *t ) const
{
    while( t != NULL )
        if( x < t->element )
            t = t->left;
        else if( t->element < x )
            t = t->right;
        else
            return t;    // Match

    return NULL;         // No match
}
*****************************************************/

/**
 * Internal method to make subtree empty.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::
makeEmpty( BinaryNode<Comparable> * & t ) const
{
    if( t != NULL )
    {
        makeEmpty( t->left );
        makeEmpty( t->right );
        delete t;
    }
    t = NULL;
}

/**
 * Internal method to print a subtree rooted at t in sorted order.
 */
template <class Comparable>
void BinarySearchTree<Comparable>::printTree( BinaryNode<Comparable> *t ) const
{
    if( t != NULL )
    {
        printTree( t->left );
        cout << t->element << endl;
        printTree( t->right );
    }
}

/**
 * Internal method to clone subtree.
 */
template <class Comparable>
BinaryNode<Comparable> *
BinarySearchTree<Comparable>::clone( BinaryNode<Comparable> * t ) const
{
    if( t == NULL )
        return NULL;
    else
        return new BinaryNode<Comparable>( t->element, clone( t->left ), clone( t->right ) );
}

TestBinarySearchTree.cpp

————————

#include <iostream>
#include "BinarySearchTree.h"

using namespace std;

// Test program
int main( )
{
    const int ITEM_NOT_FOUND = -9999;
    BinarySearchTree<int> t( ITEM_NOT_FOUND );
    int NUMS = 4000;
    const int GAP = 37;
    int i;

    cout << "Checking... (no more output means success)" << endl;

    for( i = GAP; i != 0; i = ( i + GAP ) % NUMS )
        t.insert( i );

    for( i = 1; i < NUMS; i += 2 )
        t.remove( i );

    if( NUMS < 40 )
        t.printTree( );
    if( t.findMin( ) != 2 || t.findMax( ) != NUMS - 2 )
        cout << "FindMin or FindMax error!" << endl;

    for( i = 2; i < NUMS; i += 2 )
        if( t.find( i ) != i )
            cout << "Find error1!" << endl;

    for( i = 1; i < NUMS; i += 2 )
    {
        if( t.find( i ) != ITEM_NOT_FOUND )
            cout << "Find error2!" << endl;
    }

    BinarySearchTree<int> t2( ITEM_NOT_FOUND );
    t2 = t;

    for( i = 2; i < NUMS; i += 2 )
        if( t2.find( i ) != i )
            cout << "Find error1!" << endl;

    for( i = 1; i < NUMS; i += 2 )
    {
        if( t2.find( i ) != ITEM_NOT_FOUND )
            cout << "Find error2!" << endl;
    }

    return 0;
}

 

 

Vector: The Vector ADT extends the notion of an array by storing a sequence of arbitrary objects. An element can be accessed, inserted, or removed by specifying its rank (the number of elements preceding it). An exception is thrown if an incorrect rank is specified (e.g., a negative rank).
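
As a minimal illustration (not part of the original notes), the rank-based operations can be sketched on top of std::vector; the function names elemAtRank, insertAtRank, and removeAtRank are illustrative, not a standard API.

#include <iostream>
#include <stdexcept>
#include <vector>

int elemAtRank(const std::vector<int> &v, int r) {
    if (r < 0 || r >= static_cast<int>(v.size()))
        throw std::out_of_range("bad rank");    // incorrect rank -> exception
    return v[r];
}

void insertAtRank(std::vector<int> &v, int r, int x) {
    v.insert(v.begin() + r, x);                 // afterwards, r elements precede x
}

void removeAtRank(std::vector<int> &v, int r) {
    v.erase(v.begin() + r);
}

int main() {
    std::vector<int> v;
    insertAtRank(v, 0, 7);
    insertAtRank(v, 0, 4);                      // v = 4, 7
    std::cout << elemAtRank(v, 1) << std::endl; // prints 7
    removeAtRank(v, 0);                         // v = 7
    return 0;
}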

 

The Stack ADT stores arbitrary objects. Insertions and deletions follow the last-in, first-out (LIFO) scheme.

The Queue ADT stores arbitrary objects. Insertions and deletions follow the first-in, first-out (FIFO) scheme: insertions are at the rear of the queue and removals are at the front. Main queue operations:

enqueue(object): inserts an element at the end of the queue

dequeue(): removes and returns the element at the front of the queue
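
A quick sketch (added here for illustration) contrasting the two orderings with the standard library adapters std::stack and std::queue:

#include <iostream>
#include <queue>
#include <stack>

int main() {
    std::stack<int> s;   // LIFO
    std::queue<int> q;   // FIFO
    for (int i = 1; i <= 3; ++i) {
        s.push(i);       // push on top of the stack
        q.push(i);       // enqueue at the rear of the queue
    }
    std::cout << s.top() << std::endl;    // 3 : last in, first out
    std::cout << q.front() << std::endl;  // 1 : first in, first out
    s.pop();                              // removes 3
    q.pop();                              // dequeue removes 1
    return 0;
}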

 

A singly linked list is a concrete data structure consisting of a sequence of nodes. Each node stores:

- an element
- a link to the next node

 

A doubly linked list provides a natural implementation of the List ADT. Nodes implement Position and store:

- an element
- a link to the previous node
- a link to the next node
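
A minimal sketch of such a node (not from the original notes; DNode and insertBefore are made-up names, and a C++11 compiler is assumed). Header and trailer sentinels keep insertion uniform:

#include <iostream>

struct DNode {
    int elem;
    DNode *prev;
    DNode *next;
};

// Insert a new element immediately before position p.
void insertBefore(DNode *p, int x) {
    DNode *n = new DNode{x, p->prev, p};
    p->prev->next = n;
    p->prev = n;
}

int main() {
    DNode header{0, nullptr, nullptr}, trailer{0, &header, nullptr};
    header.next = &trailer;
    insertBefore(&trailer, 10);   // list: 10
    insertBefore(&trailer, 20);   // list: 10 20
    for (DNode *p = header.next; p != &trailer; p = p->next)
        std::cout << p->elem << " ";
    std::cout << std::endl;       // nodes are leaked here; fine for a sketch
    return 0;
}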

 

A tree is an abstract model of a hierarchical structure. A tree consists of nodes with a parent-child relation. (The letters and numbers in the examples below refer to a sample tree whose figure is not reproduced here.)

 

Root: node without parent (A)

Internal node: node with at least one child (A, B, C, F)

External node (a.k.a. leaf): node without children (E, I, J, K, G, H, D)

Ancestors of a node: parent, grandparent, grand-grandparent, etc.

Depth of a node: number of ancestors

Height of a tree: maximum depth of any node (3)

Descendant of a node: child, grandchild, grand-grandchild, etc.

Subtree: tree consisting of a node and its descendants

 

 

A binary tree is a tree with the following properties:

- Each internal node has two children
- The children of a node are an ordered pair

We call the children of an internal node the left child and the right child.

Alternative recursive definition: a binary tree is either

- a tree consisting of a single node, or
- a tree whose root has an ordered pair of children, each of which is a binary tree
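
The recursive definition translates directly into a recursive node structure and a recursive height computation. This is a small illustrative sketch (BTNode and height are made-up names, not the book's BinaryNode class):

#include <algorithm>
#include <iostream>

struct BTNode {
    char elem;
    BTNode *left;
    BTNode *right;
};

// Height follows the recursive definition: a single node has height 0.
int height(const BTNode *t) {
    if (t == nullptr || (t->left == nullptr && t->right == nullptr))
        return 0;
    return 1 + std::max(height(t->left), height(t->right));
}

int main() {
    BTNode d{'D', nullptr, nullptr}, e{'E', nullptr, nullptr};
    BTNode b{'B', &d, &e}, c{'C', nullptr, nullptr}, a{'A', &b, &c};
    std::cout << height(&a) << std::endl;  // prints 2
    return 0;
}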

 

Notation:

n - number of nodes
e - number of external nodes
i - number of internal nodes
h - height

Properties (of a proper binary tree):

- e = i + 1
- n = 2e − 1
- h ≤ i
- h ≤ (n − 1)/2
- e ≤ 2^h
- h ≥ log2 e
- h ≥ log2 (n + 1) − 1

 

A priority queue stores a collection of items. An item is a pair (key, element). Main methods of the Priority Queue ADT:

- insertItem(k, o): inserts an item with key k and element o
- removeMin(): removes the item with the smallest key and returns its element
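
As a quick illustration (added here, not from the original notes), the same insertItem/removeMin behaviour can be sketched with std::priority_queue, using std::greater so that the item with the smallest key sits on top:

#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main() {
    // (key, element) pairs; std::greater turns the default max-heap into a min-heap,
    // comparing on the key (the first member of the pair) first.
    typedef std::pair<int, std::string> Item;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;

    pq.push(std::make_pair(5, "five"));   // insertItem(5, "five")
    pq.push(std::make_pair(2, "two"));
    pq.push(std::make_pair(9, "nine"));

    std::cout << pq.top().second << std::endl;  // removeMin(): "two"
    pq.pop();
    std::cout << pq.top().second << std::endl;  // next smallest key: "five"
    return 0;
}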

 

A heap is a binary tree storing keys at its internal nodes and satisfying the following properties:

- Heap-Order: for every internal node v other than the root, key(v) ≥ key(parent(v))
- Complete Binary Tree: let h be the height of the heap
  - for i = 0, ..., h − 1, there are 2^i nodes of depth i
  - at depth h − 1, the internal nodes are to the left of the external nodes

The last node of a heap is the rightmost internal node of depth h − 1.

 

A hash function h maps keys of a given type to integers in a fixed interval [0, N − 1].

Example: h(x) = x mod N is a hash function for integer keys. The integer h(x) is called the hash value of key x.

A hash table for a given key type consists of:

- a hash function h
- an array (called the table) of size N

When implementing a dictionary with a hash table, the goal is to store item (k, o) at index i = h(k).

 

A hash function is usually specified as the composition of two functions:

Hash code map: h1: keys → integers
Compression map: h2: integers → [0, N − 1]

The hash code map is applied first, and the compression map is applied next on the result, i.e., h(x) = h2(h1(x)).

The goal of the hash function is to "disperse" the keys in an apparently random way.
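
To make the two-step mapping concrete, here is a minimal sketch (not from the original notes): a polynomial hash code map over the characters of a string key composed with a mod-N compression map. The multiplier 31 and the function names are illustrative choices.

#include <iostream>
#include <string>

// h1: hash code map from string keys to integers (polynomial accumulation).
unsigned long hashCode(const std::string &key) {
    unsigned long h = 0;
    for (std::string::size_type i = 0; i < key.size(); ++i)
        h = 31 * h + static_cast<unsigned char>(key[i]);
    return h;
}

// h2: compression map from integers into the table range [0, N - 1].
int compress(unsigned long code, int N) {
    return static_cast<int>(code % N);
}

// h(x) = h2(h1(x))
int hashIndex(const std::string &key, int N) {
    return compress(hashCode(key), N);
}

int main() {
    const int N = 13;   // table size
    std::cout << hashIndex("apple", N) << std::endl;
    std::cout << hashIndex("plum", N) << std::endl;
    return 0;
}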

 

The dictionary ADT models a searchable collection of key-element items.

The main operations of a dictionary are searching, inserting, and deleting items.

Multiple items with the same key are allowed.

 

 

Merge-sort on an input sequence S with n elements consists of three steps:

- Divide: partition S into two sequences S1 and S2 of about n/2 elements each
- Recur: recursively sort S1 and S2
- Conquer: merge S1 and S2 into a unique sorted sequence

 

 

Algorithm mergeSort(S, C)
    Input: sequence S with n elements, comparator C
    Output: sequence S sorted according to C
    if S.size() > 1
        (S1, S2) ← partition(S, n/2)
        mergeSort(S1, C)
        mergeSort(S2, C)
        S ← merge(S1, S2)
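
A concrete C++ version of the same divide/recur/conquer steps, added here as a sketch; it uses std::inplace_merge for the conquer step rather than a hand-written merge.

#include <algorithm>
#include <iostream>
#include <vector>

// Merge-sort over v[lo, hi): divide in half, recur on both halves, then merge.
void mergeSort(std::vector<int> &v, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1)                      // sequences of size 0 or 1 are sorted
        return;
    std::size_t mid = lo + (hi - lo) / 2;  // divide
    mergeSort(v, lo, mid);                 // recur on S1
    mergeSort(v, mid, hi);                 // recur on S2
    std::inplace_merge(v.begin() + lo,     // conquer: merge the sorted halves
                       v.begin() + mid,
                       v.begin() + hi);
}

int main() {
    int a[] = {5, 2, 9, 1, 7, 3};
    std::vector<int> v(a, a + 6);
    mergeSort(v, 0, v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        std::cout << v[i] << " ";
    std::cout << std::endl;                // 1 2 3 5 7 9
    return 0;
}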

 

 

Quick-sort is a randomized sorting algorithm based on the divide-and-conquer paradigm:

- Divide: pick a random element x (called the pivot) and partition S into
  - L: elements less than x
  - E: elements equal to x
  - G: elements greater than x
- Recur: sort L and G
- Conquer: join L, E and G
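
A short sketch of the three-way split described above (illustrative only; it copies into L, E, G vectors for clarity rather than partitioning in place):

#include <cstdlib>
#include <iostream>
#include <vector>

void quickSort(std::vector<int> &s) {
    if (s.size() <= 1)
        return;
    int pivot = s[std::rand() % s.size()];        // pick a random pivot
    std::vector<int> L, E, G;
    for (std::size_t i = 0; i < s.size(); ++i) {  // divide
        if (s[i] < pivot)       L.push_back(s[i]);
        else if (s[i] == pivot) E.push_back(s[i]);
        else                    G.push_back(s[i]);
    }
    quickSort(L);                                 // recur on L and G
    quickSort(G);
    s.clear();                                    // conquer: join L, E, G
    s.insert(s.end(), L.begin(), L.end());
    s.insert(s.end(), E.begin(), E.end());
    s.insert(s.end(), G.begin(), G.end());
}

int main() {
    int a[] = {5, 2, 9, 1, 7, 3, 2};
    std::vector<int> v(a, a + 7);
    quickSort(v);
    for (std::size_t i = 0; i < v.size(); ++i)
        std::cout << v[i] << " ";
    std::cout << std::endl;                       // 1 2 2 3 5 7 9
    return 0;
}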

 

We represent a set by the sorted sequence of its elements. By specializing the auxiliary methods, the generic merge algorithm can be used to perform basic set operations:

- union
- intersection
- subtraction

The running time of an operation on sets A and B should be at most O(nA + nB), as in the sketch below.
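
The standard library already provides these merge-based set operations on sorted sequences; this added example simply demonstrates them:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    // Sets represented as sorted sequences of their elements.
    int a[] = {1, 3, 5, 7};
    int b[] = {3, 4, 5, 8};
    std::vector<int> A(a, a + 4), B(b, b + 4), out;

    // Each algorithm is a specialization of the generic merge and runs in O(nA + nB).
    std::set_union(A.begin(), A.end(), B.begin(), B.end(), std::back_inserter(out));
    // out = 1 3 4 5 7 8

    out.clear();
    std::set_intersection(A.begin(), A.end(), B.begin(), B.end(), std::back_inserter(out));
    // out = 3 5

    out.clear();
    std::set_difference(A.begin(), A.end(), B.begin(), B.end(), std::back_inserter(out));
    for (std::size_t i = 0; i < out.size(); ++i)
        std::cout << out[i] << " ";   // subtraction A \ B: 1 7
    std::cout << std::endl;
    return 0;
}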

 

Quick-select is a randomized selection algorithm based on the prune-and-search paradigm:

- Prune: pick a random element x (called the pivot) and partition S into
  - L: elements less than x
  - E: elements equal to x
  - G: elements greater than x
- Search: depending on k, either the answer is in E, or we need to recurse in either L or G
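
In practice the same prune-and-search selection is available as std::nth_element, which is typically implemented with a quick-select-style algorithm; this small added example finds the k-th smallest element:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    int a[] = {9, 1, 8, 2, 7, 3, 6};
    std::vector<int> v(a, a + 7);

    std::size_t k = 3;  // select the k-th smallest (0-based), i.e. the 4th smallest here
    // After the call, v[k] holds the value that would be at index k if v were fully sorted.
    std::nth_element(v.begin(), v.begin() + k, v.end());
    std::cout << "4th smallest = " << v[k] << std::endl;  // 6
    return 0;
}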

 

Big O:

f(n) ∈ O(g(n)) if there exist c > 0 and an integer n0 > 0 such that f(n) <= c*g(n) for all n >= n0. For example, 3n + 2 ∈ O(n), taking c = 4 and n0 = 2.

Big Omega:

f(n) ∈ Ω(g(n)) if there exist c > 0 and an integer n0 > 0 such that f(n) >= c*g(n) for all n >= n0.

Big Theta:

f(n) ∈ Θ(g(n)) if there exist c1, c2 > 0 and an integer n0 > 0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.

.NET

TypeScript – A new web technology for web creators.

Introduction to TypeScript

TypeScript is a superset of JavaScript that compiles to idiomatic JavaScript code; it is used for developing industrial-strength, scalable applications. All JavaScript code is TypeScript code, i.e. you can simply copy and paste JS code into a TS file and execute it without any errors.
TypeScript works on any browser, any host, any OS. TypeScript is aligned with emerging standards: classes, modules, arrow functions, and the 'this' keyword implementation all align with ECMAScript 6 proposals.

TypeScript supports most of the module systems implemented in popular JS libraries. CommonJS and AMD are supported; these module systems are compatible in any ECMA environment, and the developer can specify which ECMAScript version the TypeScript should be compiled to, i.e. ECMAScript 3, ECMAScript 5, or ECMAScript 6. All major JavaScript libraries work with TypeScript (Node, underscore, jquery, etc.) through declarations of the required type definitions.
TypeScript supports OOP concepts like private, public, static, and inheritance. TypeScript enables scalable application development and excellent tooling using all popular IDEs like Visual Studio, WebStorm, Atom, Sublime Text, etc.
TypeScript adds zero overhead in performance and execution, since static types completely disappear at runtime.
TypeScript has awesome language features like interfaces, classes, and modules that enable clear contracts between components.

Now I will briefly highlight and explain the special features of TypeScript.

In TypeScript, you have much the same types as you would expect in JavaScript, with a convenient enumeration type added to help things along. The basic types of TypeScript are:

Boolean
var isDone: boolean = false;

Number
var height: number = 6;

String
var name: string = "bob";
name = 'smith';

Array
var list: number[] = [1, 2, 3];
The second way uses a generic array type, Array<elemType>:
var list: Array<number> = [1, 2, 3];

Enum
Enum is a new addition to TypeScript not available in JS; an enum is a way of giving friendly names to sets of numeric values.

enum Color {Red, Green, Blue};
var c: Color = Color.Green;

Any
The 'any' type is a powerful way to work with existing JavaScript, allowing you to gradually opt in and opt out of type checking during compilation.
var notSure: any = 4;
notSure = "maybe a string instead";
notSure = false; // okay, definitely a boolean

You can also use 'any' for mixed types. For example, you may have an array with a mix of different types:

var list: any[] = [1, true, "free"];

list[1] = 100;

Void
'void' is the opposite of 'any', i.e. the absence of having any type at all. You commonly see this as the return type of functions that do not return a value:

function warnUser(): void {
    alert("This is my warning message");
}

TypeScript provides static typing through type annotations to enable type checking at compile time. This is optional and can be ignored to use the regular dynamic typing of JavaScript.

Example of static type checking and emitting errors with optionally statically typed variables:

class Greeter {
    greeting: string;
    constructor (message: string) {
        this.greeting = message;
    }
    greet() : string {
        return "Hello, " + this.greeting;
    }
}

var greeter = new Greeter("Hi");
var result = greeter.greet();

If you modify the above code snippet so that string is replaced by number in the greet() method, you'll see red squiggles in the playground editor when you try this:

greet() : number {
    return "Hello, " + this.greeting;
}

Type Inference in TypeScript:

Type inference flows implicitly in TypeScript code. In TypeScript, there are several places where type inference is used to provide type information when there is no explicit type annotation. For example, in this code:
var x = 3;

If you want TypeScript to determine the type of your variables correctly, do not do this:
1st code snippet:
var localVar;
// Initialization code
localVar = new MyClass();
The type of localVar will be inferred as 'any' instead of MyClass. TypeScript will not complain about it, but you'll not get any static type checking.

Instead, do:
2nd code snippet:

var localVal : MyClass;
localVal = new MyClass();

TypeScript introduces the --noImplicitAny flag to disallow such programs; with it, the 1st code snippet will not compile.
Type inference also works in the opposite direction, known as 'contextual typing'.
Contextual typing applies in many cases. Common cases include arguments to function calls, right-hand sides of assignments, type assertions, members of object and array literals, and return statements. The contextual type also acts as a candidate type in best common type. For example:

function createZoo(): Animal[] {
return [new Rhino(), new Elephant(), new Snake()];
}

In this example, best common type has a set of four candidates: Animal, Rhino, Elephant, and Snake. Of these, Animal can be chosen by the best common type algorithm.

TypeScript File Definitions

The file extension for such a file is .d.ts, where d stands for definition. Type definition files make it possible to enjoy the benefits of type checking, autocompletion, and member documentation. Any file that ends in .d.ts instead of .ts will never generate a corresponding compiled module, so this file extension can also be useful for normal TypeScript modules that contain only interface definitions.

The type system in TypeScript is automatically inferred because lib.d.ts, the main TypeScript definition file, is loaded implicitly. For detailed information on TypeScript definition files, visit http://definitelytyped.org/guides.html.

TypeScript code works with existing JS libraries; TypeScript declaration files (*.d.ts) for most common JS libraries are maintained separately at DefinitelyTyped.org. TypeScript declaration files make it easy to work with existing libraries using the repository available at DefinitelyTyped.org.
TypeScript declaration files (*.d.ts) can be used for debugging and source mapping of TypeScript and JS files, and also for type referencing.
If you want to document your functions, provide documentation in TypeScript declaration files. If you want to reference d.ts files, use the reference comment syntax.

Functions
You can create a function on an instance member of the class, on the prototype, or as a static function

Creating a function on the prototype is easy in TypeScript, which is great since you don't even have to know you are using the prototype.
// TypeScript
class Bike {
    engine: string;
    constructor (engine: string) {
        this.engine = engine;
    }

    kickstart() {
        return "Running " + this.engine;
    }
}
Notice the kickstart function in the TypeScript code. Now look at the emitted JavaScript below, which defines that kickstart function on the prototype.
// JavaScript
var Bike = (function () {
    function Bike(engine) {
        this.engine = engine;
    }
    Bike.prototype.kickstart = function () {
        return "Running " + this.engine;
    };
    return Bike;
})();

 Interface

One of the coolest parts of TypeScript is how it allows you to define complex type definitions in the form of interfaces.
Interfaces are used to implement duck typing. Duck typing is a style of typing in which an object's methods and properties determine the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.

Here are two interfaces with the same structure but completely unrelated semantics:

interface Chicken {
    id: number;
    name: string;
}

interface JetPlane {
    id: number;
    name: string;
}

Then doing the following is completely fine in TypeScript:

var chicken : Chicken = { id: 1, name: 'Thomas' };
var plane: JetPlane = { id: 2, name: 'F 35' };
chicken = plane;

TypeScript interfaces use 'duck typing', also called 'structural subtyping'.


Class

TypeScript classes are the basic unit of abstraction, very similar to C#/Java classes. In TypeScript a class is defined with the keyword "class" followed by the class name. TypeScript classes can contain constructors, fields, properties, and functions.
TypeScript allows developers to define the scope of variables inside classes as "public" or "private".
It's important to note that the "public"/"private" keywords are only available in TypeScript.

When using the class keyword in TypeScript, you are actually creating two things with the same identifier:

a TypeScript interface containing all the instance methods and properties of the class; and
a JavaScript variable with a different (anonymous) constructor function type.

Creating a Class

You can create a class and even add fields, properties, constructors, and functions (static, prototype, instance based). The basic syntax for a class is as follows:
// TypeScript
class Car {
// Property (public by default)
engine: string;

// Constructor
// (accepts a value so you can initialize engine)
constructor(engine: string) {
this.engine = engine;
}
}

The property could be made private by prefixing the definition with the keyword private. Inside the constructor the engine property is referred to using the this keyword.

Inheritance in TypeScript
TypeScript's extends keyword provides a simple and convenient way to inherit functionality from a base class (or extend an interface).

CLASS INHERITANCE:

// TypeScript
class Vehicle {
    engine: string;
    constructor(engine: string) {
        this.engine = engine;
    }
}

class Truck extends Vehicle {
    bigTires: boolean;
    constructor(engine: string, bigTires: boolean) {
        super(engine);
        this.bigTires = bigTires;
    }
}

When inheritance is used, the compiler injects extra code, beyond what the developer wrote, to implement the inheritance.

TypeScript emits JavaScript that helps extend the class definitions, using the __extends variable. This helps take care of some of the heavy lifting on the JavaScript side.
var __extends = this.__extends || function (d, b) {
function __() { this.constructor = d; }
__.prototype = b.prototype;
d.prototype = new __();
};
var Vehicle = (function () {
function Vehicle(engine) {
this.engine = engine;
}
return Vehicle;
})();
var Truck = (function (_super) {
__extends(Truck, _super);
function Truck(engine, bigTires) {
_super.call(this, engine);
this.bigTires = bigTires;
}
return Truck;
})(Vehicle);

Modules

One easy way to help maintain code re-use and organize your code is with modules. There are patterns such as the Revealing Module Pattern (RMP) in JavaScript that make
this quite simple, but the good news is that in TypeScript modules become even easier with the module keyword (from the proposed ECMAScript 6 spec).

However, it is important to know how your code will be treated if you ignore modules: you end up back with spaghetti.

Modules can provide functionality that is only visible inside the module, and they can provide functionality that is visible from the outside using the export keyword.

TypeScript categorizes modules into internal and external modules.

TypeScript has the ability to take advantage of a pair of JavaScript modularization standards – CommonJS and Asynchronous Module Definition (AMD).
These capabilities allow projects to be organized in a manner similar to what a "mature," traditional server-side OO language provides.
This is particularly useful for huge, scalable web applications.

TypeScript Internal modules are TypeScript’s own approach to modularize your code.
TypeScript Internal modules can span across multiple files, effectively creating a namespace.

There is no runtime module loading mechanism, you have to load the modules using <script/> tags in your code.
Alternatively, you can compile all TypeScript files into one big JavaScript file that you include using a single <script/> tag.

External modules leverage a runtime module loading mechanism. You have the choice between CommonJS and AMD.
CommonJS is used by node.js, whereas RequireJS is a prominent implementation of AMD often used in browser environments.
When using external modules, files become modules. The modules can be structured using folders and sub folders.

Benefits of Modules:
Scoping of variables (out of global scope)
Code re-use
AMD or CommonJS support
Encapsulation
Don’t Repeat Yourself (DRY)
Easier for testing

The export keyword in TypeScript:
You can make internal aspects of the module accessible outside of the module using the export keyword.

You can also extend internal modules, share them across files, and reference them using the triple-slash syntax:
///<reference path="shapes.ts"/>

Lambda or Arrow function expression :
TypeScript introduces lambda expressions, which are cool in themselves, but to make 'this' behave it also automates the that-equals-this pattern.

The TypeScript code:
var myFunction = f => { this.x = "x"; }

is compiled into this piece of JavaScript, automatically creating the that-equals-this pattern:

var _this = this;
var myFunction = function (f) {
    _this.x = "x";
};

Arrow function expressions are a compact form of function expressions that lexically scope 'this'.
You can define an arrow function expression by omitting the function keyword and using the lambda syntax =>.

Here is a simple TypeScript function that calculates the interest earned on deposited funds:

var calculateInterest = function (amount, interestRate, duration) {
    return amount * interestRate * duration / 12;
}

Using an arrow function expression we can define this function alternatively as follows:

var calculateInterest2 = (amount, interestRate, duration) => {
    return amount * interestRate * duration / 12;
}

Standard JavaScript functions dynamically bind 'this' depending on the execution context; arrow functions, on the other hand, preserve the 'this' of the enclosing context.
This is a conscious design decision, as arrow functions in ECMAScript 6 are meant to address some problems associated with a dynamically bound 'this' (e.g. the function invocation pattern).

Being primarily an OOP developer, I find lambda expressions in TypeScript an extremely useful and compact way to express anonymous methods.
Bringing this syntax to JavaScript through TypeScript is definitely a win for me.

There are lots of awesome features in TypeScript; I will cover the rest of them in the next post.

Hope this post brings a lot of interest in developing apps using TypeScript.

.NET

SQL Server Database Programming Part 2

Managing Transactions

  1. A transaction is a set of actions that make up an atomic unit of work and must succeed or fail as a whole
  2. By default, implicit transactions are not enabled. When implicit transactions are enabled, a number of statements automatically begin a transaction. The developer must execute a COMMIT or ROLLBACK statement to complete the transaction.
  3. Explicit transactions start with a BEGIN TRANSACTION statement and are completed by either a ROLLBACK TRANSACTION or COMMIT TRANSACTION statement.
  4. Issuing a ROLLBACK command when transactions are nested rolls back all transactions to the outermost BEGIN TRANSACTION statement, regardless of previously issued COMMIT statements for nested transactions.
  5. SQL Server uses a variety of lock modes, including shared (S), exclusive (X), and intent (IS, IX, SIX) to manage data consistency while multiple transactions are being processed concurrently.
  6. SQL Server 2008 supports the READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SNAPSHOT, and SERIALIZABLE isolation levels.

Working with Tables and Data Types

  1. Creating tables is about more than just defining columns. It is very important to choose the right data type and to implement data integrity.
  2. You need to know the details of how the different data types behave before you can use them correctly.
  3. Data integrity needs to be a part of your table definition from the beginning to make sure that you protect your data from faults.

Common Table Expressions (CTE)

  1. A recursive CTE contains two SELECT statements within the WITH clause, separated by the UNION ALL keyword. The first query defines the anchor for the recursion, and the second query defines the data set that is to be iterated across.
  2. If a CTE is contained within a batch, all statements preceding the WITH clause must be terminated with a semicolon.
  3. The outer query references the CTE and specifies the maximum recursion.

Subqueries

  1. Noncorrelated subqueries are independent queries that are embedded within an outer query and are used to retrieve a scalar value or list of values that can be consumed by the outer query to make code more dynamic.
  2. Correlated subqueries are queries that are embedded within an outer query but reference values within the outer query.

Ranking Functions

  1. ROW_NUMBER is used to number rows sequentially in a result set but might not produce identical results if there are ties in the column(s) used for sorting.
  2. RANK numbers a tie with identical values but can produce gaps in a sequence.
  3. DENSE_RANK numbers ties with identical values but does not produce gaps in the sequence.
  4. NTILE allows you to divide a result set into approximately equal-sized groups.

Stored Procedures

  1. A stored procedure is a batch of T-SQL code that is given a name and is stored within a database.
  2. You can pass parameters to a stored procedure either by name or by position. You can also return data from a stored procedure using output parameters.
  3. You can use the EXECUTE AS clause to cause a stored procedure to execute under a specific security context.
  4. Cursors allow you to process data on a row by row basis; however, if you are making the same modification to every row within a cursor, a set-oriented approach is more efficient.
  5. A TRY...CATCH block delivers structured error handling to your procedures.

User Defined Functions

  1. You can create scalar functions, inline table-valued functions, and multi-statement table-valued functions.
  2. With the exception of inline table-valued functions, the function body must be enclosed within a BEGIN...END block.
  3. All functions must terminate with a RETURN statement.
  4. Functions are not allowed to change the state of a database or of a SQL Server instance.

Triggers

  1. Triggers are specialized stored procedures that automatically execute in response to a DDL or DML event.
  2. You can create three types of triggers: DML, DDL, and logon triggers.
  3. A DML trigger executes when an INSERT, UPDATE, or DELETE statement for which the trigger is coded occurs.
  4. A DDL trigger executes when a DDL statement for which the trigger is coded occurs.
  5. A logon trigger executes when there is a logon attempt.
  6. You can access the inserted and deleted tables within a DML trigger.
  7. You can access the XML document provided by the EVENTDATA function within a DDL or logon trigger.

Views

  1. A view is a name for a SELECT statement stored within a database.
  2. A view has to return a single result set and cannot reference variables or temporary tables.
  3. You can update data through a view so long as the data modification can be resolved to a specific set of rows in an underlying table.
  4. If a view does not meet the requirements for allowing data modifications, you can create an INSTEAD OF trigger to process the data modification instead.
  5. You can combine multiple tables that have been physically partitioned using a UNION ALL statement to create a partitioned view.
  6. A distributed partitioned view uses linked servers to combine multiple member tables across SQL Server instances.
  7. You can create a unique, clustered index on a view to materialize the result set for improved query performance.

Queries Tuning

  1. Understanding how queries are logically constructed is important to knowing that they correctly return the intended result.
  2. Understanding how queries are logically constructed helps you understand what physical constructs (like indexes) help the query execute faster.
  3. Make sure you understand your metrics when you measure performance.

Indexes

  1. Indexes typically help read performance but can hurt write performance.
  2. Indexed views can increase performance even more than indexes, but they are restrictive and typically cannot be created for the entire query.
  3. Deciding which columns to put in the index key and which should be implemented as included columns is important.
  4. Analyze which indexes are actually being used and drop the ones that aren’t. This saves storage space and minimizes the resources used to maintain indexes for write operations.

Working with XML

  1. XML can be generated using a SELECT statement in four different modes: FOR XML RAW, FOR XML AUTO, FOR XML PATH, and FOR XML EXPLICIT.
  2. FOR XML PATH is typically the preferred mode used to generate XML.
  3. The XML data type can be either typed (validated by an XML schema collection) or untyped.
  4. In an untyped XML data type, all values are always interpreted as strings.
  5. You can use the value, query, exist, nodes, and modify methods to query and alter instances of the XML data type.

SQLCLR and FileStream

  1. To use user-defined objects based on SQLCLR, SQLCLR must be enabled on the SQL Server instance.
  2. The objects most suitable for development using SQLCLR are UDFs and user-defined aggregates.
  3. If you create UDTs based on SQLCLR, make sure that you test them thoroughly.
  4. Consider using Filestream if the relevant data mostly involves storing streams larger than 1 MB.

Spatial Data Types

  1. The geography and geometry data types provide you with the ability to work with spatial data with system-defined data types rather than having to define your own CLR data types.
  2. You can instantiate spatial data by using any of the spatial methods included with SQL Server 2008.

Full-Text Search

  1. SQL Server 2008 provides fully integrated full-text search capabilities.
  2. Full-text indexes are created and maintained inside the database and are organized into virtual full-text catalogs.
  3. The CONTAINS and FREETEXT predicates, as well as the CONTAINSTABLE and FREETEXTTABLE functions, allow you to fully query text, XML, and certain forms of binary data.

Service Broker Solutions

  1. Service Broker provides reliable asynchronous messaging capabilities for your SQL Server instance.
  2. You need to configure the Service Broker components for your solution. These components might include message types, contracts, services, queues, dialogs, and conversation priorities.
  3. You use the BEGIN DIALOG, SEND, and RECEIVE commands to control individual conversations between two services.

Database Mail

  1. Database Mail was introduced in SQL Server 2005 and should be used in place of SQL Mail.
  2. Database Mail is disabled by default to minimize the surface area of the server.
  3. You should use the sp_send_dbmail system stored procedure to integrate Database Mail with your applications.
  4. A wide variety of arguments allows you to customize the e-mail messages and attachments sent from the database server.

Windows PowerShell

  1. SQL Server PowerShell is a command-line shell and scripting environment, based on Windows PowerShell.
  2. SQL Server PowerShell uses a hierarchy to represent how objects are related to each other.
  3. The three folders that exist in the SQL Server PowerShell provider are SQLSERVER:\SQL, SQLSERVER:\SQLPolicy, and SQLSERVER:\SQLRegistration.
  4. You can browse the hierarchy by using either the cmdlet names or their aliases.

Change Data Capture (CDC)

  1. Change tracking is enabled first at the database and then at the table level.
  2. Change tracking can tell you what rows have been modified and provide you with the end result of the data.
  3. Change tracking requires fewer system resources than CDC.
  4. CDC can tell you what rows have been modified and provide you with the final data as well as the intermediate states of the data.
  5. SQL Server Audit allows you to log access to tables, views, and other objects.