Category: .NET


TypeScript – A New Web Technology for Web Creators

Introduction to TypeScript

TypeScript is a superset of JavaScript that compiles to idiomatic JavaScript code; it is designed for developing industrial-strength, scalable applications. All JavaScript code is TypeScript code; you can simply copy and paste JS code into a TS file and execute it without any errors.
TypeScript works in any browser, on any host, on any OS. TypeScript is aligned with emerging standards: classes, modules, arrow functions, and the 'this' keyword implementation all align with ECMAScript 6 proposals.

TypeScript supports most of the module systems implemented in popular JS libraries. Both CommonJS and AMD are supported; these module systems are compatible with any ECMAScript environment, and the developer can specify which ECMAScript version the TypeScript should be compiled to, i.e. ECMAScript 3, ECMAScript 5, or ECMAScript 6. All major JavaScript libraries (Node, Underscore, jQuery, etc.) work with TypeScript by declaring the required type definitions.
TypeScript supports OOP concepts like private, public, static, and inheritance. TypeScript enables scalable application development and excellent tooling in all popular IDEs, including Visual Studio, WebStorm, Atom, and Sublime Text.
TypeScript adds zero overhead in performance and execution, since static types completely disappear at runtime.
TypeScript has awesome language features: interfaces, classes, and modules enable clear contracts between components.

Now I will briefly highlight and explain some of the special features of TypeScript:

In TypeScript, you have much the same types as you would expect in JavaScript, with a convenient enumeration type added to help things along. The basic types of TypeScript are:

var isDone: boolean = false;

var height: number = 6;

var name: string = "bob";
name = 'smith';

Array types can be written in one of two ways. The first uses the type of the elements followed by []:

var list: number[] = [1, 2, 3];

The second way uses a generic array type, Array<elemType>:

var list: Array<number> = [1, 2, 3];

The enum is a new addition in TypeScript, not available in JS:
an enum is a way of giving friendly names to sets of numeric values.

enum Color {Red, Green, Blue};
var c: Color = Color.Green;

The ‘any’ type is a powerful way to work with existing JavaScript, allowing you
to gradually opt-in and opt-out of type-checking during compilation.
var notSure: any = 4;
notSure = "maybe a string instead";
notSure = false; // okay, definitely a boolean

You can also use 'any' for mixed types. For example, you may have an array that holds a mix of different types:

var list:any[] = [1, true, “free”];

list[1] = 100;

'void' is the opposite of 'any', i.e. the absence of having any type at all. You will commonly see it as the return type of functions that do not return a value:

function warnUser(): void {
    alert("This is my warning message");
}

TypeScript provides static typing through type annotations, enabling type checking at compile time. This is optional: annotations can be omitted to fall back to the regular dynamic typing of JavaScript.

Example of static type checking and the errors it emits, using optionally statically typed variables:

class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet(): string {
        return "Hello, " + this.greeting;
    }
}

var greeter = new Greeter("Hi");
var result = greeter.greet();

If you modify the above code snippet so that string is replaced by number in the greet() method, you'll see red squiggles in the playground editor:

greet(): number {
    return "Hello, " + this.greeting; // error: string is not assignable to number
}

Type Inference in TypeScript:

Type inference flows implicitly in TypeScript code
In TypeScript, there are several places where type inference is used to provide type information when there is no explicit type annotation. For example, in this code:

var x = 3;

the type of x is inferred to be number.

If you want TypeScript to determine the type of your variables correctly, do not do:
1st Code snippet:
var localVar;
// Initialization code
localVar = new MyClass();
The type of localVar will be inferred to be 'any' instead of MyClass. TypeScript will not complain about it, but you will not get any static type checking.

Instead, do:
2nd Code snippet:

var localVar: MyClass;
localVar = new MyClass();

TypeScript provides the --noImplicitAny compiler flag to disallow such programs. With it, the 1st code snippet will not compile.
Type inference also works in the opposite direction, known as 'contextual typing'.
Contextual typing applies in many cases. Common cases include arguments to function calls, right-hand sides of assignments, type assertions, members of object and array literals, and return statements. The contextual type also acts as a candidate type in best common type. For example:

function createZoo(): Animal[] {
    return [new Rhino(), new Elephant(), new Snake()];
}

In this example, best common type has a set of four candidates: Animal, Rhino, Elephant, and Snake. Of these, Animal can be chosen by the best common type algorithm.
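As a small illustrative sketch (not from the original examples), contextual typing lets an unannotated parameter pick up its type from the annotated left-hand side of an assignment:

```typescript
// 'msg' carries no annotation, but it is contextually typed as string
// because of the annotation on 'handler'.
var handler: (msg: string) => number;
handler = (msg) => msg.length;
```

Changing the body to, say, `msg * 2` would be flagged, because `msg` is known to be a string.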

TypeScript Definition Files

TypeScript supports definition files, whose extension is .d.ts, where d stands for definition. Type definition files make it possible to enjoy
the benefits of type checking, autocompletion, and member documentation. Any file that ends in .d.ts instead of .ts will never
generate a corresponding compiled module, so this file extension can also be useful for normal TypeScript modules that contain
only interface definitions.

Much of the type system in TypeScript works automatically because lib.d.ts, the main TypeScript definition file, is loaded implicitly.

TypeScript code works with existing JS libraries. TypeScript declaration files (*.d.ts) for most of the common JS libraries are
maintained separately in a community repository, which makes it easy to work with existing libraries.
Declaration files (*.d.ts) can also be used for debugging and source mapping between TypeScript and JS files, and for type referencing.
If you want to document your functions, provide the documentation in TypeScript declaration files; if you want to reference d.ts files, use
the reference comment syntax.
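As a rough sketch (the file and function names here are hypothetical), a hand-written declaration for an existing JS function might look like this:

```typescript
// calc.d.ts — hypothetical declaration for a function that already
// exists in plain JavaScript at runtime.

/** Calculates simple interest; this JSDoc surfaces in editor tooltips. */
declare function calculateInterest(amount: number, rate: number, months: number): number;
```

A consuming .ts file would pull this in with the reference comment syntax and could then call calculateInterest with full type checking.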

You can create a function on an instance member of the class, on the prototype, or as a static function.

Creating a function on the prototype is easy in TypeScript, which is great since you don't even have to know you are using the prototype.

// TypeScript
class Bike {
    engine: string;
    constructor(engine: string) {
        this.engine = engine;
    }
    kickstart() {
        return "Running " + this.engine;
    }
}

Notice the kickstart function in the TypeScript code. Now look at the emitted JavaScript below, which defines that kickstart function on the prototype.

// JavaScript
var Bike = (function () {
    function Bike(engine) {
        this.engine = engine;
    }
    Bike.prototype.kickstart = function () {
        return "Running " + this.engine;
    };
    return Bike;
})();


One of the coolest parts of TypeScript is how it allows you to define complex type definitions in the form of interfaces.
Interfaces are used to implement duck typing: a style of typing in which an object's methods and properties determine
the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.

Say I have two interfaces with the same structure but completely unrelated semantics:

interface Chicken {
    id: number;
    name: string;
}

interface JetPlane {
    id: number;
    name: string;
}

Then doing the following is completely fine in TypeScript:

var chicken: Chicken = { id: 1, name: 'Thomas' };
var plane: JetPlane = { id: 2, name: 'F 35' };
chicken = plane;

TypeScript interfaces use 'duck typing', also known as 'structural subtyping'.




TypeScript classes are the basic unit of abstraction, very similar to C#/Java classes. In TypeScript a class is defined with the
keyword "class" followed by the class name. TypeScript classes can contain constructors, fields, properties, and functions.
TypeScript allows developers to define the scope of members inside classes as "public" or "private".
It's important to note that the "public"/"private" keywords exist only in TypeScript; they leave no trace in the compiled JavaScript.

When using the class keyword in TypeScript, you are actually creating two things with the same identifier:

A TypeScript interface containing all the instance methods and properties of the class; and
A JavaScript variable holding an (anonymous) constructor function
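A brief sketch of this dual nature (the class name is illustrative):

```typescript
class Greeter {
    greet() { return "hello"; }
}

// 1. 'Greeter' as a type describes instances:
var g: Greeter = new Greeter();

// 2. 'Greeter' as a value is the constructor function itself;
//    its type is written 'typeof Greeter'.
var ctor: typeof Greeter = Greeter;
var g2 = new ctor();
```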

Creating a Class

You can create a class and even add fields, properties, constructors, and functions (static, prototype, instance based). The basic syntax for a class is as follows:
// TypeScript
class Car {
    // Property (public by default)
    engine: string;

    // Constructor
    // (accepts a value so you can initialize engine)
    constructor(engine: string) {
        this.engine = engine;
    }
}

The property could be made private by prefixing the definition with the keyword private. Inside the constructor the engine property is referred to using the this keyword.
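For instance, a minimal sketch of a private field exposed through a public method (the accessor name is my own):

```typescript
class Car {
    private engine: string;

    constructor(engine: string) {
        this.engine = engine;
    }

    // public accessor for the private field
    getEngine(): string {
        return this.engine;
    }
}

var car = new Car("V8");
// car.engine here would be a compile-time error; use car.getEngine() instead
```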

Inheritance in TypeScript
The TypeScript extends keyword provides a simple and convenient way to inherit functionality from a base class (or extend an interface).

// TypeScript
class Vehicle {
    engine: string;
    constructor(engine: string) {
        this.engine = engine;
    }
}

class Truck extends Vehicle {
    bigTires: boolean;
    constructor(engine: string, bigTires: boolean) {
        super(engine);
        this.bigTires = bigTires;
    }
}

When inheritance is used, the compiler injects extra code, beyond what the developer wrote, to implement the inheritance.

TypeScript emits JavaScript that helps extend the class definitions, using the __extends variable. This helps take care of some of the heavy lifting on the JavaScript side.
var __extends = this.__extends || function (d, b) {
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
var Vehicle = (function () {
    function Vehicle(engine) {
        this.engine = engine;
    }
    return Vehicle;
})();
var Truck = (function (_super) {
    __extends(Truck, _super);
    function Truck(engine, bigTires) {, engine);
        this.bigTires = bigTires;
    }
    return Truck;
})(Vehicle);


One easy way to maintain code reuse and keep your code organized is with modules. Patterns such as the Revealing Module Pattern (RMP) make
this quite simple in plain JavaScript, but the good news is that in TypeScript modules become even easier with the module keyword (from the proposed ECMAScript 6 spec).

However, it is important to know how your code will be treated if you ignore modules: you end up back with spaghetti code.

Modules can provide functionality that is only visible inside the module, and they can provide functionality that is visible from the outside using the export keyword.

TypeScript categorizes modules into internal and external modules.

TypeScript has the ability to take advantage of a pair of JavaScript modularization standards – CommonJS and Asynchronous Module Definition (AMD).
These capabilities allow for projects to be organized in a manner similar to what a “mature,” traditional server-side OO language provides.
This is particularly useful for huge, scalable web applications.

Internal modules are TypeScript's own approach to modularizing your code.
They can span multiple files, effectively creating a namespace.

There is no runtime module loading mechanism for internal modules; you have to load them using <script/> tags in your page.
Alternatively, you can compile all TypeScript files into one big JavaScript file that you include using a single <script/> tag.

External modules leverage a runtime module loading mechanism. You have the choice between CommonJS and AMD.
CommonJS is used by Node.js, whereas RequireJS is a prominent implementation of AMD, often used in browser environments.
When using external modules, files become modules. The modules can be structured using folders and sub folders.

Benefits of Modules:
Scoping of variables (out of global scope)
Code re-use
AMD or CommonJS support
Don’t Repeat Yourself (DRY)
Easier for testing

The export keyword in TypeScript
You can make internal aspects of a module accessible outside of the module using the export keyword.
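A minimal sketch of an internal module (the module and class names are illustrative):

```typescript
module Shapes {
    // exported: visible outside the module as Shapes.Circle
    export class Circle {
        constructor(public radius: number) { }
        area(): number {
            return Math.PI * this.radius * this.radius;
        }
    }

    // not exported: visible only inside the module
    var helperNote = "internal only";
}

var circle = new Shapes.Circle(1);
```

Everything not marked export stays private to the module, keeping it out of the global scope.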

You can also extend internal modules, share them across files, and reference them using the triple-slash syntax:
///<reference path="shapes.ts"/>

Lambda (arrow function) expressions:
TypeScript introduces lambda expressions, which are cool in themselves, but to make 'this' behave as expected it also automates the that-equals-this pattern.

The TypeScript code:
var myFunction = f => { this.x = "x"; }

Is compiled into this piece of JavaScript, automatically creating the that-equals-this pattern:

var _this = this;
var myFunction = function (f) {
    _this.x = "x";
};

Arrow function expressions are a compact form of function expressions that lexically bind 'this'.
You define an arrow function expression by omitting the function keyword and using the lambda syntax =>.

Here is a simple TypeScript function that calculates the interest earned by deposited funds:

var calculateInterest = function (amount, interestRate, duration) {
    return amount * interestRate * duration / 12;
};

Using an arrow function expression we can define this function alternatively as follows:

var calculateInterest2 = (amount, interestRate, duration) => {
    return amount * interestRate * duration / 12;
};

Standard JavaScript functions dynamically bind 'this' depending on the execution context;
arrow functions, on the other hand, preserve the 'this' of the enclosing context.
This is a conscious design decision: arrow functions in ECMAScript 6 are meant to address
some of the problems associated with dynamically bound 'this' (e.g. when using the function invocation pattern).
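A small sketch of the difference (the class and method names are my own):

```typescript
class Counter {
    count: number = 0;

    getIncrementer(): () => void {
        // The arrow function captures 'this' lexically, so the returned
        // function still refers to this Counter instance when called later.
        // A plain 'function () { this.count++; }' here would instead take
        // its 'this' from however the caller invokes it.
        return () => { this.count++; };
    }
}

var counter = new Counter();
var inc = counter.getIncrementer();
inc();
inc();
```

After the two calls, counter.count is 2, even though inc was invoked as a bare function.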

Being primarily an OOP developer, I find lambda expressions in TypeScript an extremely useful and
compact way to express anonymous methods.
Bringing this syntax to JavaScript through TypeScript is definitely a win for me.

There are lots more awesome features in TypeScript; I will cover them in the next post.

I hope this post sparks plenty of interest in developing apps using TypeScript.


EventFul Days Ahead for .NET Developers

These are festive times ahead for all .NET developers. With lots of launch events coming in the next two quarters, it is going to be a hectic period full of pleasant surprises for all Microsoft developers. I think this October will see some of the major events of the post-Bill Gates era, so every Microsoft fan will be anxious to watch and enjoy the events unfolding in the coming months. The following are some of the major events I will be keenly following:

  1. Nokia Windows Phone 8 Launch
  2. Samsung Windows Phone 8 Launch
  3. Visual Studio 2012 Virtual Event
  4. Windows 8

To keep pace with the range of products and development tools that Microsoft will make available in the coming two quarters, visit some of Microsoft's resources for developing world-class .NET software and Windows 8 apps.

With these awesome resources, Microsoft has clearly made sure there are enough tools for developers to create and monetize their work in the Windows Store.

So let's start .NETting Windows Phone and Windows 8 Metro apps.


SQL Server Database Programming Part 2

Managing Transactions

  1. A transaction is a set of actions that make up an atomic unit of work and must succeed or fail as a whole
  2. By default, implicit transactions are not enabled. When implicit transactions are enabled, a number of statements automatically begin a transaction. The developer must execute a COMMIT or ROLLBACK statement to complete the transaction.
  3. Explicit transactions start with a BEGIN TRANSACTION statement and are completed by either a ROLLBACK TRANSACTION or COMMIT TRANSACTION statement.
  4. Issuing a ROLLBACK command when transactions are nested rolls back all transactions to the outermost BEGIN TRANSACTION statement, regardless of previously issued COMMIT statements for nested transactions.
  5. SQL Server uses a variety of lock modes, including shared (S), exclusive (X), and intent (IS, IX, SIX) to manage data consistency while multiple transactions are being processed concurrently.

Working with Tables and Data Types

  1. Creating tables is about more than just defining columns. It is very important to choose the right data type and to implement data integrity.
  2. You need to know the details of how the different data types behave before you can use them correctly.
  3. Data integrity needs be a part of your table definition from the beginning to make sure that you protect your data from faults.

Common Table Expressions (CTE)

  1. A recursive CTE contains two SELECT statements within the WITH clause, separated by the UNION ALL keyword. The first query defines the anchor for the recursion, and the second query defines the data set that is to be iterated across.
  2. If a CTE is contained within a batch, all statements preceding the WITH clause must be terminated with a semicolon.
  3. The outer query references the CTE and specifies the maximum recursion.


Subqueries

  1. Noncorrelated subqueries are independent queries that are embedded within an outer query and are used to retrieve a scalar value or list of values that can be consumed by the outer query to make code more dynamic.
  2. Correlated subqueries are queries that are embedded within an outer query but reference values within the outer query.

Ranking Functions

  1. ROW_NUMBER is used to number rows sequentially in a result set but might not produce identical results if there are ties in the column(s) used for sorting.
  2. RANK numbers a tie with identical values but can produce gaps in a sequence.
  3. DENSE_RANK numbers ties with identical values but does not produce gaps in the sequence.
  4. NTILE allows you to divide a result set into approximately equal-sized groups.

Stored Procedures

  1. A stored procedure is a batch of T-SQL code that is given a name and is stored within a database.
  2. You can pass parameters to a stored procedure either by name or by position. You can also return data from a stored procedure using output parameters.
  3. You can use the EXECUTE AS clause to cause a stored procedure to execute under a specific security context.
  4. Cursors allow you to process data on a row by row basis; however, if you are making the same modification to every row within a cursor, a set-oriented approach is more efficient.
  5. A TRY. . .CATCH block delivers structured error handling to your procedures.

User Defined Functions

  1. You can create scalar functions, inline table-valued functions, and multi-statement table-valued functions.
  2. With the exception of inline table-valued functions, the function body must be enclosed within a BEGIN. . .END block.
  3. All functions must terminate with a RETURN statement.
  4. Functions are not allowed to change the state of a database or of a SQL Server instance.


Triggers

  1. Triggers are specialized stored procedures that automatically execute in response to a DDL or DML event.
  2. You can create three types of triggers: DML, DDL, and logon triggers.
  3. A DML trigger executes when an INSERT, UPDATE, or DELETE statement for which the trigger is coded occurs.
  4. A DDL trigger executes when a DDL statement for which the trigger is coded occurs.
  5. A logon trigger executes when there is a logon attempt.
  6. You can access the inserted and deleted tables within a DML trigger.
  7. You can access the XML document provided by the EVENTDATA function within a DDL or logon trigger.


Views

  1. A view is a name for a SELECT statement stored within a database.
  2. A view has to return a single result set and cannot reference variables or temporary tables.
  3. You can update data through a view so long as the data modification can be resolved to a specific set of rows in an underlying table.
  4. If a view does not meet the requirements for allowing data modifications, you can create an INSTEAD OF trigger to process the data modification instead.
  5. You can combine multiple tables that have been physically partitioned using a UNION ALL statement to create a partitioned view.
  6. A distributed partitioned view uses linked servers to combine multiple member tables across SQL Server instances.
  7. You can create a unique, clustered index on a view to materialize the result set for improved query performance.

Queries Tuning

  1. Understanding how queries are logically constructed is important to knowing that they correctly return the intended result.
  2. Understanding how queries are logically constructed helps you understand what physical constructs (like indexes) help the query execute faster.
  3. Make sure you understand your metrics when you measure performance.


Indexes

  1. Indexes typically help read performance but can hurt write performance.
  2. Indexed views can increase performance even more than indexes, but they are restrictive and typically cannot be created for the entire query.
  3. Deciding which columns to put in the index key and which should be implemented as included columns is important.
  4. Analyze which indexes are actually being used and drop the ones that aren’t. This saves storage space and minimizes the resources used to maintain indexes for write operations.

Working with XML

  1. XML can be generated using a SELECT statement in four different modes: FOR XML RAW, FOR XML AUTO, FOR XML PATH, and FOR XML EXPLICIT.
  2. FOR XML PATH is typically the preferred mode used to generate XML.
  3. The XML data type can be either typed (validated by an XML schema collection) or untyped.
  4. In an untyped XML data type, all values are always interpreted as strings.
  5. You can use the value, query, exist, nodes, and modify methods to query and alter instances of the XML data type.

SQLCLR and FileStream

  1. To use user-defined objects based on SQLCLR, SQLCLR must be enabled on the SQL Server instance.
  2. The objects most suitable for development using SQLCLR are UDFs and user-defined aggregates.
  3. If you create UDTs based on SQLCLR, make sure that you test them thoroughly.
  4. Consider using Filestream if the relevant data mostly involves storing streams larger than 1 MB.

Spatial Data Types

  1. The geography and geometry data types provide you with the ability to work with spatial data with system-defined data types rather than having to define your own CLR data types.
  2. You can instantiate spatial data by using any of the spatial methods included with SQL Server 2008.

Full-Text Search

  1. SQL Server 2008 provides fully integrated full-text search capabilities.
  2. Full-text indexes are created and maintained inside the database and are organized into virtual full-text catalogs.
  3. The CONTAINS and FREETEXT predicates, as well as the CONTAINSTABLE and FREETEXTTABLE functions, allow you to fully query text, XML, and certain forms of binary data.

Service Broker Solutions

  1. Service Broker provides reliable asynchronous messaging capabilities for your SQL Server instance.
  2. You need to configure the Service Broker components for your solution. These components might include message types, contracts, services, queues, dialogs, and conversation priorities.
  3. You use the BEGIN DIALOG, SEND, and RECEIVE commands to control individual conversations between two services.

Database Mail

  1. Database Mail was introduced in SQL Server 2005 and should be used in place of SQL Mail.
  2. Database Mail is disabled by default to minimize the surface area of the server.
  3. You should use the sp_send_dbmail system stored procedure to integrate Database Mail with your applications.
  4. A wide variety of arguments allows you to customize the e-mail messages and attachments sent from the database server.

Windows PowerShell

  1. SQL Server PowerShell is a command-line shell and scripting environment, based on Windows PowerShell.
  2. SQL Server PowerShell uses a hierarchy to represent how objects are related to each other.
  3. The three folders that exist in the SQL Server PowerShell provider are SQLSERVER:\SQL, SQLSERVER:\SQLPolicy, and SQLSERVER:\SQLRegistration.
  4. You can browse the hierarchy by using either the cmdlet names or their aliases.

Change Data Capture (CDC)

  1. Change tracking is enabled first at the database and then at the table level.
  2. Change tracking can tell you what rows have been modified and provide you with the end result of the data.
  3. Change tracking requires fewer system resources than CDC.
  4. CDC can tell you what rows have been modified and provide you with the final data as well as the intermediate states of the data.
  5. SQL Server Audit allows you to log access to tables, views, and other objects.

Fundamentals of SQL Server Database Programming Part 1

SELECT Statement:

  1. To retrieve data from a table or view, the SELECT statement is used.
  2. To filter the result set, a WHERE clause is added to the SELECT statement.
  3. To sort the result set, the ORDER BY clause is used with the SELECT statement.
  4. Manipulation and formatting of the result set is done using concatenation, aliases, and string literals.


Joins

  1. To retrieve columns from related tables and group the results into a single result set, the JOIN clause is used.
  2. The different types of JOINs are: INNER, LEFT OUTER, RIGHT OUTER, FULL OUTER, and CROSS.
  3. JOIN operators can combine more than two tables.
  4. Using different aliases for the same table, a table can be joined to itself.




Aggregating Data

  1. Aggregate functions perform calculations on expressions that are provided as input to the function.
  2. Use the GROUP BY clause when aggregation needs to be applied based on the data in specific rows rather than the entire table.
  3. You can include in the GROUP BY clause any columns that are listed in a SELECT, WHERE, or ORDER BY clause.
  4. To provide additional summary information, use ROLLUP and CUBE.
  5. Use the GROUPING function to identify the rows that hold summary data produced by the ROLLUP or CUBE operators.
  6. To provide enhanced readability for your GROUP BY queries, use GROUPING SETS.



Combining Datasets

  1. The UNION operator is used to combine the result sets from two or more SELECT statements.
  2. The EXCEPT operator extracts rows that are in the left SELECT statement and do not have matching rows in the right SELECT statement.
  3. The INTERSECT operator returns only rows that are common to the two SELECT statements.
  4. The APPLY operator passes each row of the outer result set as input to a table-valued function.
  5. OUTER APPLY retrieves all rows from the outer table along with the results returned by the function when rows match, whereas CROSS APPLY returns only those rows from the outer table where a match exists within the function results.


Built-In Functions

  1. To provide more meaningful result sets use built-in functions.
  2. To manipulate and return date information use date and time functions.
  3. Use string functions to format or return information about string expressions.



Modifying Data—The INSERT, UPDATE, DELETE, and MERGE Statements

  1. To add new rows to a table use the INSERT statement.
  2. The UPDATE statement allows you to modify existing data in a table.
    It lets you not only change the value in a column, but also add or
    remove a value from a single column in the table without affecting the rest of the row
    being modified.
  3. The DELETE statement allows you to delete one or more rows from a table.
  4. The OUTPUT clause allows you to redirect information to the calling application, or
    to an object such as a table or a table variable, about the INSERT, UPDATE, or DELETE statement performed.
  5. The MERGE statement is used to perform DML actions on a target table based on whether or not a row matches information found in a source table.

Design Patterns

A pattern is a commonly occurring, reusable piece of a software system that provides a certain set of functionality. The identification of a pattern is also based on the context in which it is used. Design patterns are solutions to general problems that software developers face during software development. Using patterns when modelling systems helps keep the design standardised and, more importantly, minimizes reinventing the wheel during system design. This article is all about patterns, especially design patterns. The class diagram in UML can be used to capture the patterns identified in a system.

Factory Method

  • How does this promote loosely coupled code?

A Factory pattern returns an instance of one of several possible classes, depending on the data provided to it. Usually, all the classes it returns have a common parent class and common methods, but each performs a different task and is optimized for different kinds of data. Thus, the Factory pattern eliminates the need to bind application-specific classes into the code: the code deals only with the Product interface and can therefore work with any user-defined Concrete Product classes, so it is not tightly bound to a particular application. For example, in the case discussed in class (and the book), the abstract classes Application and Document have generic methods to manipulate documents. To realize an application-specific implementation, one subclasses them, say a DrawingApplication and DrawingDocument for drawings, or a TextApplication and TextDocument for text. Instead of putting code inside the Document and Application classes for each document type (binding application-specific code), the factory method lets them defer the instantiation of a specific document type to a subclass.


Proxy

  • If a Proxy is used to instantiate an object only when it is absolutely needed, does the Proxy simplify code?

This is not necessarily true. A Proxy pattern is used when we need to represent a complex object by a simpler one. It provides a level of indirection when accessing an object. A proxy usually has the same methods as the object it represents, and hence provides an identical interface to that object. This can improve performance, but it may or may not simplify the code. In some cases the overall code does become simpler; for example, protection proxies and smart references allow housekeeping tasks to run when an object is accessed (access permissions, reference counts, object locking, etc.). This makes the Subject code simpler, as it does not have to bother with the bookkeeping code. Thus a Proxy can simplify Subject code by moving housekeeping into the Proxy, at the expense of implementing the Proxy itself.
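A sketch of a lazy-instantiation (virtual) proxy, with illustrative names:

```typescript
interface Graphic {
    display(): string;
}

// The real object: imagine its constructor doing expensive work,
// such as loading a file from disk.
class RealGraphic implements Graphic {
    constructor(private filename: string) { }
    display() { return "displaying " + this.filename; }
}

// The proxy: identical interface, but it creates the RealGraphic
// only when display() is first called.
class GraphicProxy implements Graphic {
    private real: RealGraphic | null = null;
    constructor(private filename: string) { }
    display() {
        if (this.real === null) {
            this.real = new RealGraphic(this.filename);
        }
        return this.real.display();
    }
}

var image: Graphic = new GraphicProxy("photo.png");
```

The client sees only the Graphic interface; whether the expensive object exists yet is the proxy's business.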


Strategy

  • What happens when a system has an explosion of strategy objects? Is there some better way to manage these strategies?

There are several ways to manage strategies if a system has an explosion of strategy objects. One way is to use the Template pattern, which would in turn use several simpler strategy classes. Such an explosion could occur if there are a lot of strategies for one context, or several context objects with corresponding strategy objects, which leads to increased load on memory and system resources. Other ways to manage this are to implement strategies as stateless objects that contexts can share, or to make strategy objects optional.

  • (ii) In the implementation section of this pattern, the authors describe two ways in which a strategy can get the information it needs to do its job. One way describes how a strategy object could get passed a reference to the context object, thereby giving it access to context data. But is it possible that the data required by the strategy will not be available from the context’s interface? How could you remedy this potential problem?

Yes, it is possible that the data required by the strategy will not be available from the context’s interface.  If the data were private to the context (not accessible from the interface), the strategy would not be able to access it.  We could pass all the data required by the strategy explicitly, although this increases communication overhead.  We could also configure strategies as template parameters; in that case the strategy is bound into the context at compile time, so the data can be made accessible to it.
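A minimal TypeScript sketch of the first remedy above, where the context passes the data the strategy needs explicitly (names are illustrative):

```typescript
interface SortStrategy {
  // The data is handed to the strategy explicitly; the strategy never
  // reaches into the context.
  sort(data: number[]): number[];
}

class AscendingStrategy implements SortStrategy {
  sort(data: number[]): number[] { return [...data].sort((a, b) => a - b); }
}

class DescendingStrategy implements SortStrategy {
  sort(data: number[]): number[] { return [...data].sort((a, b) => b - a); }
}

class SortContext {
  constructor(private strategy: SortStrategy) {}
  setStrategy(s: SortStrategy): void { this.strategy = s; }
  run(data: number[]): number[] { return this.strategy.sort(data); }
}

const ctx = new SortContext(new AscendingStrategy());
const up = ctx.run([3, 1, 2]);
ctx.setStrategy(new DescendingStrategy());
const down = ctx.run([3, 1, 2]);
```

Because the strategies hold no state of their own, a single instance of each could be shared by many contexts, which is one answer to the "explosion" question above.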



  • In the Implementation section of the Decorator Pattern, the authors write: A decorator object’s interface must conform to the interface of the component it decorates. Now consider an object A, that is decorated with an object B. Since object B “decorates” object A, object B shares an interface with object A. If some client is then passed an instance of this decorated object, and that method attempts to call a method in B that is not part of A’s interface, does this mean that the object is no longer a Decorator, in the strict sense of the pattern? Furthermore, why is it important that a decorator object’s interface conforms to the interface of the component it decorates?

If some client is then passed an instance of this decorated object, and that method attempts to call a method in B that is not part of A’s interface, this does NOT necessarily mean that the object is no longer a Decorator, in the strict sense of the pattern. Object B’s interface still includes object A’s interface, although some more methods are added.  The book says that a decorator and its component are not identical: the decorator can add functionality to its component. A decorator object’s interface should conform to the interface of the component it decorates because the Decorator acts as a transparent enclosure: the client should be unaware of the decorator’s presence and access the contents of the object through a common interface.
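A minimal sketch of the situation discussed: B conforms to A's interface while adding an extra method beyond it (all names are illustrative):

```typescript
interface Component {
  render(): string;
}

class PlainText implements Component {
  constructor(private text: string) {}
  render(): string { return this.text; }
}

// The decorator implements the same interface and wraps a component...
class BoldDecorator implements Component {
  constructor(private inner: Component) {}
  render(): string { return `<b>${this.inner.render()}</b>`; }
  // ...but also adds a method that is NOT part of Component.
  isDecorated(): boolean { return true; }
}

const decorated: Component = new BoldDecorator(new PlainText("hi"));
// The client sees only the Component interface:
const out = decorated.render();
```

A client typed against `Component` cannot call `isDecorated()` without downcasting, which is exactly the transparency the pattern asks for.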



  • Would you ever create an Adapter that has the same interface as the object which it adapts? Would your Adapter then be a Proxy?

An Adapter could indeed have the same interface as the object it adapts.  In that case, the adapter would add some extra functionality before making the call to the adaptee object; usually, though, the point of an adapter is to change the interface, so we would not want the exact same one.  In the case of a proxy, we do want the same interface, since the proxy is a virtual placeholder for the object.  Also, the adapter’s implementation would be different from that of the proxy.
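A minimal object-adapter sketch: the adaptee's interface differs from the one the client expects, and the adapter translates between them (all names are illustrative):

```typescript
// Adaptee with a legacy interface the client does not want to use.
class LegacyPrinter {
  printUppercase(text: string): string { return text.toUpperCase(); }
}

// Target interface the client expects.
interface Printer {
  print(text: string): string;
}

// Object adapter: holds the adaptee and translates the call,
// doing some extra adaptation work along the way.
class PrinterAdapter implements Printer {
  constructor(private adaptee: LegacyPrinter) {}
  print(text: string): string {
    return this.adaptee.printUppercase(text.trim());
  }
}

const printer: Printer = new PrinterAdapter(new LegacyPrinter());
const adapted = printer.print("  hello ");
```

If `Printer` and `LegacyPrinter` had the identical interface and the adapter only forwarded calls, it would indeed look like a proxy; the difference is in intent, as the answer above notes.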


  • How does a Bridge differ from a Strategy and a Strategy’s Context?

A strategy is a behavioral pattern that allows a client (the Strategy’s context) to interchangeably use multiple algorithms (Strategies).  A bridge is a structural pattern that influences the creation of a class hierarchy by decoupling an abstraction from its implementation.   In a strategy, usually the Strategy is allowed to vary to change the behavior of the algorithm, while the Context may not vary as much. In a bridge, however, the abstraction and its implementation can vary independently, and it hides the implementation details from the client.



  • (i) How complex must a sub-system be in order to justify using a facade?

A facade is justified whenever the dependencies between the clients and the implementation classes of an abstraction become complex enough that decoupling the subsystem is worthwhile.  Sometimes it is justified even for a single-class subsystem, if we expect it to grow in the future (although this would be against the principles of extreme programming 🙂).

  • (ii) What are the additional uses of a facade with respect to an organization of designers and developers with varying abilities? What are the political ramifications?

The facade is indeed a great tool for an organization of designers and developers with varying abilities.  It provides simple access to complex subsystems for less experienced participants, and as they grow and learn, they can access the subsystems directly.  Also, the developers may present a facade of the system to the designers, so the designers do not have to concern themselves with the details of the subsystems.  The developers can then extend the subsystem code independently, without affecting the design.



  • (i) How does the Composite pattern help to consolidate system-wide conditional logic?

It does this by providing a general design which makes client code simple and makes it easier to add new kinds of components. Thus the clients can treat composite structures and individual objects uniformly,  without worrying about whether they’re a leaf or composite node.  This helps avoid a lot of case style statements. 
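A minimal sketch of the uniform treatment described: the client calls the same operation on leaves and composites, with no case-style statements (names are illustrative):

```typescript
interface Graphic {
  area(): number;
}

// Leaf node.
class Square implements Graphic {
  constructor(private side: number) {}
  area(): number { return this.side * this.side; }
}

// Composite node: same interface as a leaf; recursion walks the tree.
class GraphicGroup implements Graphic {
  private children: Graphic[] = [];
  add(child: Graphic): void { this.children.push(child); }
  area(): number { return this.children.reduce((sum, c) => sum + c.area(), 0); }
}

const group = new GraphicGroup();
group.add(new Square(2));       // leaf
const inner = new GraphicGroup();
inner.add(new Square(3));
group.add(inner);               // composite child
const total = group.area();     // 4 + 9, no leaf/composite branching
```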

  • (ii) Would you use the composite pattern if you did not have a part-whole hierarchy? In other words, if only a few objects have children and almost everything else in your collection is a leaf (a leaf that has no children), would you still use the composite pattern to model these objects?

We could still use a composite pattern here to provide a common interface to all the objects.  Thus we may define a composite pattern and call an operation on the component, when we wish to issue an operation on a few composite objects, and all the leaf objects.


  • Consider a composite that contains loan objects. The loan object interface contains a method called “AmountOfLoan()”, which returns the current market value of a loan. Given a requirement to extract all loans above, below or in between a certain amount, would you write or use an Iterator to do this?

An iterator goes through all the objects, and hence that would be a very inefficient search, given our problem.  However, if we built the hierarchy like a binary search tree and stored some min/max key value at each composite node, then we could implement an iterator to go through the children of a composite which satisfies the current search criterion.


Template Method

  • The Template Method relies on inheritance. Would it be possible to get the same functionality of a Template Method, using object composition? What would some of the tradeoffs be?

Yes — the varying steps could be delegated to separate objects via composition (essentially the Strategy pattern).  But we would then have to store the state of our class in such a way that all the delegate objects can access it, and the steps could not easily cooperate unless they are fairly independent of each other.
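A minimal sketch of both variants: the classic inheritance-based Template Method, and a composed alternative where the varying steps become an injected object (all names are illustrative):

```typescript
// Inheritance: the template method fixes the skeleton, subclasses fill steps.
abstract class ReportTemplate {
  build(): string { return `${this.header()}|body|${this.footer()}`; }
  protected abstract header(): string;
  protected abstract footer(): string;
}

class PdfReport extends ReportTemplate {
  protected header(): string { return "pdf-head"; }
  protected footer(): string { return "pdf-foot"; }
}

// Composition: the varying steps are an injected object, which must be
// given access to any state it needs (the tradeoff discussed above).
interface ReportSteps { header(): string; footer(): string; }

class ComposedReport {
  constructor(private steps: ReportSteps) {}
  build(): string { return `${this.steps.header()}|body|${this.steps.footer()}`; }
}

const a = new PdfReport().build();
const b = new ComposedReport({ header: () => "pdf-head", footer: () => "pdf-foot" }).build();
```

The composed variant can swap step objects at run time, which inheritance cannot; the price is the extra plumbing for shared state.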


Abstract Factory

  • In the Implementation section of this pattern, the authors discuss the idea of defining extensible factories. Since an Abstract Factory is composed of Factory Methods, and each Factory Method has only one signature, does this mean that the Factory Method can only create an object in one way?

We would have to specify different concrete factory subclasses in order to create an object in multiple ways.  We could avoid this by using the Prototype pattern to implement the concrete factory.

  • Consider the MazeFactory example. The MazeFactory contains a method called MakeRoom, which takes as a parameter one integer, representing a room number. What happens if you would also like to specify the room’s color & size? Would this mean that you would need to create a new Factory Method for your MazeFactory, allowing you to pass in room number, color and size to a second MakeRoom method?

In the current MazeFactory implementation we would have to add another MakeRoom factory method to create a room with a number, color, and size. We could also use overloaded constructors that take multiple arguments (and initialize some, e.g. color and size, to defaults if we want to pass only the room number). Another alternative is a prototype-based approach, in which the concrete factory would have methods to add color and size parts to the catalog.


  • Like the Abstract Factory pattern, the Builder pattern requires that you define an interface, which will be used by clients to create complex objects in pieces. In the MazeBuilder example, there are BuildMaze(), BuildRoom() and BuildDoor() methods, along with a GetMaze() method. How does the Builder pattern allow one to add new methods to the Builder’s interface, without having to change each and every sub-class of the Builder?

The builder method returns child nodes back to the director, which passes them back to the builder to build additional/parent nodes. MazeBuilder does not create the maze itself; it just defines the interface for creating mazes, letting the subclasses do the actual work. Since the subclasses use the methods defined in the Builder interface, adding a new method to the interface would not require changing each subclass, as the original methods would still work and create a valid maze. One might want to create a new subclass of the builder to make use of the additional methods in the Builder’s interface.



  • The Singleton pattern is often paired with the Abstract Factory pattern. What other creational or non-creational patterns would you use with the Singleton pattern?

We could also use the Facade pattern since we would need a single instance of a point of entry/layer to the subsystem.  We could also use a mediator with a singleton, providing one controller for the system of classes, and a proxy with a singleton, providing a single placeholder to the real object.



  • Since a Mediator becomes a repository for logic, can the code that implements this logic begin to get overly complex, possibly resembling spaghetti code? How could this potential problem be solved?

Yes, this is likely to happen in certain situations.  We could then use a behavioral pattern such as strategy to couple together a family of policies to be used depending on the classes.  We may also group the classes into a hierarchy and use the Composite pattern to talk to them, simplifying code in the client (the mediator).



  • (i) The classic Model-View-Controller design is explained in Implementation note #8: Encapsulating complex update semantics. Would it ever make sense for an Observer (or View) to talk directly to the Subject (or Model)?

The Observer may request an immediate update from the Subject without going through the Controller, when we need a “real-time” update of the Subject.  This would, however, create redundant update and synchronization issues, and would be in conflict with the Mediator based design of the Controller. 

  • (ii) What are the properties of a system that uses the Observer pattern extensively? How would you approach the task of debugging code in such a system?

The system can be divided into two distinct parts: the observers and the subjects.   If we were to model the relationships between objects as graph links, the graph would resemble a digraph.   We could then debug the two parts independently: first check that the subjects are updating state correctly and that the observers are recording the updates correctly, and then check that the communication (update) protocol between the two is working correctly.

  • (iii) Is it clear to you how you would handle concurrency problems with this pattern? Consider an Unregister() message being sent to a subject, just before the subject sends a Notify() message to the ChangeManager (or Controller).

We would have to add a communication protocol at the Controller for handling the updates. e.g. the Controller could buffer the updates from the subjects, check if the system is in a consistent state, send the updates to the observers, check for consistency, and then send the messages to the subjects.  It would be a trade-off between efficiency and consistency.


Chain of Responsibility

  • (i) How does the Chain of Responsibility pattern differ from the Decorator pattern or from a linked list?

In a chain, an object may or may not act on a request, and can just pass it on to the next object (handler). A decorator, however, adds responsibilities to an object dynamically, and each object in the list adds responsibilities.  Also, in a chain, receipt is not guaranteed: the request can drop off the end of the chain without ever being handled.
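A minimal sketch of the "may act or pass it on" behavior, including a request that falls off the end unhandled (all names and thresholds are illustrative):

```typescript
abstract class Handler {
  private next: Handler | null = null;
  setNext(h: Handler): Handler { this.next = h; return h; }
  handle(amount: number): string | null {
    // Default: pass along, or drop off the end of the chain.
    return this.next ? this.next.handle(amount) : null;
  }
}

class Manager extends Handler {
  handle(amount: number): string | null {
    return amount <= 100 ? "manager approved" : super.handle(amount);
  }
}

class Director extends Handler {
  handle(amount: number): string | null {
    return amount <= 1000 ? "director approved" : super.handle(amount);
  }
}

const chain = new Manager();
chain.setNext(new Director());
const small = chain.handle(50);
const big = chain.handle(500);
const huge = chain.handle(5000); // nobody handles it: null
```

Unlike a decorator stack, where every wrapper contributes, here each handler either consumes the request or forwards it untouched.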

  • (ii) Is it helpful to look at patterns from a structural perspective? In other words, if you see how a set of patterns are the same in terms of how they are programmed, does that help you to understand when to apply them to a design?

Yes, sometimes it helps.  And of course, experience with programming and with using the right patterns makes it easier to figure out which patterns to use in a given situation.



  • The authors write that the “Caretaker” participant never operates on or examines the contents of a memento. Can you consider a case where a Caretaker would in fact need to know the identity of a memento, and thus need the ability to examine or query the contents of that memento? Would this break something in the pattern?

The memento pattern is based on the idea that the “Caretaker” participant never operates on or examines the contents of a memento.  So, yes, the pattern is broken if the Caretaker is allowed to examine or query the contents of the memento.  But say we wish to ensure that something cannot be “undone” after a certain action; then we would need such an ability for the Caretaker.



  • In the Motivation section of the Command pattern, an application’s menu system is described. An application has a Menu, which in turn has MenuItems, which in turn execute commands when they are clicked. What happens if the command needs some information about the application in order to do its job? How would the command have access to such information such that new commands could easily be written that would also have access to the information they need?

In such a case, the application and the command could both access a shared intermediary object, so the command could get the information that way.  We can create more commands, but we need to make them all aware of this intermediary object.



  • (i) When should this creational pattern be used over the other creational patterns?

Prototype hides the concrete product classes from the user, reducing the number of names the user needs to know.  This is of course common to several creational patterns.  But the Prototype pattern allows a new concrete product to be installed and removed at run time, which adds flexibility that the other creational patterns don’t have.

  • (ii) Explain the difference between deep vs. shallow copy.

With a shallow copy, pointers continue to be shared between the original and the copy; i.e. the copy is not completely independent, because it still refers to the same objects as the original.  With a deep copy, however, we copy not only the object itself but also all the objects it refers to. The new object is then independent: when it refers to a contained object, it refers to its own copy of that object.
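A minimal sketch of the difference using plain objects (the field names are illustrative):

```typescript
const original = { name: "a", address: { city: "Pune" } };

// Shallow copy: the nested address object is still shared.
const shallow = { ...original };
shallow.address.city = "Mumbai";
const afterShallow = original.address.city; // original was affected

// Deep copy: nested objects are copied too, so the copy is independent.
original.address.city = "Pune";
const deep = { name: original.name, address: { ...original.address } };
deep.address.city = "Delhi";
const afterDeep = original.address.city; // original untouched
```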



  • If something has only two to three states, is it overkill to use the State pattern?

Not really.  One uses the State pattern when the transitions between the states are complex.  Also, albeit in contrast with extreme programming principles, if future growth will demand more states, then the State pattern should be used.



  • One issue with the Visitor pattern involves cyclicality. When you add a new Visitor, you must make changes to existing code. How would you work around this possible problem?

We could make a default Visitor, with empty default implementations of each visit method, that most of the other Visitors extend; then only the Visitors that care about a change need to be touched.



  • (i) What is a non-GUI example of a flyweight?

An example is the checkout system at a video store.  There are a large number of video objects, but we keep one shared instance per title (the intrinsic state) and pass in, as extrinsic state, the data about who is checking out which video.

  • (ii) What is the minimum configuration for using flyweight? Do you need to be working with thousands of objects, hundreds, tens?

The savings are greater when more flyweights are shared, i.e. when more objects are added.  However, it depends on the size of the objects: if we use large (and distinct) objects, even sharing a few of them saves space, although there is additional overhead for the flyweight machinery itself.
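A minimal sketch of the video-store flyweight described above: one shared object per title, with the renter passed in as extrinsic state (all names are illustrative):

```typescript
// Flyweight: the title is intrinsic, shared state.
class Movie {
  constructor(public readonly title: string) {}
  // The renter is extrinsic state, supplied per call.
  checkout(customer: string): string { return `${customer} rents ${this.title}`; }
}

// The factory pools flyweights so each title exists exactly once.
class MovieFactory {
  private pool = new Map<string, Movie>();
  get(title: string): Movie {
    if (!this.pool.has(title)) this.pool.set(title, new Movie(title));
    return this.pool.get(title)!;
  }
  get size(): number { return this.pool.size; }
}

const factory = new MovieFactory();
const m1 = factory.get("Heat");
const m2 = factory.get("Heat"); // same shared instance, not a new one
const receipt = m2.checkout("Alice");
```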


What are design patterns?

Design patterns are documented, tried-and-tested solutions for recurring problems in a given context. So basically you have a problem context and a proposed solution for it. Design patterns have existed in some form or other right from the inception of software development. Say you want to implement a sorting algorithm: the first thing that comes to mind is bubble sort. So the problem is sorting and the solution is bubble sort. The same holds true for design patterns.


Which are the three main categories of design patterns?

There are three basic classifications of patterns Creational, Structural, and Behavioral patterns.

Creational Patterns

 Abstract Factory:- Creates an instance of several families of classes
 Builder:- Separates object construction from its representation
 Factory Method:- Creates an instance of several derived classes
 Prototype:- A fully initialized instance to be copied or cloned
 Singleton:- A class in which only a single instance can exist

Note: – The best way to remember Creational pattern is by ABFPS (Abraham Became First President of States).
Structural Patterns

 Adapter:- Match interfaces of different classes.
 Bridge:- Separates an object’s abstraction from its implementation.
 Composite:- A tree structure of simple and composite objects.
 Decorator:- Add responsibilities to objects dynamically.
 Façade:- A single class that represents an entire subsystem.
 Flyweight:- A fine-grained instance used for efficient sharing.
 Proxy:- An object representing another object.

Note:- The best way to remember the Structural patterns is ABCDFFP.
Behavioral Patterns

 Mediator:- Defines simplified communication between classes.
 Memento:- Capture and restore an object’s internal state.
 Interpreter:- A way to include language elements in a program.
 Iterator:- Sequentially access the elements of a collection.
 Chain of Resp:- A way of passing a request between a chain of objects.
 Command:- Encapsulate a command request as an object.
 State:- Alter an object’s behavior when its state changes.
 Strategy:- Encapsulates an algorithm inside a class.
 Observer:- A way of notifying change to a number of classes.
 Template Method:- Defer the exact steps of an algorithm to a subclass.
 Visitor:- Defines a new operation to a class without change.

Note: – Just remember Music……. 2 MICS On TV (MMIICCSSOTV).

Note:- In the sections that follow we will cover all the above design patterns in more detail.

Can you explain factory pattern?

• Factory pattern is one of the creational patterns. As the name ‘factory’ suggests, it is meant to construct and create something; in the software architecture world, the factory pattern is meant to centralize the creation of objects. Below is a code snippet of a client which has different types of invoices, created depending on the invoice type specified by the client. There are two issues with the code below:-

• First, we have lots of ‘new’ keywords scattered in the client. In other words, the client is loaded with a lot of object-creation activity, which can make the client logic very complicated.

• Second, the client needs to be aware of all types of invoices. So if we add one more invoice class type called ‘InvoiceWithFooter’, we need to reference the new class in the client and recompile the client as well.


Figure: – Different types of invoice

Taking these issues as our base, we will now look into how the factory pattern can help us solve them. The figure ‘Factory pattern’ below shows two concrete classes ‘ClsInvoiceWithHeader’ and ‘ClsInvoiceWithOutHeader’.

The first issue was that these classes are in direct contact with the client, which leads to a lot of ‘new’ keywords scattered in the client code. This is removed by introducing a new class ‘ClsFactoryInvoice’ which does all the creation of objects.

The second issue was that the client code is aware of both the concrete classes, i.e. ‘ClsInvoiceWithHeader’ and ‘ClsInvoiceWithOutHeader’. This leads to recompiling of the client code when we add new invoice types. For instance, if we add ‘ClsInvoiceWithFooter’, the client code needs to be changed and recompiled accordingly. To remove this issue we have introduced a common interface ‘IInvoice’. Both the concrete classes ‘ClsInvoiceWithHeader’ and ‘ClsInvoiceWithOutHeader’ inherit from and implement the ‘IInvoice’ interface.

The client references only the ‘IInvoice’ interface, which results in zero coupling between the client and the concrete classes (‘ClsInvoiceWithHeader’ and ‘ClsInvoiceWithOutHeader’). So now if we add a new concrete invoice class, we do not need to change anything on the client side.

In one line: the creation of objects is taken care of by ‘ClsFactoryInvoice’, and the client’s disconnection from the concrete classes is taken care of by the ‘IInvoice’ interface.


Figure: – Factory pattern

Below are the code snippets of how actually factory pattern can be implemented in C#. In order to avoid recompiling the client we have introduced the invoice interface ‘IInvoice’. Both the concrete classes ‘ClsInvoiceWithOutHeaders’ and ‘ClsInvoiceWithHeader’ inherit and implement the ‘IInvoice’ interface.


Figure :- Interface and concrete classes

We have also introduced an extra class ‘ClsFactoryInvoice’ with a function ‘getInvoice()’ which will generate objects of both the invoices depending on the ‘intInvoiceType’ value. In short, we have centralized the logic of object creation in ‘ClsFactoryInvoice’. The client calls the ‘getInvoice’ function to generate the invoice classes. One of the most important points to note is that the client only refers to the ‘IInvoice’ type, and the factory class ‘ClsFactoryInvoice’ also returns the same type of reference. This keeps the client completely detached from the concrete classes, so when we add new classes and invoice types we do not need to recompile the client.
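Since the original sample lives on the CD in C#, here is a hedged TypeScript rendering of the same structure; the class and method names follow the text, but the `print()` method and its strings are illustrative:

```typescript
interface IInvoice {
  print(): string;
}

class ClsInvoiceWithHeader implements IInvoice {
  print(): string { return "invoice with header"; }
}

class ClsInvoiceWithOutHeader implements IInvoice {
  print(): string { return "invoice without header"; }
}

// Object creation is centralized here; the client sees only IInvoice.
class ClsFactoryInvoice {
  static getInvoice(intInvoiceType: number): IInvoice {
    switch (intInvoiceType) {
      case 1: return new ClsInvoiceWithHeader();
      case 2: return new ClsInvoiceWithOutHeader();
      default: throw new Error(`Unknown invoice type ${intInvoiceType}`);
    }
  }
}

// Client code: no 'new' of concrete classes. Adding a new invoice type
// means touching only the factory, not the client.
const invoice: IInvoice = ClsFactoryInvoice.getInvoice(1);
const printed = invoice.print();
```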


Figure: – Factory class which generates objects

Note :- The above example is given in C# . Even if you are from some other technology you can still map the concept accordingly. You can get source code from the CD in ‘FactoryPattern’ folder.

Can you explain abstract factory pattern?


Abstract factory expands on the basic factory pattern. It helps us to unite similar factory pattern classes into one unified interface: all the related factories now inherit from a common abstract factory class which unifies them. Everything else related to the factory pattern remains the same as discussed in the previous question.

A factory class helps us to centralize the creation of classes and types. An abstract factory helps us to bring uniformity between related factory patterns, which leads to a more simplified interface for the client.


Figure: – Abstract factory unifies related factory patterns


Now that we know the basics, let’s try to understand the details of how abstract factory patterns are actually implemented. As said previously, we have the factory pattern classes (factory1 and factory2) tied up to a common abstract factory (the AbstractFactory interface) via inheritance. Factory classes stand on top of concrete classes which are again derived from a common interface. For instance, in figure ‘Implementation of abstract factory’, both the concrete classes ‘product1’ and ‘product2’ inherit from one interface, i.e. ‘common’. The client who wants to use the concrete classes will only interact with the abstract factory and the common interface from which the concrete classes inherit.


Figure: – Implementation of abstract factory

Now let’s have a look at how we can practically implement abstract factory in actual code. We have a scenario with UI creation activities for textboxes and buttons through their own centralized factory classes ‘ClsFactoryButton’ and ‘ClsFactoryText’. The concrete classes inherit from the common interface ‘InterfaceRender’, and both the factories ‘ClsFactoryButton’ and ‘ClsFactoryText’ inherit from the common factory ‘ClsAbstractFactory’. Figure ‘Example for abstract factory’ shows how these classes are arranged, along with the client code. An important point about the client code is that it does not interact with the concrete classes: for object creation it uses the abstract factory (ClsAbstractFactory), and for calling the concrete class implementation it calls the methods via the interface ‘InterfaceRender’. So the ‘ClsAbstractFactory’ class provides a common interface for both factories ‘ClsFactoryButton’ and ‘ClsFactoryText’.


Figure: – Example for abstract factory

Note: – We have provided a code sample in C# in the ‘AbstractFactory’ folder. People who are from different technology can compare easily the implementation in their own language.

We will just run through the sample code for abstract factory. Below code snippet ‘Abstract factory and factory code snippet’ shows how the factory pattern classes inherit from abstract factory.


Figure: – Abstract factory and factory code snippet

Figure ‘Common interface for concrete classes’ shows how the concrete classes inherit from a common interface ‘InterfaceRender’, which enforces the method ‘render’ in all the concrete classes.


Figure: – Common interface for concrete classes

The final piece is the client code, which uses the interface ‘InterfaceRender’ and the abstract factory ‘ClsAbstractFactory’ to create objects and call them. An important point about this code is that it is completely isolated from the concrete classes. Due to this, changes to the concrete classes, like adding and removing them, do not require client-level changes.
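Here is a hedged TypeScript rendering of the arrangement described (the original sample is in C#). The names `ClsAbstractFactory`, `ClsFactoryButton`, `ClsFactoryText` and `InterfaceRender` come from the text; the concrete classes `ClsButton`/`ClsText` and the `create()` method name are assumptions for illustration:

```typescript
interface InterfaceRender {
  render(): string;
}

// Concrete classes (names assumed), visible to the client only via InterfaceRender.
class ClsButton implements InterfaceRender {
  render(): string { return "button"; }
}

class ClsText implements InterfaceRender {
  render(): string { return "text"; }
}

// Common abstract factory both factories inherit from.
abstract class ClsAbstractFactory {
  abstract create(): InterfaceRender;
}

class ClsFactoryButton extends ClsAbstractFactory {
  create(): InterfaceRender { return new ClsButton(); }
}

class ClsFactoryText extends ClsAbstractFactory {
  create(): InterfaceRender { return new ClsText(); }
}

// Client code: touches only ClsAbstractFactory and InterfaceRender.
function renderWith(factory: ClsAbstractFactory): string {
  return factory.create().render();
}

const buttonOut = renderWith(new ClsFactoryButton());
const textOut = renderWith(new ClsFactoryText());
```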


Figure: – Client, interface and abstract factory


Can you explain builder pattern?

Builder falls under the creational pattern category. The builder pattern helps us to separate the construction of a complex object from its representation, so that the same construction process can create different representations. It is useful when the construction of the object is very complex. The main objective is to separate the construction of objects from their representations; if we can separate construction and representation, we can then get many representations from the same construction.


Figure: – Builder concept

To understand what we mean by construction and representation, let’s take the example of the ‘Tea preparation’ sequence below.

You can see from the figure ‘Tea preparation’ that from the same preparation steps we can get three representations of tea (i.e. tea without sugar, tea with sugar/milk, and tea without milk).


Figure: – Tea preparation

Now let’s take a real software example to see how a builder can separate the complex creation from its representation. Consider an application where we need the same report to be displayed in either ‘PDF’ or ‘EXCEL’ format. Figure ‘Request a report’ shows the series of steps to achieve this: depending on the report type, a new report is created, the report type is set, the headers and footers of the report are set, and finally we get the report for display.


Figure: – Request a report

Now let’s take a different view of the problem, as shown in figure ‘Different view’. The same flow defined in ‘Request a report’ is now analyzed in terms of representations and common construction. The construction process is the same for both types of reports, but it results in different representations.


Figure: – Different View

We will take the same report problem and try to solve the same using builder patterns. There are three main parts when you want to implement builder patterns.

 Builder: – Builder is responsible for defining the construction process for individual parts. Builder has those individual processes to initialize and configure the product.
 Director: – Director takes those individual processes from the builder and defines the sequence to build the product.
 Product: – Product is the final object which is produced from the builder and director coordination.

First let’s have a look at the builder class hierarchy. We have an abstract class called ‘ReportBuilder’, from which custom builders like the ‘ReportPDF’ builder and the ‘ReportEXCEL’ builder are derived.


Figure: – Builder class hierarchy


Figure ‘Builder classes in actual code’ shows the methods of the classes. To generate a report we need to first create a new report, set the report type (to EXCEL or PDF), set the report headers, set the report footers, and finally get the report. We have defined two custom builders, one for ‘PDF’ (ReportPDF) and the other for ‘EXCEL’ (ReportExcel). These two custom builders define their own processes according to the report type.



Figure: – Builder classes in actual code

Now let’s understand how the director works. The class ‘clsDirector’ takes the builder and calls the individual process methods in a sequential manner. So the director is like a driver who takes all the individual processes and calls them sequentially to generate the final product, which is the report in this case. Figure ‘Director in action’ shows how the method ‘MakeReport’ calls the individual processes to generate the report product as PDF or EXCEL.
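The director/builder interplay described above can be sketched in TypeScript (the original sample is in C#). `ReportBuilder`, `ReportPDF`, `ReportExcel` and `clsDirector` come from the text; the exact method signatures and the simple `Report` product are assumptions for illustration:

```typescript
class Report {
  type = "";
  header = "";
  footer = "";
}

// Builder: defines the individual construction steps.
abstract class ReportBuilder {
  protected report = new Report();
  abstract setReportType(): void;
  abstract setReportHeader(): void;
  abstract setReportFooter(): void;
  getReport(): Report { return this.report; }
}

class ReportPDF extends ReportBuilder {
  setReportType(): void { this.report.type = "PDF"; }
  setReportHeader(): void { this.report.header = "PDF header"; }
  setReportFooter(): void { this.report.footer = "PDF footer"; }
}

class ReportExcel extends ReportBuilder {
  setReportType(): void { this.report.type = "EXCEL"; }
  setReportHeader(): void { this.report.header = "Excel header"; }
  setReportFooter(): void { this.report.footer = "Excel footer"; }
}

// Director: fixes the sequence; the builder supplies the steps.
class clsDirector {
  makeReport(builder: ReportBuilder): Report {
    builder.setReportType();
    builder.setReportHeader();
    builder.setReportFooter();
    return builder.getReport();
  }
}

const director = new clsDirector();
const pdf = director.makeReport(new ReportPDF());
const excel = director.makeReport(new ReportExcel());
```

The same construction sequence yields two different representations, which is the builder's whole point.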




Figure: – Director in action



The third component in the builder is the product which is nothing but the report class in this case.




Figure: – The report class


Now let’s take a top view of the builder project. Figure ‘Client, builder, director and product’ shows how they work together to achieve the builder pattern. The client creates the object of the director class and passes the appropriate builder to it to initialize the product. Depending on the builder, the product is initialized/created and finally sent to the client.





Figure: – Client, builder, director and product 



The output is something like this. We can see two report types displayed with their headers according to the builder.




Figure: – Final output of builder


Note :- In CD we have provided the above code in C# in ‘BuilderPattern’ folder.



Can you explain prototype pattern?



The prototype pattern falls in the creational pattern section. It gives us a way to create new objects from an existing instance of an object; in one sentence, we clone the existing object with its data. Because of the cloning, changes to the cloned object do not affect the original object. If you think that simply assigning one object to another gives you a clone, you are mistaken: assigning one object to another copies the object reference (BYREF), so changing the new object also changes the original object. To understand the BYREF fundamentals more clearly, consider the figure ‘BYREF’ below. The sequence of the code is as follows:-
• In the first step we create the first object, i.e. obj1, from class1.
• In the second step we create the second object, i.e. obj2, from class1.
• In the third step we set the value of the old object, i.e. obj1, to ‘old value’.
• In the fourth step we assign obj1 to obj2.
• In the fifth step we change the value of obj2.
• Now we display both values, and we find that both objects have the new value.
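The steps above can be sketched in TypeScript (the original figure is C#; the class and value names are illustrative):

```typescript
class Class1 {
  value = "";
}

const obj1 = new Class1();      // step 1
let obj2 = new Class1();        // step 2
obj1.value = "old value";       // step 3
obj2 = obj1;                    // step 4: obj2 now references the SAME object
obj2.value = "new value";       // step 5: changes the shared object
const v1 = obj1.value;          // both show "new value"
const v2 = obj2.value;
```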




Figure: – BYREF



The conclusion of the above example is that objects, when assigned to other objects, are assigned BYREF. So changing the new object’s values also changes the old object’s values.

There are many instances where changes to the new copy should not affect the old object. The answer to this is the prototype pattern.

Let’s look at how we can achieve the same using C#. In the below figure ‘Prototype in action’ we have the customer class ‘ClsCustomer’ which needs to be cloned. This can be achieved in C# by using the ‘MemberwiseClone’ method; in Java we have the ‘clone’ method to achieve the same. In the same code we have also shown the client code. We have created two objects of the customer class, ‘obj1’ and ‘obj2’. Any changes to ‘obj2’ will not affect ‘obj1’, as it is a complete cloned copy.
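As a rough Java counterpart of the figure (the book’s sample uses C#’s MemberwiseClone; Java’s shallow-copy equivalent is Object.clone, and the getClone wrapper and field name below are illustrative):

```java
// Prototype pattern: clone an existing instance instead of building a new one.
public class ClsCustomer implements Cloneable {
    public String customerName;

    public ClsCustomer getClone() {
        try {
            // Object.clone() performs a field-by-field copy,
            // much like MemberwiseClone in C#.
            return (ClsCustomer) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        ClsCustomer obj1 = new ClsCustomer();
        obj1.customerName = "Shiv";
        ClsCustomer obj2 = obj1.getClone(); // obj2 is an independent copy
        obj2.customerName = "Raju";
        System.out.println(obj1.customerName); // still "Shiv"
        System.out.println(obj2.customerName); // "Raju"
    }
}
```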



Figure: – Prototype in action 



Note: – You can get the above sample from the CD in the ‘Prototype’ folder. In C# we use the ‘MemberwiseClone’ function, while in Java we have the ‘clone’ function to achieve the same.

Can you explain shallow copy and deep copy in prototype patterns?

There are two types of cloning for prototype patterns. One is shallow cloning, which you have just read about in the first question. In a shallow copy only that object is cloned; any objects contained in that object are not cloned. For instance, consider the figure ‘Deep cloning in action’: we have a customer class with an address class aggregated inside it. ‘MemberwiseClone’ will only clone the customer class ‘ClsCustomer’, but not the ‘ClsAddress’ class. So we add the ‘MemberwiseClone’ function to the address class as well. Now when we call the ‘getClone’ function we call the parent’s cloning function and also the child’s cloning function, which leads to cloning of the complete object. When the parent object is cloned along with its contained objects it is called deep cloning, and when only the parent is cloned it is termed shallow cloning.
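A hedged Java sketch of the difference (the class names follow the book’s figure, but the fields and method bodies are assumptions):

```java
// Deep cloning: the parent clones itself AND its contained object.
public class DeepCloneDemo {
    static class ClsAddress implements Cloneable {
        String city;
        ClsAddress getClone() {
            try { return (ClsAddress) super.clone(); }
            catch (CloneNotSupportedException e) { throw new AssertionError(e); }
        }
    }

    static class ClsCustomer implements Cloneable {
        String name;
        ClsAddress address = new ClsAddress();

        // Shallow clone: the address reference is shared with the copy.
        ClsCustomer shallowClone() {
            try { return (ClsCustomer) super.clone(); }
            catch (CloneNotSupportedException e) { throw new AssertionError(e); }
        }

        // Deep clone: also clone the aggregated address object.
        ClsCustomer getClone() {
            ClsCustomer copy = shallowClone();
            copy.address = this.address.getClone();
            return copy;
        }
    }

    public static void main(String[] args) {
        ClsCustomer c1 = new ClsCustomer();
        c1.address.city = "Mumbai";

        ClsCustomer shallow = c1.shallowClone();
        shallow.address.city = "Delhi";      // also changes c1's address!
        System.out.println(c1.address.city); // "Delhi"

        c1.address.city = "Mumbai";
        ClsCustomer deep = c1.getClone();
        deep.address.city = "Delhi";         // c1 is unaffected
        System.out.println(c1.address.city); // "Mumbai"
    }
}
```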




Figure: – Deep cloning in action



Can you explain singleton pattern?



There are situations in a project where we want only one instance of the object to be created and shared between the clients. No client can create an instance of the object from outside. There is only one instance of the class which is shared across the clients. Below are the steps to make a singleton pattern:-

1) Define the constructor as private.
2) Define the instances and methods as static.

Below is a code snippet of a singleton in C#. We have defined the constructor as private and defined all the instances and methods using the static keyword, as shown in the code snippet in figure ‘Singleton in action’. The static keyword ensures that only one instance of the object is created and that you can call the methods of the class without creating the object. As we have made the constructor private, we need to call the class directly.
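A minimal Java version of the same steps (the book’s snippet is C#; the counter member below is purely illustrative, added only to show that state is shared):

```java
// Singleton: private constructor plus a static instance.
public class ClsSingleton {
    // The single shared instance, created once.
    private static final ClsSingleton instance = new ClsSingleton();

    private int counter = 0;

    // Private constructor: no client can call "new ClsSingleton()".
    private ClsSingleton() {}

    public static ClsSingleton getInstance() {
        return instance;
    }

    public void increment() { counter++; }
    public int getCounter() { return counter; }

    public static void main(String[] args) {
        ClsSingleton a = ClsSingleton.getInstance();
        ClsSingleton b = ClsSingleton.getInstance();
        a.increment();
        b.increment();
        // Both variables refer to the same object, so the counter is shared.
        System.out.println(a == b);         // true
        System.out.println(a.getCounter()); // 2
    }
}
```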



Figure: – Singleton in action


Note: – In Java, to create singleton classes we also use the static keyword, so it is the same as in C#. You can get sample C# code for the singleton in the ‘singleton’ folder.

Can you explain command patterns?


The command pattern allows a request to exist as an object. Ok, let’s understand what that means. Consider the figure ‘Menu and Commands’: we have different actions depending on which menu is clicked. Depending on the menu clicked, we pass a string with the action text, and depending on that action string we execute the action. The bad thing about this code is that it has a lot of ‘IF’ conditions, which makes the coding more cryptic.



Figure: – Menu and Commands


The command pattern moves the above actions into objects. These objects, when executed, actually execute the command.
As said previously, every command is an object. We first prepare individual classes for every action, i.e. exit, open, print etc. All the above actions are wrapped into classes: the exit action is wrapped in ‘clsExecuteExit’, the open action in ‘clsExecuteOpen’, the print action in ‘clsExecutePrint’ and so on. All these classes are inherited from a common interface, ‘IExecute’.



Figure: – Objects and Command


Using all the action classes we can now make the invoker. The main work of the invoker is to map an action to the class which implements that action.
So we add all the actions to one collection, i.e. the arraylist. We expose a method ‘getCommand’ which takes a string and gives back the abstract object ‘IExecute’. The client code is now neat and clean: all the ‘IF’ conditions have moved into the ‘clsInvoker’ class.
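The invoker idea can be sketched as follows in Java. The class names mirror the book’s figure, but this sketch uses a map instead of the book’s arraylist, and the action strings and return values are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Command pattern sketch: each menu action becomes an object behind a
// common interface; the invoker maps action names to command objects,
// replacing the chain of 'IF' conditions in the client.
public class CommandDemo {
    interface IExecute {
        String execute();
    }

    static class ClsExecuteOpen implements IExecute {
        public String execute() { return "open performed"; }
    }

    static class ClsExecuteExit implements IExecute {
        public String execute() { return "exit performed"; }
    }

    // The invoker: maps an action string to the command object.
    static class ClsInvoker {
        private final Map<String, IExecute> commands = new HashMap<>();
        ClsInvoker() {
            commands.put("Open", new ClsExecuteOpen());
            commands.put("Exit", new ClsExecuteExit());
        }
        IExecute getCommand(String action) {
            return commands.get(action);
        }
    }

    public static void main(String[] args) {
        ClsInvoker invoker = new ClsInvoker();
        // The client stays clean: no IF conditions, just a lookup + execute.
        System.out.println(invoker.getCommand("Open").execute()); // open performed
    }
}
```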



Figure: – Invoker and the clean client


Note: – You can find sample C# code for the command pattern in the ‘Command’ folder.


Define UML?

UML (Unified Modeling Language) is a standard language for designing and documenting a system in an object-oriented manner. It has nine diagrams which can be used in a design document to express the design of the software architecture.

Can you explain use case diagrams?

A use case diagram shows what the system does from the user’s point of view; in other words, use cases answer ‘What will the system do?’. Use cases are mainly used in requirement documents to bring clarity regarding a system. There are three important parts in a use case: scenario, actor and use case.

Scenario: – A scenario is a sequence of events which happen when a user interacts with the system.

Actor: – Actor is the who of the system, in other words the end user. 

Use Case: – A use case is a task or goal performed by the end user. The below figure ‘Use Case’ shows a simple scenario with an ‘Actor’ and a ‘Use Case’. The scenario represents an accountant entering accounts data into the system. As use cases represent actions performed, they are normally named using strong verbs.

Actors are represented by a simple stick man and use cases by an oval shape, as shown in figure ‘Use Case’ below.


Figure: – Use Case

Can you explain primary and secondary actors?

Actors are further classified into two types: primary and secondary. Primary actors are users who are active participants and initiate the use case, while secondary actors only passively participate in the use case.

How does a simple use case look like?

Use cases have two views of representation in any requirement document. One is the use case diagram and the other is a detailed step table describing how the use case works. So it’s a pair: first an overview is shown using a use case diagram, and then a table explains the same in detail. Below is a simple ‘login’ use case shown diagrammatically, followed by a detailed table with steps describing how the use case is executed.


Figure: – Login Use Case

Use Case

Use Case Name: Login

Description: This use case depicts the flow of how a user will log in to the chat application.

Primary Actor: Simple chat user.

Trigger: User types the chat application URL in the browser.

Assumptions:
• No password is currently present for the system.
• Rooms will remain constant as explained in the assumption section of this document.

Failed End Conditions: Duplicate user names are not allowed in the chat application.

Action: User clicks on the log-in button.

Main Scenario:
• User types the chat application URL in the browser, which in turn opens the main page.
• On the main page of the application the user is prompted with an ‘Enter user name’ option and a ‘rooms’ drop-down menu.
• The user then types a name, selects one of the rooms from the drop-down menu and clicks the ‘Log-in’ button.
• The application then checks whether the user name is unique in the system; if not, the user is shown the error message “user already exists”.
• After entering a unique name the user is finally logged in to the application.

Alternate Scenario:

Success Scenarios:
1. Opens the page of the selected room, in which other user names and their messages can be seen.

Note and Open Issues:

Table: – Login use case table

Note: – You must be wondering why we have this pair, and why not just a use case table alone. Use case diagrams are good at showing relationships between use cases and provide a high-level overview, while the table explanation of a use case gives the details. So when a developer or a user reads a requirement document, he can get an overview by looking at the diagram, and if he is interested he can read the use case tables for more details.

Can you explain ‘Extend’ and ‘Include’ in use cases?

‘Extend’ and ‘Include’ define relationships between use cases. The below figure ‘Extend and Include’ shows how these two fundamentals are implemented in a project. The use case represents a system used to maintain customers. When a customer is added successfully, the system should send an email to the admin saying that a new customer has been added. Only the admin has rights to modify the customer. First let’s define extend and include, and then see how they fit in this use case scenario.

Include: – An include relationship represents an invocation of one use case by another. If you think from the coding perspective, it’s like one function being called by another function.

Extend: – This relationship signifies that the extending use case works exactly like the base use case, except that some new steps are inserted in the extending use case.

The below figure ‘Extend and Include’ shows that ‘Add simple customer’ is the same as ‘Add discounted customer’, except that ‘Add discounted customer’ has an extra step: defining the discount for the discounted customer, which is not applicable for the simple customer. One of the requirements of the project was that when we add a customer, the system should send an email. So after the customer is added, whether through the ‘Add simple customer’ use case or the ‘Add discounted customer’ use case, it should invoke the ‘send an email’ use case. We have defined the same with a simple dotted line with <<include>> as the relationship.


Figure: – Extend and Include

Note: – One point to be noted in the diagram ‘Extend and Include’ is that we have defined an inheritance relationship between the simple and admin users. This also helps us define a technical road map regarding the relationship between the simple and admin users.

Can you explain class diagrams?

Class diagram

A class is basically a prototype which helps us create objects. A class defines the static structure of the project and represents a family of objects. By using a class we can create uniform objects.

In the below figure you can see how the class diagram looks. Basically there are three important sections, which are numbered as shown below. Let’s try to understand them according to the numbering:-
• Class name: – This is the first or top-most section of the class, which gives the name of the class (clsCustomer).
• Attributes: – This is the second or middle section of the class, which lists the properties of the class.
• Methods: – This section carries the operations or methods that act on the attributes.


Figure: – Three sections of the class

Now in the next section we will have a look on Association relationship between these classes.

How do we represent private, public and protected in class diagrams?

In order to represent visibility for properties and methods in a class diagram we place symbols next to each property and method, as shown in figure ‘Private, Public and Protected’. ‘+’ indicates public properties/methods. ‘-’ indicates private properties/methods, which cannot be accessed outside the class. ‘#’ indicates protected properties/methods, which can only be seen within the class and its derived classes.
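The same symbols map directly onto access modifiers in code. A small illustrative Java class (the class and member names are made up, not taken from the book’s figure):

```java
// UML visibility symbols and their Java equivalents:
//   +  public     -> accessible from anywhere
//   -  private    -> accessible only inside the class
//   #  protected  -> accessible inside the class and its subclasses
public class ClsEmployee {
    public String employeeName;      // +employeeName in the diagram
    private int employeeCode;        // -employeeCode in the diagram
    protected String region;         // #region in the diagram

    public int getEmployeeCode() {   // +getEmployeeCode()
        return employeeCode;         // the private field is reachable here
    }

    public void setEmployeeCode(int code) {
        employeeCode = code;
    }
}
```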


Figure: – Private, public and protected

What do associations in a class diagram mean?

Associations in Class diagrams

A single class cannot represent a whole module in a project, so we need one or more classes to represent a module. For instance, a module named ‘customer detail’ cannot be completed by the customer class alone; to complete the whole module we need the customer class, the address class and the phone class. In short, there are relationships between the classes. By grouping and relating classes we create modules, and these relationships are termed associations. In order to associate classes we draw arrowed lines between them, as shown in the below figure.

In the figure ‘Order is paid by payments class’ we can see the Order class and the Payment class, with an arrowed line showing the relationship: the order is paid using the payment class; in other words, the order class uses the payment class to pay the order. The left-to-right arrow shows the flow that the order class uses the payment class. If the payment class used the order class, the arrow would be drawn right to left, showing the direction of the flow.


Figure:- Order is paid by Payments class

There are four signs showing the flow:-


Figure: – Direction signs in UML


Multiplicity means that a class can have multiple associations, or that one class can be linked to many instances of other classes. If you look at the below figure, the customer class is associated with the address class; also observe the notations (*, 0 and 1). On the right-hand side, the (1…*) notation indicates that at least one, and possibly many, instances of the address class can be present in the customer class. On the left-hand side we have the (0…*) notation, indicating that the address class can exist without a customer, or can be linked to by many customers.
In order to represent the multiplicity of classes we show notations like (1…*) and (0…*), as in the below figure.

Note: ‘*’ means “many”, whereas ‘0’ and ‘1’ mean “zero” and “one” respectively.


Figure: – Multiplicity in Classes

Can you explain aggregation and composition in class diagrams?

Associations come in two main types: aggregation association and composition association.

A composition association signifies that the part cannot exist without the whole. For example, in the below figure we have three classes: the university class, the department class and the professor class. The department cannot exist without the university; if the university closes, the department closes with it. In other words, the lifetime of the department depends on the lifetime of the university. This is a composition association.

In the same figure we have defined a second association, between the department and the professor. In this case, if the professor leaves, the department still continues; in other words, the department is not dependent on the professor. This is an aggregation association, which signifies that the whole object can exist without the aggregated object.

Note: – The filled diamond represents composition and the empty diamond represents aggregation. You can see the figure below for more details.
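In code, the distinction shows up as who creates and owns the part. A hedged Java sketch of the university example (constructors, fields and method names are assumptions, not from the book’s figure):

```java
import java.util.ArrayList;
import java.util.List;

// Composition vs aggregation sketch. The university *creates and owns*
// its departments (composition: the part dies with the whole), while a
// department merely *references* professors that exist on their own
// (aggregation: the part outlives the whole).
public class UniversityDemo {
    static class Professor {
        String name;
        Professor(String name) { this.name = name; }
    }

    static class Department {
        String name;
        // Aggregation: professors are created elsewhere and passed in.
        List<Professor> professors = new ArrayList<>();
        Department(String name) { this.name = name; }
    }

    static class University {
        // Composition: departments are created inside the university
        // and have no meaning outside it.
        List<Department> departments = new ArrayList<>();
        Department addDepartment(String name) {
            Department d = new Department(name);
            departments.add(d);
            return d;
        }
    }

    public static void main(String[] args) {
        Professor prof = new Professor("Ada");   // exists independently
        University uni = new University();
        Department cs = uni.addDepartment("CS"); // owned by the university
        cs.professors.add(prof);
        cs.professors.remove(prof);              // department continues
        System.out.println(cs.professors.size()); // 0
    }
}
```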


Figure: – Aggregation and composition in action

What are composite structure diagram and reflexive association in class diagrams?

Composite structure diagram

When we try to show aggregation and composition in a complete project, the diagram becomes very complicated, so in order to keep it simple we can use a composite structure diagram. In the below figure we have shown two diagrams, one a normal diagram and the other a composite structure diagram, and the difference in simplicity can easily be seen. In the composite diagram the aggregated classes are self-contained in the main class, which makes it simpler to read.


Figure: – Composite Structure diagram

Reflexive associations
In many scenarios you need to show that two instances of the same class are associated with each other, and this is termed a reflexive association. For instance, the below figure shows a reflexive association from a real project. Here you can see that the customer class has multiple address objects, and an address can be a head office, corporate office or regional office. One of the address objects is the head office, and we have linked the address object to itself to show the reflexive association. This is the way we read the diagram: a regional address object is linked to zero or one instance of the head office object.


Figure: – Reflexive association

Can you explain business entity and service class?

Business entity objects represent persistent information, like the tables of a database. To make the point clearer: they just represent data and do not have business validations as such. For instance, the below figure ‘Business entity and service’ shows a simple customer table with three fields: ‘Customer Code’, ‘Customer Address’ and ‘Phone Number’. All these fields are properties of the ‘ClsCustomer’ class, so ‘ClsCustomer’ becomes the business entity class. The business entity class by itself cannot do anything; it’s just a placeholder for data. In the same figure we have one more class, ‘ClsServiceCustomer’. This class aggregates the business entity class and performs operations like ‘Add’, ‘Next’ (move to the next record), ‘Prev’ (move to the previous record) and ‘GetItem’ (get a customer entity depending on a condition).

With this approach we have separated the data from the behavior. The service represents the behavior while the business entity represents the persistent data.
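A minimal Java sketch of this separation (the class names mirror the C# classes in the figure; the fields and method bodies are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Business entity vs service class sketch. ClsCustomer only holds data;
// ClsServiceCustomer aggregates entities and carries the behavior
// (add, next, prev), separating persistent data from operations.
public class EntityServiceDemo {
    // Business entity: a plain holder for the table's fields.
    static class ClsCustomer {
        String customerCode;
        String customerAddress;
        String phoneNumber;
    }

    // Service class: operations over a collection of entities.
    static class ClsServiceCustomer {
        private final List<ClsCustomer> customers = new ArrayList<>();
        private int position = -1;

        void add(ClsCustomer c) { customers.add(c); }

        ClsCustomer next() {
            if (position < customers.size() - 1) position++;
            return customers.get(position);
        }

        ClsCustomer prev() {
            if (position > 0) position--;
            return customers.get(position);
        }
    }

    public static void main(String[] args) {
        ClsServiceCustomer service = new ClsServiceCustomer();
        ClsCustomer c = new ClsCustomer();
        c.customerCode = "C001";
        service.add(c);
        System.out.println(service.next().customerCode); // C001
    }
}
```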


Figure:-Business entity and service

Can you explain System entity and service class?

The system entity class represents persistent information related to the system. For instance, in the below figure ‘System entity and service class’ we have a system entity class which represents information about the ‘loggedindate’ and ‘loggedintime’ of the system registry. System service classes come in two flavors. In the first, the service class acts as a wrapper around the system entity class to provide behavior for the persistent system entity data. In the figure you can see how the ‘ClsAudit’ system entity is wrapped by the ‘ClsAuditSystem’ class, which is the system service class. ‘ClsAuditSystem’ adds ‘Audit’ and ‘GetAudit’ behavior to the ‘ClsAudit’ system entity class.


Figure: – System entity and service class

The other flavor of the system service class operates on non-persistent information (the first flavor operated on persistent information). For instance, the below figure ‘Non-persistent information’ shows how the ‘ClsPaymentService’ class operates on the payment gateway to check whether the card exists, whether the card is valid, and how much amount is on the card. All this information is non-persistent. By separating the logic for non-persistent data into a system service class we bring high reusability to the project.


Figure: – Non-persistent information

Note: – The above question can be asked in interview from the perspective of how you have separated the behavior from the data. The question will normally come twisted like ‘How did you separate the behavior from the data?’.

Can you explain generalization and specialization?

Generalization and specialization

In generalization and specialization we define the parent-child relationship between classes. In many instances you will see that some classes have the same properties and operations; these are made into a superclass, and later you can inherit from the superclass to make subclasses which have their own custom properties. In the below figure there are three classes showing the generalization and specialization relationship. All phone types have a phone number as a generalized property, but depending on whether the phone is a landline or a mobile it has wired or SIM-card connectivity as a specialized property. In this diagram clsPhone represents the generalization, whereas clsLandline and clsMobile represent specializations.


Figure: – Generalization and Specialization

How do we represent an abstract class and interface UML?

An interface is represented by ‘<<type>>’ in the class diagram. The below figure ‘Interface in action’ shows that we have defined an interface ‘IContext’. Note that ‘<<type>>’ represents an interface. If we want to show that the interface is used by a class, we show the same with a line and a simple circle, as shown in figure ‘Interface in Action’ below.


Figure: – Interface in action

Abstract classes are represented by ‘{abstract}’ as shown in figure ‘Abstract classes in action’.


Figure: – Abstract classes in action.

How do we achieve generalization and specialization?

By using inheritance.
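For example, the phone classes from the previous answer can be sketched in Java (the field names and values are illustrative):

```java
// Generalization/specialization via inheritance, mirroring the book's
// clsPhone / clsLandline / clsMobile example. The generalized property
// (phone number) lives in the superclass; each subclass adds its own
// specialized property.
public class PhoneDemo {
    static class ClsPhone {                      // generalization
        String phoneNumber;
    }

    static class ClsLandline extends ClsPhone {  // specialization
        String wiredConnectivity;
    }

    static class ClsMobile extends ClsPhone {    // specialization
        String simCardConnectivity;
    }

    public static void main(String[] args) {
        ClsMobile mobile = new ClsMobile();
        mobile.phoneNumber = "12345";       // inherited from ClsPhone
        mobile.simCardConnectivity = "GSM"; // its own specialized property
        System.out.println(mobile.phoneNumber);         // 12345
        System.out.println(mobile instanceof ClsPhone); // true
    }
}
```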

Can you explain object diagrams in UML?

Class diagrams show the static nature of the system. From the previous question you can easily judge that class diagrams show the types and how they are linked. Classes come to life only when objects are created from them. An object diagram gives a pictorial representation of a class diagram at a particular point in time. The below figure ‘Object diagram’ shows how a class diagram looks when actual objects are created. We have shown a simple student and course relationship in the object diagram: a student can take multiple courses. The class diagram shows the same with the multiplicity relationship, and we have also shown how it looks when objects are created, using the object diagram. We represent an object as ‘Object Name : Class Name’. For instance, in the below figure we have shown ‘Shiv : ClsStudent’, i.e. ‘Shiv’ is the object and ‘ClsStudent’ the class. As objects are created we also need to show the data of their properties; this is represented by ‘PropertyName=Value’, i.e. ‘StudentName=Shiv’.


Figure: – Object diagrams

The diagram also states that ‘ClsStudent’ can apply for many courses. The same is represented in object diagram by showing two objects one of the ‘Computer’ and the other of ‘English’.

Note: – Object diagrams should only be drawn to represent complicated relationships between objects. It’s also possible for them to complicate your technical document a lot, so use them sparingly.

Can you explain sequence diagrams?

Sequence diagrams
A sequence diagram shows the interaction between objects over a specific period of time. The below figure ‘Sequence diagram’ shows what a sequence diagram looks like. In this sequence diagram we have four objects: ‘Customer’, ‘Product’, ‘Stock’ and ‘Payment’. The message flow is shown vertically in waterfall manner, i.e. it starts from the top and flows to the bottom. Dashed lines represent the duration for which an object is alive. Horizontal rectangles on the dashed lines represent activation of the object. Messages sent from an object are represented by a dark arrow with a dark arrowhead; return messages are represented by a dotted arrow. So the figure shows the following sequence of interaction between the four objects:-

• Customer object sends message to the product object to request if the product is available or not.
• Product object sends message to the stock object to see if the product exists in the stock.
• Stock object answers saying yes or No.
• Product object sends the message to the customer object.
• Customer object then sends a message to the payment object to pay money.
• Payment object then answers with a receipt to the customer object.

One point to be noted is that the product and stock objects are not active when the payment activity occurs.


Figure: – Sequence diagram

Messages in sequence diagrams
There are five different kinds of messages which can be represented in sequence diagrams:-
Synchronous and asynchronous messages:-
Synchronous messages are represented by a dark arrowhead, while asynchronous messages are shown with a thin arrowhead, as in figure ‘Synchronous and Asynchronous’.


Figure: – Synchronous and Asynchronous

Recursive message:-
We have scenarios where we need to represent functions and subroutines which are called recursively; recursive means the method calls itself. Recursive messages are represented by a small rectangle inside a big rectangle, with an arrow going from the big rectangle to the small rectangle, as shown in figure ‘Recursive message’.


Figure: – Recursive message

Message iteration:-

Message iteration represents loops during a sequence of activity. The below figure ‘Message iteration’ shows how ‘order’ calls the ‘orderitem’ objects in a loop to get the cost. To represent a loop we write ‘For each <<object name>>’; in the below figure the object is ‘orderitem’. Also note that the ‘for each’ is put in a box to emphasize that it’s a loop.


Figure: – Message iteration

Message constraint:-
If we want to represent a constraint, it is put in square brackets, as shown in figure ‘Message constraint’. In the below figure the ‘customer’ object can call ‘book tickets’ only if the age of the customer is greater than 10.


Figure: – Message constraint

Message branching:-
The below figure ‘Message branching’ shows how the ‘customer’ object has two branches: one when the customer calls ‘save data’ and one when he cancels the data.


Figure: – Message branching

Doing Sequence diagram practically
Let’s take a small example to understand sequence diagram practically. Below is a simple voucher entry screen for accounts data entry. Following are the steps how the accountant will do data entry for the voucher:-

  • Accountant loads the voucher data entry screen. Voucher screen loads with debit account codes and credit account codes in the respective combo boxes.
  • Accountant will then fill in all details of the voucher like voucher description, date, debit account code, credit account code, description, and amount and then click ‘add voucher’ button.
  • Once ‘add voucher’ is clicked it will appear in the voucher screen below in a grid and the voucher entry screen will be cleared and waiting for new voucher to be added. During this step voucher is not added to database it’s only in the collection.
  • If there are more vouchers to be added the user again fills voucher and clicks ‘add voucher’.
  • Once all the vouchers are added he clicks ‘submit voucher’ which finally adds the group of vouchers to the database.

Below figure ‘Voucher data entry screen’ shows pictorially how the screen looks like.


Figure: – Voucher data entry screen

Figure ‘Voucher data entry sequence diagram’ shows the full sequence diagram view of how the flow of the above screen moves from the user interface to the data access layer. There are three main steps in the sequence diagram; let’s understand them step by step.

Step 1:- The accountant loads the voucher data entry screen. You can see from the voucher data entry screen image we have two combo boxes debit and credit account codes which are loaded by the UI. So the UI calls the ‘Account Master’ to load the account code which in turn calls the data access layer to load the accounting codes.

Step 2:- In this step the accountant starts filling the voucher information. The important point to be noted in this step is that after a voucher is added there is a conditional statement which says do we want to add a new voucher. If the accountant wants to add new voucher he again repeats step 2 sequence in the sequence diagram. One point to be noted is the vouchers are not added to database they are added in to the voucher collection.

Step 3:- If there are no more vouchers the accountant clicks submit and finally adds the entire voucher in the database. We have used the loop of the sequence diagram to show how the whole voucher collection is added to the database.


Figure: – Voucher data entry sequence diagram




I will be blogging on ASP.NET and publishing on the following topics of ASP.NET 4.0

Introduction to ASP.NET

  • Agenda :
    • Brief on HTML
    • Difference between HTML and XML
    • Why XML is important.
    • Static Web Pages vs Dynamic Web Pages.
    • Tags affect how text is displayed on a web page. E.g. <b> text </b> <i> italic</i> → text italic.

<b color=blue> this</b> this

Difference between Web Forms and ASP.NET MVC 3.0

  • Web Forms are based on ASP.NET; it is a high-level programming framework. ASP.NET MVC is also based on ASP.NET, but it is a lower-level programming technology.
  • Web Forms are similar to the user interface controls of a Windows app; the controls are event based. ASP.NET MVC uses HTML controls and requires knowledge of JS plugins.
  • Web Forms controls encapsulate HTML, JS and CSS; they data-bind charts, grids, tables etc. ASP.NET MVC directly uses HTML controls, hence requires deep knowledge of HTML and HTTP; you have total control of the HTML markup.
  • In Web Forms, unit testing is not part of the framework and needs to be manually incorporated. ASP.NET MVC supports unit testing, TDD and Agile.
  • Browser differences are handled by Web Forms. In ASP.NET MVC, browser differences and OS compatibility need to be taken care of by the developer.

What is ASP.NET

ASP.NET is a free framework that uses C# and VB. Visual Studio provides Visual Web Developer for free to develop standalone websites. The IntelliSense of Visual Studio helps in understanding the libraries used for developing a website, and Visual Studio has a powerful debugging tool. ASP.NET is part of .NET. WebsiteSpark is a Microsoft program that provides free software for website development.

HTML vs XML:
  • HTML describes how text is displayed; it is parsed and interpreted by the browser and then displayed.
  • XML provides information about the text; it is used to supply the data for the information requested and returns that data as a response in XML.

Static vs dynamic web pages:
  • A static web page is a plain HTML page which doesn’t change during interaction with the user.
  • A dynamic web page (an .aspx page) is analyzed on the server and the CLR executes the code in it to generate the page dynamically; finally the response data is converted to an HTML page for each request.

Working with the Server

  • The server does everything for every user request.
  • Dynamic pages force the server to do everything, which can lead to poor performance.
  • The server manages the HTTP session state.

The .aspx page is a dynamic page that follows the request and response model, and it has a unique session for each client.

Client information and session information let the server recognize where a request originates.

When the first request is sent, the server creates a session, which is managed by the server. Session management requires server resources. A time-out limit is set by the web application; after this limit the session expires. So before the session expires, an interaction between the client and the server should be made.

1st request → Parser → compile → IL code in assembly cache → memory → execute (HTTP runtime).

2nd request ————————————————————→ memory → execute (HTTP runtime).

Server Controls

Server control: A server control is configured beforehand at design time. A request for the web page makes the dynamic page execute the program logic at the server and deliver the result as HTML to the client, e.g. the GridView control and Calendar control.

Code-behind: VB/C# code in a separate file with extension .vb/.cs is the code-behind of the web page.

Inline Code: Javascript code and HTML tags are inline code in web page.

The ASP.NET framework, which is composed of WebMatrix, Web Forms and ASP.NET MVC, is required to build websites and web applications.

State Management and AutoPostback

Web pages are HTTP based and stateless; this stateless nature is a problem.

ASP.NET maintains the HTTP state automatically. Set EnableViewState to true in the Properties window to enable postback.

What is ViewState ? ViewState  is a hidden value containing state information.

AutoPostBack – when the whole page is sent back to the server as soon as a new option is selected, that is the AutoPostBack property.

ASP.NET supports client side scripting.

Validation controls: A special set of controls under the validation section of the toolbox. Select the required control, drop it on the web design surface and select the control to validate. Validation controls only work with server controls, so to use them with HTML controls, convert the HTML control to a server control.


.NET Application Packaging, Deployment and Configuring Application

Deployment and Packaging .NET Assemblies.
Deployment and Packaging of .NET Application
Today, applications are created using the types developed by Microsoft or custom built by you. If these types are developed using any language that targets the common language runtime (CLR), they can all work together seamlessly, i.e. different types created using different .NET languages can interact seamlessly.

.NET Framework Deployment Objectives:

All applications use DLLs from Microsoft or other vendors. Because an application executes code from various vendors, the developer of any one piece of code can’t be 100 percent sure how someone else is going to use it, and this kind of interaction can be unsafe and dangerous. End users come across this scenario quite often when one company decides to update its part of the code and ship it to all its users. Usually this code should be backward-compatible with the previous version, since it is impossible to retest and debug all of the already-shipped applications to ensure that the changes will have no undesirable effect.

When installing a new application, you may discover that it has somehow corrupted an already-installed application. This predicament is known as “DLL hell”. The end result is that users have to carefully consider whether to install new software on their machines.

The problem is that the application isn’t isolated as a single entity. You can’t easily back up the application, since you must copy both the application’s files and the relevant parts of the registry; to move the application, you must run the installation program again so that all files and registry settings are set properly. Finally, you can’t easily uninstall or remove the application without the nasty feeling that some part of it is still lurking on your machine.

When applications are installed, they come with all kinds of files from different companies. This code can perform any operation, including deleting files or sending e-mail. To make users comfortable, security must be built into the system so that users can explicitly allow or disallow code developed by various companies to access their system resources.

The .NET Framework addresses the DLL hell issue in a big way. For example, unlike COM, types no longer require settings in the registry (unfortunately, applications still require shortcut links). As for security, the .NET Framework includes a security model called code access security. Whereas Windows security is based on a user’s identity, code access security is based on permissions that host applications loading components can control. As you’ll see, the .NET Framework enables users to control what gets installed and what runs, and in general to control their machines, more than Windows ever did.

Developing Modules with Types

Lets start with an example as shown below:

public sealed class Appln {

    public static void Main() {
        System.Console.WriteLine("Hello My world");
    }
}


This application defines a type called Appln. This type has a single public, static method called Main. Inside Main is a reference to another type called System.Console. System.Console is a type implemented by Microsoft, and the Intermediate Language (IL) code that implements this type’s methods is in the MSCorLib.dll file. To build the application, save the source code above into a C# file (Appln.cs) and then execute the following command line:

csc.exe /out:Appln.exe /t:exe /r:MSCorLib.dll Appln.cs

This command line tells the C# compiler to emit an executable file called Appln.exe (/out:Appln.exe). The type of file produced is a Win32 console application (/t[arget]:exe).

When the C# compiler processes the source file, it sees that the code references the System.Console type’s WriteLine method. At this point, the compiler wants to ensure that this type exists somewhere, that it has a WriteLine method, and that the argument being passed to this method matches the parameter the method expects. Since this type is not defined in the C# source code, to make the C# compiler happy you must give it a set of assemblies that it can use to resolve references to external types. The command line above includes the /r[eference]:MSCorLib.dll switch, which tells the compiler to look for external types in the assembly identified by the MSCorLib.dll file.

MSCorLib.dll is a special file in that it contains all the core types: Byte, Char, String, Int32 and many more. In fact, these types are so frequently used that the C# compiler automatically references the MSCorLib.dll assembly, i.e. the above command line can be shortened to:

csc.exe /out:Appln.exe /t:exe Appln.cs

Further, you can drop /out and /t:exe since both match the compiler’s defaults, so the command becomes:

csc.exe Appln.cs

If, for some reason, you really don’t want the C# compiler to reference the MSCorLib.dll assembly, you can use the /nostdlib switch. Microsoft uses this switch when building the MSCorLib.dll assembly itself. For example, the following will produce an error, since the code above references the System.Console type, which is defined in MSCorLib.dll:

csc.exe /out:Appln.exe /t:exe /nostdlib Appln.cs

The compiler emits a standard PE (portable executable) file; this means that a machine running a 32-bit or 64-bit version of Windows should be able to load this file and do something with it. Windows supports two types of applications: those with a console user interface (CUI) and those with a graphical user interface (GUI). Because I specified the /t:exe switch, the C# compiler produced a CUI application. You’d use the /t:winexe switch to cause the C# compiler to produce a GUI application.

Response Files

I’d like to spend a moment talking about response files. A response file is a text file that contains a set of compiler command-line switches. You instruct the compiler to use a response file by specifying its name on the command line, prefixed by an @ sign. For example, you can have a response file called MyAppln.rsp that contains the following text:

/out:MyAppln.exe

/target:winexe

To cause CSC.exe to use these settings you’d invoke it as follows:

csc.exe @MyAppln.rsp CodeFile1.cs CodeFile2.cs

This tells the C# compiler what to name the output file and what kind of target to create. The C# compiler supports multiple response files. The compiler also looks in the directory containing the CSC.exe file for a global CSC.rsp file. Settings that you want applied to all of your projects should go in this file. The compiler aggregates and uses the settings in all of these response files. If you have conflicting settings in the local and global response files, the settings in the local file override the settings in the global file. Likewise, any settings explicitly passed on the command line override the settings taken from a local response file.

When you install the .NET Framework, it installs a default global CSC.rsp file in the %SystemRoot%\Microsoft.NET\Framework\vX.X.X directory (where X.X.X is the version of the .NET Framework you have installed). The 4.0 version of the file contains the following switches:

# This file contains command-line options that the C# compiler
# will process during compilation, unless the “noconfig” option is specified.

# Reference the common Framework libraries

/r:Accessibility.dll
/r:Microsoft.CSharp.dll
/r:System.Configuration.Install.dll
/r:System.Core.dll
/r:System.Data.dll
/r:System.Data.DataSetExtensions.dll
/r:System.Data.Linq.dll
/r:System.Deployment.dll
/r:System.Device.dll
/r:System.DirectoryServices.dll
/r:System.dll
/r:System.Drawing.dll
/r:System.EnterpriseServices.dll
/r:System.Management.dll
/r:System.Messaging.dll
/r:System.Numerics.dll
/r:System.Runtime.Remoting.dll
/r:System.Runtime.Serialization.dll
/r:System.Runtime.Serialization.Formatters.Soap.dll
/r:System.Security.dll
/r:System.ServiceModel.dll
/r:System.ServiceProcess.dll
/r:System.Transactions.dll
/r:System.Web.Services.dll
/r:System.Windows.Forms.dll
/r:System.Xml.dll
/r:System.Xml.Linq.dll

Because the global CSC.rsp file references all of the assemblies listed, you do not need to explicitly reference these assemblies using the C# compiler’s /reference switch. This response file is a big convenience for developers because it allows them to use types and namespaces defined in various Microsoft-published assemblies without having to specify a /reference compiler switch for each when compiling.

When you use the /reference compiler switch to reference an assembly, you can specify a complete path to a particular file. However, if you do not specify a path, the compiler will search for the file in the following places (in the order listed):

– The working directory.

– The directory that contains the CSC.exe file itself. MSCorLib.dll is always obtained from this directory. The path looks something like this: %SystemRoot%\Microsoft.NET\Framework\v4.0.#####

– Any directories specified using the /lib compiler switch.

– Any directories specified using the LIB environment variable.
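For example (the directory and assembly names are hypothetical), a /lib directory is searched when the referenced file has no explicit path:

```shell
rem Look for MyLib.dll in c:\MyLibs in addition to the default search locations
csc.exe /lib:c:\MyLibs /r:MyLib.dll App.cs
```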

You are welcome to add your own switches to the global CSC.rsp file if you want to make your life even easier, but this makes it more difficult to replicate the build environment on different machines: you have to remember to update the CSC.rsp file the same way on each build machine. Also, you can tell the compiler to ignore both local and global CSC.rsp files by specifying the /noconfig command-line switch.

A managed PE file has four main parts: the PE32(+) header, the CLR header, the metadata and the IL. The PE32(+) header is the standard information that Windows expects. The CLR header is a small block of information that is specific to modules that require the CLR (managed modules). The header includes the major and minor version number of the CLR that the module was built for, some flags, a MethodDef token (described later) indicating the module’s entry point method if the module is a CUI or GUI executable, and an optional strong-name digital signature. You can see the format of the CLR header by examining the IMAGE_COR20_HEADER structure defined in the CorHdr.h header file.

The metadata is a block of binary data that consists of several tables. There are three categories of tables: definition tables, reference tables and manifest tables. The following table describes some of the more common definition tables that exist in a module’s metadata block.

Common definition metadata tables:

ModuleDef – Always contains one entry that identifies the module. The entry includes the module’s filename and extension and a module version ID. This allows the file to be renamed while keeping a record of its original name.

TypeDef – Contains one entry for each type defined in the module. Each entry includes the type’s name, base type and flags (public, private, etc.) and contains indexes to the methods it owns in the MethodDef table, the fields it owns in the FieldDef table, the properties it owns in the PropertyDef table, and the events it owns in the EventDef table.

MethodDef – Contains one entry for each method defined in the module. Each entry includes the method’s name, flags (private, public, virtual, abstract, static, final, etc.), signature and offset within the module where its IL code can be found. Each entry can also refer to a ParamDef table entry in which more information about the method’s parameters can be found.

FieldDef – Contains one entry for every field defined in the module. Each entry includes flags (private, public, etc.), type and name.

ParamDef – Contains one entry for each parameter defined in the module. Each entry includes flags (in, out, retval, etc.), type and name.

PropertyDef – Contains one entry for each property defined in the module. Each entry includes flags, type and name.

EventDef – Contains one entry for each event defined in the module. Each entry includes flags and name.

During compilation, the compiler creates an entry in one of the tables above for every definition in the source code. Metadata table entries are also created as the compiler detects the types, fields, methods, properties and events that the source code references. The metadata created includes a set of reference tables that keep a record of the referenced items. The list below describes some of the more common reference metadata tables.

Common reference metadata tables:

AssemblyRef – Contains one entry for each assembly referenced by the module. Each entry includes the information necessary to bind to the assembly: the assembly’s name (without path and extension), version number, culture and public key token. Each entry also contains some flags and a hash value.

ModuleRef – Contains one entry for each PE module that implements types referenced by this module. Each entry includes the module’s filename and extension. This table is used to bind to types that are implemented in different modules of the calling assembly.

TypeRef – Contains one entry for each type referenced by the module. Each entry includes the type’s name and a reference to where the type can be found. If the type is implemented within another type, the reference will indicate a TypeRef entry. If the type is implemented in the same module, the reference will indicate a ModuleDef entry. If the type is implemented in another module within the calling assembly, the reference will indicate a ModuleRef entry. If the type is implemented in a different assembly, the reference will indicate an AssemblyRef entry.

MemberRef – Contains one entry for each member referenced by the module. Each entry includes the member’s name and signature and points to the TypeRef entry for the type that defines the member.

My personal favorite tool is ILDasm.exe, the IL Disassembler. To see the metadata tables, execute the following command line:

ILDasm Appln.exe

To see the metadata in a nice, human-readable form, select the View/MetaInfo/Show! menu item.

The important thing to remember is that Appln.exe contains a TypeDef entry whose name is Appln. This type identifies a public sealed class that is derived from System.Object (a type referenced from another assembly). The Appln type also defines two methods: Main and .ctor (a constructor).

Main is a public, static method whose code is IL. Main has a void return type and takes no arguments. The constructor method is public, and its code is also IL. The constructor has a void return type, takes no arguments, and has a this pointer, which refers to the object’s memory that is to be constructed when the method is called.

Combining Modules to Form an Assembly

An assembly is a collection of one or more files containing type definitions and resource files. One of the assembly’s files is chosen to hold a manifest. The manifest is another set of metadata tables that basically contain the names of the files that are part of the assembly. They also describe the assembly’s version, culture, publisher, publicly exported types and all of the files that comprise the assembly.

The CLR always loads the file that contains the manifest metadata tables first and then uses the manifest to get the names of the other files that are in the assembly. Here are some characteristics of assemblies that you should remember:

– An assembly defines the reusable types.

– An assembly is marked with a  version number.

– An assembly can have security information associated with it.

An assembly’s individual files don’t have these attributes – except for the file that contains the manifest metadata tables. To package, version, secure and use types, you must place them in modules that are part of an assembly.

The reason is that an assembly allows you to decouple the logical and physical notions of reusable types. For example, an assembly can consist of several files: you could put the frequently used types in one file and the less frequently used types in another file.

You configure an application to download assembly files by specifying a codeBase element in the application’s configuration file. The codeBase element identifies a URL pointing to where all of an assembly’s files can be found. When attempting to load an assembly’s file, the CLR obtains the codeBase element’s URL and checks the machine’s download cache to see if the file is present. If it is, the file is loaded. If the file isn’t in the cache, the CLR downloads the file into cache from the location the URL points to. If the file can’t be found, the CLR throws a FileNotFoundException exception at runtime.
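A codeBase element might look like the following sketch (the assembly name, public key token, and URL are placeholders, not values from this article):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeClassLibrary"
                          publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- The CLR downloads the file from this URL if it isn't in the download cache -->
        <codeBase version="1.0.0.0"
                  href="http://www.example.com/SomeClassLibrary.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```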

I’ve identified three reasons to use multifile assemblies:

– You can partition your types among separate files, allowing files to be incrementally downloaded as described in the Internet download scenario. Partitioning the types into separate files also allows for partial or piecemeal packaging and deployment of applications you purchase and install.

– You can add resource or data files to your assembly. For example, you could have a type that calculates insurance information using an actuarial table. Instead of embedding the actuarial table in the source code, you could use a tool so that the data file is considered to be part of the assembly.

– You can create assemblies consisting of types implemented in different programming languages. To developers using the assembly, the assembly appears to contain just a bunch of types; developers won’t even know that different programming languages were used. By the way, if you prefer, you can run ILDasm.exe on each of the modules to obtain an IL source code file, then run ILAsm.exe and pass it all of the IL source code files. ILAsm.exe will produce a single file containing all of the types. This technique requires your source code compiler to produce IL-only code.

Manifest metadata tables:

AssemblyDef – Contains a single entry if this module identifies an assembly. The entry includes the assembly’s name, version, culture, flags, hash algorithm, and the publisher’s public key.

FileDef – Contains one entry for each PE and resource file that is part of the assembly. The entry includes the file’s name and extension, hash value and flags. If the assembly consists only of its own file, the FileDef table has no entries.

ManifestResourceDef – Contains one entry for each resource that is part of the assembly. The entry includes the resource’s name, flags and an index into the FileDef table indicating the file that contains the resource. If the resource isn’t a stand-alone file, the resource is a stream contained within a PE file, and the entry also includes an offset indicating the start of the resource stream within the PE file.

ExportedTypesDef – Contains one entry for each public type exported from all of the assembly’s PE modules. The entry includes the type’s name, an index into the FileDef table and an index into the TypeDef table. To save file space, types exported from the file containing the manifest are not repeated in this table because the type information is available using the metadata’s TypeDef table.

The C# compiler produces an assembly when you specify any of the following command-line switches: /t[arget]:exe, /t[arget]:winexe or /t[arget]:library. All of these switches cause the compiler to generate a single PE file that contains the manifest metadata tables. The resulting file is a CUI executable, a GUI executable or a DLL, respectively.

The C# compiler supports the /t[arget]:module switch. This switch tells the compiler to produce a PE file that doesn’t contain the manifest metadata tables. The PE file produced is always a DLL PE file, and this file must be added to an assembly before the CLR can access any types within it. When you use the /t:module switch, the C# compiler, by default, names the output file with an extension of .netmodule.

There are many ways to add a module to an assembly. If you are using the  C# compiler to build a PE file with a manifest, you can use the /addmodule switch. Let’s assume that we have two source code files:

– File1.cs, which contains rarely used types

– File2.cs, which contains frequently used types

Let’s compile the rarely used types into their own module so that users of the assembly won’t need to deploy this module if they never access the rarely used types:

csc /t:module File1.cs

This line causes the C# compiler to create a File1.netmodule file. Next, let’s compile the frequently used types into their own module; this module will represent the entire assembly.

We change the name of the output file to myappln.dll instead of calling it File2.dll:

csc /out:myappln.dll /t:library /addmodule:File1.netmodule File2.cs

This line tells the C# compiler to compile the File2.cs file to produce the myappln.dll file. Because /t:library is specified, a DLL PE file containing the manifest metadata tables is emitted into the myappln.dll file. The /addmodule:File1.netmodule switch tells the compiler that File1.netmodule is a file that should be considered part of the assembly. Specifically, the /addmodule switch tells the compiler to add the file to the FileDef manifest metadata table and to add File1.netmodule’s publicly exported types to the ExportedTypesDef manifest metadata table.

The two files described below are created; the second one contains the manifest.

File1.netmodule:
– IL compiled from File1.cs
– Metadata: types, methods and so on defined by File1.cs; types, methods and so on referenced by File1.cs

myappln.dll:
– IL compiled from File2.cs
– Metadata: types, methods and so on defined by File2.cs; types, methods and so on referenced by File2.cs
– Manifest: assembly files (self and File1.netmodule); public assembly types (self and File1.netmodule)

The File1.netmodule file contains the IL code generated by compiling File1.cs. This file also contains metadata tables that describe the types, methods, fields, properties, events and so on that are defined by File1.cs. The metadata tables also describe the types, methods and so on that are referenced by File1.cs. myappln.dll is a separate file. Like File1.netmodule, this file includes the IL code generated by compiling File2.cs and also includes similar definition and reference metadata tables. However, myappln.dll contains the additional manifest metadata tables, making myappln.dll an assembly. The manifest metadata tables describe all of the files that make up the assembly, including all of the public types exported from myappln.dll and File1.netmodule.

Any client code that consumes the myappln.dll assembly’s types must be built using the /r[eference]:myappln.dll compiler switch. This switch tells the compiler to load the myappln.dll assembly and all of the files listed in its FileDef table when searching for an external type.
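As a sketch (Client.cs is a hypothetical consumer), building code against the multifile assembly references only the file that contains the manifest:

```shell
rem The compiler follows myappln.dll's FileDef table to find File1.netmodule
csc /r:myappln.dll Client.cs
```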

The CLR loads assembly files only when a method referencing a type in an unloaded assembly is called. This means that to run an application, not all of the files from a referenced assembly need to be present.

Using the Assembly Linker

The AL.exe utility can produce an EXE or a DLL PE file that contains only a manifest describing the types in other modules. To understand how AL.exe works, let’s change the way the myappln.dll assembly is built:

csc /t:module File1.cs

csc /t:module File2.cs

al /out:myappln.dll /t:library File1.netmodule File2.netmodule

In this example, two separate modules, File1.netmodule and File2.netmodule, are created. Neither module is an assembly because they don’t contain manifest metadata tables. Then a third file is produced: myappln.dll which is a small DLL PE file that contains no IL code but has manifest metadata tables indicating that File1.netmodule and File2.netmodule are part of the assembly. The resulting assembly consists of the three files: myappln.dll, File1.netmodule and File2.netmodule. The assembly linker has no way to combine multiple files into a single file.

The AL.exe utility can also produce CUI and GUI PE files using the /t[arget]:exe or /t[arget]:winexe command line switches. You can specify which method in a module should be used as an entry point by adding the /main command-line switch when invoking AL.exe. The following is an example of how to call the Assembly Linker, AL.exe, by using the /main command-line switch.

csc /t:module /r:myappln.dll Program.cs

al /out:Program.exe /t:exe /main:Program.Main Program.netmodule

Here the first line builds the Program.cs file into a Program.netmodule file. The second line produces a small Program.exe PE file that contains the manifest metadata tables. In addition, there is a small global function named __EntryPoint that is emitted by AL.exe because of the /main:Program.Main command-line switch. This function, __EntryPoint, contains the following IL code:

.method privatescope static void __EntryPoint$PST06000001() cil managed
{
  .entrypoint
  .maxstack 8
  // Forward the call to Main, which lives in Program.netmodule
  call       void [.module 'Program.netmodule']Program::Main()
  ret
}

As you can see, this code simply calls the Main method contained in the Program type defined in the Program.netmodule file.

Adding Resource Files to an Assembly

When using AL.exe to create an assembly, you can add a file as a resource to the assembly by using the /embed[resource] switch. This switch takes a file and embeds the file’s contents into the resulting PE file. The manifest’s ManifestResourceDef table is updated to reflect the existence of the resource.

AL.exe also supports a /link[resource] switch, which also takes a file containing resources. However, the /link[resource] switch updates the manifest’s ManifestResourceDef and FileDef tables, indicating that the resource exists and identifying which of the assembly’s files contains it. The resource file is not embedded into the assembly PE file; it remains separate and must be packaged and deployed with the other assembly files.

The C# compiler’s /resource switch embeds the specified resource file into the resulting assembly PE file, updating the ManifestResourceDef table. The compiler’s /linkresource switch adds an entry to the ManifestResourceDef and the FileDef manifest tables to refer to a stand-alone resource file.

You can embed a standard Win32 resource into an assembly by specifying the pathname of a .res file with the /win32res switch when using either AL.exe or CSC.exe. In addition, you can quickly and easily embed a standard Win32 icon resource into an assembly by specifying the pathname of the .ico file with the /win32icon switch when using either AL.exe or CSC.exe. Within Visual Studio, you can add resource files to your assembly by displaying your project’s properties and then clicking the Application tab.
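Hedged examples of these switches (the .res and .ico file names are placeholders):

```shell
rem Embed a full Win32 resource file
csc /win32res:MyApp.res App.cs

rem Or embed just an application icon
csc /win32icon:MyApp.ico App.cs
```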

Assembly Version Resource Information

When AL.exe or CSC.exe produces a PE file assembly, it also embeds a standard Win32 version resource into the PE file. Application code can acquire and examine this information at runtime by calling System.Diagnostics.FileVersionInfo’s static GetVersionInfo method.
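As a minimal sketch (the file path is illustrative), reading the embedded version resource at runtime:

```csharp
using System;
using System.Diagnostics;

public static class ShowVersion
{
    public static void Main()
    {
        // GetVersionInfo reads the Win32 version resource from the PE file on disk
        FileVersionInfo info = FileVersionInfo.GetVersionInfo("MyAppln.dll");
        Console.WriteLine("FileVersion:    " + info.FileVersion);
        Console.WriteLine("ProductVersion: " + info.ProductVersion);
        Console.WriteLine("CompanyName:    " + info.CompanyName);
    }
}
```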

Here’s what the code that produces the version information looks like:

using System.Reflection;

// FileDescription version information:
[assembly: AssemblyTitle("MyAppln.dll")]

// CompanyName version information:
[assembly: AssemblyCompany("Wintellect")]

// ProductName version information:
[assembly: AssemblyProduct("Wintellect ® Jeff's Type Library")]

// LegalCopyright version information:
[assembly: AssemblyCopyright("Copyright © Wintellect 2010")]

// LegalTrademark version information:
[assembly: AssemblyTrademark("JeffTypes is a registered trademark of Wintellect")]

// AssemblyVersion version information:
[assembly: AssemblyVersion("")]

// FILEVERSION/FileVersion version information:
[assembly: AssemblyFileVersion("")]

// PRODUCTVERSION/ProductVersion version information:
[assembly: AssemblyInformationalVersion("")]

// Set the Language field (discussed later in the "Culture" section):
[assembly: AssemblyCulture("")]

The table below shows the version resource fields and their corresponding AL.exe switches and custom attributes.

Version Resource – AL.exe Switch – Custom Attribute/Comment
FILEVERSION – /fileversion – System.Reflection.AssemblyFileVersionAttribute
PRODUCTVERSION – /productversion – System.Reflection.AssemblyInformationalVersionAttribute
FILEFLAGS – (none) – Always 0
FILEOS – (none) – Currently always VOS__WINDOWS32
FILETYPE – /target – Set to VFT_APP if /target:exe or /target:winexe is specified; set to VFT_DLL if /target:library is specified
FILESUBTYPE – (none) – Always set to VFT2_UNKNOWN
AssemblyVersion – /version – System.Reflection.AssemblyVersionAttribute
Comments – /description – System.Reflection.AssemblyDescriptionAttribute
CompanyName – /company – System.Reflection.AssemblyCompanyAttribute
FileDescription – /title – System.Reflection.AssemblyTitleAttribute
FileVersion – /version – System.Reflection.AssemblyFileVersionAttribute
InternalName – /out – Set to the name of the output file specified (without the extension)
LegalCopyright – /copyright – System.Reflection.AssemblyCopyrightAttribute
LegalTrademarks – /trademark – System.Reflection.AssemblyTrademarkAttribute
OriginalFilename – /out – Set to the name of the output file (without a path)
PrivateBuild – (none) – Always blank
ProductName – /product – System.Reflection.AssemblyProductAttribute
ProductVersion – /productversion – System.Reflection.AssemblyInformationalVersionAttribute
SpecialBuild – (none) – Always blank
  • AssemblyFileVersion – This version number is stored in the Win32 version resource. This number is for information purposes only; the CLR doesn’t examine this version number in any way.
  • AssemblyInformationalVersion – This version number is also stored in the Win32 version resource and, again, this number is for information purposes only.
  • AssemblyVersion – This version is stored in the AssemblyDef manifest metadata table. The CLR uses this version number when binding to strongly named assemblies. This number is extremely important and is used to uniquely identify an assembly. When starting to develop an assembly, you should set the major, minor, build and revision numbers and shouldn’t change them until you’re ready to begin work on the next deployable version of your assembly. When you build an assembly, the version number of each referenced assembly is embedded in the AssemblyRef table’s entry. This means that an assembly is tightly bound to a specific version of a referenced assembly.

Simple Application Deployment

Assemblies don’t dictate or require any special means of packaging. The easiest way to package a set of assemblies is simply to copy all of the files directly. Because the assemblies include all of the dependent assembly references and types, the user can just run the application and the runtime will look for referenced assemblies in the application’s directory. No modifications to the registry  are necessary for the application to run. To uninstall the application, just delete all the files.

You can use the options available on the Publish tab to cause Visual Studio to produce an MSI file; the MSI file can also install any prerequisite components, such as the .NET Framework or Microsoft SQL Server 2008 Express Edition. Finally, the application can automatically check for updates and install them on the user’s machine by taking advantage of ClickOnce technology.

Assemblies deployed to the same directory as the application are called privately deployed assemblies. Privately deployed assemblies can simply be copied to an application’s base directory, and the CLR will load them and execute the code in them. In addition, an application can be uninstalled by simply deleting the assemblies in its directory. This allows simple backup and restore as well.

This simple install/move/uninstall scenario is possible because each assembly has metadata indicating which referenced assembly should be loaded; no registry settings are required. In addition, an application always binds to the same types it was built and tested with; the CLR can’t load a different assembly that just happens to provide a type with the same name.

Simple Administrative Control

To allow administrative control over an application, a configuration file can be placed in the application’s directory. The setup program would then install this configuration file in the application’s base directory. The CLR interprets the content of this file to alter its policies for locating and loading assembly files.

Using a separate file allows the file to be easily backed up and also allows the administrator to copy the application to another machine – just copy the necessary files and the administrative policy is copied too.

Suppose, for example, that the application’s assembly files are moved out of the application’s base directory into a subdirectory. The CLR won’t be able to locate and load these files; running the application will cause a System.IO.FileNotFoundException exception to be thrown. To fix this, the publisher creates an XML configuration file and deploys it to the application’s base directory. The name of this file must be the name of the application’s main assembly file with a .config extension: Program.exe.config for this example. This configuration file should look like this:



<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="AuxFiles" />
    </assemblyBinding>
  </runtime>
</configuration>




Whenever the CLR attempts to locate an assembly file, it always looks in the application’s base directory first, and if it can’t find the file there, it looks in the AuxFiles subdirectory. You can specify multiple semicolon-delimited paths for the probing element’s privatePath attribute. Each path is considered relative to the application’s base directory. You can’t specify an absolute or a relative path identifying a directory that is outside of the application’s base directory.
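For instance (the subdirectory names are illustrative), several probing paths can be listed in one attribute:

```xml
<!-- Each path is relative to the application's base directory -->
<probing privatePath="AuxFiles;bin\subdir" />
```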

The name and location of this XML configuration file is different depending on the application type

  • For executable applications (EXE), the configuration file must be in the application’s base directory, and it must have the name of the EXE file with “.config” appended to it.
  • For Microsoft ASP.NET Web Forms applications, the file must be in the web application’s virtual root directory and is always named Web.config.

When you install the .NET Framework, it creates a Machine.config file. There is one Machine.config file per version of the CLR you have installed on the machine.

The Machine.config file is located in the following directory:

%SystemRoot%\Microsoft.NET\Framework\version\CONFIG

Of course, %SystemRoot% identifies your Windows directory (usually C:\WINDOWS), and version is a version number identifying a specific version of the .NET Framework. Settings in the Machine.config file represent default settings that affect all applications running on the machine. An administrator can create a machine-wide policy by modifying the single Machine.config file. However, administrators and users should avoid modifying this file, because its settings affect every application on the machine. Plus, you want an application’s settings to be backed up and restored, and keeping an application’s settings in the application-specific configuration file enables this.