Month: November 2017



I will be blogging on ASP.NET, publishing posts on the following ASP.NET 4.0 topics.

Introduction to ASP.NET

  • Agenda:
    • Brief on HTML
    • Difference between HTML and XML
    • Why XML is important
    • Static Web Pages vs Dynamic Web Pages
    • Tags affect how text is displayed on a web page, e.g. <b> text </b> <i> italic</i> renders as text italic.

An attribute the browser doesn't recognize (such as color on <b>) is simply ignored: <b color=blue>this</b> this still renders the first word as plain bold.

Difference between Web Forms and ASP.NET MVC 3.0

  • Web Forms is a high-level programming framework based on ASP.NET; ASP.NET MVC is also based on ASP.NET but is a lower-level programming technology.
  • Web Forms controls are similar to the user-interface controls of a Windows app and are event based; ASP.NET MVC uses plain HTML controls and requires knowledge of JavaScript plugins.
  • Web Forms controls encapsulate HTML, JS and CSS, and can data-bind charts, grids, tables, etc.; ASP.NET MVC uses HTML controls directly, so it requires deep knowledge of HTML and HTTP, but gives you total control of the HTML markup.
  • In Web Forms, unit testing is not part of the framework and needs to be incorporated manually; ASP.NET MVC supports unit testing, TDD and Agile.
  • Web Forms handles browser differences for you; in ASP.NET MVC, browser differences and OS compatibility need to be taken care of by the developer.

What is ASP.NET

ASP.NET is a free framework that works with C# and VB. Microsoft provides Visual Web Developer, a free edition of Visual Studio, to develop standalone websites. Visual Studio's IntelliSense helps you understand the libraries used while developing a website, and Visual Studio has a powerful debugger. ASP.NET is part of .NET. WebsiteSpark is a free development program for building websites.

HTML vs XML:
• HTML describes how text is displayed; it is parsed and interpreted by the browser and then rendered.
• XML provides information about the text; a server can answer a request for data with a response in XML.

Static Web Pages vs Dynamic Web Pages:
• A static web page is a plain HTML page that doesn't change during interaction with the user.
• A dynamic page (.aspx) is analyzed on the server, where the CLR executes the code in it; the result is converted to an HTML response for each request.

Working with the Server

  • The server does everything for every user request.
  • Because a dynamic page forces the server to do all of this work, performance can suffer.
  • The server manages the HTTP session state.

An .aspx page is a dynamic page that follows the request/response model, and the server keeps a unique session for each client.

Client information and session information let the server recognize where each request originated.

When the first request is sent, the server creates a session, which it then manages. Session management requires server resources. The web application sets a timeout limit; after that limit, the session expires. So, to keep a session alive, an interaction between the client and the server must occur before the session expires.
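As a sketch of the session behavior described above, a Web Forms code-behind might read and write session state like this (the "VisitCount" key and the lblVisits label are hypothetical names used for illustration):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // On the first request the server creates the session; the key is
    // absent, so we start the counter at zero.
    int visits = (Session["VisitCount"] is int) ? (int)Session["VisitCount"] : 0;
    Session["VisitCount"] = visits + 1;

    // Timeout limit in minutes; after this period of inactivity the
    // session expires and the server reclaims its resources.
    Session.Timeout = 20;

    lblVisits.Text = "Requests this session: " + (visits + 1);
}
```

Each client interaction before the timeout resets the expiry clock, which is why periodic client/server interaction keeps the session alive.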

1st request → Parser → Compiler → IL code in assembly cache → Execute in memory (HTTP runtime).

2nd request ──────────────────────────────→ Execute in memory (HTTP runtime).

Server Controls

Server control: a server control is configured beforehand, at design time. A request for the web page makes the dynamic page execute its program logic at the server and deliver the result to the client as HTML. E.g. the GridView control and the Calendar control.
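A minimal sketch of what such design-time configuration looks like in page markup (the IDs are hypothetical; the server renders these controls as plain HTML for the browser):

```aspx
<%-- Server controls are declared with runat="server" and configured at
     design time; the server turns them into HTML on each request. --%>
<asp:Calendar ID="Calendar1" runat="server" />
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="true" />
```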

Code-behind: the VB/C# code kept in a separate file (with a .cs extension for C#) is the code-behind of the web page.

Inline code: JavaScript code and HTML tags placed directly in the web page are inline code.

The ASP.NET framework, which is composed of WebMatrix, Web Forms and ASP.NET MVC, is what you use to build websites and web applications.

State Management and AutoPostback

Web pages are HTTP based, and HTTP is stateless; this stateless nature is a problem.

ASP.NET can maintain HTTP state automatically: set EnableViewState to true in the Properties window to preserve state across postbacks.

What is ViewState? ViewState is a hidden field containing state information.

AutoPostBack – when a control's AutoPostBack property is enabled, selecting a new option causes the whole page to be sent back to the server.
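A sketch of AutoPostBack in markup (the control ID, list items and handler name are hypothetical):

```aspx
<%-- Selecting a new option posts the whole page back to the server,
     which then raises the SelectedIndexChanged event in code-behind. --%>
<asp:DropDownList ID="ddlCity" runat="server" AutoPostBack="true"
    OnSelectedIndexChanged="ddlCity_SelectedIndexChanged">
    <asp:ListItem>London</asp:ListItem>
    <asp:ListItem>Paris</asp:ListItem>
</asp:DropDownList>
```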

ASP.NET supports client side scripting.

Validation controls: special controls found under the Validation section of the toolbox. Drag the required validator onto the web design surface and select the control to validate. Validation controls work only with server controls, so to validate an HTML control you must first convert it to a server control.
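For example, a validator wired to a server control might look like this (IDs and message are hypothetical; note both controls carry runat="server", since validators only target server controls):

```aspx
<asp:TextBox ID="txtName" runat="server" />
<asp:RequiredFieldValidator ID="rfvName" runat="server"
    ControlToValidate="txtName"
    ErrorMessage="Name is required." />
```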

Packaging, Deploying and Configuring .NET Applications

Deployment and Packaging of .NET Assemblies

Today, applications are created using the types developed by Microsoft or custom built by you. If these types are developed using any language that targets the common language runtime (CLR), they can all work together seamlessly, i.e. different types created using different .NET languages can interact seamlessly.

.NET Framework Deployment Objectives:

All applications use DLLs from Microsoft or other vendors. Because an application executes code from various vendors, the developer of any one piece of code can't be 100 percent sure how someone else is going to use it, even though this kind of interaction is potentially unsafe and dangerous. End users come across this scenario quite often, when one company decides to update its part of the code and ship it to all its users. Such code should be backward compatible with the previous version, since it is impossible to retest and debug all of the already-shipped applications to ensure that the changes will have no undesirable effect.

When installing a new application you discover that it has somehow corrupted an already-installed application. This predicament is known as “DLL hell”. The end result is that users have to carefully consider whether to install new software on their machines.

The problem with this is that the application isn't isolated as a single entity. You can't easily back up the application, since you must copy the application's files and also the relevant parts of the registry; to restore it, you must run the installation program again so that all files and registry settings are set properly. Finally, you can't easily uninstall or remove the application without having the nasty feeling that some part of the application is still lurking on your machine.

When applications are installed, they come with all kinds of files from different companies. This code can perform any operation, including deleting files or sending e-mail. To make users comfortable, security must be built into the system so that users can explicitly allow or disallow code developed by various companies to access their system resources.

The .NET Framework addresses the DLL hell issue in a big way. For example, unlike COM, types no longer require settings in the registry (unfortunately, applications still require shortcut links). As for security, the .NET Framework includes a security model called code access security. Whereas Windows security is based on a user's identity, code access security is based on permissions that host applications loading components can control. As you'll see, the .NET Framework enables users to control what gets installed and what runs, and in general to control their machines, more than Windows ever did.

Developing Modules with Types

Let's start with an example, as shown below:

public sealed class Appln {
    public static void Main() {
        System.Console.WriteLine("Hello My world");
    }
}

This application defines a type called Appln. This type has a single public, static method called Main. Inside Main is a reference to another type called System.Console. System.Console is a type implemented by Microsoft, and the Intermediate Language (IL) code that implements this type's methods is in the MSCorLib.dll file. To build the application, write the above source code into a C# file and then execute the following command line:

csc.exe /out:Appln.exe /t:exe /r:MSCorLib.dll Appln.cs

This command line tells the C# compiler to emit an executable file called Appln.exe (/out:Appln.exe). The type of file produced is a Win32 console application (/t[arget]:exe).

When the C# compiler processes the source file, it sees that the code references the System.Console type's WriteLine method. At this point, the compiler wants to ensure that this type exists somewhere, that it has a WriteLine method, and that the argument being passed to this method matches the parameter the method expects. Since this type is not defined in the C# source code, to make the C# compiler happy, you must give it a set of assemblies that it can use to resolve references to external types. The command line above therefore includes the /r[eference]:MSCorLib.dll switch, which tells the compiler to look for external types in the assembly identified by the MSCorLib.dll file.

MSCorLib.dll is a special file that contains all the core types: Byte, Char, String, Int32 and many more. In fact, these types are so frequently used that the C# compiler automatically references the MSCorLib.dll assembly; i.e. the above command line can be shortened to:

csc.exe /out:Appln.exe /t:exe Appln.cs

Further, you can drop /out and /t:exe, since both match the compiler's defaults, so the command becomes:

csc.exe Appln.cs

If for some reason you really don't want the C# compiler to reference the MSCorLib.dll assembly, you can use the /nostdlib switch. Microsoft uses this switch when building the MSCorLib.dll assembly itself. For example, the following will throw an error, since the code above references the System.Console type, which is defined in MSCorLib.dll:

csc.exe /out:Appln.exe /t:exe /nostdlib Appln.cs

The file produced by the compiler is a standard Windows PE file, which means that a machine running a 32-bit or 64-bit version of Windows should be able to load it and do something with it. Windows supports two types of applications: those with a console user interface (CUI) and those with a graphical user interface (GUI). Because I specified the /t:exe switch, the C# compiler produced a CUI application. You'd use the /t:winexe switch to cause the C# compiler to produce a GUI application.

Response Files

I'd like to spend a moment talking about response files. A response file is a text file that contains a set of compiler command-line switches. You instruct the compiler to use a response file by specifying its name on the command line, prefixed by an @ sign. For example, you can have a response file called myAppln.rsp that contains the following text:

/out:MyAppln.exe

/target:winexe

To cause CSC.exe to use these settings you’d invoke it as follows:

csc.exe @myAppln.rsp CodeFile1.cs CodeFile2.cs

This tells the C# compiler what to name the output file and what kind of target to create. The C# compiler supports multiple response files. The compiler also looks in the directory containing the CSC.exe file for a global CSC.rsp file; settings that you want applied to all of your projects should go in this file. The compiler aggregates and uses the settings in all of these response files. If you have conflicting settings in the local and global response files, the settings in the local file override the settings in the global file. Likewise, any settings explicitly passed on the command line override the settings taken from a local response file.

When you install the .NET Framework, it installs a default global CSC.rsp file in the %SystemRoot%\Microsoft.NET\Framework\vX.X.X directory (where X.X.X is the version of the .NET Framework you have installed). The 4.0 version of the file contains the following switches:

# This file contains command-line options that the C# compiler has to process

# during compilation, unless the /noconfig option is specified.

# Reference the common Framework libraries

/r:Accessibility.dll

/r:Microsoft.CSharp.dll

/r:System.Configuration.Install.dll

/r:System.Core.dll

/r:System.Data.dll

/r:System.Data.DataSetExtensions.dll

/r:System.Data.Linq.dll

/r:System.Deployment.dll

/r:System.Device.dll

/r:System.DirectoryServices.dll

/r:System.dll

/r:System.Drawing.dll

/r:System.EnterpriseServices.dll

/r:System.Management.dll

/r:System.Messaging.dll

/r:System.Numerics.dll

/r:System.Runtime.Remoting.dll

/r:System.Runtime.Serialization.dll

/r:System.Runtime.Serialization.Formatters.Soap.dll

/r:System.Security.dll

/r:System.ServiceModel.dll

/r:System.ServiceProcess.dll

/r:System.Transactions.dll

/r:System.Web.Services.dll

/r:System.Windows.Forms.dll

/r:System.Xml.dll

/r:System.Xml.Linq.dll

Because the global CSC.rsp file references all of the assemblies listed, you do not need to explicitly reference them by using the C# compiler's /reference switch. This response file is a big convenience for developers because it allows them to use types and namespaces defined in various Microsoft-published assemblies without having to specify a /reference compiler switch for each one when compiling.

When you use the /reference compiler switch to reference an assembly, you can specify a complete path to a particular file. However, if you do not specify a path, the compiler will search for the file in the following places (in the order listed)

– The working directory.

– The directory that contains the CSC.exe file itself. MSCorLib.dll is always obtained from this directory. The path looks something like %SystemRoot%\Microsoft.NET\Framework\v4.0.#####.

– Any directories specified using the /lib compiler switch.

– Any directories specified using the LIB environment variable.

You are welcome to add your own switches to the global CSC.rsp file if you want to make your life even easier, but this makes it more difficult to replicate the build environment on different machines: you have to remember to update CSC.rsp in the same way on each build machine. You can also tell the compiler to ignore both the local and global files by specifying the /noconfig command-line switch.

A managed PE file has four main parts: the PE32(+) header, the CLR header, the metadata and the IL. The PE32(+) header is the standard information that Windows expects. The CLR header is a small block of information that is specific to modules that require the CLR (managed modules). The header includes the major and minor version number of the CLR that the module was built for, some flags, a MethodDef token (described later) indicating the module's entry point method if the module is a CUI or GUI executable, and an optional strong name. You can see the format of the CLR header by examining the IMAGE_COR20_HEADER structure defined in the CorHdr.h header file.

The metadata is a block of binary data that consists of several tables. There are three categories of tables: definition tables, reference tables and manifest tables. The following table describes some of the more common definition tables that exist in a module’s metadata block.

Common definition metadata tables:

ModuleDef – Always contains one entry that identifies the module. The entry includes the module's filename and extension and a module version ID. This allows the file to be renamed while keeping a record of its original name.
TypeDef – Contains one entry for each type defined in the module. Each entry includes the type's name, base type and flags (public, private, etc.) and contains indexes to the methods it owns in the MethodDef table, the fields it owns in the FieldDef table, the properties it owns in the PropertyDef table, and the events it owns in the EventDef table.
MethodDef – Contains one entry for each method defined in the module. Each entry includes the method's name, flags (private, public, virtual, abstract, static, final, etc.), signature and offset within the module where its IL code can be found. Each entry can also refer to a ParamDef table entry in which more information about the method's parameters can be found.
FieldDef – Contains one entry for every field defined in the module. Each entry includes flags (private, public, static, etc.), type and name.
ParamDef – Contains one entry for each parameter defined in the module. Each entry includes flags (in, out, retval, etc.), type and name.
PropertyDef – Contains one entry for each property defined in the module. Each entry includes flags, type and name.
EventDef – Contains one entry for each event defined in the module. Each entry includes flags and name.

During compilation, the compiler creates an entry in one of the definition tables above for every definition in the source code. Metadata table entries are also created as the compiler detects the types, fields, methods, properties and events that the source code references. The metadata created includes a set of reference tables that keep a record of the referenced items. The table below describes some of the more common reference metadata tables.

Common reference metadata tables:

AssemblyRef – Contains one entry for each assembly referenced by the module. Each entry includes the information necessary to bind to the assembly: the assembly's name (without path and extension), version number, culture and public key token. Each entry also contains some flags and a hash value.
ModuleRef – Contains one entry for each PE module that implements types referenced by this module. Each entry includes the module's filename and extension. This table is used to bind to types that are implemented in different modules of the calling assembly.
TypeRef – Contains one entry for each type referenced by the module. Each entry includes the type's name and a reference to where the type can be found. If the type is implemented within another type, the reference will indicate a TypeRef entry. If the type is implemented in the same module, the reference will indicate a ModuleDef entry. If the type is implemented in another module within the calling assembly, the reference will indicate a ModuleRef entry. If the type is implemented in a different assembly, the reference will indicate an AssemblyRef entry.
MemberRef – Contains one entry for each member referenced by the module. Each entry includes the member's name and signature and points to the TypeRef entry for the type that defines the member.

My personal favorite tool is ILDasm.exe, the IL Disassembler. To see the metadata tables, execute the following command line:

ILDasm Appln.exe

To see the metadata in a nice, human-readable form, select the View/MetaInfo/Show! menu item.

The important thing to remember is that Appln.exe contains a TypeDef entry whose name is Appln. This type identifies a public sealed class that is derived from System.Object (a type referenced from another assembly). The Appln type also defines two methods: Main and .ctor (a constructor).

Main is a public, static method whose code is IL. Main has a void return type and takes no arguments. The constructor method (.ctor) is public, and its code is also IL. The constructor has a void return type, takes no arguments, and has a this pointer, which refers to the object's memory that is to be constructed when the method is called.
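Besides ILDasm, the same definition metadata can be observed at runtime through reflection. A minimal sketch (reusing the Appln type built earlier; the reflection API surfaces what the TypeDef and MethodDef tables record):

```csharp
using System;
using System.Reflection;

public sealed class Appln
{
    public static void Main()
    {
        // Walk this assembly's TypeDef entries and, for each type,
        // the MethodDef entries it declares.
        Assembly asm = typeof(Appln).Assembly;
        foreach (Type t in asm.GetTypes())
        {
            Console.WriteLine("TypeDef: " + t.FullName +
                              " (base type: " + t.BaseType + ")");
            foreach (MethodInfo m in t.GetMethods(
                BindingFlags.DeclaredOnly | BindingFlags.Public |
                BindingFlags.NonPublic | BindingFlags.Static |
                BindingFlags.Instance))
            {
                Console.WriteLine("  MethodDef: " + m.Name);
            }
        }
    }
}
```

Running this prints Appln with its Main method, mirroring what ILDasm's metadata view shows for the same file.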

Combining Modules to Form an Assembly

An assembly is a collection of one or more files containing type definitions and resource files. One of the assembly’s files is chosen to hold a manifest. The manifest is another set of metadata tables that basically contain the names of the files that are part of the assembly. They also describe the assembly’s version, culture, publisher, publicly exported types and all of the files that comprise the assembly.

The CLR always loads the file that contains the manifest metadata tables first and then uses the manifest to get the names of the other files that are in the assembly. Here are some characteristics of assemblies that you should remember:

– An assembly defines the reusable types.

– An assembly is marked with a  version number.

– An assembly can have security information associated with it.

An assembly's individual files don't have these attributes – except for the file that contains the manifest metadata tables. To package, version, secure and use types, you must place them in modules that are part of an assembly.

The reason is that an assembly allows you to decouple the logical and physical notions of reusable types. For example, an assembly can consist of several files: you could put the frequently used types in one file and the less frequently used types in another file.

You configure an application to download assembly files by specifying a codeBase element in the application’s configuration file. The codeBase element identifies a URL pointing to where all of an assembly’s files can be found. When attempting to load an assembly’s file, the CLR obtains the codeBase element’s URL and checks the machine’s download cache to see if the file is present. If it is, the file is loaded. If the file isn’t in the cache, the CLR downloads the file into cache from the location the URL points to. If the file can’t be found, the CLR throws a FileNotFoundException exception at runtime.
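A sketch of such a configuration file (the assembly name, public key token and URL below are invented for illustration):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyTypes"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- The CLR checks the download cache first; if the file is
             missing, it downloads it from this URL into the cache. -->
        <codeBase version="1.0.0.0"
                  href="http://www.example.com/MyTypes.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```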

I’ve identified three reasons to use multifile assemblies:

– You can partition your types among separate files, allowing for files to be incrementally downloaded as described in the Internet download scenario. Partitioning the types into separate files also allows for partial or piecemeal packaging and deployment for applications you purchase and install.

– You can add resource or data files to your assembly. For example, you could have a type that calculates some insurance information using an actuarial table. Instead of embedding the actuarial table in the source code, you could use a tool so that the data file is considered to be part of the assembly.

– You can create assemblies consisting of types implemented in different programming languages. To developers using the assembly, the assembly appears to contain just a bunch of types; developers won't even know that different programming languages were used. By the way, if you prefer, you can run ILDasm.exe on each of the modules to obtain an IL source code file. Then you can run ILAsm.exe and pass it all of the IL source code files. ILAsm.exe will produce a single file containing all of the types. This technique requires your source code compiler to produce IL-only code.

Common manifest metadata tables:

AssemblyDef – Contains a single entry if this module identifies an assembly. The entry includes the assembly's name, version, culture, flags, hash algorithm, and the publisher's public key.
FileDef – Contains one entry for each PE and resource file that is part of the assembly. The entry includes the file's name and extension, hash value and flags. If the assembly consists only of its own file, the FileDef table has no entries.
ManifestResourceDef – Contains one entry for each resource that is part of the assembly. The entry includes the resource's name, flags and an index into the FileDef table indicating which file contains the resource. If the resource isn't a stand-alone file, it is a stream contained within a PE file, and the entry also includes an offset indicating the start of the resource stream within the PE file.
ExportedTypesDef – Contains one entry for each public type exported from all of the assembly's PE modules. The entry includes the type's name, an index into the FileDef table and an index into the TypeDef table. To save file space, types exported from the file containing the manifest are not repeated in this table, because the type information is available using the metadata's TypeDef table.

The C# compiler produces an assembly when you specify any of the following command-line switches: /t[arget]:exe, /t[arget]:winexe or /t[arget]:library. All of these switches cause the compiler to generate a single PE file that contains the manifest metadata tables. The resulting file is a CUI executable, a GUI executable or a DLL, respectively.

The C# compiler supports the /t[arget]:module switch. This switch tells the compiler to produce a PE file that doesn’t contain the manifest metadata tables. The PE file produced is always a DLL PE file, and this file must be added to an assembly before the CLR can access any types within it. When you use the /t:module switch, the C# compiler, by default, names the output file with an extension of .netmodule.

There are many ways to add a module to an assembly. If you are using the C# compiler to build a PE file with a manifest, you can use the /addmodule switch. Let's assume that we have two source code files:

– File1.cs, which contains rarely used types

– File2.cs, which contains frequently used types

Let's compile the rarely used types into their own module so that users of the assembly won't need to deploy this module if they never access the rarely used types:

csc /t:module File1.cs

This line causes the C# compiler to create a File1.netmodule file. Next, let's compile the frequently used types into their own module. Because this module will become the assembly, we change the name of the output file to myappln.dll instead of calling it File2.dll:

csc /out:myappln.dll /t:library /addmodule:File1.netmodule File2.cs

This line tells the C# compiler to compile the File2.cs file to produce the myappln.dll file. Because /t:library is specified, a DLL PE file containing the manifest metadata tables is emitted into the myappln.dll file. The /addmodule:File1.netmodule switch tells the compiler that File1.netmodule is a file that should be considered part of the assembly. Specifically, the /addmodule switch tells the compiler to add the file to the FileDef manifest metadata table and to add File1.netmodule's publicly exported types to the ExportedTypesDef manifest metadata table.

The two files shown below are created; myappln.dll contains the manifest.

File1.netmodule:
– IL compiled from File1.cs
– Metadata: types, methods and so on defined by File1.cs; types, methods and so on referenced by File1.cs

myappln.dll:
– IL compiled from File2.cs
– Metadata: types, methods and so on defined by File2.cs; types, methods and so on referenced by File2.cs
– Manifest: assembly files (self and File1.netmodule); public assembly types (self and File1.netmodule)

The File1.netmodule file contains the IL code generated by compiling File1.cs. This file also contains metadata tables that describe the types, methods, fields, properties, events and so on that are defined by File1.cs. The metadata tables also describe the types, methods and so on that are referenced by File1.cs. The myappln.dll is a separate file. Like File1.netmodule, this file includes the IL code generated by compiling File2.cs and also includes similar definition and reference metadata tables. However, myappln.dll contains the additional manifest metadata tables, making myappln.dll an assembly. The additional manifest metadata tables describe all of the files that make up the assembly. The manifest metadata tables also include all of the public types exported from myappln.dll and File1.netmodule.

Any client code that consumes the myappln.dll assembly’s types must be built using the /r[eference]:myappln.dll compiler switch. This switch tells the compiler to load the myappln.dll assembly and all of the files listed in its FileDef table when searching for an external type.

The CLR loads assembly files only when a method referencing a type in an unloaded assembly  is called. This means that to run an application, all of the files from a referenced assembly do not need to be present.

Using the Assembly Linker

The AL.exe utility can produce an EXE or a DLL PE file that contains only a manifest describing the types in other modules. To understand how AL.exe works, let's change the way the myappln.dll assembly is built:

csc /t:module File1.cs

csc /t:module File2.cs

al /out:myappln.dll /t:library File1.netmodule File2.netmodule

In this example, two separate modules, File1.netmodule and File2.netmodule, are created. Neither module is an assembly because they don’t contain manifest metadata tables. Then a third file is produced: myappln.dll which is a small DLL PE file that contains no IL code but has manifest metadata tables indicating that File1.netmodule and File2.netmodule are part of the assembly. The resulting assembly consists of the three files: myappln.dll, File1.netmodule and File2.netmodule. The assembly linker has no way to combine multiple files into a single file.

The AL.exe utility can also produce CUI and GUI PE files using the /t[arget]:exe or /t[arget]:winexe command line switches. You can specify which method in a module should be used as an entry point by adding the /main command-line switch when invoking AL.exe. The following is an example of how to call the Assembly Linker, AL.exe, by using the /main command-line switch.

csc /t:module /r:myappln.dll Program.cs

al /out:Program.exe /t:exe /main:Program.Main Program.netmodule

Here, the first line builds the Program.cs file into a Program.netmodule file. The second line produces a small Program.exe PE file that contains the manifest metadata tables. In addition, AL.exe emits a small global function named __EntryPoint because of the /main:Program.Main command-line switch. This function contains the following IL code:

.method privatescope static void __EntryPoint$PST06000001() cil managed
{
    .entrypoint
    .maxstack 8
    call void Program::Main()
    ret
}

As you can see, this code simply calls the Main method contained in the Program type defined in the Program.netmodule file.

Adding Resource Files to an Assembly

When using AL.exe to create an assembly, you can add a file as a resource to the assembly by using the /embed[resource] switch. This switch takes a file and embeds the file's contents into the resulting PE file. The manifest's ManifestResourceDef table is updated to reflect the existence of the resource.

AL.exe also supports a /link[resource] switch, which also takes a file containing resources. However, the /link[resource] switch updates the manifest's ManifestResourceDef and FileDef tables, indicating that the resource exists and identifying which of the assembly's files contains it. The resource file is not embedded into the assembly PE file; it remains separate and must be packaged and deployed with the other assembly files.

The C# compiler’s /resource switch embeds the specified resource file into the resulting assembly PE file, updating the ManifestResourceDef table. The compiler’s /linkresource switch adds an entry to the ManifestResourceDef and the FileDef manifest tables to refer to a stand-alone resource file.
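At runtime, a resource embedded this way can be read back through the assembly's manifest. A sketch (the resource name passed by the caller, e.g. an actuarial data file added with csc /resource, is hypothetical):

```csharp
using System;
using System.IO;
using System.Reflection;

public static class ResourceReader
{
    // Reads an embedded resource recorded in the assembly's
    // ManifestResourceDef table and returns its text content.
    public static string ReadResource(string name)
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        using (Stream s = asm.GetManifestResourceStream(name))
        {
            if (s == null)
                throw new FileNotFoundException("Resource not found: " + name);
            using (StreamReader r = new StreamReader(s))
                return r.ReadToEnd();
        }
    }
}
```

For a resource added with /linkresource, the CLR instead locates the stand-alone file listed in the FileDef table, so that file must be deployed alongside the assembly.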

You can also embed your own Win32 resources into an assembly easily by specifying the pathname of a .res file with the /win32res switch when using either AL.exe or CSC.exe. In addition, you can quickly and easily embed a standard Win32 icon resource into an assembly by specifying the pathname of the .ico file with the /win32icon switch when using either AL.exe or CSC.exe. Within Visual Studio, you can add resource files to your assembly by displaying your project's properties and then clicking the Application tab.

Assembly Version Resource Information

When AL.exe or CSC.exe produces a PE file assembly, it also embeds a standard Win32 version resource into the PE file. Application code can acquire and examine this information at runtime by calling System.Diagnostics.FileVersionInfo's static GetVersionInfo method.
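A minimal sketch of that call, reading the version resource of the running assembly's own PE file:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public static class VersionDump
{
    public static void Main()
    {
        // Locate this assembly's PE file and read the Win32 version
        // resource that the compiler embedded into it.
        string path = Assembly.GetExecutingAssembly().Location;
        FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(path);

        Console.WriteLine("FileVersion:    " + fvi.FileVersion);
        Console.WriteLine("ProductVersion: " + fvi.ProductVersion);
        Console.WriteLine("CompanyName:    " + fvi.CompanyName);
    }
}
```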

Here's what the code that produced the version information looks like:

using System.Reflection;

// FileDescription version information:
[assembly: AssemblyTitle("MyAppln.dll")]

// CompanyName version information:
[assembly: AssemblyCompany("Wintellect")]

// ProductName version information:
[assembly: AssemblyProduct("Wintellect ® Jeff's Type Library")]

// LegalCopyright version information:
[assembly: AssemblyCopyright("Copyright © Wintellect 2010")]

// LegalTrademarks version information:
[assembly: AssemblyTrademark("JeffTypes is a registered trademark of Wintellect")]

// AssemblyVersion version information:
[assembly: AssemblyVersion("")]

// PRODUCTVERSION/ProductVersion version information:
[assembly: AssemblyInformationalVersion("")]

// Set the language field (discussed later in the "Culture" section):
[assembly: AssemblyCulture("")]

The table below shows the version resource fields and their corresponding AL.exe switches and custom attributes:

| Version Resource | AL.exe Switch | Custom Attribute/Comment |
| --- | --- | --- |
| FILEVERSION | /fileversion | System.Reflection.AssemblyFileVersionAttribute |
| PRODUCTVERSION | /productversion | System.Reflection.AssemblyInformationalVersionAttribute |
| FILEFLAGS | (none) | Always 0 |
| FILEOS | (none) | Currently always VOS__WINDOWS32 |
| FILETYPE | /target | Set to VFT_APP if /target:exe or /target:winexe is specified; set to VFT_DLL if /target:library is specified |
| FILESUBTYPE | (none) | Always set to VFT2_UNKNOWN |
| AssemblyVersion | /version | System.Reflection.AssemblyVersionAttribute |
| Comments | /description | System.Reflection.AssemblyDescriptionAttribute |
| CompanyName | /company | System.Reflection.AssemblyCompanyAttribute |
| FileDescription | /title | System.Reflection.AssemblyTitleAttribute |
| FileVersion | /version | System.Reflection.AssemblyFileVersionAttribute |
| InternalName | /out | Set to the name of the output file specified (without the extension) |
| LegalCopyright | /copyright | System.Reflection.AssemblyCopyrightAttribute |
| LegalTrademarks | /trademark | System.Reflection.AssemblyTrademarkAttribute |
| OriginalFilename | /out | Set to the name of the output file (without a path) |
| PrivateBuild | (none) | Always blank |
| ProductName | /product | System.Reflection.AssemblyProductAttribute |
| ProductVersion | /productversion | System.Reflection.AssemblyInformationalVersionAttribute |
| SpecialBuild | (none) | Always blank |
  • AssemblyFileVersion This version number is stored in the Win32 version resource. This number is for information purposes only; the CLR doesn't examine this version number in any way.
  • AssemblyInformationalVersion This version number is also stored in the Win32 version resource, and again, this number is for information purposes only.
  • AssemblyVersion This version is stored in the AssemblyDef manifest metadata table. The CLR uses this version number when binding to strongly named assemblies. This number is extremely important and is used to uniquely identify an assembly. When starting to develop an assembly, you should set the major, minor, build, and revision numbers and shouldn't change them until you're ready to begin work on the next deployable version of your assembly. When you build an assembly, this version number of the referenced assembly is embedded in the AssemblyRef table's entry. This means that an assembly is tightly bound to a specific version of a referenced assembly.

Simple Application Deployment

Assemblies don’t dictate or require any special means of packaging. The easiest way to package a set of assemblies is simply to copy all of the files directly. Because the assemblies include all of the dependent assembly references and types, the user can just run the application and the runtime will look for referenced assemblies in the application’s directory. No modifications to the registry  are necessary for the application to run. To uninstall the application, just delete all the files.

You can use the options available on the Publish tab to have Visual Studio produce an MSI file; the MSI file can also install any prerequisite components such as the .NET Framework or Microsoft SQL Server 2008 Express Edition. Finally, the application can automatically check for updates and install them on the user's machine by taking advantage of ClickOnce technology.

Assemblies deployed to the same directory as the application are called privately deployed assemblies. Privately deployed assemblies can simply be copied to an application's base directory, and the CLR will load them and execute the code in them. In addition, an application can be uninstalled by simply deleting the assemblies in its directory. This also allows simple backup and restore.

This simple install/move/uninstall scenario is possible because each assembly has metadata indicating which referenced assembly should be loaded; no registry settings are required. An application always binds to the same types it was built and tested with; the CLR can't load a different assembly that just happens to provide a type with the same name.

Simple Administrative Control

To allow administrative control over an application a configuration file can be placed in the application’s directory. The setup program would then install this configuration file in the application’s base directory. The CLR interprets the content of this file to alter its policies for locating and loading assembly files.

Using a separate file allows the file to be easily backed up and also allows the administrator to copy the application to another machine – just copy the necessary files and the administrative policy is copied too.

If the assembly files are later moved to a subdirectory of the application's base directory, the CLR won't be able to locate and load these files; running the application will cause a System.IO.FileNotFoundException exception to be thrown. To fix this, the publisher creates an XML configuration file and deploys it to the application base directory. The name of this file must be the name of the application's main assembly file with a .config extension: program.exe.config for this example. This configuration file should look like this:



<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="AuxFiles" />
    </assemblyBinding>
  </runtime>
</configuration>
Whenever the CLR attempts to locate an assembly file, it always looks in the application's directory first, and if it can't find the file there, it looks in the AuxFiles subdirectory. You can specify multiple semicolon-delimited paths for the probing element's privatePath attribute. Each path is considered relative to the application's base directory. You can't specify an absolute or a relative path identifying a directory that is outside of the application's base directory.
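For instance, to have the CLR probe several subdirectories, separate them with semicolons (the directory names here are just placeholders):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Probe AuxFiles first, then bin\subdir, both relative to the app base directory -->
      <probing privatePath="AuxFiles;bin\subdir" />
    </assemblyBinding>
  </runtime>
</configuration>
```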

The name and location of this XML configuration file is different depending on the application type

  • For executable applications (EXE), the configuration file must be in the application's base directory, and it must have the name of the EXE file with ".config" appended to it.
  • For Microsoft ASP.NET Web Form applications, the file must be in the Web application's virtual root directory and is always named Web.config.

When you install the .NET Framework, it creates a Machine.config file. There is one Machine.config file per version of the CLR you have installed on the machine.

The Machine.config file is located in the following directory:

%SystemRoot%\Microsoft.NET\Framework\version\CONFIG

Of course, %SystemRoot% identifies your Windows directory (usually C:\WINDOWS), and version is a version number identifying a specific version of the .NET Framework. Settings in the Machine.config file represent default settings that affect all applications running on the machine. An administrator can create a machine-wide policy by modifying the single Machine.config file. However, administrators and users should avoid modifying this file. Plus, you want the application's settings to be backed up and restored, and keeping an application's settings in the application-specific configuration file enables this.


C# Generics

  1. Introduction
  2. Infrastructure for Generics
  3. Generic Types and Inheritance
  4. Contravariant and Covariant Generic Types
  5. Verifiability and Constraints

C# Generics

Generics is a mechanism offered by the common language runtime (CLR) and programming languages that provides one more form of code reuse: algorithm reuse.

Microsoft's design guidelines state that generic parameter variables should either be called T or at least start with an uppercase T. The uppercase T stands for type, just as I stands for interface (as in IEnumerable).

Generics provide the following big benefits to developers:

– Source code protection : The developer using a generic algorithm doesn’t need to have access to the algorithm’s source code.

– Type safety : When a generic algorithm is used with a specific type, the compiler and the CLR understand this and ensure that only objects compatible with the specified data type are used with the algorithm. Attempting to use an object of an incompatible type will result in either a compiler error or a runtime exception being thrown.

– Cleaner code : The code is easier to maintain. Since the compiler enforces type safety, fewer casts are required in the code.

– Better Performance : A generic algorithm can be created to work with a specific value type, so the CLR no longer has to do any boxing, and casts are unnecessary. The CLR doesn't have to check the type safety of the generated code, and this enhances the performance of the algorithm.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

public static class MyApp {
    public static void Main() {
        ValueTypePerfTest();
        ReferenceTypePerfTest();
    }

    private static void ValueTypePerfTest() {
        const Int32 count = 10000000;

        using (new OperationTimer("List<Int32>")) {
            List<Int32> l = new List<Int32>();
            for (Int32 n = 0; n < count; n++) {
                l.Add(n);              // No boxing
                Int32 x = l[n];        // No unboxing
            }
            l = null; // Make sure this gets GC'd
        }

        using (new OperationTimer("ArrayList of Int32")) {
            ArrayList a = new ArrayList();
            for (Int32 n = 0; n < count; n++) {
                a.Add(n);              // Boxing occurs
                Int32 x = (Int32) a[n]; // Unboxing occurs
            }
            a = null; // Make sure this gets GC'd
        }
    }

    private static void ReferenceTypePerfTest() {
        const Int32 count = 10000000;

        using (new OperationTimer("List<String>")) {
            List<String> l = new List<String>();
            for (Int32 n = 0; n < count; n++) {
                l.Add("X");
                String x = l[n];
            }
            l = null; // Make sure this gets GC'd
        }

        using (new OperationTimer("ArrayList of String")) {
            ArrayList a = new ArrayList();
            for (Int32 n = 0; n < count; n++) {
                a.Add("X");
                String x = (String) a[n];
            }
            a = null; // Make sure this gets GC'd
        }
    }
}

// This class is useful for doing operation performance timing
internal sealed class OperationTimer : IDisposable {
    private Int64 m_startTime;
    private String m_text;
    private Int32 m_collectionCount;

    public OperationTimer(String text) {
        PrepareForOperation();
        m_text = text;
        m_collectionCount = GC.CollectionCount(0);
        // This should be the last statement in this
        // method to keep timing as accurate as possible
        m_startTime = Stopwatch.GetTimestamp();
    }

    public void Dispose() {
        Console.WriteLine("{0,6:###.00} seconds (GCs={1,3}) {2}",
            (Stopwatch.GetTimestamp() - m_startTime) /
            (Double) Stopwatch.Frequency,
            GC.CollectionCount(0) - m_collectionCount, m_text);
    }

    private static void PrepareForOperation() {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}

When I run this program, I get the following output:

.20 seconds (GCs = 0) List<Int32>

3.30 seconds (GCs = 45) ArrayList of Int32

.50 seconds (GCs = 6) List<String>

.58 seconds (GCs = 6) ArrayList of String

The output here shows that using the generic List algorithm with the Int32 type is much faster than using the non-generic ArrayList algorithm with Int32. Also, using the value type Int32 with ArrayList causes a lot of boxing operations to occur, which results in 45 garbage collections, whereas the List algorithm requires 0.

For reference types, the timings are close, so it doesn't appear that the generic List algorithm is of much benefit here. But it still gives cleaner code and compile-time type safety.

Generics inside FCL

Microsoft recommends that developers use the generic collection classes and now discourages use of the non-generic collection classes for several reasons. First, the non-generic collection classes are not generic, and so you don't get the type safety, cleaner code, and better performance that you get when you use the generic collection classes. Second, the generic classes have a better object model than the non-generic classes. For example, fewer methods are virtual, resulting in better performance, and new members have been added to the generic collections to provide new functionality.

The FCL ships with many generic interface definitions so that the benefits of generics can be realized when working with interface as well. The commonly used interfaces are contained in the System.Collections.Generic namespace.

Infrastructure for Generics

Microsoft had to provide the following for Generics to work properly.

– Create new IL instructions that are aware of type arguments

– Modify the format of existing metadata tables so that type names and methods with generic parameters could be expressed.

– Modify the various programming languages to support the new syntax, allowing developers to define and reference generic types and methods

– Modify the compilers to emit the new IL instructions and the modified metadata format.

– Modify the just-in-time(JIT) compiler to process the new type argument aware IL instructions that produce the correct native code.

– Create new reflection members so that developers can query types and members to determine whether they have generic parameters. New reflection methods also had to be defined so that developers could create generic type and method definitions at run time.

– Modify the debugger to show and manipulate generic types, members, fields and local variables.

– Modify the Microsoft VS Intellisense feature to show specific member prototypes when using a generic type or a method with a specific data type.

Open and Closed Types

The CLR creates an internal data structure for each and every type in use by an application. These data structures are called type objects. A type with generic type parameters is still considered a type, and the CLR will create an internal type object for each of these. This applies to reference types, value types, interface types, and delegate types. A type with generic type parameters is called an open type, and the CLR doesn't allow any instance of an open type to be constructed.

When code references a generic type it can specify a set of generic type arguments. If actual data types are passed in for all of the type arguments, the type is called a closed type, and the CLR does allow instances of a closed type to be constructed.

For example:

using System;
using System.Collections.Generic;

// A partially specified open type
internal sealed class DictionaryStringKey<TValue> : Dictionary<String, TValue> {
}

public static class MyApp {
    public static void Main() {
        Object o = null;

        // Dictionary<,> is an open type having 2 type parameters
        Type t = typeof(Dictionary<,>);
        // Try to create an instance of this type (fails)
        o = CreateInstance(t);
        Console.WriteLine();

        // DictionaryStringKey<> is an open type having 1 type parameter
        t = typeof(DictionaryStringKey<>);
        // Try to create an instance of this type (fails)
        o = CreateInstance(t);
        Console.WriteLine();

        // DictionaryStringKey<Guid> is a closed type
        t = typeof(DictionaryStringKey<Guid>);
        // Try to create an instance of this type (succeeds)
        o = CreateInstance(t);

        Console.WriteLine("Object type=" + o.GetType());
    }

    private static Object CreateInstance(Type t) {
        Object o = null;
        try {
            o = Activator.CreateInstance(t);
            Console.Write("Created instance of {0}", t.ToString());
        }
        catch (ArgumentException e) {
            Console.WriteLine(e.Message);
        }
        return o;
    }
}

When we execute this code, we get the following output:

Cannot create an instance of System.Collections.Generic.Dictionary`2[TKey,TValue] because Type.ContainsGenericParameters is true.

Cannot create an instance of DictionaryStringKey`1[TValue] because Type.ContainsGenericParameters is true.

Created instance of DictionaryStringKey`1[System.Guid]

Object type=DictionaryStringKey`1[System.Guid]

In the output, we see that the type names end with a backtick (`) followed by a number; this number is the type's arity, which indicates the number of type parameters required by the type.
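You can observe this arity-encoded naming directly via reflection; here is a small sketch:

```csharp
using System;
using System.Collections.Generic;

public static class ArityDemo {
    public static void Main() {
        // The CLR name of a generic type encodes its arity after a backtick
        Console.WriteLine(typeof(Dictionary<,>).Name);  // Dictionary`2
        Console.WriteLine(typeof(List<>).Name);         // List`1
        Console.WriteLine(typeof(List<Guid>).Name);     // List`1 (closed type, same arity)
    }
}
```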

Generic Types and Inheritance

A generic type is a type, and it can therefore be derived from any other type. When you use a generic type and specify type arguments, you are defining a new type object in the CLR, and the new type object is derived from whatever type the generic type was derived from. That is, since List<T> is derived from Object, List<String> and List<Guid> are also derived from Object. Similarly, since DictionaryStringKey<TValue> is derived from Dictionary<String, TValue>, DictionaryStringKey<Guid> is also derived from Dictionary<String, Guid>. Consider the example below:

internal class Node {
    protected Node m_next;

    public Node(Node next) {
        m_next = next;
    }
}

internal sealed class TypedNode<T> : Node {
    public T m_data;

    public TypedNode(T data) : this(data, null) {
    }

    public TypedNode(T data, Node next) : base(next) {
        m_data = data;
    }

    public override String ToString() {
        return m_data.ToString() +
            ((m_next != null) ? m_next.ToString() : String.Empty);
    }
}

Now the main code will be as follows

private static void DifferentDataLinkedList() {
    Node head = new TypedNode<Char>(',');
    head = new TypedNode<DateTime>(DateTime.Now, head);
    head = new TypedNode<String>("Today is ", head);
    Console.WriteLine(head.ToString());
}

Generic Type Identity

C# does offer a way to use simplified syntax to refer to a generic closed type while not affecting type equivalence at all; you can use the good old using directive at the top of your source code file. Here is an example:

using DateTimeList = System.Collections.Generic.List<System.DateTime>;

This using directive is really just defining a symbol called DateTimeList. As the code compiles, the compiler substitutes all occurrences of DateTimeList with System.Collections.Generic.List<System.DateTime>. This just allows developers to use a simplified syntax without affecting the actual meaning of the code, and therefore, type identity and equivalence are maintained. So when the following line executes, sameType will be initialized to true:

Boolean sameType = (typeof(List<DateTime>) == typeof(DateTimeList));

Code Explosion

When a method that uses generic type parameters is JIT-compiled, the CLR takes the method IL, substitutes the specified type arguments, and then creates native code that is specific to that method operating on the specified data types. The CLR keeps generating the native code for every method/type combination. This is referred to as code explosion.

Fortunately, the CLR has some optimizations built into it to reduce code explosion. First, if a method is called for a particular type argument, and later the method is called again using the same type argument, the CLR will compile the code for this method/type combination just once. So if one assembly uses List<DateTime>, and a completely different assembly also uses List<DateTime>, the CLR will compile the methods for List<DateTime> only once. This greatly reduces code explosion. The CLR has another optimization: the CLR considers all reference type arguments to be identical, and so the code can be shared. For example, the code compiled by the CLR for List<String>'s methods can be used for List<Stream>'s methods, since String and Stream are both reference types. In fact, for any reference type, the same code will be used. But if the type argument is a value type, the CLR must produce native code specifically for that value type. The reason is that value types can vary in size.

Generic Interfaces

The CLR supports generic interfaces to avoid boxing and the loss of compile-time type safety. A reference or value type can implement a generic interface by specifying type arguments, or a type can implement a generic interface by leaving the type arguments unspecified.

Here is the definition of a generic interface in the System.Collections.Generic namespace that is part of the FCL:

public interface IEnumerator<T> : IDisposable, IEnumerator {
    T Current { get; }
}

Here is an example of a type that implements this generic interface and specifies type arguments:

internal sealed class Triangle : IEnumerator<Point> {
    private Point[] m_vertices;

    // IEnumerator<Point>'s Current property is of type Point
    public Point Current { get { ... } }
    ...
}

Now here is a generic class that implements the generic interface, leaving the type arguments unspecified:

internal sealed class ArrayEnumerator<T> : IEnumerator<T> {
    private T[] m_array;

    // IEnumerator<T>'s Current property is of type T
    public T Current { get { ... } }
    ...
}

Generic Delegates

The CLR supports generic delegates to ensure that any type of object can be passed to a callback method in a type-safe way. Furthermore, generic delegates allow a value type instance to be passed to a callback method without any boxing. A delegate is really just a class definition with four methods: a constructor, an Invoke method, a BeginInvoke method, and an EndInvoke method. When you define a delegate type that specifies type parameters, the compiler emits the delegate class's methods, and the type parameters are applied to any methods having parameters/return values of the specified type parameter.

For example, if you define a generic delegate like this:

public delegate TReturn CallMe<TReturn, TKey, TValue>(TKey key, TValue value);

The compiler turns that into a class that logically looks like this:

public sealed class CallMe<TReturn, TKey, TValue> : MulticastDelegate {
    public CallMe(Object obj, IntPtr method);
    public virtual TReturn Invoke(TKey key, TValue value);
    public virtual IAsyncResult BeginInvoke(TKey key, TValue value,
        AsyncCallback callback, Object obj);
    public virtual TReturn EndInvoke(IAsyncResult result);
}

It is recommended that one should use the generic Action and Func delegates that come predefined in the Framework Class Library wherever possible.
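For example, rather than declaring a custom delegate like CallMe above, the same callback shape can usually be expressed with the FCL's built-in Func delegates; the lambda here is purely illustrative:

```csharp
using System;

public static class FuncDemo {
    public static void Main() {
        // Func<T1, T2, TResult>: the last type parameter is the return type
        Func<String, Int32, Boolean> isPositive = (key, value) => value > 0;

        Console.WriteLine(isPositive("quantity", 5));   // True
        Console.WriteLine(isPositive("quantity", -3));  // False
    }
}
```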

Contravariant and Covariant Generic Types

Each of a delegate's generic type parameters can be marked as variant, which allows a variable of one generic delegate type to be cast to the same generic delegate type where the generic parameter types differ. A generic type parameter can be any of the following:

Invariant : A generic type parameter that cannot be changed.

Contravariant : A generic type parameter that can change from a class to a class derived from it. In C#, you indicate contravariant generic type parameters with the in keyword, which can appear only in input positions such as a method's parameters.

Covariant : A generic type parameter that can change from a class to one of its base classes. In C#, you indicate covariant generic type parameters with the out keyword which can appear only in output positions such as a method’s return type.

public delegate TResult Func<in T, out TResult>(T arg);

Here, the generic type parameter T is marked with the in keyword, making it contravariant, and the generic type parameter TResult is marked with the out keyword, making it covariant.

If I have variable declared as follows:

Func<Object, ArgumentException> fn1 = null;

I can cast it to another Func type, where the generic type parameters are different:

Func<String, Exception> fn2 = fn1; // no explicit cast is required here

Exception e = fn2(" ");

Here fn1 refers to a function that accepts an Object and returns an ArgumentException. The fn2 variable wants to refer to a method that takes a String and returns an Exception. Since you can pass a String to a method that wants an Object, and since you can take the result of a method that returns an ArgumentException and treat it as an Exception, the code above compiles and is known at compile time to preserve type safety.

Note: Variance is not possible for value types because boxing would be required. Also, variance is not allowed on a generic type parameter if an argument of that type is passed to a method using the out or ref keyword. For example, given this declaration:

delegate void SomeDelegate<in T>(ref T t);

the compiler reports the following error:

Invalid variance: The type parameter 'T' must be invariantly valid on 'SomeDelegate<T>.Invoke(ref T)'. 'T' is contravariant.

When using delegates that take generic arguments and return values, it is recommended to always specify the in and out keywords for contravariance and covariance whenever possible as doing this has no ill effects and enables your delegate to be used in more scenarios.

Here is an example of an interface with a covariant generic type parameter:

public interface IEnumerator<out T> : IEnumerator {
    Boolean MoveNext();
    T Current { get; }
}

Since T is covariant, it is possible to have the following code compile and run successfully:

// This method accepts an IEnumerable of any reference type
Int32 Count(IEnumerable<Object> collection) { ... }

// The call below passes an IEnumerable<String> to Count
Int32 c = Count(new[] { "Grant" });

For this reason, the compiler forces you to be explicit when declaring a generic type parameter. Then, if you attempt to use this type parameter in a context that doesn't match how you declared it, the compiler issues an error letting you know that you are attempting to break the contract. If you then decide to break the contract by adding in or out on a generic type parameter, you should expect to have to modify some of the code sites that were using the old contract.

Generic Methods

When you define a generic class, struct, or interface, any methods defined in these types can refer to a type parameter specified by the type. A type parameter can be used as a method’s parameter, a method’s return value, or as a local variable defined inside the method. However, the CLR also supports the ability for a method to specify its very own type parameters. And these type parameters can also be used for parameters, return values, or local variables.

internal sealed class GenericType<T> {
    private T m_value;

    public GenericType(T value) { m_value = value; }

    public TOutput Converter<TOutput>() {
        TOutput result = (TOutput) Convert.ChangeType(m_value, typeof(TOutput));
        return result;
    }
}

In this example, you can see that the GenericType class defines its own type parameter (T), and the Converter method defines its own type parameter (TOutput). This allows a GenericType to be constructed to work with any type. The Converter method can convert the object referred to by the m_value field to various types depending on what type argument is passed to it when called. The ability to have type parameters and method type parameters allows for phenomenal flexibility.
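Calling the GenericType class from the text might look like this; the class is repeated here (as a sketch) so the example is self-contained:

```csharp
using System;

// Same class as in the text, repeated so this sketch compiles on its own
internal sealed class GenericType<T> {
    private T m_value;

    public GenericType(T value) { m_value = value; }

    public TOutput Converter<TOutput>() {
        TOutput result = (TOutput) Convert.ChangeType(m_value, typeof(TOutput));
        return result;
    }
}

public static class ConverterDemo {
    public static void Main() {
        GenericType<Int32> gt = new GenericType<Int32>(5);

        // Same instance, converted to different types via the method's own type parameter
        String s = gt.Converter<String>();   // "5"
        Double d = gt.Converter<Double>();   // 5.0
        Console.WriteLine(s + " " + d);
    }
}
```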

A reasonably good example of a generic method is the Swap method:

private static void Swap<T>(ref T o1, ref T o2) {
    T temp = o1;
    o1 = o2;
    o2 = temp;
}
Code can now call Swap like this:

private static void CallingSwap() {
    Int32 n1 = 1, n2 = 2;
    Console.WriteLine("n1={0}, n2={1}", n1, n2);
    Swap<Int32>(ref n1, ref n2);
    Console.WriteLine("n1={0}, n2={1}", n1, n2);

    String s1 = "Aidan", s2 = "Grant";
    Console.WriteLine("s1={0}, s2={1}", s1, s2);
    Swap<String>(ref s1, ref s2);
    Console.WriteLine("s1={0}, s2={1}", s1, s2);
}

The variable you pass as an out /ref argument must be the same type as the method’s parameter to avoid a potential type safety exploit.


Many FCL methods define their own generic type parameters; for example, System.Threading.Interlocked's Exchange and CompareExchange methods constrain T to class:

public static class Interlocked {
    public static T Exchange<T>(ref T location1, T value) where T : class;

    public static T CompareExchange<T>(
        ref T location1, T value, T comparand) where T : class;
}

Generic Methods and Type Inference

To help improve code creation, readability, and maintainability, the C# compiler offers type inference when calling a generic method. Type inference means that the compiler attempts to determine the type to use automatically when calling a generic method.

Here is some code that demonstrates type inference:

private static void CallingSwapUsingInference() {
    Int32 n1 = 1, n2 = 2;
    Swap(ref n1, ref n2);   // Calls Swap<Int32>

    String s1 = "Aidan";
    Object s2 = "Grant";
    Swap(ref s1, ref s2);   // Error, type can't be inferred
}
In this code, for the first call to Swap, the compiler infers that n1 and n2 are Int32, and so it invokes Swap with an Int32 type argument. For the second call, the compiler sees that s1 is a String and s2 is an Object. Since s1 and s2 are variables of different data types, the compiler can't accurately infer the type to use for Swap's type argument, and it issues an invalid-type-arguments error for the call to method 'Swap<T>(ref T, ref T)'.
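One way to make the second call compile, sketched below, is to give both variables the same static type, since a ref argument's type must match the method's parameter type exactly (Swap is made public here only so the sketch is easy to test):

```csharp
using System;

public static class SwapDemo {
    // The Swap method from the text
    public static void Swap<T>(ref T o1, ref T o2) {
        T temp = o1;
        o1 = o2;
        o2 = temp;
    }

    public static void Main() {
        // Declaring both variables as Object lets the compiler infer Swap<Object>
        Object o1 = "Aidan", o2 = "Grant";
        Swap(ref o1, ref o2);
        Console.WriteLine(o1 + " " + o2);  // Grant Aidan
    }
}
```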

A type can also define multiple methods, with one of its methods taking a specific data type and another taking a generic type parameter, as shown in the code below:

private static void Display(String s) {
    Console.WriteLine(s);
}

private static void Display<T>(T o) {
    Display(o.ToString());   // Calls Display(String)
}
Here are some ways to call the Display method

Display("Jeff");           // Calls Display(String)
Display(123);              // Calls Display<T>(T)
Display<String>("Adrian"); // Calls Display<T>(T)

For the first call, the C# compiler always prefers a more explicit match over a generic match, and therefore it generates a call to the non-generic Display method that takes a String. For the second call, the compiler can't call the non-generic Display method that takes a String, so it must call the generic Display method. By the way, it is fortunate that the compiler always prefers the more explicit match; if the compiler had preferred the generic method, then, because the generic Display method calls Display again, there would have been infinite recursion.

Verifiability and Constraints

A constraint is a way to limit the number of types that can be specified for a generic type argument. Limiting the number of types allows you to do more with those types. Here is a version of a Min method that specifies a constraint:

public static T Min<T>(T o1, T o2) where T : IComparable<T> {
    if (o1.CompareTo(o2) < 0) return o1;
    return o2;
}

The C# where token tells the compiler that any type specified for T must implement the generic IComparable interface of the same type (T). Because of this constraint, the compiler now allows the method to call the CompareTo method, since this method is defined by the IComparable<T> interface.

Now when code references a generic type or method, the compiler is responsible for ensuring that a type argument that meets the constraints is specified.

For example:

private static void CallMin() {
    Object o1 = "Jeff", o2 = "Richter";
    Object oMin = Min<Object>(o1, o2);   // Error
}

The compiler issues the error because System.Object doesn't implement the IComparable<Object> interface. In fact, System.Object doesn't implement any interfaces at all.
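A type that does implement IComparable<T>, such as String or Int32, satisfies the constraint, so calls like the following compile; this sketch reuses the Min method shown above:

```csharp
using System;

public static class MinDemo {
    // Same Min method as in the text
    public static T Min<T>(T o1, T o2) where T : IComparable<T> {
        if (o1.CompareTo(o2) < 0) return o1;
        return o2;
    }

    public static void Main() {
        // String implements IComparable<String>, so the constraint is satisfied
        String s = Min("Jeff", "Richter");   // type inference picks Min<String>
        Int32 n = Min(5, 3);                 // Int32 implements IComparable<Int32>
        Console.WriteLine(s + " " + n);      // Jeff 3
    }
}
```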

The CLR doesn't allow overloading based on type parameter names or constraints; you can overload types or methods based only on arity. The following example shows this:

// It is OK to define the following types:
internal sealed class AType {}
internal sealed class AType<T> {}
internal sealed class AType<T1, T2> {}

// Error: conflicts with AType<T> that has no constraints
internal sealed class AType<T> where T : IComparable<T> {}

// Error: conflicts with AType<T1, T2>
internal sealed class AType<T3, T4> {}

internal sealed class AnotherType {
    // It is OK to define the following methods:
    private static void M() {}
    private static void M<T>() {}
    private static void M<T1, T2>() {}

    // Error: conflicts with M<T> that has no constraints
    private static void M<T>() where T : IComparable<T> {}

    // Error: conflicts with M<T1, T2>
    private static void M<T3, T4>() {}
}

When a derived type overrides a virtual generic method, the overriding method must specify the same number of type parameters, and these type parameters inherit the constraints specified on them by the base type's method. In fact, the overriding method is not allowed to specify any constraints on its type parameters at all. However, it can change the names of the type parameters. Similarly, when implementing an interface method, the method must specify the same number of type parameters as the interface method, and these type parameters will inherit the constraints specified on them by the interface's method.


internal class Base {
    public virtual void M<T1, T2>()
        where T1 : struct
        where T2 : class {
    }
}

internal sealed class Derived : Base {
    public override void M<T3, T4>()
        where T3 : EventArgs   // Error
        where T4 : class       // Error
    { }
}

Notice that you can change the names of the type parameters, as in the example from T1 to T3 and T2 to T4; however, you cannot change the constraints.

A type parameter can be constrained by using a primary constraint, a secondary constraint, and/or a constructor constraint.

Primary Constraint

A primary constraint can be a reference type that identifies a class that is not sealed. You cannot specify one of the following special reference types: System.Object, System.Array, System.Delegate, System.MulticastDelegate, System.ValueType, System.Enum, or System.Void.

When specifying a reference type constraint, you are promising the compiler that a specified type argument will either be of the same type or of a type derived from the constraint type. For example:

internal sealed class PrimaryConstraintOfStream<T> where T : Stream {
    public void M(T stream) {
        stream.Close();   // OK
    }
}

In this class definition, the type parameter T has a primary constraint of Stream (defined in the System.IO namespace). This tells the compiler that code using PrimaryConstraintOfStream must specify a type argument of Stream or a type derived from Stream. If a type parameter doesn't specify a primary constraint, System.Object is assumed. However, the C# compiler issues an error message if you explicitly specify System.Object in your source code.

There are two special primary constraints: class and struct. The class constraint promises the compiler that a specified type argument will be a reference type. Any class type, interface type, delegate type, or array type satisfies this constraint. For example:

internal sealed class PrimaryConstraintOfClass<T> where T : class {
    public void M() {
        T temp = null; // Allowed because T must be a reference type
    }
}



In this example setting temp to null is legal because T is known to be a reference type, and all reference type variables can be set to null. If T were unconstrained, the code above would not compile because T could be a value type, and value type variables cannot be set to null.

The struct constraint promises the compiler that a specified type argument will be a value type. Any value type, including enumerations, satisfies this constraint. However, the compiler and the CLR treat any System.Nullable<T> value type as a special type, and nullable types do not satisfy this constraint. The reason is that the Nullable<T> type constrains its own type parameter to struct, and the CLR wants to prohibit a recursive type such as Nullable<Nullable<T>>.


internal sealed class PrimaryConstraintOfStruct<T> where T : struct {
    public static T Factory() {
        // Allowed because all value types implicitly
        // have a public parameterless constructor
        return new T();
    }
}



In this example, newing up a T is legal because T is known to be a value type, and all value types implicitly have a public, parameterless constructor. If T were unconstrained, or constrained to a reference type via the class constraint, the above code would not compile because some reference types do not have a public, parameterless constructor.

Secondary Constraint

A type parameter can specify zero or more secondary constraints, where a secondary constraint represents an interface type. When specifying an interface type constraint, you are promising the compiler that a specified type argument will be a type that implements all of the interface constraints.
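As a sketch of an interface constraint (the method and class names here are illustrative, not from the text above), constraining T to IComparable<T> lets the method call CompareTo on its arguments:

```csharp
using System;

internal static class SecondaryConstraintExample {
    // T must implement IComparable<T>, so the compiler allows
    // the call to CompareTo on values of type T.
    public static T Min<T>(T a, T b) where T : IComparable<T> {
        return (a.CompareTo(b) <= 0) ? a : b;
    }

    public static void Main() {
        Console.WriteLine(Min(3, 7));     // Int32 implements IComparable<Int32>
        Console.WriteLine(Min("b", "a")); // String implements IComparable<String>
    }
}
```

Both Int32 and String satisfy the constraint because they implement IComparable of themselves; a type that does not implement the interface would be rejected at compile time.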

There is another kind of secondary constraint called a type parameter constraint. This kind of constraint is used much less often than the interface constraint. It allows a generic type or method to indicate that there must be a relationship between specified type arguments. A type parameter can have zero or more type parameter constraints applied to it. Here is a generic method that demonstrates the use of a type parameter constraint:

private static List<TBase> ConvertIList<T, TBase>(IList<T> list) where T : TBase {
    List<TBase> baseList = new List<TBase>(list.Count);
    for (Int32 index = 0; index < list.Count; index++) {
        baseList.Add(list[index]);
    }
    return baseList;
}


The ConvertIList method specifies two type parameters, in which the T parameter is constrained by the TBase type parameter. This means that whatever type argument is specified for T, it must be compatible with whatever type argument is specified for TBase. Here is a method showing some legal and illegal calls to ConvertIList:

private static void CallingConvertIList() {
    // Construct and initialize a List<String> (which implements IList<String>)
    IList<String> ls = new List<String>();
    ls.Add("A String");

    // Convert the IList<String> to an IList<Object>
    IList<Object> lo = ConvertIList<String, Object>(ls);

    // Convert the IList<String> to an IList<IComparable>
    IList<IComparable> lc = ConvertIList<String, IComparable>(ls);

    // Convert the IList<String> to an IList<IComparable<String>>
    IList<IComparable<String>> lcs = ConvertIList<String, IComparable<String>>(ls);

    // Convert the IList<String> to an IList<String>
    IList<String> ls2 = ConvertIList<String, String>(ls);

    // Convert the IList<String> to an IList<Exception>
    IList<Exception> le = ConvertIList<String, Exception>(ls); // Error
}

In the first call to ConvertIList, the compiler ensures that String is compatible with Object. Since String is derived from Object, the first call adheres to the type parameter constraint. In the second call to ConvertIList, the compiler ensures that String is compatible with IComparable. Since String implements the IComparable interface, the second call adheres to the type parameter constraint. In the third call to ConvertIList, the compiler ensures that String is compatible with IComparable<String>. Since String implements the IComparable<String> interface, the third call adheres to the type parameter constraint. In the fourth call to ConvertIList, the compiler knows that String is compatible with itself. In the fifth call to ConvertIList, the compiler ensures that String is compatible with Exception. Since String is not compatible with Exception, the fifth call doesn't adhere to the type parameter constraint, and the compiler issues the following message: "error CS0311: The type 'string' cannot be used as type parameter 'T' in the generic type or method 'Program.ConvertIList<T,TBase>(System.Collections.Generic.IList<T>)'. There is no implicit reference conversion from 'string' to 'System.Exception'".

Constructor Constraints

A type parameter can specify zero constructor constraints or one constructor constraint. When specifying a constructor constraint, you are promising the compiler that a specified type argument will be a non-abstract type that implements a public, parameterless constructor. Note that the C# compiler considers it an error to specify a constructor constraint with the struct constraint because it is redundant; all value types implicitly offer a public, parameterless constructor.


internal sealed class ConstructorConstraint<T> where T : new() {
    public static T Factory() {
        // Allowed because all value types implicitly
        // have a public, parameterless constructor and because
        // the constraint requires that any specified reference
        // type also have a public, parameterless constructor
        return new T();
    }
}



In the above example, newing up a T is legal because T is known to be a type that has a public, parameterless constructor. This is certainly true of all value types, and the constructor constraint requires that it be true of any reference type specified as a type argument.

Casting Generic Type

Casting a generic type variable to another type is illegal unless you are casting to a type compatible with a constraint:

private static void CastingGenericTypeVariable1<T>(T obj) {
    Int32 x = (Int32) obj;   // Error
    String s = (String) obj; // Error
}

The compiler issues an error on both lines above because T could be any type, and there is no guarantee that the casts will succeed. You can modify this code to get it to compile by casting to Object first:

private static void CastingAGenericTypeVariable2<T>(T obj) {
    Int32 x = (Int32) (Object) obj;   // No Error
    String s = (String) (Object) obj; // No Error
}


If a cast to a reference type needs to be done, you can also use the 'as' operator, which returns null instead of throwing when the cast fails. For example:

private static void CastingAGenericTypeVariable3<T>(T obj) {
    String s = obj as String; // No error
}


Default value for Generic Type Variable:

Setting a generic type variable to null is illegal unless the generic type is constrained to a reference type.

private static void SettingAGenericTypeVariableToNull<T>() {
    T temp = null; // error CS0403 - Cannot convert null to type parameter 'T'
}


Since T is unconstrained, it could be a value type, and setting a variable of a value type to null is not possible. If T were constrained to a reference type, setting temp to null would compile and run just fine. The C# team felt that it would be useful to give developers the ability to set a variable to a default value, so the C# compiler allows you to use the default keyword to accomplish this:

private static void SettingAGenericTypeVariableToDefaultValue<T>() {
    T temp = default(T); // OK
}


The use of the default keyword above tells the C# compiler and the CLR’s JIT compiler to produce code to set temp to null if T is a reference type and to set temp to all-bits-zero if T is a value type.

Comparison of Generic Type variables:

Comparing a generic type variable to null by using the == or != operator is legal regardless of whether the generic type is constrained:

private static void ComparingAGenericTypeVariableWithNull<T>(T obj) {
    if (obj == null) { /* Never executes for a value type */ }
}


Since T is unconstrained, it could be a reference type or a value type. If T is a value type, obj can never be null. The C# compiler does not issue an error; instead, it compiles the code just fine. When this method is called using a type argument that is a value type, the JIT compiler sees that the if statement can never be true, and it will not emit the native code for the if test or the code in the braces. If I had used the != operator, the JIT compiler would still not emit the code for the if test (the condition is always true for a value type), but it would emit the code inside the if's braces.

By the way, if T had been constrained to a struct, the compiler would have thrown an error.

Comparing two Generic Type variables

Comparing two variables of the same generic type is illegal if the generic type parameter is not known to be a reference type:

private static void ComparingTwoGenericTypeVariables<T>(T o1, T o2) {
    if (o1 == o2) { } // Error
}


In this example T is unconstrained, and whereas it is legal to compare two reference type variables with one another, it is not legal to compare two value type variables with one another unless the value type overloads the == operator.

By the way, if T had been constrained to a struct, the compiler would have thrown an error.
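A common workaround for comparing two unconstrained generic variables (not covered in the text above, but standard in the Framework Class Library) is EqualityComparer<T>.Default, which works for both reference and value types. The class and method names here are illustrative:

```csharp
using System;
using System.Collections.Generic;

internal static class GenericEqualityExample {
    // Works without constraining T: the default comparer uses
    // IEquatable<T> when the type implements it and falls back
    // to Object.Equals otherwise.
    public static Boolean AreEqual<T>(T o1, T o2) {
        return EqualityComparer<T>.Default.Equals(o1, o2);
    }

    public static void Main() {
        Console.WriteLine(AreEqual(5, 5));     // True
        Console.WriteLine(AreEqual("a", "b")); // False
    }
}
```

This avoids both the compile error above and boxing of value types that a cast to Object would cause.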

Avoid Generic Type as Operands

Operators such as +, -, *, and / can't be applied to variables of a generic type, because the compiler doesn't know the type at compile time. This makes it impossible to write a mathematical algorithm that works on an arbitrary numeric data type.
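As a sketch (the method names are illustrative), a generic Sum does not compile, and the usual workaround on .NET Framework-era C# is to provide per-type overloads:

```csharp
using System;

internal static class OperatorExample {
    // The generic version does not compile:
    // private static T Sum<T>(T a, T b) {
    //     return a + b; // error CS0019: Operator '+' cannot be applied
    //                   // to operands of type 'T' and 'T'
    // }

    // A common (non-generic) workaround: one overload per numeric type.
    public static Int32 Sum(Int32 a, Int32 b) { return a + b; }
    public static Double Sum(Double a, Double b) { return a + b; }

    public static void Main() {
        Console.WriteLine(Sum(2, 3));     // 5
        Console.WriteLine(Sum(2.5, 0.5)); // 3
    }
}
```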


Assemblies and their version policy

  1. Introduction
  2. Assembly Types
  3. The Global Assembly Cache
  4. Configuration Files
  5. Assembly Properties

Introduction

The .NET Framework and the Framework Class Library are a perfect example of globally deployed assemblies, as they are the most widely used assemblies across multiple applications and .NET software vendors. Applications are built and tested using code implemented by third-party vendors and Microsoft with a particular version of the libraries. These third-party libraries and the .NET Framework are also modified and updated via service packs and hotfixes to incorporate feature enhancements and bug fixes, and the applications are then forced to use the newer versions of the assemblies. The .NET Framework follows a versioning policy that supports backward compatibility, which helps older existing applications keep executing.

The third-party libraries are also updated and modified in ways that are sometimes not backward compatible, which makes existing applications a bit unstable. The reason for this instability is that those apps were tuned to work with the old code, which had the old features and bugs.

So there must be a process to deploy new files with the hope that the applications will work properly, and if an application doesn't work, there has to be an easy way to restore it to its last known good state.

The similarities and differences between privately deployed weakly named assemblies and globally deployed strongly named assemblies

There are two kinds of assemblies: weakly named assemblies and strongly named assemblies. Both types are structurally identical, i.e. they use the same portable executable (PE) file format, PE32(+) header, CLR header, metadata, manifest tables, and intermediate language, and the same tools and utilities are used for generating them.

The real difference is that a strongly named assembly is signed with a publisher's public/private key pair that uniquely identifies the assembly's publisher. The key pair allows the assembly to be uniquely identified, secured, and versioned, and it allows the assembly to be deployed anywhere on the user's machine or even on the internet. Because of this uniquely identifiable assembly name, the CLR can implement a safe publishing policy for deployment.

A strongly named assembly can be deployed in two ways: privately, under the application's base directory or one of its subdirectories, or globally, into a well-known location called the GAC, which the CLR looks into whenever a strongly named assembly is referenced.

The table below gives a brief idea about deployment of strongly named assemblies and weakly named assemblies.

Kind of Assembly        Private deployment        Global deployment

Weakly named            Yes                       No

Strongly named          Yes                       Yes

There are a few problems faced by developers while deploying assemblies. Namely, two companies could produce assemblies that have the same file name, and if both of these files are copied to the same location where shared assemblies are kept, the most recently copied file overwrites the old file, and all the applications that were using the old assembly become unpredictable. This is similar to the "DLL Hell" of the COM world.

Components of strongly named assemblies

The CLR needs a technology that allows assemblies to be uniquely identified. This technique is known as the strongly named assembly. A strongly named assembly consists of four attributes that uniquely identify it: a file name, a version number, a culture identity, and a public key. Because public keys are very large, a small hash of the public key is often used instead; this hash value is called a public key token. The following assembly identity strings identify four completely different assembly files:

"MyAppln, Version=1.0.1123.0, Culture=neutral, PublicKeyToken=23asdfkajlkasdf"

"MyAppln, Version=1.0.1123.0, Culture=en-US, PublicKeyToken=23asdfkajlkasdf"

"MyAppln, Version=1.0.1234.0, Culture=neutral, PublicKeyToken=bb78343awsdfgs"

"MyAppln, Version=1.0.1123.0, Culture=neutral, PublicKeyToken=465765sdfgsdss"

The first component identifies an assembly file called "MyAppln".

The second component identifies the version, 1.0.1123.0.

The third component identifies the culture, which here is neutral.

The fourth component is the public key token, generated from the publisher's public/private key pair.

Why did Microsoft use cryptographic APIs for strongly named assemblies?

Microsoft used cryptographic APIs and the standard public/private key technology to mark an assembly's uniqueness. These cryptography technologies let the user verify the integrity of the assembly's contents on every machine, and they can also be used for granting privileges and permissions on a per-user or per-publisher basis. Care should also be taken that no company ever shares the private key used for generating its strongly named assemblies.

The System.Reflection.AssemblyName class is a utility class that offers several public instance properties, such as CultureInfo, FullName, KeyPair, Name, and Version. It also offers a few public instance methods, such as GetPublicKey, GetPublicKeyToken, SetPublicKey, and SetPublicKeyToken.
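As a sketch, AssemblyName can be used to inspect an assembly's identity at runtime; the exact name, version, and token printed depend on the runtime, so the values in the comments are only examples:

```csharp
using System;
using System.Reflection;

internal static class AssemblyNameExample {
    public static void Main() {
        // Get the identity of the assembly that defines System.String
        AssemblyName an = typeof(String).Assembly.GetName();

        Console.WriteLine(an.Name);    // e.g. "mscorlib" on the .NET Framework
        Console.WriteLine(an.Version); // the assembly's version number

        // GetPublicKeyToken returns the 8-byte (64-bit) hash of the public key
        Byte[] token = an.GetPublicKeyToken();
        Console.WriteLine(BitConverter.ToString(token));
    }
}
```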

A weakly named assembly can also have an assembly version and culture. However, the CLR always ignores the version numbers, because weakly named assemblies are always privately deployed; the CLR simply uses the name of the assembly when looking for the assembly's file in the application base directory and its subdirectories, or in a different path if one is specified in the XML configuration file's probing element's privatePath attribute.
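A sketch of the probing element (the subdirectory names are illustrative):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- The CLR also searches these subdirectories of the application base -->
      <probing privatePath="bin;bin\subdir" />
    </assemblyBinding>
  </runtime>
</configuration>
```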

A strongly named assembly is signed using the public/private key pair. The following are the steps involved in signing an assembly to make it strongly named:

1. Run SN.exe to generate a public/private key pair:

SN -k MyCompany.snk

2. To view the public key, extract it into a file called MyCompany.PublicKey:

SN -p MyCompany.snk MyCompany.PublicKey

3. Now execute SN.exe, passing it the -tp switch and the file that contains just the public key:

SN -tp MyCompany.PublicKey

When I execute this command, I get the following

Microsoft (R) .NET Framework Strong Name Utility  Version 4.0.20928.1

Copyright (c) Microsoft Corporation.  All rights reserved.

Public key is






Public key token is 74786c738e63f883

The SN.exe utility doesn’t offer any options for you to display the private key.

A public key token is a 64-bit hash of the public key. Public key tokens were created to spare developers from working with the full, very long public key. These reduced tokens are stored in the AssemblyRef table to conserve storage space.

The C# compiler command-line switch to use a key file that holds the public/private key pair is as follows:

csc /keyfile:MyCompany.snk MyApp.cs

The compiler opens the specified file, signs the assembly with the private key, and embeds the public key in the manifest. Note that the other files in the assembly are not signed; only the file containing the manifest is signed.

In Visual Studio, a public/private key pair is created by navigating to the project properties, clicking on the Signing tab, selecting the Sign The Assembly check box, and then clicking on the <New > option from the Choose A Strong Name Key File combo box.

As each file's name is added to the manifest, the file's contents are hashed, and this hash value is stored along with the file's name in the FileDef table. You can override the default hash algorithm with AL.exe's /algid switch or by applying the assembly-level System.Reflection.AssemblyAlgorithmIdAttribute custom attribute in one of the assembly's source code files. By default, a SHA-1 algorithm is used.

After the PE file containing the manifest is built, its entire contents are hashed. The hash algorithm used here is always SHA-1 and can't be overridden. This hash value is signed with the publisher's private key, and the resulting RSA digital signature is stored in a reserved section within the PE file. The CLR header of the PE file is updated to reflect where the digital signature is embedded within the file.

The publisher's private key is used for signing the assembly, and the public key is also embedded into the AssemblyDef manifest metadata table in the PE file. The combination of the file name, the assembly version, the culture, and the public key gives this assembly a strong name, which is guaranteed to be unique, and thus duplication is avoided.

The assemblies referenced by your assembly need to be specified using the /reference compiler switch; this instructs the compiler to emit an AssemblyRef metadata table that indicates each referenced assembly's name, version number, culture, and public key information.

Examples of the AssemblyRef and AssemblyDef metadata tables

The example of AssemblyRef is shown below:

AssemblyRef #2 ——————————————————-

Token: 0x23000002

Public Key or Token: ef 41 b5 08 ea 1c fb 8b

Name: multifile

Major Version: 0x00000001

Minor Version: 0x00000002

Build Number: 0x00000003

Revision Number: 0x00000004

Locale: <null>

HashValue Blob:

Flags : [none] (00000000)

The example of AssemblyDef metadata table is shown below

// Assembly

// ——————————————————-

// Token: 0x20000001

// Name : hello

// Public Key :

// Hash Algorithm : 0x00008004

// Version:

// Major Version: 0x00000000

// Minor Version: 0x00000000

// Build Number: 0x00000000

// Revision Number: 0x00000000

// Locale: <null>

// Flags : [none] (00000000)

// CustomAttribute #1 (0c000002)

// ——————————————————-

// CustomAttribute Type: 0a00001f

// CustomAttributeName: System.Runtime.CompilerServices.CompilationRelaxationsAttribute :: instance void .ctor(int32)

// Length: 8

// Value : 01 00 08 00 00 00 00 00 > <

// ctor args: (8)


// CustomAttribute #2 (0c000003)

// ——————————————————-

// CustomAttribute Type: 0a000020

// CustomAttributeName:System.Runtime.CompilerServices.RuntimeCompatibilityAttribute ::instance void .ctor()

// Length: 30

// Value : 01 00 01 00 54 02 16 57 72 61 70 4e 6f 6e 45 78 > T WrapNonEx<

// : 63 65 70 74 69 6f 6e 54 68 72 6f 77 73 01 >ExceptionThrows <

// ctor args: ()


The Global Assembly Cache

The assembly must be placed into a well-known directory, and the CLR must know to search this directory automatically when a reference to the assembly is detected. This well-known location is called the global assembly cache (GAC), which can usually be found in the following directory.


The GAC directory is structured: it contains many subdirectories, and an algorithm is used to generate their names. You should never manually copy assembly files into the GAC; instead, you should install assemblies into the GAC using tools that know the internal structure and how to generate the proper subdirectory names. The most common tool for installing a strongly named assembly into the GAC is GACUtil.exe:

Microsoft (R) .NET Global Assembly Cache Utility.  Version 3.5.30729.1
Copyright (c) Microsoft Corporation.  All rights reserved.

Usage: GACUtil <command> [ <options> ]
/i <assembly_path> [ /r <…> ] [ /f ]
Installs an assembly to the global assembly cache.

/il <assembly_path_list_file> [ /r <…> ] [ /f ]
Installs one or more assemblies to the global assembly cache.

/u <assembly_display_name> [ /r <…> ]
Uninstalls an assembly from the global assembly cache.

/ul <assembly_display_name_list_file> [ /r <…> ]
Uninstalls one or more assemblies from the global assembly cache.

/l [ <assembly_name> ]
List the global assembly cache filtered by <assembly_name>

/lr [ <assembly_name> ]
List the global assembly cache with all traced references.

/cdl
Deletes the contents of the download cache

/ldl
Lists the contents of the download cache

/?
Displays a detailed help screen

/r <reference_scheme> <reference_id> <description>
Specifies a traced reference to install (/i, /il) or uninstall (/u, /ul).

/f
Forces reinstall of an assembly.

/nologo
Suppresses display of the logo banner

/silent
Suppresses display of all output

GACUtil's /i switch installs an assembly into the GAC, but for proper deployment you should also specify the /r switch in addition to /i or /u when installing or uninstalling the assembly. The /r switch integrates the assembly with the Windows install and uninstall engine: it records which applications are using or sharing the assembly and ties the applications and the assembly together.

The GACUtil tool is not shipped with the .NET Framework Redistributable package. If your application includes assemblies that you want deployed into the GAC, you should use the Windows Installer (MSI), because MSI is the only tool that is guaranteed to be on end-user machines and capable of installing assemblies into the GAC.

Whenever an assembly is built, it has to reference other assemblies for successful compilation; the /reference switch provides the names of the referenced assemblies. If the file name is a full path, CSC.exe loads the specified assembly and uses its metadata to build your assembly. If you specify a file name without a path, CSC.exe attempts to find the assembly by looking in the following directories:

1. Working directory

2. The directory that contains the CSC.exe file itself. This directory also contains the CLR DLLs.

3. Any directories specified using the /lib compiler switch.

4. Any directories specified using the LIB environment variable.

The directory where the compiler finds an assembly at compile time isn't the directory the assembly will be loaded from at runtime. When you install the .NET Framework, two copies of Microsoft's assembly files are actually installed: one set into the compiler/CLR directory and another set into a GAC subdirectory. The files in the compiler/CLR directory exist so that you can easily build your assembly, whereas the copies in the GAC exist so that they can be loaded at runtime for execution.

The reason that CSC.exe doesn't look in the GAC for referenced assemblies is that you would have to know the path to the assembly file, and the structure of the GAC is undocumented.

When an assembly that was signed with the private key is installed, the system hashes the contents of the file containing the manifest and compares the hash value with the RSA digital signature embedded within the PE file. If the values are identical, the file's contents haven't been tampered with, and the public key corresponds to the publisher's private key. The system also hashes the contents of the assembly's other files and compares the hash values with those recorded in the manifest; if any hash values don't match, at least one of the assembly's files has been tampered with, and the assembly will fail to install into the GAC.

The CLR loads a referenced global assembly from the GAC using its strong name properties. If the referenced assembly is available in the GAC, the CLR returns its containing subdirectory, and the file holding the manifest is loaded. Finding the assembly this way assures the caller that the assembly loaded at runtime came from the same publisher that built the assembly the code was compiled against: the public key token in the referencing assembly's AssemblyRef table is compared with the public key token in the referenced assembly's AssemblyDef table. If the referenced assembly isn't in the GAC, the CLR looks in the application's base directory and then in the private paths identified in the application's configuration file; if the application containing the assembly was installed using MSI, the CLR invokes MSI to load the required assembly. If the assembly is not found in any of these locations, an exception is thrown, and the binding of the assembly fails.

Assembly Hashing

Hashing of the assembly's files is performed every time an application executes and loads the assembly. This performance hit is a tradeoff for being certain that the assembly files' contents haven't been tampered with. When the CLR detects mismatched hash values at runtime, it throws a System.IO.FileLoadException.

When you are ready to package your strongly named assembly, you'll have to use the secure private key to sign it. However, while developing and testing the assembly, gaining access to the secure private key can be a huge problem. For this reason, .NET provides a technique known as delayed signing, a.k.a. partial signing. Delayed signing allows you to build an assembly using only the publisher's public key; the private key isn't required.

Delayed signing is enabled on the C# compiler using the /delaysign compiler switch. In Visual Studio, open your project's properties, navigate to the Signing tab, and then select the Delay Sign Only check box. If you are using AL.exe, you can specify the /delay[sign] command-line switch.

To prevent verification of the integrity of the assembly's files, you have to use the -Vr command-line switch of the SN.exe utility. Executing SN.exe with this switch tells the CLR to skip verifying the hash values for any of the assembly's files loaded at runtime. SN's -Vr switch adds the assembly's strong name to the registry under the following subkey: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\StrongName\Verification.

The -R switch of the SN utility is used along with the name of the file that contains the actual private key to hash and sign the file contents of the assembly and then embed the RSA digital signature in the file where space for it had previously been reserved. After this step you can deploy the fully signed assembly.

Cryptographic service providers (CSPs) offer containers that abstract the location of these keys. Microsoft uses a CSP that has a container that, when accessed, obtains the private key from a hardware device. If the public and private key pair is in a CSP, you have to specify different switches to the CSC.exe, AL.exe, and SN.exe programs: when compiling, specify the /keycontainer switch; when linking with AL.exe, specify /keyname; and when using the Strong Name (SN.exe) tool, specify -Rc to add a private key to a delay-signed assembly. SN offers many more switches for performing operations with a CSP.

Delayed signing is useful whenever you want to perform some other operation on an assembly before you package it. For example, you may want to obfuscate your assembly; you cannot obfuscate after you have fully signed the assembly, because the hash value would then be incorrect. So, if you want to obfuscate an assembly file or perform any other type of post-build operation, use delayed signing, perform the post-build operations, and then run SN.exe with the -R or -Rc switch to complete the signing process of the assembly with all of its hashing.

Deploying privately preserves the simple copy-install deployment story and better isolates the application and its assemblies. The GAC isn't intended to be a new dumping ground for assemblies: new versions of assemblies don't overwrite each other, they are installed side by side, eating up disk space.

Another way of deploying assemblies is to use XML configuration files whose codeBase element indicates the path of the shared assembly. At runtime, the CLR will then know to look in the strongly named assembly's directory for the shared assemblies. This technique is rarely used, since if any one of the applications sharing the assembly is uninstalled, there is a chance that the shared assemblies might be uninstalled too.
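A sketch of a codeBase element (the assembly name, public key token, and URL are illustrative):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeSharedLib" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <!-- The CLR loads version 2.0.0.0 from this location instead of probing -->
        <codeBase version="2.0.0.0" href="http://www.example.com/SomeSharedLib.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```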

When the source code is compiled to create an executable and this executable is run, the CLR loads the assemblies and initialization takes place, i.e. the CLR reads the assembly's CLR header, looking for the MethodDef token that identifies the application's entry point method (Main). From the MethodDef metadata table, the offset within the file of the method's IL code is located and JIT-compiled into native code, which includes having the code verified for type safety. The native code then starts executing.

When JIT-compiling this code, the CLR detects all references to types and members and loads their defining assemblies. At this point, the CLR knows which assembly it needs, and it must locate the assembly in order to load it. When resolving a referenced type, the CLR can find the type in one of three places:

1. Same file: Access to a type that is in the same file is determined at compile time. The type is loaded out of the file directly and execution starts.

2. The type is in a different file but in the same assembly.

3. The type is in a different file and in a different assembly.

If any errors occur while resolving a type reference - the file can't be found, the file can't be loaded, a hash mismatch, a version mismatch, and so on - an appropriate exception is thrown. Otherwise, the CLR creates its internal data structures to represent the type, the JIT compiler successfully completes the compilation of the Main method, and the application starts executing.

Flow chart of type binding by the CLR during compilation.

Type Binding

The GAC identifies assemblies using name, version, culture, public key, and CPU architecture. When searching the GAC for an assembly, the CLR figures out what type of process the application is currently running in: 32-bit x86 (on top of the WOW64 technology), 64-bit x64, or 64-bit IA-64. Then, when searching the GAC for an assembly, the CLR first searches for a CPU architecture-specific version of the assembly. If it does not find a matching assembly, it then searches for a CPU-agnostic version of the assembly.

Configuration Files

Configuration files are standard XML files that can be changed as needed. The .NET Framework defines a set of elements that implement configuration settings. Developers can use configuration files to change settings without recompiling applications. Administrators can use configuration files to set policies that affect how applications run on their computers.

  • <configuration> Element
    Describes the <configuration> element, which is the top-level element for all configuration files.
  • <assemblyBinding> Element for <configuration>
    Specifies assembly binding policy at the configuration level.
  • <linkedConfiguration> Element
    Specifies a configuration file to include.
  • Startup Settings Schema
    Describes the elements that specify which version of the common language runtime to use.
  • Runtime Settings Schema
    Describes the elements that configure assembly binding and runtime behavior.
  • Network Settings Schema
    Describes the elements that specify how the .NET Framework connects to the Internet.
  • Cryptography Settings Schema
    Describes elements that map friendly algorithm names to classes that implement cryptography algorithms.
  • Configuration Sections Schema
    Describes the elements used to create and use configuration sections for custom settings.
  • Trace and Debug Settings Schema
    Describes the elements that specify trace switches and listeners.
  • Compiler and Language Provider Settings Schema
    Describes the elements that specify compiler configuration for available language providers.
  • Application Settings Schema
    Describes the elements that enable a Windows Forms or ASP.NET application to store and retrieve application-scoped and user-scoped settings.
  • Web Settings Schema
    All elements in the Web settings schema, which includes elements for configuring how ASP.NET works with a host application such as IIS. Used in aspnet.config files.
  • Example of Publisher’s policy

    The schema for a publisher policy file mirrors the runtime settings schema of an application configuration file: a <configuration> root, a <runtime> element, and an <assemblyBinding> section containing one <dependentAssembly> element per redirected assembly.

    An example application configuration file is shown below (the bindingRedirect version numbers are left unspecified here):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="myAssembly" publicKeyToken="32ab4ba45e0a69a1" culture="en-us" />
            <!-- Assembly versions can be redirected in application, publisher policy, or machine configuration files -->
            <bindingRedirect oldVersion="" newVersion="" />
          </dependentAssembly>
          <dependentAssembly>
            <assemblyIdentity name="mySecondAssembly" publicKeyToken="1f2e54s865swqcds" culture="en-us" />
            <!-- Publisher policy can be overridden only in the application configuration file. -->
            <publisherPolicy apply="no" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    During JIT compilation, the CLR looks up the assembly version in the application configuration file and applies any version-number redirections; the CLR then looks for that assembly/version.

    For example, the appliesTo attribute on <assemblyBinding> scopes redirects to a specific runtime version:

    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1" appliesTo="v1.0.3705">
      <!-- .NET Framework version 1.0 redirects go here -->
    </assemblyBinding>

    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1" appliesTo="v1.1.4322">
      <!-- .NET Framework version 1.1 redirects go here -->
    </assemblyBinding>
    If the publisher policy element's apply attribute is set to yes (the default), the CLR examines the GAC for the new assembly/version and applies any version-number redirections from the publisher policy assembly; it then applies any redirections in the machine.config file as well. At this point the CLR knows the final version and attempts to load the assembly from the GAC. If the assembly isn't in the GAC and there is no codeBase element, the CLR probes for the assembly in the application's base directory; if a codeBase element is present, the CLR attempts to load the assembly from the URL specified by the codeBase element.
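    The codeBase element mentioned above can be sketched as follows (the assembly name, public key token and URL are hypothetical):

```xml
<dependentAssembly>
  <assemblyIdentity name="myAssembly" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
  <!-- Tells the CLR where to download this version of the assembly from -->
  <codeBase version="2.0.0.0" href="http://www.example.com/myAssembly.dll" />
</dependentAssembly>
```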

    When you package a new version of your assembly to send out to all of your users, an XML publisher policy configuration file is created, so that publishers can set policies only for the assemblies that they themselves create. In addition, the elements shown here are the only elements that can be specified in a publisher policy configuration file. The publisher then creates an assembly that contains this publisher policy configuration file.

    AL.exe /out:Policy.1.0.MyAppln.dll
           /version:
           /keyfile:MyCompany.snk
           /linkresource:Myapps.config
           /platform:x86

    In this command:

  • The Myapps.config argument is the name of the publisher policy file.
  • The Policy.1.0.MyAppln.dll argument is the name of the publisher policy assembly that results from this command. The assembly file name must follow the format: policy.majorNumber.minorNumber.mainAssemblyName.dll
  • The MyCompany.snk argument is the name of the file containing the key pair. You must sign the assembly and publisher policy assembly with the same key pair.
  • The x86 argument identifies the platform targeted by a processor-specific assembly. It can be amd64, ia64, msil, or x86.
  • Once the publisher policy assembly is built and distributed, it has to be deployed into the GAC.

    The following command adds a publisher policy assembly to the global assembly cache:

    gacutil /i publisherPolicyAssemblyFile

    For example: gacutil /i Policy.1.0.MyAppln.dll

    Finally, to have the runtime ignore publisher policy, an administrator can edit the application configuration file and add the following publisher policy element:

    <publisherPolicy apply="no" />

    This element can be placed as a child of the <assemblyBinding> element so that it applies to all assemblies, or as a child of a specific <dependentAssembly> element if you need to apply it to a single assembly.
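    A sketch showing both placements side by side (the assembly name and token are hypothetical):

```xml
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
  <!-- Placed here, it applies to all assemblies -->
  <publisherPolicy apply="no" />
  <dependentAssembly>
    <assemblyIdentity name="myAssembly" publicKeyToken="32ab4ba45e0a69a1" />
    <!-- Placed here, it applies only to myAssembly -->
    <publisherPolicy apply="yes" />
  </dependentAssembly>
</assemblyBinding>
```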


    CLR Fundamentals.

      1. Introduction

      2. The Common Language Runtime (CLR)

      3. How Common Language Runtime Loads:

      4. IL and Verification:

      5. Unsafe Code

      6. The NGen Tool

      7. The Framework Class Library

      8. The Common Type System

      9. The Common Language Specification


    This is one of my initial blogs on CLR overview and basics, which I believe every .NET developer must know. This topic is a prerequisite for starting anything related to .NET, be it a console application, a web page or an application on Windows Phone. To start with, I will try to give you a broad overview of the Common Language Runtime (CLR).

    The Common Language Runtime (CLR)

    is a runtime environment for any programming language that targets it. The CLR has no idea which programming language the developer used for the source code. A developer can write code in any .NET language that targets the CLR – C#, VB, F#, C++/CLI, etc. Compilers act as syntax verifiers and perform code analysis; this allows developers to code in their preferred .NET language and makes it easier to express ideas and develop software.

    Fig 1.1
    Environment of .NET Runtime.

    Regardless of which compiler is used, the result is a managed module. A managed module is a standard 32-bit Windows portable executable (PE32) file or a standard 64-bit Windows (PE32+) file that requires the CLR to execute. Managed assemblies always take advantage of Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR); both are security features of Windows.

    Table 1-1 Parts of Managed Module

    All CLR compilers generate IL code, and every compiler emits full metadata into every managed module. Metadata is a superset of older technologies such as COM Type Libraries and Interface Definition Language (IDL), but CLR metadata is far more complete and is always associated with the file containing the IL code. The metadata and IL code are embedded in the same EXE/DLL, making it impossible to separate the two: because metadata and managed code are built at the same time and bound together into the resulting managed module, they are never out of sync with one another.

    Metadata has many uses and benefits, viz.:

    • Metadata removes the need for native header/library files during compilation, since all the information is available in the assembly (PE32(+)) file, together with the IL code that implements the types and members. Compilers can read the metadata directly from the managed module.
    • Visual Studio uses metadata to assist the developer in writing code: IntelliSense parses the metadata tables to tell the coder what properties, methods, events and fields a type offers and, in the case of methods, what parameters the method expects.
    • The CLR's code verification process uses metadata to ensure that your code performs only type-safe operations.
    • Metadata allows an object's fields to be serialized on a local machine and the same object state to be deserialized on a remote machine.
    • Metadata allows the garbage collector to track the lifetime of objects.
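    These metadata tables are exactly what reflection reads at runtime. A minimal sketch, using only standard framework types:

```csharp
using System;
using System.Reflection;

class MetadataDemo
{
    static void Main()
    {
        // Reflection reads the metadata tables of the assembly that
        // defines System.String, much as IntelliSense does at design time.
        Type t = typeof(string);
        Console.WriteLine("Type: " + t.FullName);
        foreach (MethodInfo m in t.GetMethods())
        {
            if (m.Name == "Substring")
                Console.WriteLine("Method: " + m.Name +
                                  ", parameters: " + m.GetParameters().Length);
        }
    }
}
```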

    C# and the IL Assembler always produce modules that contain managed code and managed data, so end users must have the CLR installed on their machines to execute this code.

    The C++/CLI compiler is the exception: by default it builds EXE/DLL modules that contain unmanaged code and manipulate unmanaged data at runtime. By adding the /CLR switch to the compiler options, the C++ compiler produces modules that contain a hybrid of managed and unmanaged code, and for these modules the CLR is required for execution. The C++ compiler thus allows a developer to write both managed and unmanaged code and still emit a single module.

    Merging managed Modules to an Assembly:

    Fig 1.2 Integrating managed modules into single assembly

    The CLR works with assemblies, which are logical groupings of one or more modules or resource files. An assembly is the smallest unit of versioning, reuse and security. You can produce a single-file or a multi-file assembly. An assembly is similar to what the COM world would call a component.

    An assembly is a logical grouping of files with a manifest embedded as a set of metadata tables. These tables describe the files that make up the assembly, the publicly exported types implemented by those files, and the resource or data files that are associated with the assembly.

    If you want to group a set of files into an assembly, you will have to be aware of more tools and their command-line arguments. An assembly allows you to decompose the deployment of the files while still treating all of them as a single collection. An assembly's modules also include information about referenced assemblies, which makes assemblies "self-describing": an assembly's immediate dependencies can be identified and verified by the CLR.

    How Common Language Runtime Loads:

    Execution of an assembly is managed by the CLR, so the CLR needs to be loaded into the process first. You can determine whether the .NET Framework is installed on a particular machine by looking for MSCorEE.dll in the %SystemRoot%\System32 directory; the existence of this file confirms that the .NET Framework is installed. Several versions of the .NET Framework can be installed on one machine, which can be identified by looking at the following registry key:


    The .NET Framework SDK includes a command-line tool, CLRVer.exe, to view the versions of the installed runtimes. If assemblies contain only type-safe managed code, they should work on both 32-bit and 64-bit versions of Windows without any source code changes; the executable will run on any machine with a compatible version of the .NET Framework installed. If a developer wants an assembly that targets a specific Windows environment, the C# compiler's /platform command-line switch is used. This switch controls whether the assembly can be executed on x86 machines running 32-bit Windows, on x64 machines running 64-bit Windows, or on Intel Itanium machines running 64-bit Windows. The default value, anycpu, allows the assembly to run on any version of Windows.

    Depending on the /platform command-line option, the compiler will generate an assembly that contains either a PE32 or PE32+ header, and it will also insert the desired CPU architecture information into the header. Microsoft ships two SDK tools, DumpBin.exe and CorFlags.exe, that can be used to examine the header information contained in a managed module.

    When executing the assembly, Windows uses the file header to determine whether to run the application in a 32-bit or 64-bit address space. An executable file with a PE32 header can run in either a 32-bit or 64-bit address space, while an executable with a PE32+ header requires a 64-bit address space. Windows also verifies the CPU architecture to confirm that the machine has the required CPU. Lastly, 64-bit versions of Windows have a feature called WOW64 (Windows on Windows 64) that allows 32-bit applications to run on them.
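    A process can observe its own bitness at runtime. A small sketch (available on .NET 4.0 and later):

```csharp
using System;

class BitnessDemo
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
        Console.WriteLine("Pointer size: " + IntPtr.Size);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        // A 32-bit process on 64-bit Windows (WOW64) reports
        // Is64BitProcess == false but Is64BitOperatingSystem == true.
        Console.WriteLine("64-bit OS: " + Environment.Is64BitOperatingSystem);
    }
}
```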

    Table 1-2 Runtime state of modules based on the /platform switch

    /platform Switch   Type of Managed Module   x86 Windows                    x64 Windows                    IA64 Windows
    anycpu (default)   PE32/agnostic            Runs as a 32-bit application   Runs as a 64-bit application   Runs as a 64-bit application
    x86                PE32/x86                 Runs as a 32-bit application   Runs as a WOW64 application    Runs as a WOW64 application
    x64                PE32+/x64                Doesn't run                    Runs as a 64-bit application   Doesn't run
    Itanium            PE32+/Itanium            Doesn't run                    Doesn't run                    Runs as a 64-bit application

    After Windows has examined the assembly header to determine whether to create a 32-bit process, a 64-bit process, or a WOW64 process, Windows loads the x86, x64 or IA64 version of MSCorEE.dll into the process's address space. The process's primary thread then calls a method defined inside MSCorEE.dll; this method initializes the CLR, loads the EXE assembly and then calls its entry point method (Main). When an unmanaged application loads a managed assembly, Windows loads and initializes the CLR in order to process the code contained within the assembly.

    IL is a much higher-level language than most CPU machine languages. It can access and manipulate object types and has instructions to create and initialize objects, call virtual methods on objects, and manipulate array elements directly. IL can be written in assembly language using the IL Assembler, ILAsm.exe; Microsoft also provides an IL Disassembler, ILDasm.exe.

    The IL assembly language gives a developer access to all of the CLR's facilities, some of which may be hidden by the programming language you would normally use. You can also mix multiple CLR-supported languages to reach otherwise hidden CLR facilities; in fact, the level of integration between .NET programming languages inside the CLR makes mixed-language programming one of the platform's biggest advantages for the developer.

    To execute a method its IL code is initially converted to native CPU instructions. This is the job of the CLR’s JIT compiler.

    Fig. shows what happens the first time a method is called.

    Just before the Main method executes, the CLR detects all of the types referenced by Main's code. This causes the CLR to allocate an internal data structure that is used to manage access to the referenced types. This internal data structure contains an entry for each method defined by the referenced type (Console, in this example). Each entry holds the address where the method's implementation can be found. When initializing this structure, the CLR sets each entry to an internal, undocumented function contained inside the CLR itself; let's call this function JITCompiler.

    When Main makes its first call to WriteLine, the JITCompiler function is called. The JITCompiler function is responsible for compiling a method's IL code into native CPU instructions. Because the IL is being compiled "just in time", this component of the CLR is referred to as a JITter or a JIT compiler.

    The JITCompiler function then searches the defining assembly's metadata for the called method's IL. JITCompiler next verifies and compiles the IL code into native CPU instructions, which are saved in a dynamically allocated block of memory. Then JITCompiler goes back to the entry for the called method in the type's internal data structure created by the CLR and replaces the reference that called it in the first place with the address of the block of memory containing the freshly compiled native CPU instructions. Finally, the JITCompiler function jumps to the code in the memory block. When this code returns, it returns to the code in Main, which continues execution as normal.

    Main now calls WriteLine a second time. This time, the code for WriteLine has already been verified and compiled, so the call goes directly to the block of memory, skipping the JITCompiler function entirely. After the WriteLine method executes, it returns to Main.

    A performance hit is incurred only the first time a method is called. All subsequent calls to the method execute at the full speed of the native code, because verification and compilation don't need to be performed again.
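    This one-time cost can be observed with a rough sketch; the absolute timings vary by machine and are illustrative only:

```csharp
using System;
using System.Diagnostics;

class JitCostDemo
{
    public static int Square(int x) { return x * x; }

    static void Main()
    {
        // The first call pays the one-time JIT compilation cost.
        var sw = Stopwatch.StartNew();
        Square(3);
        sw.Stop();
        Console.WriteLine("First call ticks:  " + sw.ElapsedTicks);

        // Subsequent calls run the already-compiled native code.
        sw.Restart();
        Square(4);
        sw.Stop();
        Console.WriteLine("Second call ticks: " + sw.ElapsedTicks);
    }
}
```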

    The native CPU instructions live in dynamic memory, so the compiled code is discarded when the application terminates; if you run the application again, the JIT compiler will have to compile the IL to native instructions again. In practice it is also likely that more time is spent inside a method than calling it. The CLR's JIT compiler also optimizes the native code: it may take more time to produce optimized code, but that code will execute in less time and with better performance than non-optimized code.

    Two C# compiler switches impact code optimization: /optimize and /debug. The following table shows the quality of the generated code based on the two switches.

    Compiler Switch Settings              C# IL Code Quality   JIT Native Code Quality
    /optimize- /debug-                    Unoptimized          Optimized
    /optimize- /debug(+/full/pdbonly)     Unoptimized          Unoptimized
    /optimize+ /debug(-/+/full/pdbonly)   Optimized            Optimized

    Unoptimized IL code contains many no-operation (NOP) instructions as well as branches that jump to the next line of code. These instructions are generated to enable Visual Studio's edit-and-continue feature while debugging and to allow breakpoints to be set on the code.

    When producing optimized IL code, the C# compiler removes these extraneous NOP and branch instructions, making the code harder to single-step through in a debugger as control flow is optimized. Furthermore, the compiler produces a Program Database (PDB) file only if you specify the /debug(+/full/pdbonly) switch; the PDB file helps the debugger find local variables and map the IL instructions to source code.

    The /debug:full switch tells the JIT compiler to track which native code came from each IL instruction. This allows a developer to attach the Visual Studio debugger to an already running process and debug the code easily. Without the /debug:full switch, the JIT compiler does not track the IL-to-native-code mapping, which makes the JIT compiler run a little faster and use a little less memory. If you start a process with the Visual Studio debugger, it forces the JIT compiler to track this information unless you turn off the Suppress JIT Optimization On Module Load (Managed Only) option in Visual Studio.

    In this managed environment, compiling code is accomplished in two phases. First, the compiler parses the source code, doing as much work as possible in producing IL. But the IL itself must still be compiled into native CPU instructions at runtime, which requires additional memory and CPU time.

    The following points compare managed code to unmanaged code:

    1. A JIT compiler can determine if the application is running on an Intel Pentium 4 CPU and produce native code that takes advantage of any special instructions offered by the Pentium 4. Usually, unmanaged applications are compiled for the lowest-common-denominator CPU and avoid using special instructions that would give the application a performance boost.
    2. A JIT compiler can determine when a certain test is always false on the machine that it is running on. In those cases, the native code would be fine-tuned for the host machine; the resulting code is smaller and executes faster.
    3. The CLR could profile the code’s execution and recompile the IL into native code while the application runs. The recompiled code could be reorganized to reduce incorrect  branch predictions depending on the observed execution patterns.

    The NGen.exe tool compiles all of an assembly's IL code into native code and saves the result to a file on disk. At runtime, when an assembly is loaded, the CLR automatically checks whether a precompiled version of the assembly exists so that no compilation is required at runtime. Note, however, that the code produced by NGen.exe is not as highly optimized as the JIT-compiler-produced code.

    IL and Verification:

    While compiling IL into native CPU instructions, the CLR performs a process called verification. Verification examines the high-level IL code and ensures that everything the code does is safe. For e.g. verification checks that every method is called with the correct number of parameters. The managed module’s metadata includes all of the method and type information used by the verification process.

    In Windows, each process has its own virtual address space. Separate address spaces are necessary because you can't fully trust an application's code; it is entirely possible that an application will read from or write to an invalid memory address. By placing each Windows process in a separate address space, you gain robustness and stability: one process cannot adversely affect another.

    You can run multiple managed applications in a single Windows virtual address space. Reducing the number of processes by running multiple applications in a single  OS process can improve performance, require fewer resources and be just as robust as if each application had its own process.

    The CLR does offer the ability to execute multiple managed applications in a single OS process: each managed application executes in an AppDomain. By default, every managed EXE file runs in its own separate process containing a single AppDomain, but a process hosting the CLR can decide to run multiple AppDomains in a single OS process.

    Unsafe Code

    Safe code is code that is verifiably safe. Unsafe code is allowed to work directly with memory addresses and manipulate bytes at these addresses. This is a very powerful feature and is typically useful when interoperating with unmanaged code or when you want to improve the performance of a time-critical algorithm.

    The C# compiler requires that all methods that contain unsafe code be marked with the unsafe keyword. In addition, the C# compiler requires you to compile the source code by using the /unsafe compiler switch.
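    A hedged sketch of the syntax (the method and values are illustrative; the code must be compiled with the /unsafe switch):

```csharp
using System;

class UnsafeDemo
{
    // Methods containing pointer code must be marked unsafe.
    public static unsafe int SumArray(int[] values)
    {
        int sum = 0;
        // fixed pins the array so the GC cannot move it while
        // we hold a raw pointer into it.
        fixed (int* p = values)
        {
            for (int i = 0; i < values.Length; i++)
                sum += p[i];
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumArray(new[] { 1, 2, 3 }));  // prints 6
    }
}
```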

    When the JIT compiler attempts to compile an unsafe method, it checks whether the assembly containing the method has been granted System.Security.Permissions.SecurityPermission with the System.Security.Permissions.SecurityPermissionFlag SkipVerification flag set. If the flag is set, the JIT compiler compiles the unsafe code and allows it to execute; the CLR is trusting this code and hoping the direct address and byte manipulations do no harm. If the flag is not set, the JIT compiler throws either a System.InvalidProgramException or a System.Security.VerificationException, preventing the method from executing. In fact, the whole application will probably terminate at this point, but at least no harm can be done.

    The PEVerify.exe tool examines all of an assembly's methods and notifies you of any that contain unsafe code. When you use PEVerify to check an assembly, it must be able to locate and load all referenced assemblies; because PEVerify uses the CLR to locate the dependent assemblies, the assemblies are located using the same binding and probing rules that would normally be used when executing the assembly.

    The NGen Tool

    The NGen.exe tool compiles IL code into native machine code at install time rather than at runtime, so it is interesting in two scenarios:

    • Improving an application's startup time: JIT compilation is avoided because the code has already been compiled into native code, which improves startup time.
    • Reducing an application's working set: NGen.exe compiles the IL to native code and saves the output in a separate file. This file can be memory-mapped into multiple process address spaces simultaneously, allowing the code to be shared rather than JIT-compiled per process.

    When a setup program invokes NGen.exe, a new assembly file containing only native code instead of IL is created by NGen.exe. This new file is placed in a folder under a directory with a name like C:\Windows\Assembly\NativeImages_v4.0.#####_64. The directory name includes the version of the CLR and information denoting whether the native code was compiled for x86, x64 or Itanium.

    Whenever the CLR loads an assembly file, it looks to see whether a corresponding NGen'd native file exists. There are, however, drawbacks to NGen'd files:

    • No intellectual property protection: At runtime, the CLR requires that the assemblies containing IL and metadata still be shipped; if the CLR can't use the NGen'd file for some reason, it gracefully falls back to JIT-compiling the assembly's IL code, which must therefore remain available.
    • NGen'd files can get out of sync: When the CLR loads an NGen'd file, it compares a number of characteristics of the previously compiled code against the current execution environment. Here is a partial list of characteristics that must match:
      – CLR version: changes with patches or service packs.
      – CPU type: changes if you upgrade your processor hardware.
      – Windows OS version: changes with a new service pack update.
      – Assembly's module version ID (MVID): changes when recompiling.
      – Referenced assemblies' version IDs: change when you recompile a referenced assembly.
      – Security: changes when you revoke permissions (such as SkipVerification or UnmanagedCode) that were once granted.
      Whenever an end user installs a new service pack of the .NET Framework, the service pack's installation program runs NGen.exe in update mode automatically so that NGen'd files are kept in sync with the installed version of the CLR.
    • Inferior execution-time performance: NGen can't make as many assumptions about the execution environment as the JIT compiler can, which causes NGen.exe to produce inferior code. Some NGen'd applications actually run about 5% slower than their JIT-compiled counterparts, so if you're considering NGen.exe you should compare NGen'd and non-NGen'd versions to be sure the NGen'd version doesn't actually run slower. (When the reduction in working-set size improves performance, using NGen can still be a net win.)
    • For server applications, NGen.exe makes little or no sense, because only the first client request experiences the performance hit and future client requests run at full speed. In addition, for most server applications only one instance of the code is required, so there is no working-set benefit, and NGen'd images cannot be shared across AppDomains, so there is no benefit to NGen'ing an assembly that will be used in a cross-AppDomain scenario.

    The Framework Class Library

    The Framework Class Library (FCL) is a set of DLL assemblies that contain several thousand type definitions, where each type exposes some functionality.

    The following kinds of applications can be developed using the FCL:

    1. Web services
    2. Web Forms HTML-based applications (Web sites)
    3. Rich Windows GUI applications
    4. Rich Internet Applications (RIAs)
    5. Windows console applications
    6. Windows services
    7. Database stored procedures
    8. Component libraries

    Below are the General Framework Class Library namespaces

    Namespace                         Description of Contents

    System                            All of the basic types used by every application
    System.Data                       Types for communicating with databases and processing data
    System.IO                         Types for doing stream I/O and walking directories and files
    System.Net                        Types that allow low-level network communications
    System.Runtime.InteropServices    Types that allow managed code to access unmanaged OS platform facilities such as DCOM and Win32 functions
    System.Security                   Types used for protecting data and resources
    System.Text                       Types to work with text in different encodings
    System.Threading                  Types used for asynchronous operations and synchronizing access to resources
    System.Xml                        Types used for processing Extensible Markup Language schemas and data

    The Common Type System

    Types are at the root of the CLR, so Microsoft created a formal specification – the Common Type System (CTS) – that describes how types are defined and how they behave. The CTS specifies that a type can contain zero or more members:

    • Field: A data variable that is part of the object's state. Fields are identified by their name and type.
    • Method: A function that performs an operation on the object, often changing the object's state. Methods have a name, a signature and modifiers.
    • Property: Properties allow an implementer to validate input parameters and object state before accessing the value, and/or to calculate a value only when necessary. They also give users of the type a simplified syntax. Finally, properties allow you to create read-only or write-only "fields".
    • Event: An event provides a notification mechanism between an object and other interested objects.
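    The four member kinds can be sketched in one small type (the Thermometer class and its threshold are invented for illustration):

```csharp
using System;

// A type demonstrating the four CTS member kinds.
class Thermometer
{
    private double celsius;                 // Field: part of the object's state

    public event EventHandler Overheated;   // Event: notification mechanism

    public double Celsius                   // Property: controlled access to state
    {
        get { return celsius; }
        set
        {
            celsius = value;
            if (celsius > 100 && Overheated != null)
                Overheated(this, EventArgs.Empty);
        }
    }

    public double ToFahrenheit()            // Method: operates on the state
    {
        return celsius * 9 / 5 + 32;
    }
}

class Program
{
    static void Main()
    {
        var t = new Thermometer();
        t.Overheated += (s, e) => Console.WriteLine("Too hot!");
        t.Celsius = 25;
        Console.WriteLine(t.ToFahrenheit()); // prints 77
        t.Celsius = 120;                     // prints "Too hot!"
    }
}
```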

    The CTS also specifies the rules for type visibility and access to the members of a type; thus the CTS establishes the rules by which assemblies form a boundary of visibility for a type, and the CLR enforces those visibility rules.

    A type that is visible to a caller can further restrict the ability of the caller to access the type’s members. The following list shows the valid options for controlling access to a member:

    Private: The member is accessible only by other members in the same class type.

    Family: The member is accessible by derived types, regardless of whether they are within the same assembly. C# refers to family as protected.

    Family and assembly: The member is accessible by derived types, but only if the derived type is defined in the same assembly.

    Assembly: The member is accessible by any code in the same assembly. Many languages refer to assembly as internal.

    Family or assembly: The member is accessible by derived types in any assembly. C# refers to family or assembly as protected internal.

    Public: The member is accessible by any code in any assembly.
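    A sketch of how these CTS accessibilities map onto C# keywords (the Widget type is invented for illustration):

```csharp
using System;

public class Widget
{
    private int id;                    // CTS "private"
    protected string name;             // CTS "family"
    internal int stock;                // CTS "assembly"
    protected internal bool visible;   // CTS "family or assembly"
    public DateTime Created;           // CTS "public"

    public Widget(int id) { this.id = id; }
}
```

Note that "family and assembly" has no C# keyword in the C# versions current when this was written.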

    The CTS also defines the rules governing type inheritance, virtual methods, object lifetime and so on. The compiler maps the language-specific syntax onto IL, the "language" of the CLR, when it emits the assembly during compilation. The CTS allows a type to derive from only one base class; to help the developer, Microsoft's C++/CLI compiler reports an error if it detects that you are attempting to create managed code that includes a type deriving from multiple base types.

    All types must inherit from a predefined type: System.Object. This type is the root of all other types, and it therefore guarantees that every type instance has a minimum set of behaviours. Specifically, the System.Object type allows you to do the following:

    – Compare two instances for equality

    – Obtain a hash code for the instance

    – Query the true type of an instance

    – Perform a shallow copy of the instance

    – Obtain a string representation of the instance object's current state
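    These behaviours can be sketched directly (shallow copy via MemberwiseClone is protected, so it is callable only from within a type and is omitted here):

```csharp
using System;

class ObjectDemo
{
    static void Main()
    {
        object a = "hello";

        // Compare two instances for equality
        Console.WriteLine(a.Equals("hello"));        // prints True

        // Obtain a hash code for the instance
        Console.WriteLine(a.GetHashCode() == "hello".GetHashCode());

        // Query the true type of an instance
        Console.WriteLine(a.GetType().FullName);     // prints System.String

        // Obtain a string representation of the instance's current state
        Console.WriteLine(a.ToString());             // prints hello
    }
}
```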

    The Common Language Specification:

    Microsoft has defined a Common Language Specification (CLS) that details, for compiler vendors, the minimum set of features their compilers must support if they are to generate types compatible with components written in other CLS-compliant languages on top of the CLR.

    The CLS defines rules that externally visible types and methods must adhere to if they are to be accessible from any CLS-compliant programming language. Note that the CLS rules don’t apply to code that is accessible only within the defining assembly. Most languages, such as C#, Visual Basic and Fortran, expose a superset of the CLS to the programmer; the CLS defines the minimum set of features that all languages must support. When designing a type, you shouldn’t take advantage of any features outside the CLS in its public and protected members. Doing so would mean that your type’s members might not be accessible to programmers writing code in other programming languages.

    The [assembly:CLSCompliant(true)] attribute is applied to the assembly. This attribute tells the compiler to ensure that any publicly exposed type doesn’t have any construct that would prevent the type from being accessed from another programming language. Note that a type declared without a visibility modifier (such as the SomeLibraryTypeXX in the original example) defaults to internal and would therefore no longer be exposed outside of the assembly at all.
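A sketch of the attribute in use (Calculator is a made-up type; the commented-out member is the kind of construct that draws a CLS-compliance warning):

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // Fine: int (System.Int32) is CLS-compliant.
    public int Add(int x, int y) => x + y;

    // Would trigger compiler warning CS3001 if uncommented: uint in a
    // public signature is not CLS-compliant.
    // public uint AddUnsigned(uint x, uint y) => x + y;

    // Private members are exempt from the CLS rules.
    private uint counter;

    public static void Main() =>
        Console.WriteLine(new Calculator().Add(2, 3));  // 5
}
```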

    The table below shows how programming language constructs map to the equivalent CLR fields and methods:

    Type member       Member Type    Equivalent Programming Language Construct
    AnEvent           Field          Event; the name of the field is AnEvent and its type is System.EventHandler
    .ctor             Method         Constructor
    Finalize          Method         Finalizer (destructor)
    add_AnEvent       Method         Event add accessor method
    get_AProperty     Method         Property get accessor method
    get_Item          Method         Indexer get accessor method
    op_Addition       Method         + operator
    op_Equality       Method         == operator
    op_Inequality     Method         != operator
    remove_AnEvent    Method         Event remove accessor method
    set_AProperty     Method         Property set accessor method
    set_Item          Method         Indexer set accessor method

    Interoperability with Unmanaged Code: CLR supports 3 interoperability scenarios

    • Managed code can call an unmanaged function in a DLL
    • Managed code can use an existing COM component (server)
    • Unmanaged code can use a managed type (server)
    Design Patterns

    Design Pattern Part – 4

    UML Diagram of Bridge Design Pattern

    Bridge Pattern

    The bridge pattern is a design pattern used in software engineering which is “meant to decouple an abstraction from its implementation so that the two can vary independently”.[1] The bridge uses encapsulation and aggregation, and can use inheritance, to separate responsibilities into different classes.


    A bridge is a structural pattern that influences the creation of a class hierarchy by decoupling an abstraction from the implementation. In a bridge however, the abstraction and its implementation can vary independently, and it hides the implementation details from the client.

    A simple illustration of the Bridge pattern is the ‘Remote has a Car object’ relationship. This “has-a” relationship is the bridge. When two different parts of the code change independently but the relationship between them stays fixed, that relationship is a bridge between the two changing objects.

    Inheritance tree of Car with each node representing the different make of Car, is an example of Bridge pattern.

    The bridge pattern is useful when both the class as well as what it does vary often. The class itself can be thought of as the implementation and what the class can do as the abstraction.

    When a class varies often, the features of object-oriented programming become very useful, because with the help of the bridge pattern changes to a program’s code can be made easily and with minimal prior knowledge of the program.

    The bridge pattern can also be thought of as two layers of abstraction.

    The following is sample code for the Bridge pattern.

    using System;

    namespace DesignPatterns.Bridge
    {
        public abstract class DataObject
        {
            public abstract void Register();

            public abstract DataObject Copy();

            public abstract void Delete();
        }

        public abstract class Repository
        {
            public abstract void AddObject(DataObject dataObject);

            public abstract void CopyObject(DataObject dataObject);

            public abstract void RemoveObject(DataObject dataObject);

            public void SaveChanges()
            {
                Console.WriteLine("Changes were saved");
            }
        }

        public class ClientDataObject : DataObject
        {
            public override void Register()
            {
                Console.WriteLine("ClientDataObject was registered");
            }

            public override DataObject Copy()
            {
                Console.WriteLine("ClientDataObject was copied");
                return new ClientDataObject();
            }

            public override void Delete()
            {
                Console.WriteLine("ClientDataObject was deleted");
            }
        }

        public class ProductDataObject : DataObject
        {
            public override void Register()
            {
                Console.WriteLine("ProductDataObject was registered");
            }

            public override DataObject Copy()
            {
                Console.WriteLine("ProductDataObject was copied");
                return new ProductDataObject();
            }

            public override void Delete()
            {
                Console.WriteLine("ProductDataObject was deleted");
            }
        }

        public class ProductRepository : Repository
        {
            public override void AddObject(DataObject dataObject)
            {
                // Do repository specific work
            }

            public override void CopyObject(DataObject dataObject)
            {
                // Do repository specific work
            }

            public override void RemoveObject(DataObject dataObject)
            {
                // Do repository specific work
            }
        }
    }

    You should use the Bridge pattern whenever you identify operations that do not always need to be implemented in the same way.

    You should implement Bridge pattern when following are the requirements of your application:

    • Completely hide implementations from clients.

    • Avoid binding an implementation to an abstraction directly.

    • Change an implementation without even recompiling an abstraction.

    • Combine different parts of a system at runtime.
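A hypothetical client for the sample above (classes trimmed to the members the client touches) shows the decoupling: any Repository implementation can work with any DataObject abstraction, and the two hierarchies vary independently:

```csharp
using System;

// Trimmed versions of the sample's abstraction and implementation hierarchies.
public abstract class DataObject
{
    public abstract void Register();
}

public class ProductDataObject : DataObject
{
    public override void Register() =>
        Console.WriteLine("ProductDataObject was registered");
}

public abstract class Repository
{
    public abstract void AddObject(DataObject dataObject);
    public void SaveChanges() => Console.WriteLine("Changes were saved");
}

public class ProductRepository : Repository
{
    public override void AddObject(DataObject dataObject) { /* repository specific work */ }
}

public static class Client
{
    public static void Main()
    {
        Repository repo = new ProductRepository();   // implementation side
        DataObject item = new ProductDataObject();   // abstraction side
        item.Register();
        repo.AddObject(item);
        repo.SaveChanges();
    }
}
```

Swapping ProductRepository for another Repository, or ProductDataObject for ClientDataObject, requires no change to the client code.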

    In the next blog I will explain the Builder pattern.


    C# 4.0 new Features.

    Dynamic Language Runtime

    Dynamic  Lookup

    dynamic keyword: the object’s type need not be known until runtime; a member’s signature is not known until the call is executed.

    E.g. System.Reflection

    Programming against COM IDispatch

    Programming against XML or HTML DOM

    Dynamic Language Runtime (DLR) behaves more like Python or Ruby.

    dynamic in C# is a type, e.g.

    dynamic WildThings(dynamic beast, string name)
    {
        dynamic whatis = beast.Wildness(name);
        return whatis;
    }


    dynamic: statically declared as a type. When an object is marked dynamic, the compiler recognizes it and emits metadata so that binding is deferred until runtime; the runtime then resolves the call, either dispatching it dynamically or throwing a runtime error.

    dynamic != var

    The var keyword is used for type inference, and a compile-time check is made.

    The dynamic keyword is used for objects whose type is unknown during compilation, hence no compile-time check is made.
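A small sketch of the difference:

```csharp
using System;

var s = "hello";              // compile-time type string
Console.WriteLine(s.Length);  // 5, checked at compile time

dynamic d = "hello";          // compile-time checks suppressed
Console.WriteLine(d.Length);  // 5, bound at runtime by the DLR

// With var, a typo such as s.Lenght fails at compile time.
// With dynamic, d.Lenght compiles but throws RuntimeBinderException at runtime.
```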

    dynamic cannot be used for Extension methods

    dynamic method invocations cannot take anonymous methods (or lambdas) as parameters.

    dynamic heisenberg;

    void LocationObserver(float x, float t) {}

    heisenberg.Observer(LocationObserver);                // right way of making the call

    heisenberg.Observer(delegate (float y, float t) {});  // wrong: anonymous method

    heisenberg.Observer((x, t) => x + t);                 // wrong: lambda

    dynamic objects cannot be used in LINQ:

    dynamic collection = new[] { 1, 2, 4, 5, 6, 7, 8 };

    var result = collection.Select(e => e.size > 25);

    This fails for two reasons:

    1. Select is an extension method
    2. Selector is a lambda

    The Dynamic Language Runtime is loaded the first time dynamic objects are executed.

    The efficiency cost is paid only on the first execution, when the call site is cached; subsequent executions run at close to normal speed because no further caching is required.

    The DLR is a normal assembly, part of System.Core; dynamic objects implement COM IDispatch or the IDynamicObject interface. Using dynamic XML we can now shorten invocations, e.g. element.LastName instead of element.Attribute["LastName"].

    COM support in C# 4.0

    COM interop is the feature whereby COM interface methods are used to interact with automation objects such as Office Automation. The ref keyword can now be omitted when calling COM interop and PIA methods.

    Earlier, the developer of the application consumed a Primary Interop Assembly (PIA) released by the COM publisher. With the latest release of C#, deploying the PIA is optional: interop code is generated and embedded only for the COM interface methods that are actually used by the application.

    Named Parameters and  Optional Parameters


    Optional parameters set a default value for the parameter; they are used for consistency in C# syntax. An optional parameter takes its default value if no argument is passed for it in the method invocation.

    static void Entrée(string name, decimal price = 10.0M, int servers = 1, bool vegan = false) { }

    static void Main()
    {
        Entrée("Linuine Prime", 10.25M, 2, true); // overrides all default values

        Entrée("Lover", 11.5M, 2);                // vegan keeps its default

        Entrée("Spaghetti", 8.5M);                // servers and vegan keep their defaults

        Entrée("Baked Ziu");                      // price, servers and vegan keep their defaults
    }


    Named parameters: bind values to parameters by name, e.g. using Microsoft.Office.Tools.Word;

    Document doc;

    object fileName = "MyDoc.docx";

    object missing = System.Reflection.Missing.Value;

    doc.SaveAs(ref fileName, ref missing, ref missing, … ref embeddedTTFS, …);

    Now it can be written as doc.SaveAs(FileName: ref fileName, EmbeddedTTFS: ref embedTTFS);

    the method invocation contains only the parameters that are mentioned, and the other, missing parameters take their default values.

    e.g. static void Thing(string color = "white", string texture = "smooth", string shape = "square", string emotion = "calm", int quantity = 1) { }

    public static void Things()
    {
        Thing(texture: "Furry", shape: "triangular");
    }



    Benefits: no longer creating overloads simply for the convenience of omitting a parameter.

    Office Automation COM interop uses optional parameters heavily.

    No longer have to scorn the VB language for having these features.

    The mapping of arguments to parameters follows the principle of least surprise.

    Liabilities: optional parameters complicate overload resolution.

    Events in C# 4.0 

    Syntax for events :

    public event EventHandler<TickEventArgs> Tick;

    public void OnTick(TickEventArgs e) { Tick(this, e); }

    public class TickEventArgs : EventArgs
    {
        public string Symbol { get; private set; }

        public decimal Price { get; private set; }

        public TickEventArgs(string symbol, decimal price)
        {
            Symbol = symbol;

            Price = price;
        }
    }

    In C# 4.0, event accessors are implemented with a lock-free compare-and-swap technique.

    Events now work for both static and instance members, and for reference and value types.

    Covariance and ContraVariance:

    Covariance: the out modifier on a generic interface or delegate, e.g. IEnumerable<out T>.

    The type parameter T can only occur in output positions; if it is used in an input position the compiler reports an error. In return, an argument with a more derived type parameter can be supplied.

    An enumeration of giraffes is also an enumeration of animals.

    Contravariance: the in modifier on a generic interface or delegate, e.g. IComparable<in T>.

    The type parameter T can only occur in input positions, and the compiler will generate contravariant conversions. It means an argument with a less derived type parameter can be supplied.

    So variance can be used for comparison and enumeration of collections in type safe manner.
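Both directions in one sketch (using IComparer<in T>, which carries the same in modifier):

```csharp
using System;
using System.Collections.Generic;

// Covariance (out T): a sequence of a more derived type can stand in
// for a sequence of its base type.
IEnumerable<string> strings = new[] { "giraffe", "lion" };
IEnumerable<object> animals = strings;            // legal in C# 4.0
Console.WriteLine(string.Join(",", animals));     // giraffe,lion

// Contravariance (in T): a comparer of the base type can stand in
// for a comparer of a more derived type.
IComparer<object> objComparer = Comparer<object>.Default;
IComparer<string> strComparer = objComparer;      // legal in C# 4.0
Console.WriteLine(strComparer.Compare("apple", "banana") < 0);  // True
```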

    AutoProperties in C# 

    Auto-properties (introduced in C# 3.0) let the developer declare properties whose accessor and mutator methods are generated by the compiler by default. For e.g.

    public class Pt
    {
        public int X { get; set; }

        public int Y { get; set; }
    }

    and the compiler generates the backing field, which is inaccessible.

    This type of property is now known as Auto properties.

    Implicitly typed local variables : These variables can occur

    1 inside foreach.
    2 Initialization of for
    3 Using statement
    4 Local variable declaration

    Initializers specify values for fields and properties in a single statement.

    var p1 = new Point { X = 1, Y = 2 };

    var p2 = new Point(1) { Y = 2 };

    Collection Initializers:

    The class should have a public Add method taking one key parameter and one value parameter; then we can use collection initializers as follows:

    public class Dictionary<TKey, TValue> : IEnumerable
    {
        public void Add(TKey key, TValue value) { … }
    }

    var namedCircles = new Dictionary<string, Circle>
    {
        { "aa", new Circle { Origin = new Pt { X = 1, Y = 2 }, Radius = 2 } },
        { "ab", new Circle { Origin = new Pt { X = 2, Y = 5 }, Radius = 3 } }
    };

    Lambda in C#

    An anonymous method is a delegate whose code is inlined as a block.

    A lambda is a functional, declarative syntax for writing an anonymous method as a single expression.

    A lambda uses the operator “=>”, read as “goes to”.

    delegate int SomeDelegate(int i);

    SomeDelegate squareint = x => x * x;

    int j = squareint(5);               // 25
    (x, y) => x == y;                   // types inferred
    (int x, string s) => s.Length > x;  // types declared
    () => Console.WriteLine("Hi");      // no args

    Statement lambda, e.g.

    delegate void AnotherDelegate(string s);

    AnotherDelegate Hello = a =>
    {
        string w = String.Format("Hello, {0}", a);

        Console.WriteLine(w);
    };

    Hello("world");   // prints "Hello, world"

    Extension Methods :

    Extension methods are static methods that can be invoked using instance-method syntax. They are less discoverable than instance methods and offer less functionality (for example, no access to private state). An extension method is a static method whose first parameter carries the ‘this’ modifier.

    Using Extension Methods

    • Must define inside non generic static class
    • Extension methods are still external static methods
    • Cannot hide, replace or override instance methods
    • Must import namespace for extension method.

    System.Linq defines extension methods for IEnumerable<T> and IQueryable<T>
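A minimal sketch that follows all of the rules above (Capitalize is a made-up extension method):

```csharp
using System;

Console.WriteLine("hello world".Capitalize());  // Hello world

// Extension methods must live in a non-generic static class; the first
// parameter carries the 'this' modifier.
public static class StringExtensions
{
    public static string Capitalize(this string s) =>
        string.IsNullOrEmpty(s) ? s : char.ToUpper(s[0]) + s.Substring(1);
}
```

Note that the call site only works if the namespace containing StringExtensions is imported; the compiler then rewrites the instance-style call back into StringExtensions.Capitalize("hello world").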

    Shrinking delegates using a lambda expression: Func<int, int> sqr = x => x * x;

    What if the entries are not in memory? Then the lambda must be captured as an expression tree; for that we import System.Linq.Expressions.

    Lambda functions as delegates become opaque code; the alternative is to treat the lambda as the special type Expression<TDelegate>. Expression trees are used for runtime analysis.
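The contrast in a few lines:

```csharp
using System;
using System.Linq.Expressions;

// Func<T,R> compiles the lambda into opaque IL; Expression<TDelegate>
// preserves it as a data structure that can be inspected at runtime.
Func<int, int> sqr = x => x * x;
Expression<Func<int, int>> sqrExpr = x => x * x;

Console.WriteLine(sqr(5));                 // 25
Console.WriteLine(sqrExpr.Body);           // (x * x)
Console.WriteLine(sqrExpr.Compile()(5));   // 25
```

This is exactly how LINQ providers such as LINQ to SQL translate a where clause into SQL: they walk the expression tree instead of executing the delegate.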


    int[] digits = { 0, 1, 2, 3, 4, 5, 6 };

    int[] a = digits.Slice(4, 3).Double();

    is the same as the static syntax, i.e.

    int[] a = Extension.Double(Extension.Slice(digits, 4, 3));

    LINQ to XML

    Introduction: the W3C-compliant DOM (a.k.a. XmlDocument) and XmlReader & XmlWriter live in the System.Xml namespace; the LINQ to XML types live in System.Xml.Linq.

    What is DOM: declarations, element, attribute value and text content can be represented with a class, this tree of objects fully describe a document. This is called a document object model or DOM.

    The LINQ to XML DOM: XDocument, XElement and XAttribute. The X-DOM is LINQ-friendly: this means its types have methods that emit useful IEnumerable sequences upon which you can query, and its constructors are designed so that you can create an X-DOM tree through a LINQ projection.

    XDOM Overview:

    Types of Elements


    XObject is the root of the inheritance hierarchy.

    XElement and XDocument are the roots of the containership hierarchy.

    XObject is the abstract base class of XNode and XAttribute.

    XNode is the base class for nodes, which excludes attributes; a node is an item in an ordered collection of mixed types.


    Helloworld          → XText

    <subelement1/>      → XElement

    <!-- comment -->    → XComment

    <subelement2/>      → XElement

    XDocument is the root of an XML tree; it wraps the root XElement, adding an XDeclaration.

    Loading and Parsing: XElement and XDocument provide Load and Parse methods to build an X-DOM tree from an existing source.

    –          Load builds an X-DOM from a file, URI, Stream, TextReader or XmlReader

    –          Parse builds an X-DOM from a string

    –          An XNode is created using XNode.ReadFrom(XmlReader)

    –          XmlReader/XmlWriter read or write an XNode via CreateReader() or CreateWriter()

    Saving and Serializing: an X-DOM is saved with the Save method to a file or stream, or serialized via a TextWriter/XmlWriter.

    Instantiating an X-DOM uses the Add method of XContainer, for e.g.

    XElement lastName = new XElement("lastName", "Bloggs");

    lastName.Add(new XComment("nice name"));

    Functional Construction: XDOM supports Functional Construction (it is a mode of instantiation), you build an entire tree in a single expression.

    Automatic Deep Cloning: when a node that already has a parent is added to a second parent, the node is automatically deep-cloned. This automatic duplication keeps X-DOM object instantiation free of side effects.
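A short sketch of the behaviour (the element names are made up):

```csharp
using System;
using System.Xml.Linq;

var child = new XElement("address", "12 North St");
var parent1 = new XElement("customer1", child);
var parent2 = new XElement("customer2", child); // child already has a parent,
                                                // so a deep clone is attached here

// Mutating the original instance does not affect the clone:
child.Value = "changed";
Console.WriteLine(parent1.Element("address").Value);  // changed
Console.WriteLine(parent2.Element("address").Value);  // 12 North St
```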

    Navigating and Querying:

    XDOM returns single value or sequence implementing IEnumerable when a LINQ query is executed.

    FirstNode and LastNode return the first and last child node.

    Nodes() returns all children; Elements() returns the child nodes of type XElement.

    SelectMany Query

    Elements() is an extension method that implements IEnumerable<XContainer>

    Element() is the same as Elements().FirstOrDefault()

    Recursive functions: Descendants/DescendantNodes recursively return child elements/nodes.

    Parent Navigation: every XNode has a Parent property and AncestorXXX methods. A parent is always an XElement; to access the XDocument we use the Document property. The Ancestors method returns an XElement collection whose first element is the Parent.

    XElement customer =
        new XElement("Customer",
            new XAttribute("id", 12),
            new XElement("firstname", "joe"),
            new XElement("lastname", "Bloggs"),
            new XComment("nice name"));

    Advantage of Functional Construction is

    –          Code resembles the shape of the XML.

    –          It can be incorporated into the select clause of the LINQ query.

    Specific Content: XElement has an overload taking a params object array, public XElement(XName name, params object[] content); the XContainer decides how each content object is handled.


    Attribute Navigation: XAttribute define PreviousAttribute () and NextAttribute ().

    Updating an XDOM:

    Most convenient methods to update elements and attributes are as follows

    SetValue or reassign the value property

    SetElementValues /SetAttributeValue



    Add -> appends a child node

    AddFirst -> adds at the beginning of the collection

    RemoveAll -> {RemoveAttributes(), RemoveNodes()}

    ReplaceXXX -> removing and then adding

    AddBeforeSelf, AddAfterSelf, Remove and ReplaceWith are applied to collections.

    Remove() -> removes the current node from its parent

    ReplaceWith -> removes the node and then inserts other content at the same position

    E.g. remove all contacts that feature the comment “confidential” anywhere in their tree:

    contacts.Elements().Where(e => e.DescendantNodes()
        .OfType<XComment>()
        .Any(c => c.Value == "confidential")).Remove();

    Internally, Remove() copies the matches to a temporary list, enumerates that list, and then performs the deletions; this avoids errors from deleting and querying at the same time.

    The Value property of an XElement returns the content of that node.

    Setting Values: SetValue or assign the value property it accepts any simple data types

    Explicit casts on XElement & XAttribute

    All standard numeric types

    String, bool, DateTime, DateTimeOffset, TimeSpan & Guid Nullable<> versions of the aforementioned value types

    Casting to a nullable int avoids a NullReferenceException or add a predicate to the where clause

    For e.g. where cust.Attributes(“Credit”).Any() && (int)cust.Attribute

    Automatic XText Concatenation: if you specifically create XText nodes, you can end up with multiple text children:

    var e = new XElement("test", new XText("Hello"), new XText("World"));

    e.Value   // "HelloWorld"


    XDocument: it wraps a root XElement and adds an XDeclaration. It is based on XContainer, so it supports AddXXX, RemoveXXX and ReplaceXXX.

    XDocument can accept only limited content

    -a single XElement object (the ‘root’)

    -a single XDeclaration

    – a single XDocumentType object

    – Any number of XProcessing Instruction

    – Any number of XComment objects

    Simplest valid XDocument has just a root element

    var doc = new XDocument(new XElement("test", "data"));

    XDeclaration is not an XNode and does not appear in document Nodes collection.

    XElement & XDocument follow the below rules in emitting xml declarations:

    –          Calling save with a filename always writes a declaration

    –          Calling save with an XMLWriter writes a declaration unless XMLWriter is instructed otherwise

    –          ToString() never emits an XML declaration

    To produce XML without a declaration, set the XmlWriterSettings properties OmitXmlDeclaration and ConformanceLevel accordingly.
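A sketch of those settings in action:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Linq;

var settings = new XmlWriterSettings
{
    OmitXmlDeclaration = true,                    // suppress <?xml ...?>
    ConformanceLevel = ConformanceLevel.Fragment  // allow declaration-less output
};

var sb = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sb, settings))
    new XElement("test", "data").Save(w);

Console.WriteLine(sb.ToString());   // <test>data</test>
```") throw new Exception("omit-declaration demo failed");
</test>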

    The purpose of XDeclaration is to specify

    –          what text encoding to use

    –          what to put in the XML declaration’s encoding/standalone attributes

    XDeclaration Constructors parameters are

    1. Version
    2. Encoding
    3. Standalone

    var doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), new XElement("test", "data"));

    File.WriteAllText encodes using UTF-8.

    Namespaces in XML: a Customer element in the namespace OReilly.Nutshell.CSharp is defined as

    <customer xmlns="OReilly.Nutshell.CSharp"/>

    Attributes: to assign a namespace to attributes:

    <customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

        <lastname xsi:nil="true"/>

    </customer>

    Unambiguously, the xsi:nil attribute informs us that lastname is nil.

    Specifying namespaces in the X-DOM:

    1. var e = new XElement("{OReilly.Nutshell.CSharp}customer", "Bloggs");
    2. Use the XNamespace and XName types

    public sealed class XNamespace
    {
        public string NamespaceName { get; }
    }

    public sealed class XName
    {
        public string LocalName { get; }

        public XNamespace Namespace { get; }
    }


    Both types define implicit casts from string, so the following is legal:

    XNamespace ns = "OReilly.Nutshell.CSharp";

    XName localName = "customer";

    XName fullName = "{OReilly.Nutshell.CSharp}customer";

    XNamespace also overloads the + operator, so ns + "customer" yields the full XName.

    With XElement, the namespace must be explicitly given on every element; children do not inherit it from the parent.

    XNamespace ns = "OReilly.Nutshell.CSharp";

    var data = new XElement(ns + "data",
        new XElement(ns + "customer", "Bloggs"),
        new XElement(ns + "purchase", "Bicycle"));

    which emits:

    <data xmlns="OReilly.Nutshell.CSharp">
      <customer>Bloggs</customer>
      <purchase>Bicycle</purchase>
    </data>


    For the nil attribute we write it as <lastname xsi:nil="true"/>

    Annotations: annotations are intended for your own private use and are treated as black boxes by the X-DOM. XObject adds and removes annotations with:

    public void AddAnnotation(object annotation)

    public void RemoveAnnotations<T>() where T : class

    The Annotation/Annotations methods retrieve a single match or a sequence of matches.

    The source can be anything over which LINQ can query such as

    -LINQ to SQL or Entity Framework queries

    -A local collection

    -Another X-DOM

    Regardless of the source, the strategy is the same in using LINQ to emit X-DOM

    For e.g. retrieve customers from a db into XML


    <customers>
      <customer id='1'>
        <name>sue</name>
        <buys>3</buys>
      </customer>
    </customers>




    We start by writing a functional construction expression for the X-DOM

    var customers = new XElement("customers",
        new XElement("customer", new XAttribute("id", 1),
            new XElement("name", "sue"),
            new XElement("buys", 3)));

    We then turn this into a projection and build a LINQ query around it.

    var customers = new XElement("customers",
        from c in dataContext.Customers
        select new XElement("customer", new XAttribute("id", c.ID),
            new XElement("name", c.Name),
            new XElement("buys", c.Purchases.Count)));

    IQueryable<T> is the interface used when enumerating a database query and executing the SQL statement. XStreamingElement is a cut-down version of XElement that applies deferred loading semantics to its child content. The queries passed into an XStreamingElement constructor are not enumerated until you call Save, ToString or WriteTo on the element; this avoids loading the whole X-DOM into memory at once.

    XStreamingElement doesn’t expose methods such as Elements or Attributes. XStreamingElement is not based on XObject.

    Concat operator preserves order so all elements/ nodes are arranged alphabetically.

    The System.Xml.* namespaces contain:

    • XmlReader and XmlWriter: high-performance, forward-only cursors over an XML stream

    • XmlDocument: the W3C-compliant DOM

    • XPathNavigator: the XPath query API

    • System.Xml.Linq: the LINQ-centric counterpart of XmlDocument

    • XmlConvert: a static class for parsing and formatting XML strings

    XmlReader is a high-performance class for reading an XML stream in a low-level, forward-only manner.

    XmlReader is instantiated using the static Create method:

    XmlReader rdr = XmlReader.Create(new System.IO.StringReader(myString));

    An XmlReaderSettings object controls parsing and validation options:

    XmlReaderSettings settings = new XmlReaderSettings();

    settings.IgnoreComments = true;

    settings.IgnoreProcessingInstructions = true;

    settings.IgnoreWhitespace = true;

    using (XmlReader reader = XmlReader.Create("customer.xml", settings)) …

    Set XmlReaderSettings.CloseInput to true to close the underlying stream when the reader is closed; the default value of CloseInput (and of CloseOutput on XmlWriterSettings) is false.

    The units of an XML stream are XML nodes; the reader traverses the stream in depth-first order. The Depth property returns the current depth of the cursor.

    The most primitive way to read is Read(); the first call positions the cursor at the first node.

    When Read() returns false, the cursor has moved past the last node. Attributes are not included in Read-based traversal.

    NodeType is of type XmlNodeType, an enum with the following members:

    None, XmlDeclaration, DocumentType, Document, DocumentFragment, Element, EndElement, Attribute, Text, CDATA, Comment, Whitespace, SignificantWhitespace, ProcessingInstruction, Entity, EndEntity, EntityReference, Notation

    String properties of Reader: Name & Value.

    switch (r.NodeType)
    {
        case XmlNodeType.XmlDeclaration:
            Console.WriteLine(r.Value);
            break;

        case XmlNodeType.DocumentType:
            Console.WriteLine(r.Name + " - " + r.Value);
            break;
    }

    An entity is like a macro; a CDATA is like a verbatim string(@”…”) in C#.

    Reading Elements: XmlReader provides several methods for reading the elements of an XML document, and throws an XmlException if any validation fails. XmlException has LineNumber and LinePosition properties.

    ReadStartElement() verifies that the current NodeType is Element and then calls Read().

    ReadEndElement() verifies that the current NodeType is EndElement and then calls Read().

    reader.ReadStartElement("firstName");



    ReadElementContentAsString reads a start element, a text node and an end element, returning the content as a string;

    similarly, ReadElementContentAsInt reads the element content as an int.

    MoveToContent() skips over all the fluff: XML declarations, whitespace, comments and processing instructions.

    For <customer/>, ReadEndElement throws an exception, because as far as the reader is concerned there is no separate end element.

    The workaround for this scenario is:

    bool isEmpty = reader.IsEmptyElement;

    reader.ReadStartElement("customer");

    if (!isEmpty) reader.ReadEndElement();

    The ReadElementXXX() methods handle both kinds of empty elements.

    ReadContentAsXXX parses a text node into type XXX using the XmlConvert class.

    The ReadElementContentAsXXX methods apply to element nodes rather than the text node enclosed by the element.

    ReadInnerXml returns an element and all its descendants; when used on an attribute it returns the value of the attribute.

    ReadOuterXml includes the element at the cursor position and all its descendants.

    ReadSubtree returns a proxy reader that provides a view over just the current element.

    ReadToDescendant moves the cursor to the start of the first descendant with the specified name/namespace.

    ReadToFollowing moves the cursor to the start of the first node with the specified name/namespace.

    ReadToNextSibling moves the cursor to the start of the first sibling node with the specified name/namespace.

    ReadString and ReadElementString are like ReadContentAsString and ReadElementContentAsString, except that these methods throw an exception if there is anything more than a single text node within the element.

    To make attribute traversal easy, the forward-only rule is relaxed: you can jump to any attribute by calling MoveToAttribute().

    MoveToElement() returns to the start element from any place within the attribute-node diversion.

    reader.MoveToAttribute("XXX") returns false if the specified attribute doesn’t exist.

    Namespaces and Prefixes:

    XmlReader provides two parallel systems for referring to element and attribute names:

    – Name

    – NamespaceURI and LocalName

    For <c:customer …>, Name is “c:customer”, so reader.ReadStartElement("c:customer"). The second system uses the two namespace-aware properties: LocalName is “customer” and NamespaceURI is the namespace bound to the prefix c.

    For example, reading logentry elements with XmlReader and converting each to an XElement:

    using (XmlReader r = XmlReader.Create("logfile.xml", settings))
    {
        while (r.Name == "logentry")
        {
            XElement logEntry = (XElement)XNode.ReadFrom(r);

            int id = (int)logEntry.Attribute("id");

            DateTime dt = (DateTime)logEntry.Element("date");

            string source = (string)logEntry.Element("source");
        }
    }




    By implementing as shown above, you can slot a XElement into a custom type’s ReadXML or WriteXML method without the caller ever knowing you’ve cheated. XElement collaborates with XmlReader to ensure that namespace are kept intact and prefixes are properly expanded. Using XMLWriter with XElement to write inner Elements into an XmlWriter. The following code writes 1 million logentry elements to an XML file using XElement without storing the whole thing in memory:

    using (XmlWriter w = XmlWriter.Create("log.xml"))
    {
        w.WriteStartElement("log");

        for (int i = 0; i < 1000000; i++)
        {
            XElement e = new XElement("logentry",
                new XAttribute("id", i),
                new XElement("source", "test"));

            e.WriteTo(w);
        }

        w.WriteEndElement();
    }

    Using XElement incurs minimal execution overhead.

    XMLDocument: It is an in memory representation of an XML document, Its object model and methods conform to a pattern defined by the W3C.

    The base type for all objects in an XmlDocument tree is XmlNode. The following types derive from XmlNode:

    XmlAttribute

    XmlDocument

    XmlDocumentFragment

    XmlEntity

    XmlNotation

    XmlLinkedNode

    XmlLinkedNode exposes NextSibling and PreviousSibling and is an abstract base for the following subtypes:

    XmlCharacterData

    XmlDeclaration

    XmlDocumentType

    XmlElement

    XmlEntityReference

    XmlProcessingInstruction

    Loading and Saving the XmlDocument: instantiate an XmlDocument and invoke Load() or LoadXml():

    –          Load accepts a filename, stream, TextReader or XmlReader

    –          LoadXml accepts a literal XML string.

    e.g. XmlDocument doc = new XmlDocument();

    doc.Load("customer.xml");

    Using the ParentNode property, you can ascend back up the tree:

    Console.WriteLine (doc.DocumentElement.ChildNodes[1].ParentNode.Name);

    The following properties also help traverse the document

    FirstChild LastChild NextSibling PreviousSibling

    XmlNode exposes an Attributes property for accessing attributes either by name or by ordinal position.

    Console.WriteLine (doc.DocumentElement.Attributes[“id”].Value);

    InnerText property represents the concatenation of all child text nodes

    Console.WriteLine (doc.DocumentElement.ChildNodes[1].ParentNode.InnerText);

    Console.WriteLine (doc.DocumentElement.ChildNodes[1].FirstChild.Value);

    Setting the InnerText property replaces all child nodes with a single text node for e.g.

    Wrong way => doc.DocumentElement.ChildNodes[0].InnerText = "Jo";

    Right way => doc.DocumentElement.ChildNodes[0].FirstChild.InnerText = "jo";

    The InnerXml property represents the XML fragment within the current node. Console.WriteLine (doc.DocumentElement.InnerXml);

    Output: <firstname>Jim</firstname><lastname>Bo</lastname>

    InnerXml throws an exception if the node type cannot have children.

    Creating and Manipulating Nodes

    1. Call one of the CreateXXX methods on XmlDocument.
    2. Add the new node into the tree by calling AppendChild, PrependChild, InsertBefore or InsertAfter on the desired parent node.

To remove a node, you invoke RemoveChild, ReplaceChild, or RemoveAll.
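Putting the two steps together, a minimal sketch (the element names and document shape here are illustrative, not from the original text):

```csharp
using System;
using System.Xml;

class CreateNodesDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<customers/>");

        // Step 1: create the new nodes via the CreateXXX methods.
        XmlElement customer = doc.CreateElement("customer");
        XmlElement firstname = doc.CreateElement("firstname");
        firstname.InnerText = "Jim";

        // Step 2: attach them to the tree with AppendChild.
        customer.AppendChild(firstname);
        doc.DocumentElement.AppendChild(customer);

        Console.WriteLine(doc.OuterXml);
        // <customers><customer><firstname>Jim</firstname></customer></customers>

        // Removal works analogously via RemoveChild on the parent.
        doc.DocumentElement.RemoveChild(customer);
    }
}
```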

Namespaces: CreateElement() and CreateAttribute() are overloaded to let you specify a namespace and prefix:

    CreateXXX(string name);

    CreateXXX(string name, string namespaceURI);

CreateXXX(string prefix, string localName, string namespaceURI);

E.g. XmlElement customer = doc.CreateElement("o", "customer", "");

XPath: Both the DOM and the XPath data model represent an XML document as a tree.

The XPath data model is purely data-centric, abstracting away the formatting aspects of XML text.

For example, CDATA sections are not required in the XPath data model.

Given an XML document, you can run XPath queries in code in the following ways:

–          Call one of the SelectXXX methods on an XmlDocument or XmlNode.

–          Spawn an XPathNavigator from either:

• An XmlDocument
• An XPathDocument

–          Call an XPathXXX extension method on an XNode.

The SelectXXX methods accept an XPath query string:

XmlNode n = doc.SelectSingleNode ("customers/customer[firstname='Jim']");

Console.WriteLine (n.InnerText); // JimBo

The SelectXXX methods delegate their implementation to XPathNavigator, which can be used directly over an XmlDocument or a read-only XPathDocument.

XElement e = x.XPathSelectElement("customers/customer[firstname='Jim']"); // x is an XDocument or XElement

The extension methods that work with XNode are CreateNavigator(), XPathEvaluate(), XPathSelectElement(), and XPathSelectElements().

Common XPath operators are as follows:

Operator | Description

/                              Children

//                            Recursively children (descendants)

.                               Current node

..                             Parent node

*                            Wildcard

@                            Attribute

[]                             Filter

:                               Namespace separator
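A sketch of how a few of these operators combine, using an inline customers document (the XML shape here is an assumption modeled on the section's examples):

```csharp
using System;
using System.Xml;

class XPathOperatorsDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(
            "<customers><customer id='1'><firstname>Jim</firstname>" +
            "<lastname>Bo</lastname></customer></customers>");

        // '/' walks child elements
        XmlNode jim = doc.SelectSingleNode("customers/customer/firstname");
        Console.WriteLine(jim.InnerText); // Jim

        // '//' searches recursively; '@' plus '[]' filters on an attribute
        XmlNode any = doc.SelectSingleNode("//customer[@id='1']");
        Console.WriteLine(any.Name); // customer
    }
}
```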

XPathNavigator: a cursor over the XPath data model representation of an XML document. It is loaded with primitive methods that move the cursor around the tree.

The XPathNavigator Select*() methods take XPath query strings and perform more complex navigations, returning multiple nodes.

    E.g. XPathNavigator nav = doc.CreateNavigator();

XPathNavigator jim = nav.SelectSingleNode("customers/customer[firstname='Jim']");

    Console.WriteLine (jim.Value);

The SelectSingleNode method returns a single XPathNavigator. The Select method returns an XPathNodeIterator, which iterates over multiple XPathNavigators.

XPathNavigator nav = doc.CreateNavigator();

string xPath = "customers/customer/firstname/text()";

foreach (XPathNavigator node in nav.Select(xPath))

Console.WriteLine (node.Value);

For faster queries, compile the XPath into an XPathExpression and then pass it to a Select* method:

    XPathNavigator nav = doc.CreateNavigator ();

XPathExpression expr = nav.Compile ("customers/customer/firstname");

foreach (XPathNavigator a in nav.Select (expr))

Console.WriteLine (a.Value);

    Output: Jim Thomas.

    Querying with Namespace:

    XmlDocument doc = new XmlDocument ();


XmlNamespaceManager xnm = new XmlNamespaceManager (doc.NameTable);

    We can add prefix/namespace pairs to it as follows:

xnm.AddNamespace ("o", "");

The Select* methods on XmlDocument and XPathNavigator have overloads that accept an XmlNamespaceManager:

XmlNode n = doc.SelectSingleNode ("o:customers/o:customer", xnm);

XPathDocument: An XPathNavigator backed by an XPathDocument is faster than one backed by an XmlDocument, but it cannot make changes to the underlying document:

XPathDocument doc = new XPathDocument ("customers.xml");

    XPathNavigator nav = doc.CreateNavigator ();

foreach (XPathNavigator a in nav.Select ("customers/customer/firstname"))

    Console.WriteLine (a.Value);

XSD and Schema Validation: XML files in a given domain usually conform to a pattern (schema) that standardizes and automates their interpretation and validation. The most widely used schema language is XSD (XML Schema Definition), which is supported in System.Xml.

Performing Schema Validation: You can validate an XML file against one or more schemas before processing it. The validation is done for the following reasons:

    –          You can get away with less error checking and exception handling.

–          Schema validation picks up errors you might otherwise overlook.

    –          Error messages are detailed and informative.

When a document is loaded through an XmlReader whose settings specify a schema, validation happens automatically:

XmlReaderSettings settings = new XmlReaderSettings();

settings.ValidationType = ValidationType.Schema;

settings.ValidationFlags |= XmlSchemaValidationFlags.ProcessInlineSchema;

using (XmlReader r = XmlReader.Create("customers.xml", settings))

If schema validation fails, an XmlSchemaValidationException is thrown:


try {

while (r.Read()) ;

} catch (XmlSchemaValidationException ex) {

Console.WriteLine (ex.Message);

}



If you want to report all errors in the document, you must handle the ValidationEventHandler event:

settings.ValidationEventHandler += ValidationHandler;

static void ValidationHandler (object sender, ValidationEventArgs e)
{
Console.WriteLine ("Error: " + e.Exception.Message);
}


The Exception property of ValidationEventArgs contains the XmlSchemaValidationException that would otherwise have been thrown. You can also validate an XDocument or XElement that's already in memory by calling extension methods in System.Xml.Schema. These methods accept an XmlSchemaSet and a validation handler:


XmlSchemaSet set = new XmlSchemaSet ();

set.Add (null, @"customers.xsd");

doc.Validate (set, (sender, args) => { errors.AppendLine (args.Exception.Message); });

    LINQ Queries:

LINQ is a set of language and framework features for constructing type-safe queries over in-memory collections and remote data sources. It enables us to query any collection implementing IEnumerable<T>. LINQ offers both compile-time and runtime error checking.

    The basic units of data in LINQ are sequences and elements. A sequence is any object that implements IEnumerable<T> and an element is each item in the sequence.

Query operators are methods that transform or project a sequence. The Enumerable class in System.Linq contains around 40 query operators, implemented as extension methods. These are called the standard query operators.

Queries over in-memory local objects are known as LINQ-to-Objects queries. LINQ also supports sequences implementing the IQueryable<T> interface, which are served by the standard query operators in the Queryable class.

A query is an expression that transforms sequences with query operators, e.g.

string[] names = { "Tom", "Dick", "Harry" };

IEnumerable<string> filteredNames = names.Where(n => n.Length >= 4);

foreach (string name in filteredNames)

Console.WriteLine(name); // Dick, Harry
Most query operators accept a lambda expression as an argument. Here is the signature of the Where query operator:

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

C# also provides another syntax for writing queries, called query expression syntax: IEnumerable<string> filteredNames = from n in names where n.Contains("a") select n;

Chaining Query Operators: To build more complex queries, you append additional query operators to the expression, creating a chain. E.g. IEnumerable<string> query = names.Where(n => n.Contains("a")).OrderBy(n => n.Length).Select(n => n.ToUpper());



Where, OrderBy, and Select are standard query operators that resolve to extension methods in the Enumerable class.

Where operator: emits a filtered version of the input sequence.

OrderBy operator: emits a sorted version of the input sequence.

    Select operator: emits a sequence where each input element is transformed or projected with a given lambda expression.

The following are the signatures of the above three operators:

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)

public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector);

Without extension methods, the query loses its fluency, as shown below:

IEnumerable<string> query = Enumerable.Select(Enumerable.OrderBy(Enumerable.Where(names, n => n.Contains("a")), n => n.Length), n => n.ToUpper());

Whereas if we use extension methods, we get a natural linear shape reflecting the left-to-right flow of data, keeping each lambda expression alongside its query operator:

IEnumerable<string> query = names.Where (n => n.Contains ("a")).OrderBy (n => n.Length).Select (n => n.ToUpper ());

The purpose of the lambda expression depends on the particular query operator. A lambda expression returning a bool value is called a predicate. A lambda expression in a query operator always works on individual elements in the input sequence, not the sequence as a whole.

Lambda expressions and Func signatures: The standard query operators utilize generic Func delegates. Func is a family of general-purpose generic delegates in the System namespace, defined with the following intent: the type arguments in Func appear in the same order as they do in the lambda expression. Hence Func<TSource, bool> matches TSource => bool, and Func<TSource, TResult> matches TSource => TResult.

    The standard query operators use the following generic type names

TSource                Element type for the input sequence

TResult                 Element type for the output sequence, if different from TSource.

TKey                      Element type for the key used in sorting, grouping, or joining.

TSource is determined by the input sequence; TResult and TKey are inferred from your lambda expression. Func<TSource, TResult> corresponds to a TSource => TResult lambda expression. Because TSource and TResult are different types, the lambda expression can change the type of each element; further, the lambda expression determines the output sequence type.

The Where query operator is simpler and requires no type inference for the output, because the operator merely filters elements; it does not transform them.

The OrderBy query operator has a key selector of type Func<TSource, TKey>, which maps an input element to a sorting key. TKey is inferred from the lambda expression and is separate from the input and output element types.

Query operators in the Enumerable class accept delegates, which are compiled to ordinary IL code. Query operators in the Queryable class accept lambda expressions wrapped in Expression<TDelegate>, causing the compiler to emit expression trees instead.

Natural Ordering: the original ordering of elements in the input sequence is significant in LINQ. Operators such as Where and Select preserve the original ordering of the input sequence; LINQ preserves ordering wherever possible.

Some operators return a single element rather than a sequence:

int[] numbers = { 10, 9, 8, 7, 6 };

int firstNumber = numbers.First();

int lastNumber = numbers.Last();

int secondNumber = numbers.ElementAt(1);

int lowestNumber = numbers.OrderBy(n => n).First();

The aggregation operators return a scalar value:

int count = numbers.Count();

int min = numbers.Min();

The quantifiers return a bool value:

bool hasTheNumberNine = numbers.Contains(9);

bool hasMoreThanZeroElements = numbers.Any();

bool hasAnOddElement = numbers.Any(n => n % 2 == 1);

Some query operators accept two input sequences, e.g.

int[] seq1 = { 1, 2, 3 }; int[] seq2 = { 3, 4, 5 };

IEnumerable<int> concat = seq1.Concat(seq2);

IEnumerable<int> union = seq1.Union(seq2);
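The difference between the two shows up in the output; a quick sketch:

```csharp
using System;
using System.Linq;

class ConcatVsUnion
{
    static void Main()
    {
        int[] seq1 = { 1, 2, 3 };
        int[] seq2 = { 3, 4, 5 };

        // Concat keeps duplicates; Union removes them
        Console.WriteLine(string.Join(",", seq1.Concat(seq2))); // 1,2,3,3,4,5
        Console.WriteLine(string.Join(",", seq1.Union(seq2)));  // 1,2,3,4,5
    }
}
```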

C# provides a syntactic shortcut for writing LINQ queries called query expressions. A query expression always starts with a from clause and ends with either a select or group clause. The from clause declares a range variable, which traverses the input sequence.

e.g. IEnumerable<string> query = from n in names where n.Contains("a") orderby n.Length select n.ToUpper();

Range Variables: The identifier immediately following the from keyword is called the range variable; it refers to the current element in the sequence.

Query expressions also let you introduce new range variables via the following clauses: let, into, and an additional from clause.

    Query Syntax vs Fluent Syntax

    Query syntax is simpler for queries that involve any of the following

    1. A let clause for introducing a new variable alongside the range variable.
    2. SelectMany, Join or GroupJoin, followed by an outer range variable reference.

Finally, there are many operators that have no keyword in query syntax; these require fluent syntax. This means any operator outside of the following: Where, Select, SelectMany, OrderBy, ThenBy, OrderByDescending, ThenByDescending, GroupBy, Join, GroupJoin.

    Mixed Syntax Queries: If a query operator has no query syntax support you can mix query syntax and fluent syntax. The only constraint is that each query syntax component must be complete.
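For instance, Count has no query-syntax keyword, so a mixed query wraps a complete query expression and applies the operator fluently (reusing the names array from the earlier examples):

```csharp
using System;
using System.Linq;

class MixedSyntaxDemo
{
    static void Main()
    {
        string[] names = { "Tom", "Dick", "Harry" };

        // The parenthesized part is a complete query expression;
        // Count is then applied in fluent syntax.
        int matches = (from n in names where n.Contains("a") select n).Count();

        Console.WriteLine(matches); // 1 (only "Harry" contains an 'a')
    }
}
```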

    Deferred Execution: An important feature of most query operators is that they execute not when constructed but when enumerated.

e.g. IEnumerable<int> query = numbers.Select(n => n * 10);

foreach (int n in query)

Console.Write(n + "/"); // 10/20/

    All standard query operators provide deferred execution with the following exceptions:

    –          Operators that return a single element or scalar value such as First or Count

–          The conversion operators ToArray, ToList, ToDictionary, and ToLookup cause immediate query execution because their result types have no mechanism for providing deferred execution.

Deferred execution is important because it decouples query construction from query execution. This allows you to construct a query in several steps, and also makes database queries possible.

A deferred execution query is reevaluated when you re-enumerate. For example (assuming numbers is a List<int> so that it can be cleared between enumerations):

List<int> numbers = new List<int> { 1, 2 };

IEnumerable<int> query = numbers.Select(n => n * 10);

foreach (int n in query) Console.Write(n + "/"); // o/p = 10/20/

numbers.Clear();

foreach (int n in query) Console.Write(n + "/"); // o/p = nothing

    There are a couple of disadvantages:

    Sometimes you want to freeze or cache the results at a certain point in time.

    Some queries are computationally intensive so you don’t want to unnecessarily repeat them.
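In those cases you can force immediate execution with a conversion operator; a sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FreezeResultsDemo
{
    static void Main()
    {
        List<int> numbers = new List<int> { 1, 2 };

        // ToList executes the query now and caches the results
        List<int> frozen = numbers.Select(n => n * 10).ToList();

        numbers.Clear(); // later changes no longer affect the cached results

        foreach (int n in frozen) Console.Write(n + "/"); // 10/20/
    }
}
```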

Captured variables: If a query's lambda expressions reference local variables, those variables are subject to captured-variable semantics. This means that if you later change their value, the query changes as well.

int[] numbers = { 1, 2 };

int factor = 10;

IEnumerable<int> query = numbers.Select(n => n * factor);

factor = 20;

foreach (int n in query) Console.Write(n + "|"); // 20|40|

    A decorator sequence has no backing structure of its own to store elements. Instead it wraps another sequence that you supply at runtime to which it maintains a permanent dependency. Whenever you request data from a decorator, it in turn must request data from the wrapped input sequence.

Hence, when you call an operator such as Select or Where, you are doing nothing more than instantiating an enumerable class that decorates the input sequence.

Chaining query operators creates a layering of decorators. When you enumerate the query, you are querying the original array, transformed through a chain of decorators.
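Conceptually, the decorator that an operator like Where returns can be sketched with a C# iterator. This is an illustration of the idea, not the framework's actual implementation:

```csharp
using System;
using System.Collections.Generic;

static class WhereDecoratorSketch
{
    // Stores only the wrapped sequence and the predicate; no elements of its own.
    // Each enumeration pulls from the wrapped input sequence on demand.
    public static IEnumerable<T> MyWhere<T>(IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)      // request data from the wrapped sequence
            if (predicate(item))
                yield return item;      // emit only the matching elements
    }

    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };
        foreach (int n in MyWhere(numbers, x => x % 2 == 0))
            Console.Write(n + " "); // 2 4
    }
}
```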

Subqueries: A subquery is a query contained within another query's lambda expression. E.g. string[] musos = { "David", "Roger", "Rick" }; IEnumerable<string> query = musos.OrderBy(m => m.Split().Last());

m.Split() converts each string into a collection of words, upon which we then call the Last query operator. m.Split().Last() is the subquery; query references the outer query.

Subqueries are permitted because you can put any valid C# expression on the right-hand side of a lambda. In a query expression, a subquery amounts to a query referenced from an expression in any clause except the from clause.

A subquery is scoped to the enclosing expression and is able to reference the outer lambda argument (or range variable in a query expression). A subquery is executed whenever the enclosing lambda expression is evaluated. Local queries follow this model literally; interpreted queries follow it conceptually. The subquery executes as and when required to feed the outer query.

An exception is when the subquery is correlated, meaning that it references the outer range variable.

Subqueries are called indirectly: through a delegate in the case of a local query, or through an expression tree in the case of an interpreted query.

Composition Strategies: Three strategies for building more complex queries:

    –          Progressive query construction

    –          Using into keyword

    –          Wrapping queries

There are a couple of potential benefits, however, to building queries progressively:

    It can make queries easier to write

You can add query operators conditionally. For e.g.

if (includeFilter) query = query.Where (….);

This is more efficient than

query = query.Where (n => !includeFilter || expressions) because it avoids adding an extra query operator if includeFilter is false. A progressive approach is often useful in query comprehensions. In fluent syntax, we could write this query as a single expression:

IEnumerable<string> query = names.Select(n => n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")).Where(n => n.Length > 2).OrderBy(n => n);


    We can rewrite the query in progressive manner as follows

IEnumerable<string> query = from n in names

select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "");

query = from n in query where n.Length > 2 orderby n select n;


The into keyword: The into keyword lets you continue a query after a projection and is a shortcut for querying progressively. With into, we can rewrite the preceding query as:

IEnumerable<string> query = from n in names

select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "") into noVowel where noVowel.Length > 2 orderby noVowel select noVowel;

The only place you can use into is after a select or group clause. "into" restarts the query, allowing you to introduce fresh where, orderby, and select clauses.

Scoping rules: All query variables are out of scope following an into keyword. The following will not compile:

var query = from n1 in names select n1.ToUpper() into n2 where n1.Contains("x") select n2;

Here n1 is not in scope, so the above statement is illegal.

To see why, consider how it translates to fluent syntax:

var query = names.Select(n1 => n1.ToUpper()).Where(n2 => n1.Contains("x")); // illegal: n1 is out of scope


Wrapping queries: A query built progressively can be formulated into a single statement by wrapping one query around another. In general terms:

var tempQuery = tempQueryExprn;

var finalQuery = from … in tempQuery

can be reformulated as:

var finalQuery = from … in (tempQueryExprn)

    Reformulated in wrapped form, it’s the following

IEnumerable<string> query = from n1 in (

from n2 in names

select n2.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", ""))

where n1.Length > 2 orderby n1 select n1;

Projection Strategies: All our select clauses so far have projected scalar element types. With C# object initializers, you can project into complex types. For example, we can write the following class to assist:

class TempProjectionItem
{

public string Original;

public string Vowelless;

}

    And then project into it with object initializers:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<TempProjectionItem> temp = from n in names select new TempProjectionItem {

Original = n,

Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")

};

The result is of type IEnumerable<TempProjectionItem>, which we can subsequently query:

IEnumerable<string> query = from item in temp where item.Vowelless.Length > 2 select item.Original;

Anonymous types give the same result as the previous example, but without needing to write a one-off class. The compiler does the job instead, writing a temporary class with fields that match the structure of our projection. This means, however, that the intermediate query has an anonymous type.


We can write the whole query more succinctly with the var keyword:

var query = from n in names

select new

{

Original = n,

Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")

} into temp where temp.Vowelless.Length > 2 select temp.Original;

The let keyword: introduces a new variable alongside the range variable. With let, we can write the query as follows:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<string> query = from n in names

let vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")

where vowelless.Length > 2 orderby vowelless select n;

The compiler resolves a let clause by projecting into a temporary anonymous type that contains both the range variable and the new expression variable.

    Let accomplishes two things:

    –          It projects new elements alongside existing elements

    –          It allows an expression to be used repeatedly in a query without being rewritten.

The let approach is particularly advantageous in this example because it allows the select clause to project either the original name (n) or its vowel-removed version (vowelless).

You can have any number of let statements, and a let statement can reference variables introduced in earlier let statements. let reprojects all existing variables transparently.
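A sketch with two let clauses, the second referencing the first (the names array is reused from the section's examples):

```csharp
using System;
using System.Linq;

class MultipleLetsDemo
{
    static void Main()
    {
        string[] names = { "Tom", "Dick", "Harry" };

        var query = from n in names
                    let upper = n.ToUpper()
                    let len = upper.Length   // references the earlier let variable
                    where len > 3
                    select upper + ":" + len;

        foreach (string s in query) Console.WriteLine(s);
        // DICK:4
        // HARRY:5
    }
}
```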

Interpreted Queries: LINQ provides two parallel architectures: local queries for local object collections, and interpreted queries for remote data sources. Local queries resolve to query operators in the Enumerable class, which in turn resolve to chains of decorator sequences. The delegates that they accept, whether expressed in query syntax, fluent syntax, or traditional delegates, are compiled fully to IL code.

By contrast, interpreted queries are descriptive. They operate over sequences that implement IQueryable<T>, and they resolve to the query operators in the Queryable class, which emit expression trees that are interpreted at runtime.

There are two IQueryable<T> implementations in the .NET Framework: LINQ to SQL and Entity Framework. Suppose we create the following table in SQL Server:
create table Customer
(

ID int not null primary key,

Name varchar(30)

)

insert Customer values (1, 'Tom')

insert Customer values (2, 'Dick')

insert Customer values (3, 'Harry')

insert Customer values (4, 'Mary')

insert Customer values (5, 'Jay')

We can write an interpreted query to retrieve customers whose name contains the letter "a" as follows:

using System;

using System.Linq;

using System.Data.Linq;

using System.Data.Linq.Mapping;

[Table] public class Customer

{

[Column(IsPrimaryKey = true)] public int ID;

[Column] public string Name;

}

class Test

{

static void Main()

{

DataContext dataContext = new DataContext("connection string");

Table<Customer> customers = dataContext.GetTable<Customer>();

IQueryable<string> query = from c in customers where c.Name.Contains("a") orderby c.Name.Length select c.Name.ToUpper();

foreach (string name in query) Console.WriteLine(name);

}

}


The SQL that LINQ to SQL generates would be as follows:

SELECT UPPER([t0].[Name]) AS [value] FROM [Customer] AS [t0] WHERE [t0].[Name] LIKE @p0 ORDER BY LEN([t0].[Name])

Here, customers is of type Table<T>, which implements IQueryable<T>. This means the compiler has a choice in resolving Where: it could call the extension method in Enumerable, or the following extension method in Queryable:

public static IQueryable<TSource> Where<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)

The compiler chooses Queryable.Where because its signature is a more specific match.

Queryable.Where accepts a predicate wrapped in an Expression<TDelegate> type. This instructs the compiler to translate the supplied lambda expression, in other words c => c.Name.Contains("a"), to an expression tree rather than a compiled delegate. An expression tree is an object model, based on the types in System.Linq.Expressions, that can be inspected at runtime.
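The two compiler targets can be seen side by side; a sketch using a minimal stand-in Customer class:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    class Customer { public string Name; }

    static void Main()
    {
        // Compiled to IL: an invokable delegate
        Func<Customer, bool> del = c => c.Name.Contains("a");
        Console.WriteLine(del(new Customer { Name = "Mary" })); // True

        // Compiled to an expression tree: data describing the lambda,
        // which a provider such as LINQ to SQL can translate at runtime
        Expression<Func<Customer, bool>> expr = c => c.Name.Contains("a");
        Console.WriteLine(expr.Body); // the tree is inspectable at runtime

        // An expression tree can also be compiled back into a delegate
        Func<Customer, bool> compiled = expr.Compile();
        Console.WriteLine(compiled(new Customer { Name = "Tom" })); // False
    }
}
```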

When you enumerate an interpreted query, the outermost sequence runs a program that traverses the entire expression tree, processing it as a unit. In our example, LINQ to SQL translates the expression tree to a SQL statement, which it then executes, yielding the results as a sequence.

    A query can include both interpreted and local operators. A typical pattern is to have the local operators on the outside and the interpreted components on the inside; this pattern works well with LINQ-to-DB queries.

AsEnumerable: Enumerable.AsEnumerable is the simplest of all query operators. Here is its complete definition:

public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)

{ return source; }

Its purpose is to cast an IQueryable<T> sequence to IEnumerable<T>, forcing subsequent query operators to bind to Enumerable operators instead of Queryable operators. This causes the remainder of the query to execute locally.


Regex wordCounter = new Regex(@"\b(\w|-)+\b");

var query = dataContext.MedicalArticles

.Where(article => article.Topic == "influenza")

.AsEnumerable()

.Where(article => wordCounter.Matches(article.Abstract).Count < 100);

An alternative to calling AsEnumerable is to call ToArray or ToList. The advantage of AsEnumerable is that it maintains deferred execution.