Category: .NET

Packaging .NET Applications

.NET Application Packaging, Deployment and Configuration

Deployment and Packaging of .NET Assemblies and Applications

Today, applications are created using types developed by Microsoft or custom built by you. If these types are developed using any language that targets the common language runtime (CLR), they can all work together seamlessly; that is, types created using different .NET languages can interact with one another.

.NET Framework Deployment Objectives:

All applications use DLLs from Microsoft or other vendors. Because an application executes code from various vendors, the developer of any one piece of code can’t be 100 percent sure how someone else is going to use it, which makes this kind of interaction potentially unsafe. End users come across this scenario quite often when one company updates its part of the code and ships it to all of its users. Such updated code should be backward-compatible with the previous version, since it is impossible to retest and debug all of the already-shipped applications to ensure that the changes will have no undesirable effect.

When installing a new application, you sometimes discover that it has somehow corrupted an already-installed application. This predicament is known as “DLL hell”. The end result is that users have to carefully consider whether to install new software on their machines.

The problem with this is that the application isn’t isolated as a single entity. You can’t easily back up the application, since you must copy the application’s files and also the relevant parts of the registry; to restore the application, you must run the installation program again so that all files and registry settings are set properly. Finally, you can’t easily uninstall or remove the application without the nasty feeling that some part of it is still lurking on your machine.

When applications are installed, they come with all kinds of files from different companies. This code can perform any operation, including deleting files or sending e-mail. To make users comfortable, security must be built into the system so that users can explicitly allow or disallow code developed by various companies to access their system resources.

The .NET Framework addresses the DLL hell issue in a big way. For example, unlike COM, types no longer require settings in the registry. Unfortunately, applications still require shortcut links. As for security, the .NET Framework includes a security model called code access security. Whereas Windows security is based on a user’s identity, code access security is based on permissions that host applications loading components can control. As you’ll see, the .NET Framework enables users to control what gets installed and what runs, and in general to control their machines, more than Windows ever did.

Developing Modules with Types

Let’s start with an example, as shown below:

public sealed class Appln {
    public static void Main() {
        System.Console.WriteLine("Hello My world");
    }
}

This application defines a type called Appln. This type has a single public, static method called Main. Inside Main is a reference to another type called System.Console. System.Console is a type implemented by Microsoft, and the Intermediate Language (IL) code that implements this type’s methods is in the MSCorLib.dll file. To build the application, write the above source code into a C# file (Appln.cs) and then execute the following command line:

csc.exe /out:Appln.exe /t:exe /r:MSCorLib.dll Appln.cs

This command line tells the C# compiler to emit an executable file called Appln.exe (/out:Appln.exe). The type of file produced is a Win32 console application (/t[arget]:exe).

When the C# compiler processes the source file, it sees that the code references the System.Console type’s WriteLine method. At this point, the compiler wants to ensure that this type exists somewhere, that it has a WriteLine method, and that the argument being passed to this method matches the parameter the method expects. Since this type is not defined in the C# source code, to make the C# compiler happy, you must give it a set of assemblies that it can use to resolve references to external types. The command line above therefore includes the /r[eference]:MSCorLib.dll switch, which tells the compiler to look for external types in the assembly identified by the MSCorLib.dll file.

MSCorLib.dll is a special file in that it contains all of the core types: Byte, Char, String, Int32 and many more. In fact, these types are so frequently used that the C# compiler automatically references the MSCorLib.dll assembly, so the above command line can be shortened to

csc.exe /out:Appln.exe /t:exe Appln.cs

Further, you can drop the /out and /t:exe switches since both match the compiler’s defaults, so the command becomes

csc.exe Appln.cs

If, for some reason, you really don’t want the C# compiler to reference the MSCorLib.dll assembly, you can use the /nostdlib switch. Microsoft uses this switch when building the MSCorLib.dll assembly itself. For example, the following command will produce an error, since the above code references the System.Console type, which is defined in MSCorLib.dll:

csc.exe /out:Appln.exe /t:exe /nostdlib Appln.cs

The resulting PE file is built so that a machine running a 32-bit or 64-bit version of Windows should be able to load it and do something with it. Windows supports two types of applications, those with a console user interface (CUI) and those with a graphical user interface (GUI). Because I specified the /t:exe switch, the C# compiler produced a CUI application. You’d use the /t:winexe switch to cause the C# compiler to produce a GUI application.
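For instance, the same Appln.cs could be compiled as a GUI application (no console window is created when it runs) with a command line like the following:

csc.exe /out:Appln.exe /t:winexe Appln.cs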

Response Files

I’d like to spend a moment talking about response files. A response file is a text file that contains a set of compiler command-line switches. You instruct the compiler to use a response file by specifying its name on the command line, prefixed with an @ sign. For example, you can have a response file called myAppln.rsp that contains the following text:

/out:MyAppln.exe

/target:winexe

To cause CSC.exe to use these settings you’d invoke it as follows:

csc.exe @myAppln.rsp codeFile1.cs CodeFile2.cs

This tells the C# compiler what to name the output file and what kind of target to create. The C# compiler supports multiple response files. The compiler also looks in the directory containing the CSC.exe file for a global CSC.rsp file. Settings that you want applied to all of your projects should go in this file. The compiler aggregates and uses the settings in all of these response files. If you have conflicting settings in the local and global response files, the settings in the local file override the settings in the global file. Likewise, any settings explicitly passed on the command line override the settings taken from a local response file.
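For example, if myAppln.rsp contains /target:winexe but you also pass /target:exe explicitly, the command-line switch wins and a CUI executable is produced:

csc.exe @myAppln.rsp /target:exe codeFile1.cs CodeFile2.cs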

When you install the .NET Framework, it installs a default global CSC.rsp file in the %SystemRoot%\Microsoft.NET\Framework\vX.X.X directory (where X.X.X is the version of the .NET Framework you have installed). The 4.0 version of the file contains the following switches:

# This file contains command-line options that the C# compiler has to process
# during compilation, unless the “noconfig” option is specified.

# Reference the common Framework libraries

/r: Accessibility.dll

/r: Microsoft.CSharp.dll

/r: System.Configuration.Install.dll

/r: System.Core.dll

/r: System.Data.dll

/r: System.Data.DataSetExtensions.dll

/r: System.Data.Linq.dll

/r: System.Deployment.dll

/r: System.Device.dll

/r: System.DirectoryServices.dll

/r: System.dll

/r: System.Drawing.dll

/r: System.EnterpriseServices.dll

/r: System.Management.dll

/r: System.Messaging.dll

/r: System.Numerics.dll

/r: System.Runtime.Remoting.dll

/r: System.Runtime.Serialization.dll

/r: System.Runtime.Serialization.Formatters.Soap.dll

/r: System.Security.dll

/r: System.ServiceModel.dll

/r: System.ServiceProcess.dll

/r: System.Transactions.dll

/r: System.Web.Services.dll

/r: System.Windows.Forms.dll

/r: System.Xml.dll

/r: System.Xml.Linq.dll

Because the global CSC.rsp file references all of the assemblies listed, you do not need to explicitly reference them by using the C# compiler’s /reference switch. This response file is a big convenience for developers because it allows them to use types and namespaces defined in various Microsoft-published assemblies without having to specify a /reference compiler switch for each when compiling.

When you use the /reference compiler switch to reference an assembly, you can specify a complete path to a particular file. However, if you do not specify a path, the compiler will search for the file in the following places (in the order listed):

– The working directory.

– The directory that contains the CSC.exe file itself. MSCorLib.dll is always obtained from this directory. The path looks something like this: %SystemRoot%\Microsoft.NET\Framework\v4.0.#####

– Any directories specified using the /lib compiler switch.

– Any directories specified using the LIB environment variable.

You are welcome to add your own switches to the global CSC.rsp file if you want to make your life even easier, but this makes it more difficult to replicate the build environment on different machines: you have to remember to update the CSC.rsp file the same way on each build machine. Also, you can tell the compiler to ignore both the local and global CSC.rsp files by specifying the /noconfig command-line switch.
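For example, to compile the earlier Appln.cs while ignoring both response files (MSCorLib.dll is still referenced implicitly unless /nostdlib is also given), you could use:

csc.exe /noconfig Appln.cs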

A managed PE file has four main parts: the PE32(+) header, the CLR header, the metadata and the IL. The PE32(+) header is the standard information that Windows expects. The CLR header is a small block of information that is specific to modules that require the CLR (managed modules). The header includes the major and minor version number of the CLR that the module was built for, some flags, a MethodDef token (described later) indicating the module’s entry point method if this module is a CUI or GUI executable, and an optional strong-name digital signature. You can see the format of the CLR header by examining the IMAGE_COR20_HEADER structure defined in the CorHdr.h header file.
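If you have the Visual C++ build tools installed, one quick way to look at this header (a sketch, assuming dumpbin.exe is on your path) is:

dumpbin /clrheader Appln.exe

ILDasm.exe, discussed below, shows much of the same information in a friendlier form.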

The metadata is a block of binary data that consists of several tables. There are three categories of tables: definition tables, reference tables and manifest tables. The following table describes some of the more common definition tables that exist in a module’s metadata block.

Metadata Definition Tables
Table Name – Description
ModuleDef Always contains one entry that identifies the module. The entry includes the module’s filename and extension and a module version ID. This allows the file to be renamed while keeping a record of its original name.
TypeDef Contains one entry for each type defined in the module. Each entry includes the type’s name, base type and flags (public, private, etc.) and contains indexes to the methods it owns in the MethodDef table, the fields it owns in the FieldDef table, the properties it owns in the PropertyDef table, and the events it owns in the EventDef table.
MethodDef Contains one entry for each method defined in the module. Each entry includes the method’s name, flags (private, public, virtual, abstract, static, final, etc.), signature, and offset within the module where its IL code can be found. Each entry can also refer to a ParamDef table entry in which more information about the method’s parameters can be found.
FieldDef Contains one entry for each field defined in the module. Each entry includes flags (private, public, etc.), type and name.
ParamDef Contains one entry for each parameter defined in the module. Each entry includes flags (in, out, retval, etc.), type and name.
PropertyDef Contains one entry for each property defined in the module. Each entry includes flags, type and name.
EventDef Contains one entry for each event defined in the module. Each entry includes flags and name.

As the compiler compiles the source code, it creates an entry in one of the definition tables above for every definition in the source code. Metadata table entries are also created as the compiler detects the types, fields, methods, properties and events that the source code references. The metadata created includes a set of reference tables that keep a record of the referenced items. The table below lists some of the more common reference metadata tables.

Metadata Reference Tables
Table Name – Description
AssemblyRef Contains one entry for each assembly referenced by the module. Each entry includes the information necessary to bind to the assembly: the assembly’s name (without path and extension), version number, culture and public key token. Each entry also contains some flags and a hash value.
ModuleRef Contains one entry for each PE module that implements types referenced by this module. Each entry includes the module’s filename and extension. This table is used to bind to types that are implemented in different modules of the calling assembly.
TypeRef Contains one entry for each type referenced by the module. Each entry includes the type’s name and a reference to where the type can be found. If the type is implemented within another type, the reference will indicate a TypeRef entry. If the type is implemented in the same module, the reference will indicate a ModuleDef entry. If the type is implemented in another module within the calling assembly, the reference will indicate a ModuleRef entry. If the type is implemented in a different assembly, the reference will indicate an AssemblyRef entry.
MemberRef Contains one entry for each member referenced by the module. Each entry includes the member’s name and signature and points to the TypeRef entry for the type that defines the member.

There are several tools for examining the metadata; my personal favorite is ILDasm.exe, the IL Disassembler. To see the metadata tables, execute the following command line:

ILDasm Appln.exe

To see the metadata in a nice, human-readable form, select the View/MetaInfo/Show! menu item.

The important thing to remember is that Appln.exe contains a TypeDef whose name is Appln. This type identifies a public sealed class that is derived from System.Object (a type referenced from another assembly). The Appln type also defines two methods: Main and .ctor (a constructor).

Main is a public, static method whose code is IL. Main has a void return type and takes no arguments. The constructor method is public, and its code is also IL. The constructor has a void return type, takes no arguments, and has a this pointer, which refers to the object’s memory that is to be constructed when the method is called.

Combining Modules to Form an Assembly

An assembly is a collection of one or more files containing type definitions and resource files. One of the assembly’s files is chosen to hold a manifest. The manifest is another set of metadata tables that basically contain the names of the files that are part of the assembly. They also describe the assembly’s version, culture, publisher, publicly exported types and all of the files that comprise the assembly.

The CLR always loads the file that contains the manifest metadata tables first and then uses the manifest to get the names of the other files that are in the assembly. Here are some characteristics of assemblies that you should remember:

– An assembly defines the reusable types.

– An assembly is marked with a  version number.

– An assembly can have security information associated with it.

An assembly’s individual files don’t have these attributes – except for the file that contains the manifest metadata tables. To package, version, secure and use types, you must place them in modules that are part of an assembly.

One reason for multifile assemblies is that an assembly allows you to decouple the logical and physical notions of reusable types. For example, an assembly can consist of several files: you could put the frequently used types in one file and the less frequently used types in another file.

You configure an application to download assembly files by specifying a codeBase element in the application’s configuration file. The codeBase element identifies a URL pointing to where all of an assembly’s files can be found. When attempting to load an assembly’s file, the CLR obtains the codeBase element’s URL and checks the machine’s download cache to see if the file is present. If it is, the file is loaded. If the file isn’t in the cache, the CLR downloads the file into cache from the location the URL points to. If the file can’t be found, the CLR throws a FileNotFoundException exception at runtime.
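A minimal sketch of such a configuration entry is shown below; the assembly name, public key token and URL are hypothetical and would be replaced with your own values:

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <dependentAssembly>
            <assemblyIdentity name="SomeClassLibrary" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
            <codeBase version="1.0.0.0" href="http://www.example.com/SomeClassLibrary.dll" />
         </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>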

I’ve identified three reasons to use multifile assemblies:

– You can partition  your types among separate files, allowing for files to be incrementally downloaded as described in the Internet download scenario. Partitioning the types into separate files also allows for partial or piecemeal packaging and deployment for applications you purchase and install.

– You can add resource or data files to your assembly. For example, you could have a type that calculates some insurance information using an actuarial table. Instead of embedding the actuarial table in the source code, you could use a tool so that the data file is considered to be part of the assembly.

– You can create assemblies consisting of types implemented in different programming languages. To developers using the assembly, the assembly appears to contain just a bunch of types; developers won’t even know that different programming languages were used. By the way, if you prefer, you can run ILDasm.exe on each of the modules to obtain an IL source code file. Then you can run ILAsm.exe and pass it all of the IL source code files. ILAsm.exe will produce a single file containing all of the types. This technique requires your source code compilers to produce IL-only code.

Manifest Metadata Tables
Table Name – Description
AssemblyDef Contains a single entry if this module identifies an assembly. The entry includes the assembly’s name, version, culture, flags, hash algorithm, and the publisher’s public key.
FileDef Contains one entry for each PE and resource file that is part of the assembly. The entry includes the file’s name and extension, hash value and flags. If the assembly consists only of its own file, the FileDef table has no entries.
ManifestResourceDef Contains one entry for each resource that is part of the assembly. The entry includes the resource’s name, flags and an index into the FileDef table indicating the file that contains the resource. If the resource isn’t a stand-alone file, the resource is a stream contained within a PE file. For an embedded resource, the entry also includes an offset indicating the start of the resource stream within the PE file.
ExportedTypesDef Contains one entry for each public type exported from all of the assembly’s PE modules. The entry includes the type’s name, an index into the FileDef table and an index into the TypeDef table. To save file space, types exported from the file containing the manifest are not repeated in this table because the type information is available using the metadata’s TypeDef table.

The C# compiler produces an assembly when you specify any of the following command-line switches: /t[arget]:exe, /t[arget]:winexe or /t[arget]:library. All of these switches cause the compiler to generate a single PE file that contains the manifest metadata tables. The resulting file is either a CUI executable, a GUI executable or a DLL, respectively.

The C# compiler supports the /t[arget]:module switch. This switch tells the compiler to produce a PE file that doesn’t contain the manifest metadata tables. The PE file produced is always a DLL PE file, and this file must be added to an assembly before the CLR can access any types within it. When you use the /t:module switch, the C# compiler, by default, names the output file with an extension of .netmodule.

There are many ways to add a module to an assembly. If you are using the  C# compiler to build a PE file with a manifest, you can use the /addmodule switch. Let’s assume that we have two source code files:

– File1.cs which contains rarely used types

– File2.cs, which contains frequently used types

Let’s compile the rarely used types into their own module so that users of the assembly won’t need to deploy this module if they never access the rarely used types:

csc /t:module File1.cs

This line causes the C# compiler to create a File1.netmodule file. Next, let’s compile the frequently used types into their own module. Because this module will now represent the entire assembly, we change the name of the output file to myappln.dll instead of calling it File2.dll:

csc /out:File2.dll /t:library /addmodule:File1.netmodule File2.cs

This line tells the C# compiler to compile the File2.cs file to produce the myappln.dll file. Because /t:library is specified, a DLL PE file containing the manifest metadata tables is emitted into the myappln.dll file. The /addmodule:File1.netmodule switch tells the compiler that File1.netmodule is a file that should be considered part of the assembly. Specifically, the /addmodule switch tells the compiler to add the file to the FileDef manifest metadata table and to add File1.netmodule’s publicly exported types to the ExportedTypesDef manifest metadata table.

The two files shown below are created; myappln.dll contains the manifest.

File1.netmodule:
– IL compiled from File1.cs
– Metadata: types, methods and so on defined by File1.cs; types, methods and so on referenced by File1.cs

myappln.dll:
– IL compiled from File2.cs
– Metadata: types, methods and so on defined by File2.cs; types, methods and so on referenced by File2.cs
– Manifest: assembly files (self and File1.netmodule); public assembly types (self and File1.netmodule)

The File1.netmodule file contains the IL code generated by compiling File1.cs. This file also contains metadata tables that describe the types, methods, fields, properties, events and so on that are defined by File1.cs. The metadata tables also describe the types, methods and so on that are referenced by File1.cs. The myappln.dll is a separate file. Like File1.netmodule, this file includes the IL code generated by compiling File2.cs and also includes similar definition and reference metadata tables. However, myappln.dll contains the additional manifest metadata tables, making myappln.dll an assembly. The additional manifest metadata tables describe all of the files that make up the assembly. The manifest metadata tables also include all of the public types exported from myappln.dll and File1.netmodule.

Any client code that consumes the myappln.dll assembly’s types must be built using the /r[eference]:myappln.dll compiler switch. This switch tells the compiler to load the myappln.dll assembly and all of the files listed in its FileDef table when searching for an external type.
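For example, a hypothetical Client.cs that uses types from the assembly could be compiled like this:

csc /out:Client.exe /t:exe /r:myappln.dll Client.cs

Note that you reference only myappln.dll; File1.netmodule is found through the assembly’s manifest.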

The CLR loads assembly files only when a method referencing a type in an unloaded assembly  is called. This means that to run an application, all of the files from a referenced assembly do not need to be present.

Using the Assembly Linker

The AL.exe utility can produce an EXE or a DLL PE file that contains only a manifest describing the types in other modules. To understand how AL.exe works, let’s change the way the myappln.dll assembly is built:

csc /t:module File1.cs

csc /t:module File2.cs

al /out:myappln.dll /t:library File1.netmodule File2.netmodule

In this example, two separate modules, File1.netmodule and File2.netmodule, are created. Neither module is an assembly because they don’t contain manifest metadata tables. Then a third file is produced: myappln.dll which is a small DLL PE file that contains no IL code but has manifest metadata tables indicating that File1.netmodule and File2.netmodule are part of the assembly. The resulting assembly consists of the three files: myappln.dll, File1.netmodule and File2.netmodule. The assembly linker has no way to combine multiple files into a single file.

The AL.exe utility can also produce CUI and GUI PE files using the /t[arget]:exe or /t[arget]:winexe command line switches. You can specify which method in a module should be used as an entry point by adding the /main command-line switch when invoking AL.exe. The following is an example of how to call the Assembly Linker, AL.exe, by using the /main command-line switch.

csc /t:module /r:myappln.dll Program.cs

al /out:Program.exe /t:exe /main:Program.Main Program.netmodule

Here the first line builds the Program.cs file into a Program.netmodule file. The second line produces a small Program.exe PE file that contains the manifest metadata tables. In addition, there is a small global function named __EntryPoint that is emitted by AL.exe because of the /main:Program.Main command-line switch. This function, __EntryPoint, contains the following IL code:

.method privatescope static void __EntryPoint$PST06000001() cil managed

{

}

As you can see, this code simply calls the Main method contained in the Program type defined in the Program.netmodule file.

Adding Resource Files to an Assembly

When using AL.exe to create an assembly, you can add a file as a resource to the assembly by using the /embed[resource] switch. This switch takes a file and embeds the file’s contents into the resulting PE file. The manifest’s ManifestResourceDef table is updated to reflect the existence of the resource.

AL.exe also supports a /link[resource] switch, which also takes a file containing resources. However, the /link[resource] switch updates the manifest’s ManifestResourceDef and FileDef tables, indicating that the resource exists and identifying which of the assembly’s files contains it. The resource file is not embedded into the assembly PE file; it remains separate and must be packaged and deployed with the other assembly files.

The C# compiler’s /resource switch embeds the specified resource file into the resulting assembly PE file, updating the ManifestResourceDef table. The compiler’s /linkresource switch adds an entry to the ManifestResourceDef and the FileDef manifest tables to refer to a stand-alone resource file.
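For instance, assuming a hypothetical ActuarialTable.dat data file, the two switches would be used like this; the first command embeds the data in the DLL, and the second keeps it as a separate file listed in the manifest:

csc /out:myappln.dll /t:library /resource:ActuarialTable.dat File2.cs

csc /out:myappln.dll /t:library /linkresource:ActuarialTable.dat File2.cs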

You can also embed an unmanaged Win32 resource into an assembly by specifying the pathname of a .res file with the /win32res switch when using either AL.exe or CSC.exe. In addition, you can quickly and easily embed a standard Win32 icon resource into an assembly by specifying the pathname of the .ico file with the /win32icon switch when using either AL.exe or CSC.exe. Within Visual Studio, you can add resource files to your assembly by displaying your project’s properties and then clicking the Application tab.

Assembly Version Resource Information

When AL.exe or CSC.exe produces a PE file assembly, it also embeds a standard Win32 version resource into the PE file. Application code can acquire and examine this information at runtime by calling System.Diagnostics.FileVersionInfo’s static GetVersionInfo method.
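Here is a minimal sketch of reading that version resource at runtime; the path "MyAppln.dll" is just a placeholder for whatever assembly file you want to inspect:

using System;
using System.Diagnostics;

public static class ShowVersionInfo {
    public static void Main() {
        // Read the Win32 version resource embedded in the specified file.
        FileVersionInfo vi = FileVersionInfo.GetVersionInfo("MyAppln.dll");
        Console.WriteLine("FileVersion:    " + vi.FileVersion);
        Console.WriteLine("ProductVersion: " + vi.ProductVersion);
        Console.WriteLine("CompanyName:    " + vi.CompanyName);
    }
}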

Here’s what the code that produced the version information looks like

using System.Reflection;

// FileDescription version information:
[assembly: AssemblyTitle("MyAppln.dll")]

// CompanyName version information:
[assembly: AssemblyCompany("Wintellect")]

// ProductName version information:
[assembly: AssemblyProduct("Wintellect® Jeff's Type Library")]

// LegalCopyright version information:
[assembly: AssemblyCopyright("Copyright © Wintellect 2010")]

// LegalTrademarks version information:
[assembly: AssemblyTrademark("JeffTypes is a registered trademark of Wintellect")]

// AssemblyVersion version information:
[assembly: AssemblyVersion("3.0.0.0")]

// PRODUCTVERSION/ProductVersion version information:
[assembly: AssemblyInformationalVersion("2.0.0.0")]

// Set the Language field (discussed later in the "Culture" section)
[assembly: AssemblyCulture("")]

The table below shows the Version Resource Fields and Their Corresponding AL.exe Switches and Custom attributes

Version Resource AL.exe Switch Custom Attribute/Comment
FILEVERSION /fileversion System.Reflection.AssemblyFileVersionAttribute
PRODUCTVERSION /productversion System.Reflection.AssemblyInformationalVersionAttribute
FILEFLAGSMASK (none) Always set to VS_FFI_FILEFLAGSMASK
FILEFLAGS (none) Always 0
FILEOS (none) Currently always VOS__WINDOWS32
FILETYPE /target Set to VFT_APP if /target:exe or /target:winexe is specified; set to VFT_DLL if /target:library is specified
FILESUBTYPE (none) Always set to VFT2_UNKNOWN
AssemblyVersion /version System.Reflection.AssemblyVersionAttribute
Comments /description System.Reflection.AssemblyDescriptionAttribute
CompanyName /company System.Reflection.AssemblyCompanyAttribute
FileDescription /title System.Reflection.AssemblyTitleAttribute
FileVersion /version System.Reflection.AssemblyFileVersionAttribute
InternalName /out Set to the name of the output file specified (without the extension)
LegalCopyright /copyright System.Reflection.AssemblyCopyrightAttribute
LegalTrademarks /trademark System.Reflection.AssemblyTrademarkAttribute
OriginalFileName /out set to the name of the output file (without a path)
PrivateBuild (none) Always blank
ProductName /product System.Reflection.AssemblyProductAttribute
ProductVersion /productversion System.Reflection.AssemblyInformationalVersionAttribute
SpecialBuild (none) Always blank
  • AssemblyFileVersion This version number is stored in the Win32 version resource. This number is for information purposes only; the CLR doesn’t examine this version number in any way.
  • AssemblyInformationalVersion This version number is also stored in the Win32 version resource, and again, this number is for information purposes only.
  • AssemblyVersion This version is stored in the AssemblyDef manifest metadata table. The CLR uses this version number when binding to strongly named assemblies. This number is extremely important and is used to uniquely identify an assembly. When starting to develop an assembly, you should set the major, minor, build and revision numbers and shouldn’t change them until you’re ready to begin work on the next deployable version of your assembly. When you build an assembly, the version number of each referenced assembly is embedded in the AssemblyRef table’s entries. This means that an assembly is tightly bound to a specific version of a referenced assembly.

Simple Application Deployment

Assemblies don’t dictate or require any special means of packaging. The easiest way to package a set of assemblies is simply to copy all of the files directly. Because assemblies include all of the dependent assembly references and types, the user can just run the application, and the runtime will look for referenced assemblies in the application’s directory. No modifications to the registry are necessary for the application to run. To uninstall the application, just delete all the files.

You can use the options available on Visual Studio’s Publish tab to produce an MSI file; this installer can also install any prerequisite components, such as the .NET Framework or Microsoft SQL Server 2008 Express Edition. Finally, the application can automatically check for updates and install them on the user’s machine by taking advantage of ClickOnce technology.

Assemblies deployed to the same directory as the application are called privately deployed assemblies. Privately deployed assemblies can simply be copied to an application’s base directory, and the CLR will load them and execute the code in them. In addition, an application can be uninstalled by simply deleting the assemblies in its directory. This allows simple backup and restore as well.

This simple install/remove/uninstall scenario is possible because each assembly has metadata indicating which referenced assembly should be loaded; no registry settings are required. An application always binds to the same type it was built and tested with; the CLR can’t load a different assembly that just happens to provide a type with the same name.

Simple Administrative Control

To allow administrative control over an application a configuration file can be placed in the application’s directory. The setup program would then install this configuration file in the application’s base directory. The CLR interprets the content of this file to alter its policies for locating and loading assembly files.

Using a separate file allows the file to be easily backed up and also allows the administrator to copy the application to another machine – just copy the necessary files and the administrative policy is copied too.

For example, if the application’s assembly files are moved into a subdirectory of the application’s base directory, the CLR won’t be able to locate and load these files; running the application will cause a System.IO.FileNotFoundException exception to be thrown. To fix this, the publisher creates an XML configuration file and deploys it to the application base directory. The name of this file must be the name of the application’s main assembly file with a .config extension: Program.exe.config for this example. This configuration file should look like this:

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <probing privatePath="AuxFiles" />
      </assemblyBinding>
   </runtime>
</configuration>

Whenever the CLR attempts to locate an assembly file, it always looks in the application’s base directory first, and if it can’t find the file there, it looks in the AuxFiles subdirectory. You can specify multiple semicolon-delimited paths for the probing element’s privatePath attribute. Each path is considered relative to the application’s base directory. You can’t specify an absolute or a relative path identifying a directory that is outside of the application’s base directory.
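For example, assuming hypothetical AuxFiles and bin\subdir subdirectories under the application’s base directory, the element would look like this:

<probing privatePath="AuxFiles;bin\subdir" />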

The name and location of this XML configuration file is different depending on the application type

  • For executable applications (EXE), the configuration file must be in the application’s base directory, and it must be the name of the EXE file with “.config” appended to it.
  • For Microsoft ASP.NET Web Forms applications, the file must be in the Web application’s virtual root directory and is always named Web.config.

When you install the .NET Framework, it creates a Machine.config file. There is one Machine.config file per version of the CLR you have installed on the machine.

The Machine.config file is located in the following directory:

%SystemRoot%\Microsoft.NET\Framework\version\CONFIG

Of course, %SystemRoot% identifies your Windows directory (usually C:\WINDOWS), and version is a version number identifying a specific version of the .NET Framework. Settings in the Machine.config file represent default settings that affect all applications running on the machine. An administrator can create a machine-wide policy by modifying the single Machine.config file. However, administrators and users should avoid modifying this file because it affects every application on the machine. Plus, you usually want an application’s settings to be backed up and restored with the application, and keeping an application’s settings in the application-specific configuration file enables this.


C# Generics

  1. Introduction
  2. Infrastructure for Generics
  3. Generic Types and Inheritance
  4. Contravariant and Covariant Generic Types
  5. Verifiability and Constraints

C# Generics

Generics is a mechanism offered by the common language runtime (CLR) and programming languages that provides one more form of code reuse: algorithm reuse.

Microsoft’s design guidelines state that generic parameter variables should either be called T or at least start with an uppercase T. The uppercase T stands for type, just as I stands for interface, as in IEnumerable.

Generics provide the following big benefits to developers:

– Source code protection: The developer using a generic algorithm doesn’t need to have access to the algorithm’s source code.

– Type safety: When a generic algorithm is used with a specific type, the compiler and the CLR understand this and ensure that only objects compatible with the specified data type are used with the algorithm. Attempting to use an object of an incompatible type will result in either a compiler error or a runtime exception being thrown.

– Cleaner code: The code is easier to maintain; since the compiler enforces type safety, fewer casts are required in the code.

– Better performance: A generic algorithm can be created to work with a specific value type, so the CLR no longer has to do any boxing, and casts are unnecessary. In addition, the CLR doesn’t have to check the type safety of the generated code, and this enhances the performance of the algorithm.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

public static class MyApp {
    public static void Main() {
        ValueTypePerfTest();
        ReferenceTypePerfTest();
    }

    private static void ValueTypePerfTest() {
        const Int32 count = 10000000;
        using (new OperationTimer("List<Int32>")) {
            List<Int32> l = new List<Int32>(count);
            for (Int32 n = 0; n < count; n++) {
                l.Add(n);
                Int32 x = l[n];
            }
            l = null; // Make sure this gets GC'd
        }
        using (new OperationTimer("ArrayList of Int32")) {
            ArrayList a = new ArrayList();
            for (Int32 n = 0; n < count; n++) {
                a.Add(n);
                Int32 x = (Int32) a[n];
            }
            a = null; // Make sure this gets GC'd
        }
    }

    private static void ReferenceTypePerfTest() {
        const Int32 count = 10000000;
        using (new OperationTimer("List<String>")) {
            List<String> l = new List<String>();
            for (Int32 n = 0; n < count; n++) {
                l.Add("X");
                String x = l[n];
            }
            l = null; // Make sure this gets GC'd
        }
        using (new OperationTimer("ArrayList of String")) {
            ArrayList a = new ArrayList();
            for (Int32 n = 0; n < count; n++) {
                a.Add("X");
                String x = (String) a[n];
            }
            a = null; // Make sure this gets GC'd
        }
    }
}

// This class is useful for doing operation performance timing
internal sealed class OperationTimer : IDisposable {
    private Int64 m_startTime;
    private String m_text;
    private Int32 m_collectionCount;

    public OperationTimer(String text) {
        PrepareForOperation();
        m_text = text;
        m_collectionCount = GC.CollectionCount(0);
        // This should be the last statement in this
        // method to keep timing as accurate as possible
        m_startTime = Stopwatch.GetTimestamp();
    }

    public void Dispose() {
        Console.WriteLine("{0,6:###.00} seconds (GCs={1,3}) {2}",
            (Stopwatch.GetTimestamp() - m_startTime) /
            (Double) Stopwatch.Frequency, GC.CollectionCount(0) -
            m_collectionCount, m_text);
    }

    private static void PrepareForOperation() {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}

When I run this program

.20 seconds (GCs = 0) List<Int32>

3.30 seconds (GCs = 45) ArrayList of Int32

.50 seconds (GCs = 6) List<String>

.58 seconds (GCs = 6) ArrayList of String

The output here shows that using the generic List algorithm with the Int32 type is much faster than using the non-generic ArrayList algorithm with Int32. Also, using the value type Int32 with ArrayList causes a lot of boxing operations to occur, which results in 45 garbage collections, whereas the List<Int32> algorithm requires 0.

For the reference type (String) tests, the difference is much smaller, so it doesn’t appear that the generic List algorithm is of much benefit there. But it still gives cleaner code and compile-time type safety.

Generics inside FCL

Microsoft recommends that developers use the generic collection classes and now discourages the use of the non-generic collection classes for several reasons. First, with the non-generic collection classes you don’t get the type safety, cleaner code, and better performance that you get when you use the generic collection classes. Second, the generic classes have a better object model than the non-generic classes. For example, fewer methods are virtual, resulting in better performance, and new members have been added to the generic collections to provide new functionality.

The FCL ships with many generic interface definitions so that the benefits of generics can be realized when working with interfaces as well. The commonly used interfaces are contained in the System.Collections.Generic namespace.

Infrastructure for Generics

Microsoft had to provide the following for Generics to work properly.

– Create new IL instructions that are aware of type arguments

– Modify the format of existing metadata tables so that type names and methods with generic parameters could be expressed.

– Modify the various programming languages to support the new syntax, allowing developers to define and reference generic types and methods

– Modify the compilers to emit the new IL instructions and the modified metadata format.

– Modify the just-in-time(JIT) compiler to process the new type argument aware IL instructions that produce the correct native code.

– Create new reflection members so that developers can query types and members to determine whether they have generic parameters. New reflection members also had to be defined so that developers could create generic type and method definitions at runtime.

– Modify the debugger to show and manipulate generic types, members, fields and local variables.

– Modify the Microsoft VS Intellisense feature to show specific member prototypes when using a generic type or a method with a specific data type.

Open and Closed Types

The CLR creates an internal data structure for each and every type in use by an application. These data structures are called type objects. A type with generic type parameters is still considered a type, and the CLR will create an internal type object for each of these. This applies to reference types, value types, interface types and delegate types. A type with generic type parameters is called an open type, and the CLR doesn’t allow any instance of an open type to be constructed.

When code references a generic type it can specify a set of generic type arguments. If actual data types are passed in for all of the type arguments, the type is called a closed type, and the CLR does allow instances of a closed type to be constructed.

For example:

using System;
using System.Collections;
using System.Collections.Generic;

// A partially specified open type
internal sealed class DictionaryStringKey<TValue> : Dictionary<String, TValue> {
}

public static class MyApp {
    public static void Main() {
        Object o = null;

        // Dictionary<,> is an open type having 2 type parameters
        Type t = typeof(Dictionary<,>);
        // Try to create an instance of this type (fails)
        o = CreateInstance(t);
        Console.WriteLine();

        // DictionaryStringKey<> is an open type having 1 type parameter
        t = typeof(DictionaryStringKey<>);
        // Try to create an instance of this type (fails)
        o = CreateInstance(t);
        Console.WriteLine();

        // DictionaryStringKey<Guid> is a closed type
        t = typeof(DictionaryStringKey<Guid>);
        // Try to create an instance of this type (succeeds)
        o = CreateInstance(t);
        Console.WriteLine("Object type=" + o.GetType());
    }

    private static Object CreateInstance(Type t) {
        Object o = null;
        try {
            o = Activator.CreateInstance(t);
            Console.WriteLine("Created instance of {0}", t.ToString());
        }
        catch (ArgumentException e) {
            Console.WriteLine(e.Message);
        }
        return o;
    }
}

When we execute this code, we get the following output:

Cannot create an instance of System.Collections.Generic.Dictionary`2[TKey,TValue] because Type.ContainsGenericParameters is true.

Cannot create an instance of DictionaryStringKey`1[TValue] because Type.ContainsGenericParameters is true.

Created instance of DictionaryStringKey`1[System.Guid]
Object type=DictionaryStringKey`1[System.Guid]

In the output, we see that the type names end with a backtick (`) followed by a number. This number is the type’s arity, which indicates the number of type parameters required by the type.
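A small sketch that prints these CLR names directly:

using System;
using System.Collections.Generic;

public static class ArityDemo {
    public static void Main() {
        // The name of an open generic type ends with a backtick and its arity.
        Console.WriteLine(typeof(Dictionary<,>).Name);   // Dictionary`2
        Console.WriteLine(typeof(List<>).Name);          // List`1
    }
}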

Generic Types and Inheritance

A generic type is a type, and it can be derived from any other type. When you use a generic type and specify type arguments, you are defining a new type object in the CLR, and the new type object is derived from whatever type the generic type was derived from. That is, because List<T> is derived from Object, List<String> and List<Guid> are also derived from Object. Similarly, because DictionaryStringKey<TValue> is derived from Dictionary<String, TValue>, DictionaryStringKey<Guid> is also derived from Dictionary<String, Guid>. Consider the example below:

internal class Node {
    protected Node m_next;

    public Node(Node next) {
        m_next = next;
    }
}

internal sealed class TypedNode<T> : Node {
    public T m_data;

    public TypedNode(T data) : this(data, null) {
    }

    public TypedNode(T data, Node next) : base(next) {
        m_data = data;
    }

    public override String ToString() {
        return m_data.ToString() + ((m_next != null) ? m_next.ToString() : String.Empty);
    }
}

Now the main code will be as follows

private static void DifferentDataLinkedList() {
    Node head = new TypedNode<Char>(',');
    head = new TypedNode<DateTime>(DateTime.Now, head);
    head = new TypedNode<String>("Today is ", head);
    Console.WriteLine(head.ToString());
}

Generic Type Identity

C# does offer a way to use simplified syntax to refer to a generic closed type while not affecting type equivalence at all; you can use the good old using directive at the top of your source code file. Here is an example:

using DateTimeList = System.Collections.Generic.List<System.DateTime>;

This using directive is really just defining a symbol called DateTimeList. As the code compiles, the compiler substitutes all occurrences of DateTimeList with System.Collections.Generic.List<System.DateTime>. This just allows developers to use a simplified syntax without affecting the actual meaning of the code, and therefore, type identity and equivalence are maintained. So when the following line executes, sameType will be initialized to true:

Boolean sameType = (typeof(List<DateTime>) == typeof(DateTimeList));

Code Explosion

When a method that uses generic type parameters is JIT-compiled, the CLR takes the method IL, substitutes the specified type arguments, and then creates native code that is specific to that method operating on the specified data types. The CLR keeps generating the native code for every method/type combination. This is referred to as code explosion.

Fortunately, the CLR has some optimizations built into it to reduce code explosion. First, if a method is called for a particular type argument, and later the method is called again using the same type argument, the CLR will compile the code for this method/type combination just once. So if one assembly uses List<DateTime>, and a completely different assembly also uses List<DateTime>, the CLR will compile the methods for List<DateTime> just once; this reduces code explosion. The CLR has another optimization: the CLR considers all reference type arguments to be identical, so the code can be shared. For example, the code compiled by the CLR for List<String>’s methods can be used for List<Stream>’s methods, since String and Stream are both reference types. In fact, for any reference type, the same code will be used. But if the type argument is a value type, the CLR must produce native code specifically for that value type. The reason is that value types can vary in size.

Generic Interfaces

The CLR supports generic interfaces to avoid boxing and the loss of compile-time type safety. A reference or value type can implement a generic interface by specifying type arguments, or a type can implement a generic interface by leaving the type arguments unspecified.

Here is the definition of a generic interface in the System.Collections.Generic namespace that is part of the FCL:

public interface IEnumerator<T> : IDisposable, IEnumerator {

T Current { get; }

}

Here is an example of a type that implements this generic interface and specifies the type argument:

internal sealed class Triangle : IEnumerator<Point> {

private Point[] m_vertices;

// IEnumerator<Point>’s Current property is of type Point

public Point Current { get {….}}

…..

}

And here is a generic class that implements the generic interface, leaving the type argument unspecified:

internal sealed class ArrayEnumerator<T> : IEnumerator<T> {

private T[] m_array;

// IEnumerator<T>’s Current property is of type T

public T Current { get {…} }

….

}

Generic Delegates

The CLR supports generic delegates to ensure that any type of object can be passed to a callback method in a type-safe way. Furthermore, generic delegates allow a value type instance to be passed to a callback method without any boxing. A delegate is really just a class definition with four methods: a constructor, an Invoke method, a BeginInvoke method, and an EndInvoke method. When you define a delegate type that specifies type parameters, the compiler emits the delegate class’s methods, and the type parameters are applied to any methods having parameters/return values of the specified type parameter.

For example, if you define a generic delegate like this:

public delegate TReturn CallMe<TReturn, TKey, TValue>(TKey key, TValue value);

The compiler turns that into a class that logically looks like this:

public sealed class CallMe<TReturn, TKey, TValue> : MulticastDelegate {
    public CallMe(Object object, IntPtr method);
    public virtual TReturn Invoke(TKey key, TValue value);
    public virtual IAsyncResult BeginInvoke(TKey key, TValue value,
        AsyncCallback callback, Object object);
    public virtual TReturn EndInvoke(IAsyncResult result);
}

It is recommended that one should use the generic Action and Func delegates that come predefined in the Framework Class Library wherever possible.
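For example, instead of declaring a delegate such as CallMe above, the callback shape can often be expressed with the FCL’s Func and Action delegates directly (a sketch with arbitrary example types):

using System;

public static class DelegateDemo {
    public static void Main() {
        // Func<String, Int32, Boolean>: takes a String and an Int32, returns a Boolean.
        Func<String, Int32, Boolean> callMe = (key, value) => value > 0;
        Console.WriteLine(callMe("score", 42));   // True

        // Action<T> is the equivalent for callbacks that return void.
        Action<String> print = s => Console.WriteLine(s);
        print("Hello");
    }
}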

Contravariant and Covariant Generic Types

A variable of a generic delegate type can be cast to a variable of another generic delegate type whose generic type arguments differ, provided the type parameters are declared with the appropriate variance. A generic type parameter can be any of the following:

Invariant: A generic type parameter that cannot be changed.

Contravariant: A generic type parameter that can change from a class to a class derived from it. In C#, you indicate contravariant generic type parameters with the in keyword; they can appear only in input positions, such as a method’s parameters.

Covariant: A generic type parameter that can change from a class to one of its base classes. In C#, you indicate covariant generic type parameters with the out keyword; they can appear only in output positions, such as a method’s return type.

public delegate TResult Func<in T, out TResult>(T arg);

In this delegate, the generic type parameter T is marked with the in keyword, making it contravariant, and the generic type parameter TResult is marked with the out keyword, making it covariant.

If I have a variable declared as follows:

Func<Object, ArgumentException> fn1 = null;

I can cast it to another Func type, where the generic type parameters are different:

Func<String, Exception> fn2 = fn1; // no explicit cast is required here

Exception e = fn2(" ");

Here fn1 refers to a function that accepts an Object and returns an ArgumentException. The fn2 variable wants to refer to a method that takes a String and returns an Exception. Since you can pass a String to a method that wants an Object, and since you can take the result of a method that returns an ArgumentException and treat it as an Exception, the code above compiles and is known at compile time to preserve type safety.

Note: Variance is not possible for value types because boxing would be required. Also, variance is not allowed on a generic type parameter if an argument of that type is passed to a method using the out or ref keyword. For example, the C# compiler issues the following error for the declaration below:

Invalid variance: The type parameter ‘T’ must be invariantly valid on ‘SomeDelegate<T>.Invoke(ref T)’. ‘T’ is contravariant.

delegate void SomeDelegate<in T>(ref T t);

When using delegates that take generic arguments and return values, it is recommended to always specify the in and out keywords for contravariance and covariance whenever possible as doing this has no ill effects and enables your delegate to be used in more scenarios.

Here is an example of an interface with a contravariant generic type parameter:

public interface IEnumerator<out T> : IEnumerator {

Boolean MoveNext();

T Current { get; }

}

Since T is covariant, it is possible to have the following code compile and run successfully:

// This method accepts an IEnumerable of any reference type
Int32 Count(IEnumerable<Object> collection) { ... }

....

// The call below passes an IEnumerable<String> to Count
Int32 c = Count(new[] { "Grant" });

For this reason, the C# compiler team forces you to be explicit when declaring a generic type parameter. Then, if you attempt to use this type parameter in a context that doesn’t match how you declared it, the compiler issues an error letting you know that you are attempting to break the contract. If you later decide to break the contract by adding in or out on generic type parameters, you should expect to have to modify some of the code sites that were using the old contract.

Generic Methods

When you define a generic class, struct, or interface, any methods defined in these types can refer to a type parameter specified by the type. A type parameter can be used as a method’s parameter, a method’s return value, or as a local variable defined inside the method. However, the CLR also supports the ability for a method to specify its very own type parameters. And these type parameters can also be used for parameters, return values, or local variables.

internal sealed class GenericType<T> {
    private T m_value;

    public GenericType(T value) { m_value = value; }

    public TOutput Converter<TOutput>() {
        TOutput result = (TOutput) Convert.ChangeType(m_value, typeof(TOutput));
        return result;
    }
}

In this example, you can see that the GenericType class defines its own type parameter (T), and the Converter method defines its own type parameter (TOutput). This allows a GenericType to be constructed to work with any type. The Converter method can convert the object referred to by the m_value field to various types depending on what type argument is passed to it when called. The ability to have type parameters on both types and methods allows for phenomenal flexibility.
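For instance, the class above could be used like this (Convert.ChangeType handles the Int32-to-String and Int32-to-Double conversions here):

GenericType<Int32> gt = new GenericType<Int32>(5);
String s = gt.Converter<String>();   // "5"
Double d = gt.Converter<Double>();   // 5.0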

A reasonably good example of a generic method is the Swap method:

private static void Swap<T>(ref T o1, ref T o2) {
    T temp = o1;
    o1 = o2;
    o2 = temp;
}

Code can now call Swap like this:

private static void CallingSwap() {
    Int32 n1 = 1, n2 = 2;
    Console.WriteLine("n1={0}, n2={1}", n1, n2);
    Swap<Int32>(ref n1, ref n2);
    Console.WriteLine("n1={0}, n2={1}", n1, n2);

    String s1 = "Aidan", s2 = "Grant";
    Console.WriteLine("s1={0}, s2={1}", s1, s2);
    Swap<String>(ref s1, ref s2);
    Console.WriteLine("s1={0}, s2={1}", s1, s2);
}

The variable you pass as an out/ref argument must be the same type as the method’s parameter to avoid a potential type safety exploit.

For example, the System.Threading.Interlocked class exposes these generic methods:

public static class Interlocked {
    public static T Exchange<T>(ref T location1, T value) where T : class;
    public static T CompareExchange<T>(
        ref T location1, T value, T comparand) where T : class;
}
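As a usage sketch (the Session class here is hypothetical), the class constraint is what lets these methods operate atomically on any reference type location:

using System.Threading;

internal sealed class Session { }

internal static class SessionCache {
    private static Session s_session;

    public static Session GetSession() {
        // Publish a new Session only if no other thread has stored one yet.
        Interlocked.CompareExchange(ref s_session, new Session(), null);
        return s_session;
    }
}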

Generic Methods and Type Inference

To help improve code creation, readability, and maintainability, the C# compiler offers type inference when calling a generic method. Type inference means that the compiler attempts to determine the type to use automatically when calling a generic method.

Here is some code that demonstrates type inference:

private static void CallingSwapUsingInference() {
    Int32 n1 = 1, n2 = 2;
    Swap(ref n1, ref n2);       // Calls Swap<Int32>

    String s1 = "Aidan";
    Object s2 = "Grant";
    Swap(ref s1, ref s2);       // Error, type can't be inferred
}

In this code, for the first call to Swap, the compiler infers that n1 and n2 are Int32 and hence invokes Swap with an Int32 type argument. For the second call, the compiler sees that s1 is a String and s2 is an Object. Since s1 and s2 are variables of different data types, the compiler can’t accurately infer the type to use for Swap’s type argument, and it issues an error stating that the type arguments for method ‘Swap<T>(ref T, ref T)’ cannot be inferred from the usage.

A type can also define multiple methods with the same name, with one of the methods taking a specific data type and another taking a generic type parameter, as shown in the code below:

private static void Display(String s) {

Console.WriteLine(s);

}

private static void Display<T>(T o) {

Display(o.ToString()); //Calls Display(String)

}

Here are some ways to call the Display method

Display(“Jeff”); // Calls Display(String)

Display(123); // Calls Display<T>(T)

Display<String>(“Adrian”); // Calls Display<T>(T)

For the first call, the C# compiler prefers a more explicit match over a generic match, and therefore it generates a call to the non-generic Display method that takes a String. For the second call, the compiler can’t call the non-generic Display method that takes a String (123 is an Int32), so it must call the generic Display method. By the way, it is fortunate that the compiler always prefers the more explicit match; if the compiler had preferred the generic method, there would have been infinite recursion, because the generic Display method calls Display again.

Verifiability and Constraints

A constraint is a way to limit the number of types that can be specified for a generic argument. Limiting the number of types allows you to do more with those types. Here is a version of a Min method that specifies a constraint:

public static T Min<T>(T o1, T o2) where T : IComparable<T> {
    if (o1.CompareTo(o2) < 0) return o1;
    return o2;
}

The C# where token tells the compiler that any type specified for T must implement the generic IComparable interface of the same type (T). Because of this constraint, the compiler now allows the method to call the CompareTo method, since this method is defined by the IComparable<T> interface.

Now when code references a generic type or method, the compiler is responsible for ensuring that a type argument that meets the constraints is specified.

For e.g.

private static void CallMin() {

Object o1 = "Jeff", o2 = "Richter";

Object oMin = Min<Object>(o1, o2); //Error

}

The compiler issues the error because System.Object doesn't implement the IComparable<Object> interface. In fact, System.Object doesn't implement any interfaces at all.

The CLR doesn't allow overloading based on type parameter names or constraints; you can overload types or methods based only on arity. The following example shows this:

// It is OK to define the following types

internal sealed class AType {}

internal sealed class AType<T> {}

internal sealed class AType<T1, T2> {}

// Error: conflicts with AType<T> that has no constraints

internal sealed class AType<T> where T : IComparable<T> {}

// Error: conflicts with AType<T1, T2>

internal sealed class AType<T3, T4> {}

internal sealed class AnotherType {

private static void M() {}

private static void M<T>() {}

private static void M<T1, T2>() {}

//Error: conflicts with M<T> that has no constraints

private static void M<T>() where T : IComparable<T> {}

// Error: conflicts with M<T1, T2>

private static void M<T3, T4>() {}

}

When overriding a virtual generic method, the overriding method must specify the same number of type parameters, and these type parameters inherit the constraints specified on the base method's type parameters. In fact, the overriding method is not allowed to specify any constraints on its type parameters at all. However, it can change the names of the type parameters. Similarly, when implementing an interface method, the method must specify the same number of type parameters as the interface method, and these type parameters will inherit the constraints specified on them by the interface's method.

E.g.

internal class Base {

public virtual void M<T1, T2>()

where T1 : struct

where T2 : class {

}

}

internal sealed class Derived : Base {

public override void M<T3, T4>()

where T3 : EventArgs //Error

where T4: class //Error

{ }

}

Notice that you can change the names of the type parameters, as in the example from T1 to T3 and T2 to T4; however, you cannot restate or change the constraints.
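
For completeness, here is a minimal sketch (using the same hypothetical Base class) of what a legal override looks like; the constraints are simply omitted because they are inherited from Base.M:

internal sealed class DerivedCorrect : Base {
    // No constraints are restated; T3 must still be a struct and T4 a class,
    // because those constraints are inherited from Base.M<T1, T2>
    public override void M<T3, T4>() { }
}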

A type parameter can be constrained by using a primary constraint, a secondary constraint, and/or a constructor constraint.

Primary Constraint

A primary constraint can be a reference type that identifies a class that is not sealed. You cannot specify one of the following special reference types: System.Object, System.Array, System.Delegate, System.MulticastDelegate, System.ValueType, System.Enum, or System.Void.

When specifying a reference type constraint, you are promising the compiler that a specified type argument will either be of the same type or of a type derived from the constraint type. For example:

internal sealed class PrimaryConstraintOfStream<T> where T : Stream {

public void M(T stream) {

stream.Close(); //OK

}

};

In this class definition, the type parameter T has a primary constraint of Stream. This tells the compiler that code using PrimaryConstraintOfStream must specify a type argument of Stream or a type derived from Stream. If a type parameter doesn't specify a primary constraint, System.Object is assumed. However, the C# compiler issues an error message if you explicitly specify System.Object in your source code.

There are two special primary constraints: class and struct. The class constraint promises the compiler that a specified type argument will be a reference type. Any class type, interface type, delegate type, or array type satisfies this constraint. For example:

internal sealed class PrimaryConstraintOfClass<T> where T : class {

public void M() {

T temp = null; // Allowed because T must be a reference type

}

}

In this example setting temp to null is legal because T is known to be a reference type, and all reference type variables can be set to null. If T were unconstrained, the code above would not compile because T could be a value type, and value type variables cannot be set to null.

The struct constraint promises the compiler that a specified type argument will be a value type. Any value type, including enumerations, satisfies this constraint. However, the compiler and the CLR treat any System.Nullable<T> value type as a special case, and nullable types do not satisfy this constraint. The reason is that the Nullable<T> type constrains its own type parameter to struct, and the CLR wants to prohibit a recursive type such as Nullable<Nullable<T>>.

e.g.

internal sealed class PrimaryConstraintOfStruct<T> where T : struct {

public static T Factory() {

//Allowed because all value types implicitly

// have a public parameterless constructor

return new T();

}

}

In this example, newing up a T is legal because T is known to be a value type, and all value types implicitly have a public, parameterless constructor. If T were unconstrained, or constrained to a reference type with the class constraint, the above code would not compile because some reference types do not have public, parameterless constructors.

Secondary Constraint

A type parameter can specify zero or more secondary constraints, where a secondary constraint represents an interface type. When specifying an interface constraint, you are promising the compiler that a specified type argument will be a type that implements all of the interface constraints.
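
As a minimal sketch (the InRange method and its parameter names are made up for illustration), a type parameter with two interface constraints looks like this; any type argument must implement both interfaces:

private static Boolean InRange<T>(T value, T min, T max)
    where T : IComparable<T>, IEquatable<T> {
    // IComparable<T> makes CompareTo available; IEquatable<T> is included
    // only to show that multiple interface constraints can be combined
    return value.CompareTo(min) >= 0 && value.CompareTo(max) <= 0;
}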

There is another kind of secondary constraint called a type parameter constraint. This kind of constraint is used much less often than an interface constraint. It allows a generic type or method to indicate that there must be a relationship between specified type arguments. A type parameter can have zero or more type parameter constraints applied to it. Here is a generic method that demonstrates the use of a type parameter constraint:

private static List<TBase> ConvertIList<T, TBase>(IList<T> list) where T : TBase {

List<TBase> baseList = new List<TBase>(list.Count);

for (Int32 index = 0; index < list.Count; index++) {

baseList.Add(list[index]);

}

return baseList;

}

The ConvertIList method specifies two type parameters, in which the T parameter is constrained by the TBase type parameter. This means that whatever type argument is specified for T, it must be compatible with whatever type argument is specified for TBase. Here is a method showing some legal and illegal calls to ConvertIList:

private static void CallingConvertIList(){

//Construct and initialize a List<String> (which implements IList<String>)

IList<String> ls = new List<String>();

ls.Add("A String");

//Convert the IList<String> to an IList<Object>

IList<Object> lo = ConvertIList<String, Object>(ls);

//Convert the IList<String> to an IList<IComparable>

IList<IComparable> lc = ConvertIList<String, IComparable>(ls);

//Convert the IList<String> to an IList<IComparable<String>>

IList<IComparable<String>> lcs = ConvertIList<String, IComparable<String>>(ls);

//Convert the IList<String> to an IList<String>

IList<String> ls2 = ConvertIList<String, String>(ls);

//Convert the IList<String> to an IList<Exception>

IList<Exception> le = ConvertIList<String, Exception>(ls); //Error

}

In the first call to ConvertIList, the compiler ensures that String is compatible with Object. Since String is derived from Object, the first call adheres to the type parameter constraint. In the second call, the compiler ensures that String is compatible with IComparable. Since String implements the IComparable interface, the second call adheres to the type parameter constraint. In the third call, the compiler ensures that String is compatible with IComparable<String>. Since String implements the IComparable<String> interface, the third call adheres to the type parameter constraint. In the fourth call, the compiler knows that String is compatible with itself. In the fifth call, the compiler ensures that String is compatible with Exception. Since String is not compatible with Exception, the fifth call doesn't adhere to the type parameter constraint, and the compiler issues the following message: "error CS0311: The type string cannot be used as type parameter 'T' in the generic type or method 'Program.ConvertIList<T,TBase>(System.Collections.Generic.IList<T>)'. There is no implicit reference conversion from string to System.Exception".

Constructor Constraints

A type parameter can specify zero constructor constraints or one constructor constraint. When specifying a constructor constraint, you are promising the compiler that a specified type argument will be a non-abstract type that implements a public, parameterless constructor. Note that the C# compiler considers it an error to specify a constructor constraint with the struct constraint because it is redundant; all value types implicitly offer a public, parameterless constructor.

e.g.

internal sealed class ConstructorConstraint<T> where T : new() {

public static T Factory() {

// Allowed because all value types implicitly

// have a public, parameterless constructor and because

// the constraint requires that any specified reference

// type also have a public, parameterless constructor

return new T();

}

}

In the above example, newing up a T is legal because T is known to be a type that has a public, parameterless constructor. This is certainly true of all value types, and the constructor constraint requires that it be true of any reference type specified as a type argument.

Casting Generic Type

Casting a generic type variable to another type is illegal unless you are casting to a type compatible with a constraint:

private static void CastingAGenericTypeVariable1<T>(T obj) {

Int32 x = (Int32) obj; //Error

String s = (String) obj; //Error

}

The compiler issues an error on both lines above because T could be any type, and there is no guarantee that the casts will succeed. You can modify this code to get it to compile by casting to Object first:

private static void CastingAGenericTypeVariable2<T>(T obj ) {

Int32 x = (Int32) (object) obj; // No Error

String s = (String) (Object) obj; // No Error

}

If the cast is to a reference type, you can use the as operator instead, which simply returns null if the cast fails rather than throwing an exception. For example:

private static void CastingAGenericTypeVariable3<T>(T obj) {

String s = obj as String; // No error

}

Default value for Generic Type Variable:

Setting a generic type variable to null is illegal unless the generic type is constrained to a reference type.

private static void SettingAGenericTypeVariableToNull<T>() {

T temp = null; // Error CS0403: Cannot convert null to type parameter 'T'

}

Since T is unconstrained, it could be a value type, and setting a variable of a value type to null is not possible. If T were constrained to a reference type, setting temp to null would compile and run just fine. The C# team felt that it would be useful to give developers the ability to set a variable to a default value, so the C# compiler allows you to use the default keyword to accomplish this:

private static void SettingAGenericTypeVariableToDefaultValue<T>(){

T temp = default(T); // OK

}

The use of the default keyword above tells the C# compiler and the CLR’s JIT compiler to produce code to set temp to null if T is a reference type and to set temp to all-bits-zero if T is a value type.
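
As a quick illustrative sketch (the method name is made up), default yields null for reference types and all-bits-zero for value types:

private static void ShowDefaults() {
    Console.WriteLine(default(String) == null); // True  (reference type: null)
    Console.WriteLine(default(Int32));          // 0     (value type: all-bits-zero)
    Console.WriteLine(default(Boolean));        // False
}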

Comparison of Generic Type variables:

Comparing a generic type variable to null by using the == or != operator is legal regardless of whether the generic type is constrained:

private static void ComparingAGenericTypeVariableWithNull<T>(T obj) {

if(obj == null) { /* Never executes for value type */ }

}

Since T is unconstrained, it could be a reference type or a value type. If T is a value type, obj can never be null. The C# compiler does not issue an error; it compiles the code just fine. When this method is called using a type argument that is a value type, the JIT compiler sees that the if statement can never be true, and it does not emit the native code for the if test or for the code in the braces. If the != operator had been used, the JIT compiler would not emit the code for the if test (the condition is always true for value types) but would emit the code inside the if's braces.

By the way, if T had been constrained to a struct, the compiler would have issued an error, because a value type variable can never be null.

Comparing two Generic Type variables

Comparing two variables of the same generic type is illegal if the generic type parameter is not known to be a reference type:

private static void ComparingTwoGenericTypeVariables<T>(T o1, T o2) {

if(o1 == o2) { } //Error

}

In this example T is unconstrained, and whereas it is legal to compare two reference type variables with one another, it is not legal to compare two value type variables with one another unless the value type overloads the == operator.

By the way, if T had been constrained to class, this code would compile, and the == operator would check for reference equality; if T had been constrained to struct, the compiler would still have issued an error.

Avoid Generic Type as Operands

Operators such as +, -, *, and / can't be applied to variables of a generic type parameter, because the compiler doesn't know the type at compile time and therefore doesn't know which operator overloads to call. This means that you can't use any of these operators with variables of a generic type, so it is not possible to write a general-purpose mathematical algorithm that works on an arbitrary numeric data type using these operators alone.
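
A minimal sketch of the problem (the Sum method name is made up); the commented-out line is what you might like to write, and the comment indicates the kind of error the compiler produces:

private static T Sum<T>(T a, T b) {
    // return a + b; // Compiler error: operator '+' cannot be applied to operands of type 'T' and 'T'
    throw new NotSupportedException("No general-purpose + operator exists for an unconstrained T.");
}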


Assemblies and their version policy.

  1. Introduction Assembly Types
  2. The Global Assembly Cache
  3. Configuration Files

Introduction

The .NET Framework and the Framework Class Library are a perfect example of globally deployed assemblies, as they are the most widely used assemblies across applications from Microsoft and other .NET software vendors. Applications are built and tested using code implemented by Microsoft and third-party vendors against a particular version of these libraries. These third-party libraries and the .NET Framework are also modified and updated via service packs and hotfixes to incorporate feature enhancements and bug fixes, and applications are then forced to use the newer versions of the assemblies. The .NET Framework follows a versioning policy that supports backward compatibility, which helps older existing applications keep executing.

Third-party libraries are also updated and modified in ways that are sometimes not backward compatible, which can make existing applications unstable. The reason for this instability is that those applications were tuned to work with the old code, which had the old features and the old bugs.

So there must be a process for deploying new files with the hope that the applications will continue to work properly; and if an application doesn't work, there has to be an easy way to restore it to its last known good state.

The similarities and differences between privately deployed weakly named assemblies and globally deployed strongly named assemblies

There are two kinds of assemblies: weakly named assemblies and strongly named assemblies. Both kinds are structurally identical, i.e. they use the same portable executable (PE) file format, PE32(+) header, CLR header, metadata, manifest tables, and intermediate language, and the same tools and utilities are used to generate them.

The real difference is that a strongly named assembly is signed with a publisher's public/private key pair that uniquely identifies the assembly's publisher. The key pair allows the assembly to be uniquely identified, secured, and versioned, and it allows the assembly to be deployed anywhere on the user's machine or even on the Internet. Because the assembly name is uniquely identifiable, the CLR can apply a safe publishing policy when the assembly is deployed.

A strongly named assembly is signed with a publisher's public/private key pair. This key pair gives the assembly a unique identity, allows it to be secured and versioned, and allows it to be deployed anywhere. An assembly can be deployed in two ways: privately, under the application's base directory or one of its subdirectories, or globally, into a well-known location called the GAC, which the CLR checks whenever a strongly named assembly is referenced.

The table below gives a brief idea about deployment of strongly named assemblies and weakly named assemblies.

Kind of Assembly        Private deployment        Global deployment

Weakly named            Yes                       No

Strongly named          Yes                       Yes

Developers face a problem when deploying assemblies this way: two companies could produce assemblies that have the same file name, and if both of these files are copied into the same location where shared assemblies are kept, the most recently copied file overwrites the old file, and all the applications that were using the old assembly are no longer predictable. This is similar to "DLL Hell" in the COM world.

Components of strongly named assemblies

The CLR needs a technology that lets assemblies be uniquely identified. This technique is known as a strongly named assembly. A strongly named assembly consists of four attributes that uniquely identify it: a file name, a version number, a culture identity, and a public key. Because public keys are very long, a small hash of the public key is often used instead; this hash value is called a public key token. The following assembly identity strings identify four completely different assembly files:

"MyAppln, Version=1.0.1123.0, Culture=neutral, PublicKeyToken=23asdfkajlkasdf"

"MyAppln, Version=1.0.1123.0, Culture=en-US, PublicKeyToken=23asdfkajlkasdf"

"MyAppln, Version=1.0.1234.0, Culture=neutral, PublicKeyToken=bb78343awsdfgs"

"MyAppln, Version=1.0.1123.0, Culture=neutral, PublicKeyToken=465765sdfgsdss"

The first component identifies the assembly's file name, "MyAppln".

The second component identifies the assembly's version, for example 1.0.1123.0.

The third component identifies the locale or culture, which here is neutral.

The fourth component is the public key token, derived from the publisher's public/private key pair.

Why did Microsoft use cryptographic APIs for strongly named assemblies?

Microsoft used cryptographic APIs and the standard public/private key technology to establish an assembly's uniqueness. Cryptographic techniques let the user verify the integrity of the assembly's contents on every machine, and they can also be used to grant privileges and permissions on a per-user or per-publisher basis. Care should also be taken that no company shares the private key it uses for generating its strongly named assemblies.

The System.Reflection.AssemblyName class is a utility class that offers several public instance properties, such as CultureInfo, FullName, KeyPair, Name, and Version. It also offers a few public instance methods, such as GetPublicKey, GetPublicKeyToken, SetPublicKey, and SetPublicKeyToken.
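
Here is a minimal sketch (the class name ShowAssemblyIdentity is made up) of how these members can be used to inspect the identity of the currently executing assembly:

using System;
using System.Reflection;

public static class ShowAssemblyIdentity {
    public static void Main() {
        // GetName() returns the AssemblyName that describes the executing assembly
        AssemblyName an = Assembly.GetExecutingAssembly().GetName();
        Console.WriteLine(an.FullName);  // name, Version, Culture, PublicKeyToken
        Console.WriteLine(an.Version);
        Byte[] token = an.GetPublicKeyToken() ?? new Byte[0]; // empty for weakly named assemblies
        Console.WriteLine(BitConverter.ToString(token));
    }
}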

A weakly named assembly can also have an assembly version and culture, but the CLR ignores the version number because weakly named assemblies are always privately deployed. The CLR simply uses the name of the assembly when looking for the assembly's file in the application base directory and its subdirectories, or in a different path if one is given in the XML configuration file's probing element's privatePath attribute.
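
For illustration, a privatePath hint in an application configuration file might look like the following sketch (the directory names bin and libs are made up):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Ask the CLR to also probe these subdirectories of the application base directory -->
      <probing privatePath="bin;libs" />
    </assemblyBinding>
  </runtime>
</configuration>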

A strongly named assembly is signed using the public/private key pair. The following are the steps involved in signing an assembly to make it strongly named:

1. Run SN.exe to generate a public/private key pair:

SN -k MyCompany.snk

2. Extract the public key from the key pair and store it in a file called MyCompany.PublicKey:

SN -p MyCompany.snk MyCompany.PublicKey

3. Now execute SN.exe, passing it the -tp switch and the file that contains just the public key:

SN -tp MyCompany.PublicKey

When I execute this command, I get the following output:

Microsoft (R) .NET Framework Strong Name Utility  Version 4.0.20928.1

Copyright (c) Microsoft Corporation.  All rights reserved.

Public key is

00240003048000009404000006020000002400055253413100040000710001000f507849113404

5955a3f8fdc1bd0d29cba6357026e7caf1631831c64d71fc09051a29444d1d5b4199331d6a1c9d

883de7837dd553b26f82d9bfacad5e405a286fad65cd3e2a890925314e6d34dd3102448cd8a7c8

f16cd1b39b180b34985faa799f3d21e5c81f86467b5f02f451cc3473858d1e7bef63ee39440edf

b64ef8a8

Public key token is 74786c738e63f883

The SN.exe utility doesn’t offer any options for you to display the private key.

A public key token is a 64-bit hash of the public key. Public key tokens were created so that developers don't have to work with the full-length public key, and these reduced tokens also conserve storage space. Public key tokens are stored in an AssemblyRef table.

The C# compiler command-line switch to sign an assembly using the key file that holds the public/private key pair is as follows:

csc /keyfile:MyCompany.snk MyApp.cs

The compiler opens the specified file, signs the assembly with the private key, and embeds the public key in the manifest. Note that only the assembly file containing the manifest is signed; the assembly's other files are not explicitly signed.

In Visual Studio, a public/private key pair is created by opening the project properties, clicking the Signing tab, selecting the Sign The Assembly check box, and then choosing <New...> from the Choose A Strong Name Key File combo box.

As each file's name is added to the manifest, the file's contents are hashed, and this hash value is stored along with the file's name in the FileDef table. You can override the default hash algorithm with AL.exe's /algid switch or by applying the assembly-level System.Reflection.AssemblyAlgorithmIdAttribute custom attribute in one of the assembly's source code files. By default, the SHA-1 algorithm is used.

After the PE file containing the manifest is built, its entire contents are hashed. The hash algorithm used here is always SHA-1 and can't be overridden. This hash value is signed with the publisher's private key, and the resulting RSA digital signature is stored in a reserved section within the PE file. The CLR header of the PE file is updated to reflect where the digital signature is embedded within the file.

The publisher's private key is used for signing the assembly, and the public key is embedded in the AssemblyDef manifest metadata table in the PE file. The combination of the file name, the assembly version, the culture, and the public key gives this assembly a strong name, which is guaranteed to be unique, so duplication is avoided.

The assemblies that your assembly references need to be specified using the /reference compiler switch; this instructs the compiler to emit an AssemblyRef metadata table entry for each one, indicating the referenced assembly's name, version number, culture, and public key information.

Example of AssemblyRef metadata Table and AssemblyDef metadata table

The example of AssemblyRef is shown below:

AssemblyRef #2 ——————————————————-

Token: 0x23000002

Public Key or Token: ef 41 b5 08 ea 1c fb 8b

Name: multifile

Major Version: 0x00000001

Minor Version: 0x00000002

Build Number: 0x00000003

Revision Number: 0x00000004

Locale: <null>

HashValue Blob:

Flags : [none] (00000000)

The example of AssemblyDef metadata table is shown below

// Assembly

// ——————————————————-

// Token: 0x20000001

// Name : hello

// Public Key :

// Hash Algorithm : 0x00008004

// Version: 0.0.0.0

// Major Version: 0x00000000

// Minor Version: 0x00000000

// Build Number: 0x00000000

// Revision Number: 0x00000000

// Locale: <null>

// Flags : [none] (00000000)

// CustomAttribute #1 (0c000002)

// ——————————————————-

// CustomAttribute Type: 0a00001f

// CustomAttributeName: System.Runtime.CompilerServices.CompilationRelaxationsAttribute :: instance void .ctor(int32)

// Length: 8

// Value : 01 00 08 00 00 00 00 00 > <

// ctor args: (8)

//

// CustomAttribute #2 (0c000003)

// ——————————————————-

// CustomAttribute Type: 0a000020

// CustomAttributeName:System.Runtime.CompilerServices.RuntimeCompatibilityAttribute ::instance void .ctor()

// Length: 30

// Value : 01 00 01 00 54 02 16 57 72 61 70 4e 6f 6e 45 78 > T WrapNonEx<

// : 63 65 70 74 69 6f 6e 54 68 72 6f 77 73 01 >ExceptionThrows <

// ctor args: ()

//

The Global Assembly Cache

The assembly must be placed into a well known directory and the CLR must know to search in this directory automatically when a reference to the assembly is detected. This well-known location is called the global assembly cache which can usually be found in the following directory.

C:\Windows\Assembly

The GAC directory is structured: it contains many subdirectories, and an algorithm is used to generate the names of these subdirectories. You should never manually copy assembly files into the GAC; instead, you should install assemblies into the GAC using tools that know the internal structure and how to generate the proper subdirectory names. The most common tool for installing a strongly named assembly into the GAC is GACUtil.exe, whose usage is shown below:

Microsoft (R) .NET Global Assembly Cache Utility.  Version 3.5.30729.1
Copyright (c) Microsoft Corporation.  All rights reserved.

Usage: GACUtil <command> [ <options> ]
Commands:
/i <assembly_path> [ /r <…> ] [ /f ]
Installs an assembly to the global assembly cache.

/il <assembly_path_list_file> [ /r <…> ] [ /f ]
Installs one or more assemblies to the global assembly cache.

/u <assembly_display_name> [ /r <…> ]
Uninstalls an assembly from the global assembly cache.

/ul <assembly_display_name_list_file> [ /r <…> ]
Uninstalls one or more assemblies from the global assembly cache.

/l [ <assembly_name> ]
List the global assembly cache filtered by <assembly_name>

/lr [ <assembly_name> ]
List the global assembly cache with all traced references.

/cdl
Deletes the contents of the download cache

/ldl
Lists the contents of the download cache

/?
Displays a detailed help screen

Options:
/r <reference_scheme> <reference_id> <description>
Specifies a traced reference to install (/i, /il) or uninstall (/u, /ul).

/f
Forces reinstall of an assembly.

/nologo
Suppresses display of the logo banner

/silent
Suppresses display of all output

To install an assembly into the GAC with GACUtil.exe, you use the /i switch, but for proper deployment you should also use the /r switch in addition to the /i or /u switch when installing or uninstalling the assembly. The /r switch integrates the assembly with the Windows install and uninstall engine: it records which applications are using/sharing the assembly and ties the applications and the assembly together.
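
As a rough illustration (the assembly and application names are made up), installing a shared assembly together with a traced reference could look like this; the /r arguments follow the <reference_scheme> <reference_id> <description> pattern shown in the help text above:

gacutil /i MyCompany.SharedLib.dll /r FILEPATH "C:\Apps\MyApp\MyApp.exe" "MyApp application"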

The GACUtil.exe tool is not shipped with the .NET Framework Redistributable package. If your application includes assemblies that you want deployed into the GAC, you should use the Windows Installer (MSI), because MSI is the only tool that is guaranteed to be on end-user machines and capable of installing assemblies into the GAC.

When an assembly is built, it must reference other assemblies in order to compile; the /reference compiler switch provides the names of the referenced assemblies. If the file name is a full path, CSC.exe loads the specified file and uses its metadata to build the assembly. If you specify a file name without a path, CSC.exe attempts to find the assembly by looking in the following directories, in order:

1. Working directory

2. The directory that contains the CSC.exe file itself. This directory also contains the CLR DLLs.

3. Any directories specified using the /lib compiler switch.

4. Any directories specified using the LIB environment variable.

The directory where the compiler finds a referenced assembly at compile time isn't the directory the assembly will be loaded from at runtime. When you install the .NET Framework, two copies of Microsoft's assembly files are actually installed: one set into the compiler/CLR directory and another set into a GAC subdirectory. The files in the compiler/CLR directory exist so that you can easily build your assembly, whereas the copies in the GAC exist so that they can be loaded at runtime for execution.

The reason that CSC.exe doesn't look in the GAC for referenced assemblies is that you would have to know the path to the assembly file, and the internal structure of the GAC is undocumented.

When a strongly named assembly is installed into the GAC, the system hashes the contents of the file containing the manifest and compares the hash value with the RSA digital signature embedded in the PE file (after decrypting it with the public key). If the values are identical, the file's contents haven't been tampered with and the public key corresponds to the publisher's private key. The system also hashes the contents of the assembly's other files and compares those hash values with the hash values stored in the manifest; if any of the hash values don't match, at least one of the assembly's files has been tampered with, and the assembly will fail to install into the GAC.

The CLR loads a referenced global assembly from the GAC using the strong-name properties. If the referenced assembly is available in the GAC, the CLR returns its containing subdirectory, and the file holding the manifest is loaded. Finding the assembly this way assures the caller that the assembly loaded at runtime came from the same publisher that built the assembly the code was compiled against, because the public key token in the referencing assembly's AssemblyRef table is compared with the public key token in the referenced assembly's AssemblyDef table. If the referenced assembly isn't in the GAC, the CLR looks in the application's base directory and then in the private paths identified in the application's configuration file; if the application containing the assembly was installed using MSI, the CLR invokes MSI to load the required assembly. If the assembly is not found in any of these locations, an exception is thrown, and the binding of the assembly fails.

Assembly Hashing

A hashing of the file is performed every time an application executes and loads the assembly. This performance hit is a tradeoff for being certain that the assembly file’s content hasn’t been tampered with. When the CLR detects mismatched hash values at runtime, it throws System.IO.FileLoadException.

When you are ready to package your strongly named assembly, you'll have to use the secure private key to sign it. However, while developing and testing the assembly, gaining access to the secure private key can be a huge problem. Because of this, .NET provides a technique known as delayed signing, a.k.a. partial signing. Delayed signing allows you to build an assembly using only the publisher's public key; the private key isn't required.

Delayed signing is enabled on the C# compiler using the /delaysign compiler switch. In Visual Studio, open your project's properties, navigate to the Signing tab, and then select the Delay Sign Only check box. If you are using AL.exe, you can specify the /delay[sign] command-line switch.

To prevent verification of the integrity of the assembly's files, use the -Vr command-line switch of the SN.exe utility. Executing SN.exe with this switch tells the CLR to skip verifying the hash values for any of the assembly's files loaded at runtime. SN's -Vr switch registers the assembly's strong name under the following registry subkey: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\StrongName\Verification.

When you are ready to ship, the -R switch of the SN.exe utility is used along with the name of the file that contains the actual private key; SN.exe hashes the file contents of the assembly, signs the hash, and embeds the RSA digital signature in the file where space for it had previously been reserved. After this step you can deploy the fully signed assembly.

Cryptographic service providers (CSPs) offer containers that abstract the location of these keys. Microsoft, for example, uses a CSP whose container, when accessed, obtains the private key from a hardware device. If the public/private key pair is in a CSP container, you have to specify different switches to the CSC.exe, AL.exe, and SN.exe programs: when compiling, specify the /keycontainer switch; when linking with AL.exe, specify /keyname; and when using the Strong Name (SN.exe) tool to complete a delay-signed assembly, specify -Rc. SN.exe offers many more switches for performing operations with a CSP.

Delayed signing is also useful whenever you want to perform some other operation on the assembly before you package it. For example, you may want to obfuscate your assembly; you cannot obfuscate it after it has been fully signed, because the hash value would then be incorrect. So, if you want to obfuscate an assembly file or perform any other type of post-build operation, use delayed signing, perform the post-build operation, and then run SN.exe with the -R or -Rc switch to complete the signing process of the assembly with all of its hashing.
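
Putting the pieces together, a delayed-signing workflow might look like the following sketch (file names are made up, and the post-build step is only indicated by the line in parentheses):

csc /keyfile:MyCompany.PublicKey /delaysign /t:library /out:MyLib.dll MyLib.cs

SN -Vr MyLib.dll

(run tests, obfuscate, or perform other post-build operations here)

SN -R MyLib.dll MyCompany.snk

SN -Vu MyLib.dll

The final SN -Vu call re-enables strong-name verification for the assembly on the development machine once the fully signed file has been produced.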

Deploying privately preserves the simple copy-install deployment story and better isolates the application and its assemblies. Also, the GAC isn't intended to be a new dumping ground for assemblies, because new versions of assemblies don't overwrite each other; they are installed side by side, eating up disk space.

Another way of deploying assemblies is to use an XML configuration file whose codeBase element indicates the path of the shared assembly. At runtime, the CLR will then know to look in the strongly named assembly's directory for the shared assemblies. This technique is rarely used, since if any one of the applications sharing the assembly is uninstalled, there is a chance that the shared assemblies might be uninstalled with it.
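
A codeBase entry in an application configuration file might look like the following sketch (the assembly name and href are made up; the public key token is reused from the earlier example):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SharedLib" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <!-- Tell the CLR where to load this strongly named assembly from -->
        <codeBase version="1.0.0.0" href="file:///C:/SharedLibs/SharedLib.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>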

When the source code is compiled into an executable and that executable is run, the CLR loads the assembly and initialization takes place: the CLR reads the assembly's CLR header, looking for the MethodDefToken that identifies the application's entry point method (Main). From the MethodDef metadata table, the offset within the file of the method's IL code is located, and the IL is JIT-compiled into native code, which includes having the code verified for type safety. The native code then starts executing.

When JIT-compiling this code, the CLR detects all references to types and members and loads their defining assemblies. At this point, the CLR knows which assembly it needs; now it must locate the assembly in order to load it. When resolving a referenced type, the CLR can find the type in one of three places:

1. Same file: Access to a type that is in the same file is determined at compile time. The type is loaded out of the file directly and execution starts.

2. Type is in Different file but in same assembly.

3. Type is in Different file and in different assembly

If any errors occur while resolving a type reference (the file can't be found, the file can't be loaded, there's a hash mismatch, a version mismatch, and so on), an appropriate exception is thrown. Otherwise, the CLR creates its internal data structures to represent the type, the JIT compiler successfully completes the compilation of the Main method, and the application starts executing.

Fig: Flow chart of type binding performed by the CLR.

The GAC identifies assemblies using name, version, culture, public key, and CPU architecture. When searching the GAC for an assembly, the CLR figures out what type of process the application is currently running in: 32-bit x86 (possibly on top of the WOW64 technology), 64-bit x64, or 64-bit IA64. Then, when searching the GAC for an assembly, the CLR first searches for a CPU-architecture-specific version of the assembly. If it does not find a matching assembly, it then searches for a CPU-agnostic version of the assembly.

Configuration Files

Configuration files are XML files that can be changed as needed. Configuration Files are standard XML files. The .NET Framework defines a set of elements that implement configuration settings. Developers can use configuration files to change settings without recompiling applications. Administrators can use configuration files to set policies that affect how applications run on their computers.

  • <configuration> Element
    Describes the <configuration> element, which is the top-level element for all configuration files.
  • <assemblyBinding> Element for <configuration>
    Specifies assembly binding policy at the configuration level.
  • <linkedConfiguration> Element
    Specifies a configuration file to include.
  • Startup Settings Schema
    Describes the elements that specify which version of the common language runtime to use.
  • Runtime Settings Schema
    Describes the elements that configure assembly binding and runtime behavior.
  • Network Settings Schema
    Describes the elements that specify how the .NET Framework connects to the Internet.
  • Cryptography Settings Schema
    Describes elements that map friendly algorithm names to classes that implement cryptography algorithms.
  • Configuration Sections Schema
    Describes the elements used to create and use configuration sections for custom settings.
  • Trace and Debug Settings Schema
    Describes the elements that specify trace switches and listeners.
  • Compiler and Language Provider Settings Schema
    Describes the elements that specify compiler configuration for available language providers.
  • Application Settings Schema
    Describes the elements that enable a Windows Forms or ASP.NET application to store and retrieve application-scoped and user-scoped settings.
  • Web Settings Schema
    All elements in the Web settings schema, which includes elements for configuring how ASP.NET works with a host application such as IIS. Used in aspnet.config files.
  • Example of Publisher’s policy

    The schema for publisher policy is as follows:

    <configSections>
    <clear>
    <remove>
    <section>
    <sectionGroup>
    <section>
    <appSettings>
    <Custom element for configuration section>
    <Custom element for configuration section>
    <add>
    <remove>
    <clear>

    An example of an application configuration file is shown below:

    <configuration>
      <runtime>
        <assemblyBinding>
          <dependentAssembly>
            <assemblyIdentity name="myAssembly" publicKeyToken="32ab4ba45e0a69a1" culture="en-us" />
            <!-- Assembly versions can be redirected in application, publisher policy, or machine configuration files -->
            <bindingRedirect oldVersion="3.0.0.0" newVersion="3.0.1.1" />
          </dependentAssembly>
          <dependentAssembly>
            <assemblyIdentity name="mySecondAssembly" publicKeyToken="1f2e54s865swqcds" culture="en-us" />
            <!-- Publisher policy can be set only in the application configuration file. -->
            <publisherPolicy apply="no" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    During the JIT-compilation process, the CLR looks up the assembly version in the application configuration file and applies any version number redirections; the CLR then looks for this assembly/version.

    For e.g.

    <assemblyBinding> <!-- .NET Framework version 1.0 redirects here -->

    </assemblyBinding>

    <assemblyBinding> <!-- .NET Framework version 1.1 redirects here -->

    </assemblyBinding>

    If the publisherPolicy element's apply attribute is set to yes, the CLR examines the GAC for the publisher policy assembly and applies any version number redirections it specifies; the CLR then examines the machine.config file and applies any version number redirections there. At this point, the CLR knows the version it should load and attempts to load the assembly from the GAC. If the assembly isn't in the GAC and there is no codeBase element, the CLR checks for the assembly in the application's base directory; if there is a codeBase element, the CLR attempts to load the assembly from the codeBase element's specified URL.

    When you package a new version of your assembly to send out to all of your users, you also create an XML configuration file describing the redirection. Publishers can set policies only for the assemblies that they themselves create, and the elements shown here are the only elements that can be specified in a publisher policy configuration file. The publisher then creates an assembly that contains this publisher policy configuration file:

    AL.exe /out:Policy.1.0.MyAppln.dll

    /version:1.0.0.0

    /keyfile:MyCompany.snk

    /linkresource:Myapps.config

    /platform:x86

    In this command:

  • The Myapps.config argument is the name of the publisher policy file.
  • The Policy.1.0.MyAppln.dll argument is the name of the publisher policy assembly that results from this command. The assembly file name must follow the format: policy.majorNumber.minorNumber.mainAssemblyName.dll
  • The MyCompany.snk argument is the name of the file containing the key pair. You must sign the assembly and publisher policy assembly with the same key pair.
  • The x86 argument identifies the platform targeted by a processor-specific assembly. It can be amd64, ia64, msil, or x86.
  • Once the publisher policy assembly is built and distributed, it has to be deployed into the GAC.

    The following command adds policy.1.0.myAssembly.dll to the global assembly cache.

    gacutil /i publisherPolicyAssemblyFile

    For example: gacutil /i Policy.1.0.MyAppln.dll

    Finally, if an administrator wants the runtime to ignore a publisher policy, he or she can edit the application configuration file and add the following publisherPolicy element:

    <publisherPolicy apply="no" />

    This element can be placed as a child of the <assemblyBinding> element in the application configuration file so that it applies to all assemblies, or as a child of a <dependentAssembly> element if you need to apply it to a specific assembly.


    CLR Fundamentals.

      1. Introduction

      2. The Common Language Runtime (CLR)

      3. How Common Language Runtime Loads:

      4. IL and Verification:

      5. Unsafe Code

      6. The NGen Tool

      7. The Framework Class Library

      8. The Common Type System

      9. The Common Language Specification

    Introduction

    This is one of my initial blogs on CLR overview and basics, which I believe every .NET developer must know. This topic is a prerequisite for starting anything related to .NET, be it a console application, a web page, or an application on Windows Phone. To start with, I will try to give you a broad overview of the Common Language Runtime (CLR).

    The Common Language Runtime (CLR)

    The CLR is a runtime that provides an environment for any programming language that targets it. The CLR has no idea which programming language the developer used for the source code: a developer can write code in any .NET language that targets the CLR, be it C#, VB, F#, or C++/CLI. Compilers act as syntax verifiers and perform code analysis, which allows developers to code in their preferred .NET language and makes it easier to express ideas and develop software.

    Fig 1.1: Environment of the .NET runtime.

    Regardless of which compiler is used, the result is a managed module. A managed module is a standard 32-bit Windows PE32 file or a standard 64-bit Windows PE32+ file that requires the CLR to execute. Managed assemblies always take advantage of Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) in Windows, both of which improve security.

    Table 1-1 Parts of Managed Module

    All CLR compilers generate IL code, and every compiler emits full metadata into every managed module. Metadata is a superset of older technologies such as COM Type Libraries and the Interface Definition Language (IDL), but CLR metadata is far more complete and is always associated with the file containing the IL code. The metadata and IL code are embedded in the same EXE/DLL, making it impossible to separate the two. Because the metadata and managed code are built at the same time and bound together into the resulting managed module, they are never out of sync with one another.

    Metadata has many uses and benefits, viz.:

    • Metadata removes the need for native header/library files during compilation, since all the information is available in the assembly (PE32(+)) file, which also contains the IL code that implements the types and members. Compilers can read the metadata directly from the managed module.
    • Visual Studio uses metadata to assist the developer in writing code: IntelliSense parses the metadata tables to tell the coder which properties, methods, events, and fields a type offers and, in the case of methods, what parameters the method expects.
    • CLR code verification process uses metadata to ensure that you code performs only type-safe operations.
    • Metadata allows serialization of object on local machine and deserialization of the same object state on a remote machine.
    • Metadata allows the garbage collector to track the lifetime of objects.

    C# and the IL Assembler always produce modules that contain managed code and managed data, so end users must have the CLR installed on their machines to execute this code.

    The C++/CLI compiler is an exception: by default it builds EXE/DLL modules that contain unmanaged code and manipulate unmanaged data at runtime. By adding the /CLR switch to the compiler options, the C++ compiler can produce modules that contain a hybrid of managed and unmanaged code, and for these modules the CLR is required for execution. The C++ compiler thus allows a developer to write both managed and unmanaged code and still emit a single module.

    Merging Managed Modules into an Assembly:

    Fig 1.2: Merging managed modules into a single assembly.

    The CLR works with assemblies, which are logical groupings of one or more modules or resource files. An assembly is the smallest unit of versioning, reuse, and security. You can produce a single-file or a multi-file assembly. An assembly is similar to what we would call a component in the COM world.

    An assembly's manifest is embedded as a set of metadata tables in one of its PE32(+) files. These tables describe the files that make up the assembly, the publicly exported types implemented by those files, and the resource or data files that are associated with the assembly.

    If you want to group a set of files into an assembly, you will have to be aware of more tools and their command-line arguments. An assembly allows you to decompose the deployment of the files while still treating all of the files as a single collection. An assembly's modules also include information about referenced assemblies, which makes the assembly "self-describing": an assembly's immediate dependencies can be identified and verified by the CLR.

    How Common Language Runtime Loads:

    An assembly's execution is managed by the CLR, so the CLR needs to be loaded into the process first. You can determine whether the .NET Framework is installed on a particular machine by looking for MSCorEE.dll in the %SystemRoot%\System32 directory. The existence of this file confirms that the .NET Framework is installed. The different versions of .NET installed on a machine can be identified by looking at the following registry key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP

    The .NET Framework SDK includes a command-line tool, CLRVer.exe, to view the versions of the CLR installed on a machine. If an assembly contains only type-safe managed code, it should work on both 32-bit and 64-bit versions of Windows without any source code changes, and the executable will run on any machine with a compatible version of the .NET Framework installed. If a developer wants to build an assembly that works only on a specific version of Windows, the C# compiler's /platform command-line switch is used. This switch controls whether the assembly can execute on x86 machines running 32-bit Windows, on x64 machines running 64-bit Windows, or on Intel Itanium machines running 64-bit Windows. The default value is anycpu, which allows the assembly to run on any version of Windows.

    Depending on the /platform command-line option, the compiler will generate an assembly that contains either a PE32 or PE32+ header, and it will also embed the desired CPU architecture information in the header. Microsoft ships two tools with the SDK, DumpBin.exe and CorFlags.exe, which can be used to examine the header information contained in a managed module.
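
    For illustration (the file name MyApp.cs is made up), you might build a module for a specific platform and then inspect its header like this:

    csc /platform:x86 /out:MyApp.exe MyApp.cs

    corflags MyApp.exe

    dumpbin /headers MyApp.exe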

    When executing the assembly, Windows uses the file header to determine whether to run the application in a 32-bit or 64-bit address space. An executable file with a PE32 header can run in a 32-bit or 64-bit address space, whereas an executable with a PE32+ header requires a 64-bit address space. Windows also verifies the CPU architecture to confirm that the machine has the required CPU. Lastly, 64-bit versions of Windows offer a feature called WOW64 (Windows on Windows64) that allows 32-bit applications to run.

    Table 1-2 Runtime State of Modules based on /platform switch

    /platform Switch   Type of Managed Module   x86 Windows                    x64 Windows                    IA64 Windows
    anycpu             PE32/agnostic            Runs as a 32-bit application   Runs as a 64-bit application   Runs as a 64-bit application
    x86                PE32/x86                 Runs as a 32-bit application   Runs as a WOW64 application    Runs as a WOW64 application
    x64                PE32+/x64                Doesn't run                    Runs as a 64-bit application   Doesn't run
    Itanium            PE32+/Itanium            Doesn't run                    Doesn't run                    Runs as a 64-bit application

    After Windows has examined the assembly header to determine whether to create a 32-bit process, a 64-bit process, or a WOW64 process, Windows loads the x86, x64, or IA64 version of MSCorEE.dll into the process's address space. The process's primary thread then calls a method defined inside MSCorEE.dll that initializes the CLR, loads the EXE assembly, and then calls its entry point method (Main). When an unmanaged application loads a managed assembly, Windows loads and initializes the CLR in order to process the code contained within the assembly.

    IL is a much higher-level language than most CPU machine languages. It can access and manipulate object types and has instructions to create and initialize objects, call virtual methods on objects, and manipulate array elements directly. IL can even be written in assembly language using the IL Assembler, ILAsm.exe; Microsoft also provides an IL Disassembler, ILDasm.exe.

    The IL assembly language allows a developer to access all of the CLR's facilities, some of which are hidden by the particular programming language you would really want to use. When a facility you need is not exposed by your language, you can use another of the languages the CLR supports; in fact, the level of integration between .NET programming languages inside the CLR makes mixed-language programming one of the platform's biggest advantages for developers.

    To execute a method its IL code is initially converted to native CPU instructions. This is the job of the CLR’s JIT compiler.

    The figure below shows what happens the first time a method is called.

    Just before the Main method executes, the CLR detects all of the types that are referenced by Main's code. This causes the CLR to allocate an internal data structure that is used to manage access to the referenced types. This internal data structure contains an entry for each method defined by the Console type, and each entry holds the address where the method's implementation can be found. When initializing this structure, the CLR sets each entry to an internal, undocumented function contained inside the CLR itself; let's call this function JITCompiler.

    When Main makes its first call to WriteLine, the JITCompiler function is called. The JIT Compiler function is responsible for compiling a method’s IL code into native CPU instructions. Because  the IL is being compiled “just in time” this component of the CLR is referred to as a JITter or a JIT Compiler.

    The JIT Compiler function then searches the defining assembly’s metadata for the called method’s IL. JITCompiler next verifies and compiles the IL code into native CPU instructions. The native CPU instructions are saved in a dynamically allocated block of memory. Then, JITCompiler goes back to the entry for the called method in the type’s internal data structure created by the CLR and replaces the reference that called it in the first place with the address of the block of memory containing the native CPU instructions it just compiled. Finally, the JITCompiler function jumps to the code in the memory block. When this code returns, it returns to the code in Main which continues execution as normal.

    Main now calls WriteLine a second time. This time, the code for WriteLine has already been verified and compiled, so the call goes directly to the block of memory, skipping the JITCompiler function entirely. After the WriteLine method executes, it returns to Main.

    A performance hit is incurred only the first time a method is called. All subsequent calls to the method execute at the full speed of the native code because verification and compilation to native code don't need to be performed again.

    Because the native CPU instructions are stored in dynamic memory, the compiled code is discarded when the application terminates; so if you run the application again, the JIT compiler will have to compile the IL to native instructions again. It's also likely that more time is spent inside a method than calling it. The CLR's JIT compiler optimizes the native code; it may take more time to produce the optimized code, but that code will execute in less time and with better performance than non-optimized code.

    Two C# compiler switches impact code optimization: /optimize and /debug. The following table shows the impact on code quality of these two switches.

    Compiler Switch Settings               C# IL Code Quality    JIT Native Code Quality
    /optimize- /debug-                     Unoptimized           Optimized
    /optimize- /debug(+/full/pdbonly)      Unoptimized           Unoptimized
    /optimize+ /debug(-/+/full/pdbonly)    Optimized             Optimized

    The unoptimized IL code contains many no-operation (NOP) instructions and branches that jump to the next line of code. These instructions are generated to enable the edit-and-continue feature of Visual Studio while debugging and to allow breakpoints to be set on the code.

    When producing optimized IL code, the C# compiler removes these extraneous NOP and branch instructions, making the code harder to single-step through in a debugger because control flow is optimized. Furthermore, the compiler produces a Program Database (PDB) file only if you specify the /debug(+/full/pdbonly) switch; the PDB file helps the debugger find local variables and map the IL instructions to source code. The /debug:full switch tells the JIT compiler to track what native code came from each IL instruction, which allows the developer to attach the Visual Studio JIT debugger to an already running process and debug the code easily. Without the /debug:full switch, the JIT compiler does not track the IL-to-native mapping, which makes the JIT compiler run a little faster and use a little less memory. If you start a process with the Visual Studio debugger, it forces the JIT compiler to track the IL-to-native mapping unless you turn off the Suppress JIT Optimization On Module Load (Managed Only) option in Visual Studio. In this managed environment, compiling the code is accomplished in two phases: first the compiler parses the source code, doing as much work as possible in producing IL; then the IL must be compiled into native CPU instructions at runtime, requiring more memory and more CPU time to complete the task.

    The following points compare managed code to unmanaged code:

    1. A JIT compiler can determine if the application is running on an Intel Pentium 4 CPU and produce native code that takes advantage of any special instructions offered by the Pentium 4. Usually, unmanaged applications are compiled for the lowest-common-denominator CPU and avoid using special instructions that would give the application a performance boost.
    2. A JIT compiler can determine when a certain test is always false on the machine that it is running on. In those cases, the native code would be fine-tuned for the host machine; the resulting code is smaller and executes faster.
    3. The CLR could profile the code’s execution and recompile the IL into native code while the application runs. The recompiled code could be reorganized to reduce incorrect  branch predictions depending on the observed execution patterns.

    The NGen.exe tool compiles all of an assembly's IL code into native code and saves the resulting native code to a file on disk. At runtime, when an assembly is loaded, the CLR automatically checks whether a precompiled version of the assembly's code exists; if it does, the CLR uses it so that no compilation is required at runtime. Note that the code produced by NGen.exe will not be as highly optimized as the JIT compiler-produced code.

    IL and Verification:

    While compiling IL into native CPU instructions, the CLR performs a process called verification. Verification examines the high-level IL code and ensures that everything the code does is safe. For e.g. verification checks that every method is called with the correct number of parameters. The managed module’s metadata includes all of the method and type information used by the verification process.

    In Windows, each process has its own virtual address space. Separate address spaces are necessary because you can’t trust an application’s code. It is entirely possible that an application will read from or write to an invalid memory address. By placing each windows process in a separate address space, you gain robustness and stability;

    You can run multiple managed applications in a single Windows virtual address space. Reducing the number of processes by running multiple applications in a single  OS process can improve performance, require fewer resources and be just as robust as if each application had its own process.

    The CLR does offer the ability to execute multiple managed applications in a single OS process. Each managed application executes in an AppDomain. By default, every managed EXE file runs in its own separate address space containing just one AppDomain, but a process hosting the CLR can decide to run multiple AppDomains in a single OS process.

    Unsafe Code

    Safe code is code that is verifiably safe. Unsafe code is allowed to work directly with memory addresses and manipulate bytes at these addresses. This is a very powerful feature and is typically useful when interoperating with unmanaged code or when you want to improve the performance of a time-critical algorithm.

    The C# compiler requires that all methods that contain unsafe code be marked with the unsafe keyword. In addition, the C# compiler requires you to compile the source code by using the /unsafe compiler switch.
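    As a minimal sketch (the type and method names here are hypothetical, and the file must be compiled with the /unsafe switch), an unsafe method can manipulate memory directly through pointers:

    public static class UnsafeDemo
    {
        // Compile with: csc.exe /unsafe UnsafeDemo.cs
        public static unsafe int SumViaPointer(int[] values)
        {
            int sum = 0;
            fixed (int* p = values)            // pin the array so the GC cannot move it
            {
                for (int i = 0; i < values.Length; i++)
                    sum += *(p + i);           // pointer arithmetic over the pinned buffer
            }
            return sum;
        }
    }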

    When the JIT compiler attempts to compile an unsafe method, it checks whether the assembly containing the method has been granted System.Security.Permissions.SecurityPermission with the System.Security.Permissions.SecurityPermissionFlag SkipVerification flag set. If the flag is set, the JIT compiler compiles the unsafe code and allows it to execute; the CLR is trusting this code and hoping that the direct address and byte manipulations do not cause any harm. If the flag is not set, the JIT compiler throws either a System.InvalidProgramException or a System.Security.VerificationException, preventing the method from executing. In fact, the whole application will probably terminate at this point, but at least no harm can be done.

    The PEVerify.exe tool examines all of an assembly’s methods and notifies you of any methods that contain unverifiable code. When you use PEVerify to check an assembly, it must be able to locate and load all referenced assemblies. Because PEVerify uses the CLR to locate the dependent assemblies, the assemblies are located using the same binding and probing rules that would normally be used when executing the assembly.

    The NGen Tool

    The NGen.exe tool compiles IL into machine code when an application is installed on the user’s machine rather than at runtime, so it is interesting in two scenarios:

    • Improving an application startup time: The just-in time compilation is avoided because the code will already be compiled into native code and hence improve the startup time.
    • Reducing an application working set: The reason is because the NGen.exe tool compiles the IL to native code and saves the output in a separate file. This file can be memory mapped into multiple-process address spaces simultaneously, allowing the code to be shared;

    A setup program can invoke NGen.exe; a new assembly file containing only native code (no IL) is then created by NGen.exe. This new file is placed in a folder under a directory with a name like C:\Windows\Assembly\NativeImages_v4.0.#####_64. The directory name includes the version of the CLR and information denoting whether the native code is compiled for x86, x64 or Itanium.
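    For example (using a hypothetical assembly name; the exact output paths vary by machine), a setup program typically runs commands such as:

    ngen.exe install MyApp.exe      – compile MyApp.exe (and its dependencies) to native images
    ngen.exe update                 – regenerate any native images that have become invalid
    ngen.exe uninstall MyApp.exe    – remove the native images again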

    Whenever the CLR loads an assembly file, it checks whether a corresponding NGen’d native file exists. There are drawbacks to NGen’d files:

    • No intellectual property protection: at runtime, the CLR requires that the assemblies containing IL and metadata still be shipped. If the CLR can’t use the NGen’d file for some reason, it gracefully falls back to JIT-compiling the assembly’s IL code, which must therefore be available.
    • NGen’d files can get out of sync: when the CLR loads an NGen’d file, it compares a number of characteristics of the previously compiled code against the current execution environment. Here is a partial list of characteristics that must match:
    • – CLR version: this changes with patches or service packs.
    • – CPU type: this changes if you upgrade your processor hardware.
    • – Windows OS version: this changes with a new service pack update.
    • – Assembly’s identity module version ID (MVID): this changes when recompiling.
    • – Referenced assemblies’ version IDs: these change when you recompile a referenced assembly.
    • – Security: this changes when you revoke permissions such as SkipVerification or UnmanagedCode that were once granted.
    • Whenever an end user installs a new service pack of the .NET Framework, the service pack’s installation program runs NGen.exe in update mode automatically so that NGen’d files are kept in sync with the version of the CLR installed.
    • Inferior execution-time performance: NGen can’t make as many assumptions about the execution environment as the JIT compiler can, which causes NGen.exe to produce inferior code. Some NGen’d applications actually perform about 5% slower than their JIT-compiled counterparts. So if you’re considering using NGen.exe, you should compare NGen’d and non-NGen’d versions to be sure the NGen’d version doesn’t actually run slower. (The reduction in working-set size can improve performance enough that using NGen is still a net win.)
    • For server applications, NGen.exe makes little or no sense because only the first client request experiences a performance hit; future client requests run at high speed. In addition, for most server applications only one instance of the code is required, so there is no working-set benefit. Finally, NGen’d images cannot be shared across AppDomains, so there is no benefit to NGen’ing an assembly that will be used in a cross-AppDomain scenario.

    The Framework Class Library

    1. The Framework Class Library (FCL) is a set of DLL assemblies that contain several thousand type definitions, in which each type exposes some functionality.
    2. The following are the different kinds of applications that can be created/developed using the FCL:
    • Web services
    • Web Forms HTML-based applications (Web sites)
    • Rich Windows GUI applications
    • Rich Internet Applications (RIAs)
    • Windows console applications
    • Windows services
    • Database stored procedures
    • Component libraries

    Below are the General Framework Class Library namespaces

    Namespace                              Description of Contents

    1. System                              All of the basic types used by every application
    2. System.Data                         Types for communicating with databases and processing data
    3. System.IO                           Types for doing stream I/O and walking directories and files
    4. System.Net                          Types that allow low-level network communications
    5. System.Runtime.InteropServices      Types that allow managed code to access unmanaged OS platform facilities such as DCOM and Win32 functions
    6. System.Security                     Types used for protecting data and resources
    7. System.Text                         Types to work with text in different encodings
    8. System.Threading                    Types used for asynchronous operations and synchronizing access to resources
    9. System.Xml                          Types used for processing Extensible Markup Language (XML) schemas and data

    The Common Type System

    Types are at the root of the CLR, so Microsoft created a formal specification, the Common Type System (CTS), that describes how types are defined and how they behave. The CTS specification states that a type can contain zero or more members:

    • Field: a data variable that is part of the object’s state. Fields are identified by their name and type.
    • Method: a function that performs an operation on the object, often changing the object’s state. Methods have a name, a signature and modifiers.
    • Property: properties allow an implementer to validate input parameters and object state before accessing the value and/or to calculate a value only when necessary. They also give a user of the type a simplified syntax. Finally, properties allow you to create read-only or write-only fields.
    • Event: an event allows a notification mechanism between an object and other interested objects.

    The CTS also specifies the rules for type visibility and access to the members of a type. thus the CTS establishes the rules by which assemblies form a boundary of visibility for a type and the CLR enforces the visibility rules

    A type that is visible to a caller can further restrict the ability of the caller to access the type’s members. The following list shows the valid options for controlling access to a member:

    Private: the member is accessible only by other members in the same class type.

    Family: the member is accessible by derived types, regardless of whether they are within the same assembly. (C# calls this protected.)

    Family and assembly: the member is accessible by derived types, but only if the derived type is defined in the same assembly.

    Assembly: the member is accessible by any code in the same assembly. Many languages (such as C#) refer to assembly as internal.

    Family or assembly: the member is accessible by derived types in any assembly, as well as by any code in the same assembly. C# refers to family or assembly as protected internal.

    Public: the member is accessible by any code in any assembly.
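    A minimal C# sketch (SomeType and the field names are hypothetical) mapping these CTS access levels to C# keywords:

    public class SomeType
    {
        private int privateField;                 // CTS: private
        protected int familyField;                // CTS: family
        internal int assemblyField;               // CTS: assembly
        protected internal int familyOrAssembly;  // CTS: family or assembly
        public int publicField;                   // CTS: public
        // CTS "family and assembly" has no keyword in older C#; C# 7.2 later added "private protected".
    }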

    The CTS also defines the rules governing type inheritance, virtual methods, object lifetime and so on. The compiler maps the language-specific syntax into IL, the “language” of the CLR, when it emits the assembly during compilation. The CTS allows a type to derive from only one base class; to help the developer, Microsoft’s C++/CLI compiler, for example, reports an error if it detects that you are attempting to create managed code that includes a type deriving from multiple base types.

    All types must inherit from a predefined type: System.Object. This type is the root of all other types and therefore guarantees that every type instance has a minimum set of behaviours. Specifically, the System.Object type allows you to do the following (see the sketch after this list):

    – compare two instances for equality

    – Obtain a hash code for the instance

    – Query the true type of an instance

    – Perform a shallow copy of the instance

    – Obtain a string representation of the instance object’s current state.
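    A short sketch (the Point type is hypothetical) showing the System.Object members behind each behaviour:

    using System;

    public class Point
    {
        public int X, Y;
        public Point ShallowCopy() { return (Point)MemberwiseClone(); }       // shallow copy (protected System.Object method)
        public override string ToString() { return "(" + X + "," + Y + ")"; } // string representation of current state
    }

    public static class ObjectDemo
    {
        public static void Main()
        {
            Point p = new Point { X = 1, Y = 2 };
            Console.WriteLine(p.Equals(p));     // compare two instances for equality
            Console.WriteLine(p.GetHashCode()); // obtain a hash code for the instance
            Console.WriteLine(p.GetType());     // query the true runtime type
            Console.WriteLine(p);               // obtain the string representation: (1,2)
        }
    }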

    The Common Language Specification:

    Microsoft has defined a Common Language Specification (CLS) that details for compiler vendors the minimum set of features their compiler must support if these compilers are to generate types compatible with other components written by other CLS-compliant languages on top of the CLR.

    The CLS defines rules that externally visible types and methods must adhere to if they are to be accessible from any CLS-compliant programming language. Note that the CLS rules don’t apply to code that is accessible only within the defining assembly. Most languages, such as C#, Visual Basic and Fortran, expose a subset of the CLR/CTS features to the programmer, and the CLS defines the minimum set of features that all languages must support. A type shouldn’t take advantage of any features outside the CLS in its public and protected members; doing so would mean that the type’s members might not be accessible to programmers writing code in other programming languages.

    The [assembly: CLSCompliant(true)] attribute is applied to the assembly. This attribute tells the compiler to ensure that any publicly exposed type has no construct that would prevent the type from being accessed from another programming language. Types without an explicit access modifier default to internal and are therefore not exposed outside the assembly, so the CLS rules are not applied to them.
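    A minimal sketch (hypothetical type names) of how the compiler flags non-compliant public members once the attribute is applied:

    using System;

    [assembly: CLSCompliant(true)]

    public sealed class Calculator
    {
        public int Add(int a, int b) { return a + b; }  // fine: int is CLS-compliant
        public uint Total;        // compiler warning: uint in a public member is not CLS-compliant
    }

    internal sealed class Helper
    {
        public uint Counter;      // no warning: the type is internal, so the CLS rules don't apply
    }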

    The table below shows how programming language constructs map to the equivalent CLR fields and methods:

    Type Member        Member Type    Equivalent Programming Language Construct
    AnEvent            Field          Event; the name of the field is AnEvent and its type is System.EventHandler
    .ctor              Method         Constructor
    Finalize           Method         Finalizer (destructor)
    add_AnEvent        Method         Event add accessor method
    get_AProperty      Method         Property get accessor method
    get_Item           Method         Indexer get accessor method
    op_Addition        Method         + operator
    op_Equality        Method         == operator
    op_Inequality      Method         != operator
    remove_AnEvent     Method         Event remove accessor method
    set_AProperty      Method         Property set accessor method
    set_Item           Method         Indexer set accessor method

    Interoperability with Unmanaged Code: CLR supports 3 interoperability scenarios

    • – Managed code can call an unmanaged function in a DLL
    • – Managed code can use an existing COM component (server)
    • – Unmanaged code can use a managed type (server).

    C# 4.0 new Features.

    Dynamic Language Runtime

    Dynamic  Lookup

    dynamic keyword: the object’s type need not be known until runtime, and a member’s signature is not known until the call is executed. Typical uses:

    E.g. System.Reflection

    Programming against COM IDispatch

    Programming against XML or HTML DOM

    Dynamic Language Runtime (DLR) behaves more like Python or Ruby.

    Dynamic in C# is a type for e.g.

    Dynamic WildThings(dynamic  beast, string name)

    {

    Dynamic whatis = beast.Wildness(name);

    ..

    return whatsits;

    }

    dynamic is a statically declared type for the object: when an object is marked as dynamic, the compiler emits call-site metadata instead of binding the member at compile time, and the runtime then resolves the call, either dispatching it dynamically or throwing a runtime error if no matching member is found.

    dynamic != var

    The var keyword is used for type inference, and the compile-time check is still made.

    The dynamic keyword is used for an object whose type is unknown at compile time, and hence no compile-time check is made.
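    A small sketch contrasting the two (the Foo calls are deliberately invalid):

    var s = "hello";              // type inferred as string; members checked at compile time
    // s.Foo();                   // compile-time error: string has no Foo method

    dynamic d = "hello";          // static type is dynamic; member lookup deferred to runtime
    Console.WriteLine(d.Length);  // resolved at runtime, prints 5
    // d.Foo();                   // compiles, but fails at runtime with a binder exception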

    dynamic cannot be used to resolve extension methods.

    Dynamic method invocations cannot take anonymous methods (or lambda expressions) as parameters.

    dynamic heisenberg;

    void LocationObserver(float x, float t) {}

    heisenberg.Observer(LocationObserver);               // right way of making the call

    heisenberg.Observer(delegate (float y, float t){});  // wrong: anonymous method argument

    heisenberg.Observer((x,t) => x + t);                 // wrong: lambda argument

    dynamic objects cannot be used in LINQ queries, e.g.

    dynamic collection = new[] {1, 2, 4, 5, 6, 7, 8};

    var result = collection.Select(e => e.Size > 25);   // fails, because:

    1. Select is an extension method
    2. Selector is a lambda

    The Dynamic Language Runtime is invoked every time a dynamic call is executed.

    Efficiency is reduced only for the first execution of a call site, when the binding is resolved and cached; subsequent executions reuse the cached binding and run much like normal calls.

    The DLR is a normal assembly that is part of System.Core; dynamic objects implement IDispatch (for COM) or the IDynamicObject interface. Using dynamic XML we can now shorten the invocation, e.g. element.LastName instead of element.Attribute["LastName"].

    COM support in C# 4.0

    COM interop is a feature whereby COM interface methods are used to interact with automation objects such as Office automation. The ref keyword can now be omitted when calling COM interop and PIA methods.

    Previously, the publisher of the COM component released a Primary Interop Assembly (PIA) generated from the COM interface. With the latest release of C#, deploying the PIA is no longer required, because interop code is generated and embedded only for the COM interface methods that are actually used by the application.

    Named Parameters and  Optional Parameters

     

    Optional parameters set a default value for a parameter. Optional parameters were added for consistency in C# syntax; an optional parameter takes its default value if no argument is passed for it in the method invocation.

    static void Entrée(string name, decimal price = 10.0M, int servers = 1, bool vegan = false) { }

    static void Main()

    {

    Entrée("Linuine Prime", 10.25M, 2, true);  // overrides all default values

    Entrée("Lover", 11.5M, 2);                 // vegan takes its default

    Entrée("Spaghetti", 8.5M);                 // servers and vegan take their defaults

    Entrée("Baked Ziu");                       // price, servers and vegan take their defaults

    }

    Named parameters bind argument values to parameters by name, e.g. using Microsoft.Office.Tools.Word;

    Document doc;

    object fileName = "MyDoc.docx";

    object missing = System.Reflection.Missing.Value;

    doc.SaveAs(ref fileName, ref missing, ref missing, … ref embeddedTTFS, …);

    This can now be written as doc.SaveAs(FileName: ref fileName, embeddedTTFS: ref embedTTFS);

    The method invocation then contains only the parameters that are mentioned, and the other missing parameters take their default values.

    e.g. static void Thing(string color = "white", string texture = "smooth", string shape = "square", string emotion = "calm", int quantity = 1) { }

    public static void Things()

    {

    Thing("blue", "bumpy", "oval", "shaken", 17);

    Thing("blue", "bumpy", "oval", "shaken");

    Thing(texture: "Furry", shape: "triangular");

    Thing(emotion: "happy", quantity: 4);

    }

    Benefits: you no longer need to create overloads simply for the convenience of omitting parameters.

    Office automation and COM interop make heavy use of optional parameters.

    You no longer have to envy the VB language, which has had these features for a long time.

    Overload resolution follows the principle of least surprise when mapping a call to a method.

    Liabilities: optional and named parameters complicate overload resolution.

    Events in C# 4.0 

    Syntax for events :

    public event EventHandler<TickEventArgs> Tick;

    public void OnTick(TickEventArgs e) { Tick(this, e); }

    public class TickEventArgs : EventArgs
    {
    public string Symbol { get; private set; }
    public decimal Price { get; private set; }

    public TickEventArgs(string symbol, decimal price)
    {
    Symbol = symbol;
    Price = price;
    }
    }

    In C# 4.0, field-like events are implemented by the compiler using a lock-free compare-and-swap technique.

    Now Events works for static & instance types, events works for reference and value types.

    Covariance and ContraVariance:

    Covariance : Modifier out on a generic Interface or delegate e.g. IEnumerable<out T>

    The type parameter T can occur only in output positions; using it in an input position is a compile-time error. With covariance, an instance constructed with a more derived type argument can be used where a less derived type argument is expected:

    an enumeration of giraffes is also an enumeration of animals.

    Contravariance: Modifier in on a generic interface or delegate e.g. IComparable<in T>

    The type parameter T can occur only in input positions, and the compiler generates the contravariant conversions: an instance constructed with a less derived type argument can be used where a more derived type argument is expected.

    So variance can be used for comparison and enumeration of collections in type safe manner.
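    A short sketch of both conversions (Animal and Giraffe are hypothetical types; requires System.Collections.Generic):

    class Animal { }
    class Giraffe : Animal { }

    static void VarianceDemo()
    {
        // Covariance (out T): a sequence of a more derived type converts to a sequence of a less derived type.
        IEnumerable<Giraffe> giraffes = new List<Giraffe>();
        IEnumerable<Animal> animals = giraffes;

        // Contravariance (in T): a comparer of a less derived type can stand in for a comparer of a more derived type.
        IComparer<Animal> animalComparer = Comparer<Animal>.Default;
        IComparer<Giraffe> giraffeComparer = animalComparer;
    }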

    AutoProperties in C# 

    C# allows a developer to declare a property and have the corresponding accessor and mutator methods generated by the compiler by default, for e.g.

    public class Pt {

    public int X { get; set; }

    public int Y { get; set; }

    }

    The compiler generates the backing field, which is not directly accessible. This kind of property is known as an auto-implemented (auto) property.

    Implicitly typed local variables: these variables can occur

    1. inside a foreach statement
    2. in the initialization of a for statement
    3. in a using statement
    4. in a local variable declaration

    Object initializers specify values for fields and properties in a single statement.

    var p1 = new Point { X = 1, Y = 2 };

    var p2 = new Point(1) { Y = 2 };

    Collection Initializers:

    The class should implement IEnumerable and expose a public Add method (here taking one key parameter and one value parameter); then we can use collection initializers as follows:

    public class Dictionary<TKey, TValue> : IEnumerable
    {
    public void Add(TKey key, TValue value) {…}
    ….
    }

    var namedCircles = new Dictionary<string, Circle>
    {
    {"aa", new Circle { Origin = new Pt { X = 1, Y = 2 }, Radius = 2 }},
    {"ab", new Circle { Origin = new Pt { X = 2, Y = 5 }, Radius = 3 }}
    };

    Lambda in C#

    An anonymous method is an inline block of code used where a delegate is expected.

    A lambda expression is a functional, declarative syntax for writing an anonymous method; an expression lambda consists of a single expression.

    A lambda expression uses the "=>" operator, read as ‘goes to’.

    delegate int SomeDelegate(int i);

    SomeDelegate squareint = x => x * x;

    int j = squareint(5);               // 25
    (x,y) => x == y;                    // types inferred
    (int x, string s) => s.Length > x;  // types declared
    () => Console.WriteLine("Hi");      // no args

    Statement lambda, e.g.

    delegate void AnotherDelegate(string s);

    AnotherDelegate Hello = a => {

    string w = String.Format("Hello, {0}", a);

    Console.WriteLine(w);

    };

    Hello("world");   // prints: Hello, world

    Extension Methods :

    Extension methods are static methods that can be invoked using instance-method syntax. Extension methods are less discoverable than instance methods and offer less functionality (they cannot access the private members of the type they extend). An extension method is a static method whose first parameter carries the this modifier.

    Using Extension Methods

    • Must define inside non generic static class
    • Extension methods are still external static methods
    • Cannot hide, replace or override instance methods
    • Must import namespace for extension method.

    System.Linq defines extension methods for IEnumerable and IQueryable <T>
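    A minimal sketch of defining and importing one (StringExtensions and IsCapitalized are hypothetical names):

    using System;

    namespace MyExtensions
    {
        public static class StringExtensions            // must be a non-generic static class
        {
            // The 'this' modifier on the first parameter makes it an extension method.
            public static bool IsCapitalized(this string s)
            {
                return !string.IsNullOrEmpty(s) && char.IsUpper(s[0]);
            }
        }
    }

    After importing the namespace with using MyExtensions;, "Perth".IsCapitalized() returns true, and the same call can always be written in ordinary static form as StringExtensions.IsCapitalized("Perth").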

    Shrinking delegates using a lambda expression: Func<int, int> sqr = x => x * x;

    What if the entries are not in memory (for example, they live in a database)? Then the lambda expression must be represented as data rather than compiled code; for that we need the System.Linq.Expressions namespace.

    A lambda compiled as a delegate becomes opaque code; the alternative is Expression<TDelegate>, which captures the lambda as an expression tree that can be analysed at runtime.
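    A short sketch contrasting a compiled delegate with an expression tree (requires System and System.Linq.Expressions):

    Func<int, int> sqrDelegate = x => x * x;           // compiled to IL: opaque, executable code
    Expression<Func<int, int>> sqrExpr = x => x * x;   // compiled to an expression tree: data you can inspect

    Console.WriteLine(sqrDelegate(5));        // 25
    Console.WriteLine(sqrExpr.Body);          // (x * x) - the tree can be analysed at runtime
    Console.WriteLine(sqrExpr.Compile()(5));  // 25 - the tree can also be compiled back into a delegate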

    e.g.

    int[] digits = {0, 1, 2, 3, 4, 5, 6};

    int[] a = digits.Slice(4, 3).Double();

    is the same as the static (non-extension) syntax, i.e.

    int[] a = Extension.Double(Extension.Slice(digits, 4, 3));

    LINQ to XML

    Introduction: the W3C-compliant DOM (a.k.a. XmlDocument) together with XmlReader & XmlWriter lives in System.Xml, while the LINQ to XML types live in the System.Xml.Linq namespace.

    What is a DOM: declarations, elements, attribute values and text content can each be represented with a class, and this tree of objects fully describes a document. This is called a document object model, or DOM.

    The LINQ to XML DOM: XDocument, XElement and XAttribute. The X-DOM is LINQ-friendly: it has methods that emit useful IEnumerable sequences upon which you can query, and its constructors are designed so that you can build an X-DOM tree through a LINQ projection.

    XDOM Overview:

    Types of Elements

    XElement

    XObject is the root of the inheritance hierarchy.

    XElement & XDocument are the roots of the containership hierarchy.

    XObject is the abstract base class of all nodes and attributes.

    XNode is the base class for all nodes (it excludes attributes); a node’s children form an ordered collection of mixed node types.

    <data>
    Helloworld          → XText
    <subelement1/>      → XElement
    <!-- comment -->    → XComment
    <subelement2/>      → XElement
    </data>

    XContainer is the abstract base class of XElement and XDocument.

    XDocument is the root of an XML tree; it wraps the root XElement, adding an XDeclaration.

    Loading and Parsing: XElement and XDocument provide Load and Parse methods to build an X-DOM tree from an existing source.

    –          Load builds an X-DOM from a file, URI, Stream, TextReader or XmlReader

    –          Parse builds an X-DOM from a string

    –          An XNode is created with XNode.ReadFrom() from an XmlReader

    –          XmlReader/XmlWriter can read or write an XNode via CreateReader() or CreateWriter()

    Saving and Serializing: Saving and Serializing of XMLDom is done using the save method from file or stream using TextWriter/XMLWriter

    Instantiating an X-DOM using the Add method of XContainer, for e.g.

    XElement lastName = new XElement("lastName", "Bloggs");

    lastName.Add(new XComment("nice name"));

    Functional Construction: XDOM supports Functional Construction (it is a mode of instantiation), you build an entire tree in a single expression.

    Automatic Deep Cloning: when a node that already has a parent is added to a second parent, the node is automatically deep-cloned into the new parent. This automatic duplication keeps X-DOM object instantiation free of side effects.
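    A small sketch (hypothetical element names; requires System.Xml.Linq) showing the automatic copy:

    var child = new XElement("child", "data");
    var parent1 = new XElement("parent1", child);   // child now has a parent
    var parent2 = new XElement("parent2", child);   // child is deep-cloned into parent2, not moved

    // Each parent holds its own copy of <child>data</child>
    Console.WriteLine(parent1.Element("child") != parent2.Element("child"));   // True: different instances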

    Navigating and Querying:

    XDOM returns single value or sequence implementing IEnumerable when a LINQ query is executed.

    FirstNode, LastNode returns first child and last child

    Nodes () returns all children, Elements () return child nodes of XElement type

    SelectMany Query

    Elements() is also available as an extension method on sequences of containers (IEnumerable<XContainer>), returning the child elements of each item.

    Element() is the same as Elements().FirstOrDefault().

    Recursive functions: Descendants/DescendantNodes return all child elements/nodes recursively.

    Parent Navigation: every XNode has a Parent property and AncestorXXX methods. A Parent is always an XElement; to reach the XDocument we use the Document property. The Ancestors() method returns an XElement collection whose first element is the Parent.

    XElement customer =

    new XElement("Customer",

    new XAttribute("id", 12),

    new XElement("firstname", "joe"),

    new XElement("lastname", "Bloggs"),

    new XComment("nice name")

    );

    Advantage of Functional Construction is

    –          Code resembles the shape of the XML.

    –          It can be incorporated into the select clause of the LINQ query.

    Specific Content: the XElement constructor is overloaded to take a params object array: public XElement(XName name, params object[] content). The XContainer decides how to handle each object passed in as content.


    Attribute Navigation: XAttribute define PreviousAttribute () and NextAttribute ().

    Updating an XDOM:

    Most convenient methods to update elements and attributes are as follows

    SetValue or reassign the value property

    SetElementValues /SetAttributeValue

    RemoveXXX

    AddXXX/ReplaceXXX

    Add -> appends a child node

    AddFirst -> adds at the beginning of the collection

    RemoveAll -> RemoveAttributes() + RemoveNodes()

    ReplaceXXX -> removing and then adding

    AddBeforeSelf, AddAfterSelf, Remove and ReplaceWith operate relative to the current node; Remove() is also available as an extension method on collections of nodes.

    Remove() -> removes the current node from its parent

    ReplaceWith -> removes the node and then inserts other content at the same position

    E.g. remove all contacts that feature the comment "confidential" anywhere in their tree:

    contacts.Elements().Where(e => e.DescendantNodes()
    .OfType<XComment>()
    .Any(c => c.Value == "confidential")).Remove();

    Internally, Remove() copies the matched nodes to a temporary list, then enumerates the temporary list to perform the deletions; this avoids errors that would arise from deleting and querying at the same time.

    XElement.Value returns the text content of the node.

    Setting Values: SetValue or assign the value property it accepts any simple data types

    Explicit casts on XElement & XAttribute

    All standard numeric types

    String, bool, DateTime, DateTimeOffset, TimeSpan & Guid Nullable<> versions of the aforementioned value types

    Casting to a nullable int avoids a NullReferenceException or add a predicate to the where clause

    For e.g. where cust.Attributes(“Credit”).Any() && (int)cust.Attribute
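    A hedged sketch (assuming xdoc is an XDocument of customers) of the nullable-cast approach:

    // If a <customer> lacks a Credit attribute, casting to int would throw; casting to int? yields null,
    // and a comparison with null is simply false, so such elements are filtered out.
    var risky = from cust in xdoc.Descendants("customer")
                where (int?)cust.Attribute("Credit") > 100
                select cust;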

    Automatic XText Concatenation: if you explicitly create XText nodes, you can end up with multiple text children:

    var e = new XElement("test", new XText("Hello"), new XText("World"));

    e.Value            // HelloWorld

    e.Nodes().Count()  // 2

    XDocument: It wraps a root XElement and adds XDeclaration, It is based on XContainer and it supports AddXXX, RemoveXXX & replaceXXX.

    XDocument can accept only limited content

    -a single XElement object (the ‘root’)

    -a single XDeclaration

    – a single XDocumentType object

    – Any number of XProcessing Instruction

    – Any number of XComment objects

    Simplest valid XDocument has just a root element

    var doc = new XDocument(new XElement("test", "data"));

    XDeclaration is not an XNode and does not appear in document Nodes collection.

    XElement & XDocument follow the below rules in emitting xml declarations:

    –          Calling save with a filename always writes a declaration

    –          Calling Save with an XmlWriter writes a declaration unless the XmlWriter is instructed otherwise

    –          The ToString() method never emits the XML declaration

    To produce XML without a declaration, create the XmlWriter with XmlWriterSettings whose OmitXmlDeclaration and ConformanceLevel properties are set accordingly.

    The purpose of XDeclaration is

    What text encoding to use

    What to put in the XML declaration encoding /standalone attributes.

    XDeclaration Constructors parameters are

    1. Version
    2. Encoding
    3. Standalone

    var doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), new XElement("test", "data"));

    File.WriteAllText encodes using UTF-8.

    Namespaces in XML: a Customer element in the namespace OReilly.Nutshell.CSharp is defined as

    <customer xmlns="OReilly.Nutshell.CSharp"/>

    Attributes: namespaces can be assigned to attributes as well, e.g.

    <customer xmlns:xsi="http://www.w3c.org/2007/XMLSchema-instance">

    <firstname>Joe</firstname>

    <lastname xsi:nil="true"/>

    </customer>

    The xsi:nil attribute unambiguously informs us that lastname is nil.

    Specifying Namespace in the X-DOM

    1. Var e = new XElement(“{http://domain.com/xmlpsace}customer”,”Bloggs”);
    2. Use the XNamespace and XName types

    public sealed class XNamespace
    {
    public string NamespaceName { get; }
    }

    public sealed class XName
    {
    public string LocalName { get; }
    public XNamespace Namespace { get; }
    }

    Both types define implicit casts from string, so the following is legal,

    XNamespace ns = “http://domain.com/xmlspace”;

    XName localName = “customer”;

    XName fullName = "{http://domain.com/xmlspace}customer";

    XNamespace overloads the + operator, so ns + "customer" yields a full XName.

    When constructing an XElement, the namespace must be given explicitly in every element name; child elements do not inherit it from the parent.

    XNamespace ns=”http://domain.com/xmlspace”;

    var data = new XElement (ns+”data”, newXElement(ns+”customer”,”Bloggs”), new

    XElement (ns+”purchase”, “Bicycle”));

    Output:

    <data xmlns="http://domain.com/xmlspace">

    <customer>Bloggs</customer>

    <purchase>Bicycle</purchase>

    </data>

    For nil attribute we write it as <dos xsi_nil=”true”/>

    Annotations: annotations are intended for your own private use and are treated as black boxes by the X-DOM. The following XObject members add and remove annotations:

    public void AddAnnotation(object annotation)

    public void RemoveAnnotations<T>() where T : class

    The Annotation<T>() and Annotations<T>() methods retrieve a single annotation or a sequence of matching annotations.
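    A small sketch of the annotation members on XObject (the string annotation here is just an arbitrary example):

    var elem = new XElement("customer");
    elem.AddAnnotation("loaded from cache");           // attach any private object to the node
    string note = elem.Annotation<string>();           // retrieve the first annotation of that type
    foreach (string s in elem.Annotations<string>())   // or retrieve all matching annotations
        Console.WriteLine(s);
    elem.RemoveAnnotations<string>();                  // remove them again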

    Projecting into an X-DOM: the source can be anything over which LINQ can query, such as

    – LINQ to SQL or Entity Framework queries

    – Local collections

    – Another X-DOM

    Regardless of the source, the strategy is the same when using LINQ to emit an X-DOM.

    For e.g. retrieve customers from a db into XML

    <customers>
    <customer id='1'>
    <name>sue</name>
    <buys>3</buys>
    </customer>
    </customers>

    We start by writing a functional construction expression for the X-DOM

    Var customers = new XElement (“customers”, new XElement (“customer”, new XAttribute (“id”, 1), new XElement (“name”,”sue”), new XElement (“buys”, 3)));

    We then turn this into a projection and build a LINQ query around it.

    var customers = new XElement("customers",
    from c in dataContext.Customers
    select new XElement("customer",
    new XAttribute("id", c.ID),
    new XElement("name", c.Name),
    new XElement("buys", c.Purchases.Count)));

    IQueryable<T> is the interface used when enumerating the query executes a database query (SQL statement). XStreamingElement is a cut-down version of XElement that applies deferred-loading semantics to its child content. The queries passed into an XStreamingElement constructor are not enumerated until you call Save, ToString or WriteTo on the element; this avoids loading the whole X-DOM into memory at once.

    XStreamingElement doesn’t expose methods such as Elements or Attributes, and it is not based on XObject.
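    A minimal sketch (a hypothetical numeric source; requires System.Linq and System.Xml.Linq) of deferred writing with XStreamingElement:

    var numbers = Enumerable.Range(1, 1000000);
    var streamed = new XStreamingElement("numbers",
        from n in numbers
        select new XStreamingElement("number", n));

    // The query is not enumerated until Save/ToString/WriteTo is called,
    // so the whole tree never exists in memory at once.
    streamed.Save("numbers.xml");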

    Concat operator preserves order so all elements/ nodes are arranged alphabetically.

    The System.Xml.* namespaces:

    System.Xml
    • XmlReader & XmlWriter
    • XmlDocument

    System.Xml.XPath
    • XPathNavigator – information and API

    System.Xml.Schema

    System.Xml.Serialization

    System.Xml.Linq – the LINQ-centric version of XmlDocument

    XmlConvert – a static class for parsing and formatting XML strings

    XmlReader is a high-performance class for reading an XML stream in a low-level, forward-only manner.

    XmlReader is instantiated using the Create method:

    XmlReader rdr = XmlReader.Create(new System.IO.StringReader(myString));

    An XmlReaderSettings object is used to control parsing and validation options:

    XmlReaderSettings settings = new XmlReaderSettings();
    settings.IgnoreWhitespace = true;
    settings.IgnoreProcessingInstructions = true;
    settings.IgnoreComments = true;

    using (XmlReader reader = XmlReader.Create("customer.xml", settings)) { … }

    Set XmlReaderSettings.CloseInput to true to close the underlying stream when the reader is closed (the defaults for CloseInput and CloseOutput are false).

    The units of an XML stream are XML nodes; the reader traverses the stream in depth-first order. The Depth property returns the current depth of the cursor.

    The most primitive way to read is Read(); the first call positions the cursor at the first node.

    When Read() returns false, the cursor has moved past the last node. Attributes are not included in a Read-based traversal.

    NodeType is of type XmlNodeType, an enum with the following members:

    None, XmlDeclaration, DocumentType, Document, DocumentFragment, Element, EndElement, Text, Attribute, Comment, Entity, EndEntity, EntityReference, Notation, ProcessingInstruction, Whitespace, SignificantWhitespace, CDATA

    String properties of Reader: Name & Value.

    switch (r.NodeType)
    {
    .
    .
    case XmlNodeType.XmlDeclaration: Console.WriteLine(r.Value);
    break;
    case XmlNodeType.DocumentType: Console.WriteLine(r.Name + "-" + r.Value);
    break;
    }

    An entity is like a macro; a CDATA is like a verbatim string(@”…”) in C#.

    Reading Elements: XmlReader provides a few higher-level methods for reading an XML document and throws an XmlException if validation fails; XmlException has LineNumber and LinePosition properties.

    ReadStartElement() verifies that the current node is a start element and then calls Read.

    ReadEndElement() verifies that the current node is an end element and then calls Read.

    reader.ReadStartElement("firstName");

    Console.WriteLine(reader.Value);

    reader.ReadEndElement();

    ReadElementContentAsString reads a start element, a text node and an end element, returning the content as a string.

    Similarly, ReadElementContentAsInt returns the element content as an int.

    MoveToContent() skips over all the fluff: XMLdeclarations  whitespace, comments and processing instructions.

    <customer/> -> ReadEndElement throws exception because there is no end element for xml reader.

    The workaround for the above scenario is

    bool isEmpty = reader.IsEmptyElement;

    reader.ReadStartElement("customerList");

    if (!isEmpty) reader.ReadEndElement();

    The ReadElementXXX() handles both kinds of empty elements.

    ReadContentAsXXX parses a text node into type XXX using the XMLConvert class.

    ReadElementContentAsXXX apply to element nodes rather than text node enclosed by the element.

    ReadInnerXML returns an element and all its descendants, when used for attribute returns the value of the attribute.

    ReadOuterXML includes the element at the cursor position and all its descendants

    ReadSubtree is a proxy reader that provides a view over just the current element.

    ReadToDescendant moves the cursor to the first descendant

    ReadToFollowing moves the cursor to the start of the first node

    ReadToNextSibiling moves the cursor to the start of the first sibling node with the specified name/namespace.

    ReadString and ReadElementString same as ReadContentAsString except these methods throw an exception if there’s more than a single text node with the element or comment.

    To make attribute traversal easy, the forward-only rule is relaxed within the attribute collection: you can jump to any attribute by calling MoveToAttribute().

    MoveToElement() returns to the start element from anywhere within the attribute-node diversion.

    reader.MoveToAttribute("XXX") returns false if the specified attribute doesn’t exist.
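    A short sketch of the attribute diversion (the attribute names are hypothetical):

    if (reader.MoveToAttribute("id"))          // jump directly to a named attribute
        Console.WriteLine(reader.Value);

    while (reader.MoveToNextAttribute())       // or walk every attribute in turn
        Console.WriteLine(reader.Name + " = " + reader.Value);

    reader.MoveToElement();                    // return to the owning start element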

    Namespaces and Prefixes:

    XmlReader provides two parallel naming systems:

    – Name

    – NamespaceURI and LocalName

    For <c:customer …> the Name property returns c:customer,

    so reader.ReadStartElement("c:customer") must include the prefix.

    The second system uses the two namespace-aware properties, NamespaceURI and LocalName.

    Using XmlReader with XElement, e.g.

    using (XmlReader r = XmlReader.Create("logfile.xml", settings))
    {
    r.ReadStartElement("log");
    while (r.Name == "logentry")
    {
    XElement logEntry = (XElement)XNode.ReadFrom(r);
    int id = (int)logEntry.Attribute("id");
    DateTime dt = (DateTime)logEntry.Element("date");
    string source = (string)logEntry.Element("source");
    }
    r.ReadEndElement();
    }

    By implementing the pattern shown above, you can slot an XElement into a custom type’s ReadXml or WriteXml method without the caller ever knowing you’ve cheated. XElement collaborates with XmlReader to ensure that namespaces are kept intact and prefixes are properly expanded. XmlWriter can similarly be used with XElement to write inner elements to an XmlWriter. The following code writes one million logentry elements to an XML file using XElement, without storing the whole tree in memory:

    using (XmlWriter w = XmlWriter.Create("log.xml"))
    {
    w.WriteStartElement("log");
    for (int i = 0; i < 1000000; i++)
    {
    XElement e = new XElement("logentry", new XAttribute("id", i), new XElement("source", "test"));
    e.WriteTo(w);
    }
    w.WriteEndElement();
    }

    Using XElement incurs minimal execution overhead.

    XMLDocument: It is an in memory representation of an XML document, Its object model and methods conform to a pattern defined by the W3C.

    The base type for all objects in an XMLDocument tree is XmlNode. The following types derive from XmlNode:

    XmlNode:

    XmlDocument

    XmlDocumentfragment

    XmlEntity

    XmlNotation

    XmlLinkedNode -> exposes NextSibling and PreviousSibling.

    XmlLinkedNode is an abstract base class for the following subtypes:

    XmlCharacterData

    XmlDeclaration

    XmlDocumentType

    XmlElement

    XmlEntityReference

    XmlProcessingInstruction

    Loading and Saving the XmlDocument: instantiate an XmlDocument and invoke Load() or LoadXml():

    –          Load accepts a filename, Stream, TextReader or XmlReader

    –          LoadXml accepts a literal XML string.

    e.g. XmlDocument doc = new XmlDocument();

    doc.Load(“customer1.xml”);

    doc.Save(“customer2.xml”);

    using ParentNode property, you can ascend backup the tree,

    Console.WriteLine (doc.DocumentElement.ChildNodes [1].ParentNode.Name);

    The following properties also help traverse the document

    FirstChild LastChild NextSibling PreviousSibling

    XmlNode express an attributes property for accessing attributes either by name or by ordinal position.

    Console.WriteLine (doc.DocumentElement.Attributes[“id”].Value);

    InnerText property represents the concatenation of all child text nodes

    Console.WriteLine (doc.DocumentElement.ChildNodes[1].ParentNode.InnerText);

    Console.WriteLine (doc.DocumentElement.ChildNodes[1].FirstChild.Value);

    Setting the InnerText property replaces all child nodes with a single text node for e.g.

    Wrong way => doc.DocumentElement.ChildNodes[0].InnerText = "Jo";   (replaces that element’s children with a single text node)

    Right way => doc.DocumentElement.ChildNodes[0].FirstChild.InnerText = "Jo";

    InnerXML property represents the XML fragment within the current node. Console.WriteLine (doc.DocumentElement.InnerXML);

    Output <firstname>Jim</firstname><lastname>Bo</lastname>

    InnerXML throws an exception if the node type cannot have children

    Creating and Manipulating Nodes

    1. Call one of the CreateXXX methods on XMLDocument.
    2. Add the new node into tree by calling AppendChild, prependChild, InsertBefore or InsertAfter on the desired parent node.

    To remove a node, you invoke RemoveChild, ReplaceChild or RemoveAll
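    A small sketch of the create-then-attach pattern (element and attribute names are hypothetical):

    XmlDocument doc = new XmlDocument();
    doc.LoadXml("<customers/>");

    XmlElement customer = doc.CreateElement("customer");    // step 1: create the node
    customer.SetAttribute("id", "1");
    customer.InnerText = "Jim";

    doc.DocumentElement.AppendChild(customer);   // step 2: attach it to the desired parent
    doc.DocumentElement.RemoveChild(customer);   // removal works through the parent as well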

    Namespaces: CreateElement & CreateAttribute () are overloaded to let you specify a namespace and prefix

    CreateXXX(string name);

    CreateXXX(string name, string namespaceURI);

    CreateXXX(string prefix, string localName, string namespaceURI)

    E.g. XmlElement customer = doc.CreateElement(“o”,”customer”,”http://oreilly.com”);

    XPath : Both DOM and the XPath DataModel represents an XMLDocument as a tree.

    XPath Data Model is purely data centric, abstracting away the formatting aspects of XMLText.

    For e.g. CDATA sections are not required in the XPath Data Model

    Given an XML document, you can run XPath queries in code in the following ways:

    –          Call one of the SelectXXX methods on an XmlDocument or XmlNode

    –          Spawn an XPathNavigator from either

    • an XmlDocument
    • an XPathDocument

    –          Call an XPathXXX extension method on an XNode.

    The SelectXXX methods accept an XPath query string

    XmlNode n = doc.SelectSingleNode("customers/customer[firstname='Jim']");

    Console.WriteLine(n.InnerText); // JimBo

    The SelectXXX methods delegate their implementation to XPathNavigator, which can also be used directly over an XmlDocument or a read-only XPathDocument.

    XElement e = xdoc.XPathSelectElement("customers/customer[firstname='Jim']");

    The extension methods usable with XNodes are CreateNavigator(), XPathEvaluate(), XPathSelectElement() and XPathSelectElements().

    Common XPath Operators are as follows

    Operator      Description

    /             Children

    //            Recursive children

    .             Current node

    ..            Parent node

    *             Wildcard

    @             Attribute

    []            Filter

    :             Namespace separator
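    A few hedged examples of these operators against a hypothetical customers document loaded into doc:

    XmlNodeList all    = doc.SelectNodes("customers/customer");        // /  : children
    XmlNodeList names  = doc.SelectNodes("//firstname");               // // : recursive children
    XmlNode     byId   = doc.SelectSingleNode("//customer[@id='1']");  // @ and [] : attribute + filter
    XmlNodeList anyTag = doc.SelectNodes("customers/*");               // *  : wildcard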

    XPathNavigator: a cursor over the XPath data model representation of an XML document. It is loaded with primitive methods that move the cursor around the tree.

    The XPathNavigator Select* methods take XPath strings/queries and return more complex navigations or multiple nodes.

    E.g. XPathNavigator nav = doc.CreateNavigator();

    XPathNavigator jim = nav.SelectSingleNode(“customers/customer[firstname=’Jim’]”);

    Console.WriteLine (jim.Value);

    The SelectSingleNode method returns a single XPathNavigator; the Select method returns an XPathNodeIterator, which iterates over multiple XPathNavigators.

    XPathNavigator nav = doc.CreateNavigator();

    string xPath = "customers/customer/firstname/text()";

    foreach (XPathNavigator node in nav.Select(xPath))

    Console.WriteLine(node.Value);

    For faster queries, compile the XPath into an XPathExpression and then pass it to a Select* method:

    XPathNavigator nav = doc.CreateNavigator();

    XPathExpression expr = nav.Compile("customers/customer/firstname");

    foreach (XPathNavigator a in nav.Select(expr))

    Console.WriteLine(a.Value);

    Output: Jim Thomas.

    Querying with Namespace:

    XmlDocument doc = new XmlDocument();

    doc.Load("customers.xml");

    XmlNamespaceManager xnm = new XmlNamespaceManager(doc.NameTable);

    We can add prefix/namespace pairs to it as follows:

    xnm.AddNamespace("o", "http://oreilly.com");

    The Select* methods on XmlDocument and XPathNavigator have overloads that accept an XmlNamespaceManager:

    XmlNode n = doc.SelectSingleNode("o:customers/o:customer", xnm);

    XPathDocument: An XPathNavigator backed by an XPathDocument is faster than an XmlDocument but it cannot make changes to the underlying document:

    XPathDocument doc = new XPathDocument("customers.xml");

    XPathNavigator nav = doc.CreateNavigator();

    foreach (XPathNavigator a in nav.Select("customers/customer/firstname"))

    Console.WriteLine(a.Value);

    XSD and Schema Validation: for each domain, XML files usually conform to a pattern (schema) so that interpretation and validation of the documents can be standardized and automated. The most widely used schema language is XSD (XML Schema Definition), which is supported in System.Xml.

    Performing Schema Validation: You can validate an XML file on one or more schemas before processing it. The validation is done for following reasons

    –          You can get away with less error checking and exception handling.

    –          Schema validation picks up errors you might otherwise overlook

    –          Error messages are detailed and informative.

    When an XmlReader is created with settings that include a schema, validation happens automatically as the document is read:

    settings.ValidationType = ValidationType.Schema;
    settings.Schemas.Add(null, "customers.xsd");

    using (XmlReader r = XmlReader.Create("customers.xml", settings)) { … }

    settings.ValidationFlags |= XmlSchemaValidationFlags.ProcessInlineSchema;

    If schema validation fails, an XmlSchemaValidationException is thrown, e.g.

    try {
    while (r.Read());
    } catch (XmlSchemaValidationException ex)
    {
    }

    If you want to report all errors in the document, you must handle the ValidationEventHandler event:

    settings.ValidationEventHandler += ValidationHandler;

    static void ValidationHandler(object sender, ValidationEventArgs e)
    {
    Console.WriteLine("Error: " + e.Exception.Message);
    }

    The Exception property of ValidationEventArgs contains the XmlSchemaValidationException that would otherwise have been thrown. You can also validate an XDocument or XElement that is already in memory by calling the extension methods in System.Xml.Schema; these accept an XmlSchemaSet and a validation handler.

    e.g.

    XmlSchemaSet set = new XmlSchemaSet();

    set.Add(null, @"customers.xsd");

    doc.Validate(set, (sender, args) => { errors.AppendLine(args.Exception.Message); });

    LINQ Queries:

    LINQ is a set of language and framework features for constructing type-safe queries over in-memory collections and remote data sources. It enables querying of any collection implementing IEnumerable<T>. LINQ offers both compile-time and run-time error checking.

    The basic units of data in LINQ are sequences and elements. A sequence is any object that implements IEnumerable<T> and an element is each item in the sequence.

    Query operators are methods that transform/project a sequence. In the Enumerable class in System.Linq there are around 40 query operators which are implemented as extension methods. These are called standard query operators.

    Query operators over in-memory local objects are known as LINQ-to-Objects queries. LINQ also support sequence implementing IQueryable<T> interface and supported by standard query operators in Queryable class.

    A query is an expression that transforms sequence with query operators e.g.

    string[] names = {"Tom", "Dick", "Harry"};

    IEnumerable<string> filteredNames = names.Where(n => n.Length >= 4);

    foreach (string name in filteredNames)

    Console.WriteLine(name);

    Most query operators accept a lambda expression as an argument. Here is the signature of the Where query operator:

    public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

    C# also provides another syntax for writing queries, called query expression syntax: IEnumerable<string> filteredNames = from n in names where n.Contains("a") select n;

    Chaining Query Operators: to build more complex queries, you append additional query operators to the expression, creating a chain. E.g. IEnumerable<string> query = names.Where(n => n.Contains("a"))

    .OrderBy(n => n.Length)

    .Select(n => n.ToUpper());

    Where, OrderBy and Select are standard query operators that resolve to extension methods in the Enumerable class.

    Where operator: emits a filtered version of the input sequence.

    OrderBy operator: emits a sorted version of the input sequence.

    Select operator: emits a sequence in which each input element is transformed or projected with a given lambda expression.

    The following are the signatures of above 3 operators

    public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

    public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)

    public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector);

    Without extension methods, the query loses its fluency, as shown below:

    IEnumerable<string> query = Enumerable.Select(Enumerable.OrderBy(Enumerable.Where(names, n => n.Contains("a")), n => n.Length), n => n.ToUpper());

    Whereas with extension methods we get a natural linear shape that reflects the left-to-right flow of data and keeps each lambda expression alongside its query operator:

    IEnumerable<string> query = names.Where(n => n.Contains("a")).OrderBy(n => n.Length).Select(n => n.ToUpper());

    The purpose of the lambda expression depends on the particular query operator. A lambda expression returning a bool value is called a predicate. A lambda expression in a query operator always works on individual elements of the input sequence, not on the sequence as a whole.

    Lambda expressions and Func signatures: the standard query operators utilize generic Func delegates. Func is a family of general-purpose generic delegates (defined in the System namespace) with the following intent: the type arguments in Func appear in the same order as they do in the lambda expression. Hence Func<TSource, bool> matches a TSource => bool lambda, and Func<TSource, TResult> matches a TSource => TResult lambda.

    The standard query operators use the following generic type names

    TSource                ElementType for the input sequence

    TResult                 ElementType for the output sequence if different from TSource.

    TKey                      ElementType for the key used in sorting grouping or joining.

    TSource is determined by the input sequence; TResult and TKey are inferred from your lambda expressions. Func<TSource, TResult> corresponds to a TSource => TResult lambda expression. Because TSource and TResult can be different types, the lambda expression can change the type of each element, and the lambda expression determines the output sequence type.

    The where query operator is simpler and requires no type inference for the output because the operator merely filters elements it does not transform them.

    The OrderBy query operator takes a key selector of type Func<TSource, TKey>, which maps an input element to a sorting key; TKey is inferred from the lambda expression and is separate from the input and output element types.

    Query operators in the Enumerable class take delegates (Func) and execute them directly, whereas query operators in the Queryable class take Expression<Func<…>> parameters and build expression trees.

    Natural Ordering: the original ordering of elements in input sequence is important  in LINQ. Operators such as Where and Select preserve the original ordering of the input sequence. LINQ preserves the ordering wherever possible.

    Some of the operators which do not return sequence are as follows

    int[] numbers = {10, 9, 8, 7, 6};

    int firstNumber = numbers.First();

    int lastNumber = numbers.Last();

    int secondNumber = numbers.ElementAt(1);

    int lowestNumber = numbers.OrderBy(n => n).First();

    The aggregation operators return a scalar value

    int count = numbers.Count();

    int min = numbers.Min();

    The quantifiers return a bool value:

    bool hasTheNumberNine = numbers.Contains(9);

    bool hasMoreThanZeroElements = numbers.Any();

    bool hasAnOddElement = numbers.Any(n => n % 2 == 1);

    Some query operators accept two input sequence for e.g.

    int[] seq1 = {1, 2, 3}; int[] seq2 = {3, 4, 5};

    IEnumerable<int> concat = seq1.Concat(seq2);

    IEnumerable<int> union = seq1.Union(seq2);

    C# provides a syntactic shortcut for writing LINQ queries called query expressions. A query expression always starts with a from clause and ends with either a select or a group clause. The from clause declares a range variable used to traverse the input sequence.

    e.g. IEnumerable<string> query = from n in names where n.Contains("a") orderby n.Length select n.ToUpper();

    Range Variables: the identifier immediately following the from keyword is called the range variable; it refers to the current element in the sequence.

    Query expressions also let you introduce new range variables via the following clauses: let, into, and an additional from clause.

    Query Syntax vs Fluent Syntax

    Query syntax is simpler for queries that involve any of the following

    1. A let clause for introducing a new variable alongside the range variable.
    2. SelectMany, Join or GroupJoin, followed by an outer range variable reference.

    Finally, there are many operators that have no keyword in query syntax; these require fluent syntax. This means any operator outside of the following: Where, Select, SelectMany, OrderBy, ThenBy, OrderByDescending, ThenByDescending, GroupBy, Join and GroupJoin.

    Mixed Syntax Queries: If a query operator has no query syntax support you can mix query syntax and fluent syntax. The only constraint is that each query syntax component must be complete.

    Deferred Execution: An important feature of most query operators is that they execute not when constructed but when enumerated.

    e.g. IEnumerable<int> query = numbers.Select(n => n * 10);

    foreach (int n in query)

    Console.Write(n + "/");   // 10/20/

    All standard query operators provide deferred execution with the following exceptions:

    –          Operators that return a single element or scalar value such as First or Count

    –          The following conversion operators toArray, ToList, ToDictionary, ToLookup cause immediate query execution because their result type have no mechanism for providing deferred execution.

    Deferred Execution is important because its decouples query construction from query execution. This allows you to construct a query in several steps as well as making database queries possible.

    A deferred execution query is reevaluated when you re-enumerate:

    IEnumerable<int> query = numbers.Select(n => n * 10);

    foreach (int n in query) Console.Write(n + "/");   // output: 10/20/

    numbers.Clear();

    foreach (int n in query) Console.Write(n + "/");   // output: nothing

    There are a couple of disadvantages:

    Sometimes you want to freeze or cache the results at a certain point in time.

    Some queries are computationally intensive so you don’t want to unnecessarily repeat them.
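    A brief sketch (reusing the numbers list from the earlier example) of freezing a deferred query with a conversion operator:

    IEnumerable<int> deferred = numbers.Select(n => n * 10);   // re-evaluated on every enumeration
    List<int> frozen = numbers.Select(n => n * 10).ToList();   // executed once; results cached in a list

    numbers.Clear();
    // 'deferred' now yields nothing, but 'frozen' still holds the values computed earlier.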

    A query’s captured variables: if a query’s lambda expressions reference local variables, those variables are subject to captured-variable semantics; this means that if you later change their value, the query changes as well.

    int[] numbers = {1, 2};

    int factor = 10;

    IEnumerable<int> query = numbers.Select(n => n * factor);

    factor = 20;

    foreach (int n in query) Console.Write(n + "|");   // 20|40|

    A decorator sequence has no backing structure of its own to store elements. Instead it wraps another sequence that you supply at runtime to which it maintains a permanent dependency. Whenever you request data from a decorator, it in turn must request data from the wrapped input sequence.

    Hence when you call an operator such as Select or Where, you are doing nothing more than instantiating an enumerable class that decorates the input sequence.

    Chaining query operators creates a layer of decorators; when you enumerate the query, you are querying the original array, transformed through the layering (chain) of decorators.

    Subqueries: a subquery is a query contained within another query’s lambda expression. E.g. string[] musos = {"David", "Roger", "Rick"}; IEnumerable<string> query = musos.OrderBy(m => m.Split().Last());

    m.Split() converts each string into a collection of words, upon which we then call the Last query operator. m.Split().Last() is the subquery; query references the outer query.

    Subqueries are permitted because you can put any valid C# expression on the right hand side of a lambda. In a query expression, a subquery amounts to a query referenced from an expression in any clause except the from clause.

    A subquery is primarily scoped to the enclosing expression and is able to reference the outer lambda argument ( or range variable in a query expression). A subquery is executed whenever the enclosing lambda expression is evaluated. Local queries follow this model literally interpreted queries follow this model conceptually. The sub query executes as and when required to feed the outer query.

    An exception is when the sub query is correlated meaning that it references the outer range variable.

    Sub queries are called indirectly through delegate in the case of a local query or through an expression tree in the case of an interpreted query.

Composition Strategies: There are three strategies for building more complex queries:

–          Progressive query construction

–          Using the into keyword

–          Wrapping queries

There are, however, a couple of potential benefits to building queries progressively:

It can make queries easier to write.

You can add query operators conditionally. For example:

if (includeFilter) query = query.Where(…);

This is more efficient than

query = query.Where(n => !includeFilter || <expression>);

because it avoids adding an extra query operator if includeFilter is false. A progressive approach is often useful in query comprehensions. In fluent syntax, we could write this query as a single expression:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<string> query = names
    .Select(n => n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", ""))
    .Where(n => n.Length > 2)
    .OrderBy(n => n);

RESULT: { "Dck", "Hrry", "Mry" }

We can rewrite the query in a progressive manner as follows:

IEnumerable<string> query = from n in names
    select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "");

query = from n in query where n.Length > 2 orderby n select n;

RESULT: { "Dck", "Hrry", "Mry" }

The into keyword: The into keyword lets you continue a query after a projection and is a shortcut for querying progressively. With into, we can rewrite the preceding query as:

IEnumerable<string> query = from n in names
    select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
    into noVowel
    where noVowel.Length > 2
    orderby noVowel
    select noVowel;

The only place you can use into is after a select or group clause. into restarts the query, allowing you to introduce fresh where, orderby and select clauses.

Scoping rules: All query variables are out of scope following an into keyword. The following will not compile:

var query = from n1 in names select n1.ToUpper() into n2 where n1.Contains("x") select n2;

Here n1 is not in scope, so the above statement is illegal. To see why, consider the fluent translation:

var query = names.Select(n1 => n1.ToUpper())
    .Where(n2 => n1.Contains("x"));   // illegal: n1 is out of scope here

Wrapping queries: A query built progressively can be formulated into a single statement by wrapping one query around another. In general terms:

var tempQuery = tempQueryExpr;
var finalQuery = from … in tempQuery …

can be reformulated as:

var finalQuery = from … in (tempQueryExpr) …

Reformulated in wrapped form, the earlier progressive query is the following:

IEnumerable<string> query = from n1 in (
        from n2 in names
        select n2.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", ""))
    where n1.Length > 2
    orderby n1
    select n1;

Projection Strategies: So far, all of our select clauses have projected scalar element types. With C# object initializers, you can project into more complex types. For example, we can write the following class to assist:

class TempProjectionItem
{
    public string Original;
    public string Vowelless;
}

And then project into it with object initializers:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<TempProjectionItem> temp =
    from n in names
    select new TempProjectionItem
    {
        Original  = n,
        Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
    };

The result is of type IEnumerable<TempProjectionItem>, which we can subsequently query:

IEnumerable<string> query = from item in temp where item.Vowelless.Length > 2 select item.Original;

An anonymous type gives the same result as the previous example, but without needing to write a one-off class. The compiler does the job instead, writing a temporary class with fields that match the structure of our projection. This means, however, that the intermediate query has the following type:

IEnumerable<random-compiler-produced-name>

We can write the whole query more succinctly with the var keyword:

var query =
    from n in names
    select new
    {
        Original  = n,
        Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
    }
    into temp
    where temp.Vowelless.Length > 2
    select temp.Original;

The let keyword: let introduces a new variable alongside the range variable. With let, we can write the query as follows:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<string> query =
    from n in names
    let vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
    where vowelless.Length > 2
    orderby vowelless
    select n;

The compiler resolves a let clause by projecting into a temporary anonymous type that contains both the range variable and the new expression variable.

    Let accomplishes two things:

    –          It projects new elements alongside existing elements

    –          It allows an expression to be used repeatedly in a query without being rewritten.

The let approach is particularly advantageous in this example because it allows the select clause to project either the original name (n) or its vowel-removed version (vowelless).

You can have any number of let statements, and a let statement can reference variables introduced in earlier let statements, as in the sketch below. let reprojects all existing variables transparently.
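A hedged sketch of chaining let clauses (same sample names as above): the second let references the variable introduced by the first.

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<string> query =
    from n in names
    let vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "")
                     .Replace("o", "").Replace("u", "")
    let length = vowelless.Length            // references the earlier let variable
    where length > 2
    orderby length, vowelless
    select n + " (" + vowelless + ")";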

Interpreted Queries: LINQ provides two parallel architectures: local queries for local object collections, and interpreted queries for remote data sources. Local queries resolve to query operators in the Enumerable class, which in turn resolve to chains of decorator sequences; the delegates that they accept, whether expressed in query syntax, fluent syntax or traditional delegates, are compiled to IL code.

By contrast, interpreted queries are descriptive. They operate over sequences that implement IQueryable<T>, and they resolve to the query operators in the Queryable class, which emit expression trees that are interpreted at runtime.

There are two IQueryable<T> implementations in the .NET Framework:

LINQ to SQL

Entity Framework (EF)

Suppose we create a simple Customer table in SQL Server and populate it:

create table Customer
(
    ID int not null primary key,
    Name varchar(30)
)

insert Customer values (1, 'Tom')
insert Customer values (2, 'Dick')
insert Customer values (3, 'Harry')
insert Customer values (4, 'Mary')
insert Customer values (5, 'Jay')

We can write an interpreted query to retrieve customers whose name contains the letter "a" as follows:

using System;
using System.Linq;
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table] public class Customer
{
    [Column(IsPrimaryKey = true)] public int ID;
    [Column] public string Name;
}

class Test
{
    static void Main()
    {
        DataContext dataContext = new DataContext("connection string");
        Table<Customer> customers = dataContext.GetTable<Customer>();

        IQueryable<string> query = from c in customers
                                   where c.Name.Contains("a")
                                   orderby c.Name.Length
                                   select c.Name.ToUpper();

        foreach (string name in query) Console.WriteLine(name);
    }
}

The SQL that LINQ to SQL generates is as follows:

SELECT UPPER([t0].[Name]) AS [value] FROM [Customer] AS [t0] WHERE [t0].[Name] LIKE @p0 ORDER BY LEN([t0].[Name])

Here customers is of type Table<T>, which implements IQueryable<T>. This means the compiler has a choice in resolving Where: it could call the extension method in Enumerable or the following extension method in Queryable:

public static IQueryable<TSource> Where<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)

The compiler chooses Queryable.Where because its signature is a more specific match.

Queryable.Where accepts a predicate wrapped in an Expression<TDelegate> type. This instructs the compiler to translate the supplied lambda expression, in other words c => c.Name.Contains("a"), to an expression tree rather than a compiled delegate. An expression tree is an object model, based on the types in System.Linq.Expressions, that can be inspected at runtime.

When you enumerate over an interpreted query, the outermost sequence runs a program that traverses the entire expression tree, processing it as a unit. In our example, LINQ to SQL translates the expression tree to a SQL statement, which it then executes, yielding the results as a sequence.

A query can include both interpreted and local operators. A typical pattern is to have the local operators on the outside and the interpreted components on the inside; this pattern works well with LINQ-to-database queries.

AsEnumerable: Enumerable.AsEnumerable is the simplest of all query operators. Here is its complete definition:

public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)
{ return source; }

Its purpose is to cast an IQueryable<T> sequence to IEnumerable<T>, forcing subsequent query operators to bind to Enumerable operators instead of Queryable operators. This causes the remainder of the query to execute locally.

For example:

Regex wordCounter = new Regex(@"\b(\w|[-'])+\b");

var query = dataContext.MedicalArticles
    .Where(article => article.Topic == "influenza")
    .AsEnumerable()
    .Where(article => wordCounter.Matches(article.Abstract).Count < 100);

An alternative to calling AsEnumerable is to call ToArray or ToList. The advantage of AsEnumerable is deferred execution.

    .NET

    Types and Common Type System

    What is Type in .NET Framework

The .NET Framework is built around types. A type in .NET is a class, structure, interface, enumeration, or delegate. A type is the fundamental unit of programming in .NET. In C#, a type can be declared using the class, struct, interface, enum, and delegate keywords. Every piece of code that you write in .NET, even the main program for your application, must be a member of some type.

    In .NET there are two main classifications of types and every type is derived from a Root Reference Type named System.Object (directly or indirectly through another base type).

    • Value Type
        • User Defined Value Types (Structures)
        • Enumeration
    • Reference Type
        • User Defined Types (Classes)
        • Array
        • Delegate

The runtime requires every type to ultimately derive from the System.Object type. This means that the following two type definitions are identical:

      // Implicitly derived from Object
      class Employee {
      ….
      }
      //Explicitly derived from Object
      class Employee : System.Object {
      ….
      }

Every type in .NET ultimately derives from System.Object, so it is guaranteed that every object of every type has a minimum set of methods. Specifically, the System.Object class offers the public methods listed below:

public class Object {
      public virtual bool Equals(object);
      public virtual int GetHashCode();

      public virtual string ToString();

      public Type GetType();

      // static
      public static bool Equals(object, object);

      //protected
      ~Object();  // Finalize
      protected object MemberwiseClone();
      }

Public methods:

Equals: Returns true if two objects have the same value.

GetHashCode: Returns a hash code for the object. A type should override this method if its objects are to be used as keys in a hash table collection; the method should provide a good distribution for its objects.

ToString: By default, returns the full name of the type. However, it is common to override this method so that it returns a String object containing a representation of the object's state. For example, the core types such as Boolean and Int32 override this method to return a string representation of their values.

GetType: Returns an instance of a Type-derived object that identifies the type of the object used to call GetType. The returned Type object can be used with the reflection classes to obtain metadata information about the object's type.

Protected methods:

MemberwiseClone: This nonvirtual method creates a new instance of the type and sets the new object's instance fields to be identical to this object's instance fields. A reference to the new instance is returned.

Finalize: This virtual method is called when the garbage collector determines that the object is garbage, before the memory for the object is reclaimed. Types that require cleanup when collected should override this method.
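As an illustrative sketch (the Point type is hypothetical), a type commonly overrides the virtual methods described above so that they reflect the object's state:

public sealed class Point
{
    private readonly int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }

    public override string ToString()          // string representation of the object's state
    {
        return "(" + x + "," + y + ")";
    }

    public override bool Equals(object obj)    // value-based equality
    {
        Point other = obj as Point;
        return other != null && other.x == x && other.y == y;
    }

    public override int GetHashCode()          // kept consistent with Equals
    {
        return (x * 397) ^ y;
    }
}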

The advantage of everything deriving from System.Object is that code can be verified. If all of your code is a member of a type, and as long as you can guarantee type safety, in other words, as long as you can guarantee that it is impossible to coerce an object of one type into behaving like another type that it is not assignment compatible with, you can go a long way toward guaranteeing that the code is safe. In addition to making it easy to write code that is more secure, less error prone, and easier to debug,

Microsoft also wants to enable an unprecedented level of cross-language interoperability in the .NET Framework. One of the key enabling technologies in the .NET Framework that makes this possible is the Common Type System (CTS). The CTS provides a common set of types for all CLR-compliant languages; with the CTS, Microsoft has created a type system that all CLR-compliant programming languages share. The table below lists widely used types supported by the CTS. The definitions of all of these types can be found in the System namespace in the Framework class libraries.

Widely Used Types in the .NET Framework (reference: MSDN):

Type                                         Inheritance   Properties and Fields   Methods   Total
System.Int32                                 0             8438                    6756      15194
System.String                                0             2406                    6484      8890
System.Object                                2779          456                     2947      6182
System.IntPtr                                0             397                     1661      2058
System.Boolean                               0             943                     1096      2039
System.EventHandler                          0             4                       1766      1770
System.IComparable                           1158          0                       0         1158
System.IConvertible                          1139          0                       0         1139
System.IFormattable                          1135          0                       0         1135
System.Enum                                  1120          0                       0         1120
System.Type                                  5             48                      659       712
System.Runtime.Serialization.ISerializable   617           0                       0         617
System.IDisposable                           602           0                       0         602
System.Single                                0             86                      505       591
System.ICloneable                            574           0                       0         574
System.ValueType                             535           0                       3         538
System.Int16                                 0             345                     139       484
System.Collections.IEnumerable               472           1                       9         482
System.Byte[]                                0             46                      375       421
System.ComponentModel.IComponent             353           0                       42        395
System.MulticastDelegate                     346           0                       0         346
System.UInt32                                0             105                     227       332
System.IAsyncResult                          13            3                       315       331
System.Byte                                  0             213                     112       325
System.UIntPtr                               0             0                       314       314
System.AsyncCallback                         0             0                       307       307
System.Int64                                 0             66                      234       300
System.Collections.ICollection               261           2                       34        297
System.Object[]                              0             21                      256       277
System.Int32&                                0             0                       267       267
System.Array                                 233           0                       32        265
System.Attribute                             222           0                       37        259
System.Double                                0             25                      218       243
System.Reflection.Emit.OpCode                0             222                     20        242
System.Globalization.CultureInfo             0             6                       227       233
System.Windows.Forms.IWin32Window            171           0                       16        187
System.String[]                              0             29                      152       181
System.Int32[]                               0             10                      171       181
System.Drawing.Rectangle                     0             16                      164       180
System.Char                                  0             36                      142       178
System.DateTime                              0             40                      134       174
System.Exception                             22            6                       145       173
System.IO.Stream                             23            3                       146       172

The table below shows the mapping of the basic CLR types to language-specific keywords:

      System Types Visual Basic .NET Managed C++ C#
      System.Boolean Boolean bool bool
      System.SByte N/A byte sbyte
      System.Int16 Short short short
      System.Int32 Integer long int
      System.Int64 Long __int64 long
      System.Byte Byte byte byte
      System.UInt16 N/A unsigned short ushort
      System.UInt32 N/A unsigned long uint
      System.UInt64 N/A unsigned __int64 ulong
      System.Single Single float float
      System.Double Double double double
      System.Char Char char char
      System.String String System::String string
      System.DateTime Date N/A N/A
      System.Decimal Decimal N/A decimal

      Microsoft has defined a subset of the CTS and features supported by the CLR that all languages must support as a minimum. This subset is known as the Common Language Specification (CLS). For compiler vendors, supporting the CLS means that your language can use any CLS-compliant class library or framework.

The CTS defines the full set of types supported by the CLR and available to any .NET programming language; the CLS defines the subset of the CTS that you must restrict yourself to, plus a set of rules that compiler and framework developers must adhere to, in order to ensure that their software is usable by all CLR-compliant programming languages.

      Some examples of the rules in the CLS are as follows:

      • A type is CLS compliant if its public interfaces, methods, fields, properties, and events contain only CLS-compliant types or are marked explicitly as not CLS compliant.
      • A CLS Consumer can completely use any CLS-compliant type.
      • A CLS Extender is a CLS consumer tool, and it can also extend (inherit from) any CLS-compliant base class, implement any CLS-compliant interface, and use any CLS-compliant custom attribute on any type, method, field, parameter, property, or event.
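A minimal compliance sketch (type and member names are hypothetical): the assembly is marked CLS-compliant, and a member that exposes a non-CLS type (uint) in its public signature is explicitly marked as non-compliant so the compiler can warn consumers.

using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    public int Add(int a, int b)                 // CLS-compliant: int is part of the CLS
    {
        return a + b;
    }

    [CLSCompliant(false)]                        // uint is not a CLS type, so mark the member
    public uint AddUnsigned(uint a, uint b)
    {
        return a + b;
    }
}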

The CLR requires all objects to be created using the new operator. The following statement shows creation of an Empl instance:

Empl e = new Empl("ConstructorParam1");

When the CLR encounters the new operator, it performs the following operations:

1. The CLR calculates the number of bytes required by all instance fields defined in the type and all of its base types, up to and including System.Object.
2. It allocates memory for the object by allocating the number of bytes required for the specified type from the managed heap; this allocated memory is initialized to zero.
3. It initializes the object's type object pointer and sync block index members.
4. The type's instance constructor is called, passing it any arguments specified in the call to new. Most compilers automatically emit code in a constructor to call the base class's constructor. Each constructor is responsible for initializing the instance fields defined by the type whose constructor is called. Eventually, System.Object's constructor is called, and this constructor does nothing but return.

After new has performed all of these operations, it returns a reference to the newly created object. Also, the developer no longer needs to worry about a delete operator; the CLR uses a garbage-collected environment that automatically detects when objects are no longer being used or accessed and frees the object's memory automatically.
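A minimal sketch of step 4 (the Person base class is hypothetical; Empl echoes the example above): each constructor chains to its base class constructor before initializing its own fields.

class Person
{
    protected string Name;

    public Person(string name)                  // implicitly chains to System.Object's constructor
    {
        Name = name;
    }
}

class Empl : Person
{
    private int Salary;

    public Empl(string name) : base(name)       // explicitly chains to Person's constructor
    {
        Salary = 0;                             // the field was already zero-initialized by the CLR (step 2)
    }
}

// Usage: Empl e = new Empl("ConstructorParam1");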

      Casting Between Types

At runtime, the CLR always knows what type an object is. As we already know, an object's exact type can be identified by calling the GetType method. Because this method is nonvirtual, it is impossible for a type to spoof another type.

The CLR allows you to cast an object to its own type or to any of its base types without requiring any casting syntax, since such casts are considered implicit conversions. However, the developer does need to explicitly cast an object to any of its derived types, since such a cast could fail at runtime. The following code shows why:

// This type is implicitly derived from System.Object.
internal class Empl {
……
}

public sealed class Program {

            public static void Main() {

            // No cast needed since new returns an Empl object
            // and Object is a base type of Empl.
            Object o = new Empl();

            // Cast required since Empl is derived from Object.
            // Other languages (such as VB) might not require
            // this cast to compile.
            Empl e = (Empl) o;
            }
}

At runtime, the CLR checks casting operations to ensure that casts are always to the object's actual type or one of its base types. For example, the following code will compile, but at runtime an InvalidCastException will be thrown:

internal class Employee {
……
}

internal class Manager : Employee {
……
}

public sealed class Program {

public static void Main() {

            // Construct a Manager object and pass it to PromoteEmployee.
            // A Manager IS-A Employee: PromoteEmployee runs OK.
            Manager m = new Manager();
            PromoteEmployee(m);

            // Construct a DateTime object and pass it to PromoteEmployee.
            // A DateTime is NOT derived from Employee. PromoteEmployee
            // throws a System.InvalidCastException exception.
            DateTime newYears = new DateTime(2010, 1, 1);
            PromoteEmployee(newYears);
}

public static void PromoteEmployee(Object o) {

            // At this point, the compiler doesn't know exactly what type of object o refers to,
            // so the compiler allows the code to compile. However, at runtime, the CLR does know
            // what type o refers to (each time the cast is performed), and it checks whether the
            // object's type is Employee or any type that is derived from Employee.
            Employee e = (Employee)o;
}
}

      Because Manager is derived from Employee, the CLR performs the cast and allows PromoteEmployee to continue executing. However, inside PromoteEmployee, the CLR checks the cast and detects that o refers to a DateTime object and is therefore not an Employee or any type derived from Employee. At this point, the CLR can’t allow the cast and throws a System.InvalidCastException.

If the CLR had allowed the cast, there would be no type safety: code behavior would become unpredictable, and applications could crash because types could easily spoof other types. This possibility of conversion is known as type spoofing, which is the cause of many security breaches and compromises an application's stability and robustness. Type safety is therefore an extremely important part of the CLR.

      The “is” & “as” operators for casting objects:

Another way to cast in the C# language is to use the is operator. The is operator checks whether an object is compatible with a given type, and the result of the evaluation is a Boolean: true or false. The is operator never throws an exception; it always evaluates to a Boolean value.

Object o = new Object();

Boolean b1 = (o is Object);   // b1 is true

b1 = (o is Employee);         // b1 is now false

If the object reference is null, the is operator always returns false because there is no object available to check its type.

The is operator is typically used as follows:

    if (o is Employee) {

    Employee e = (Employee) o;

    // Use e within the remainder of the “if” statement.

    }

The CLR's type checking improves security, but it comes at a performance cost, because in the above example the type of the object referred to by o is effectively checked twice: once by the is operator and again by the cast inside the if statement, and for each check the CLR must walk up the inheritance hierarchy, comparing each base type against the specified type (Employee).

The above statement can be simplified by using the "as" operator:

    Employee e = o as Employee;

    if (e != null) {

    // Use e within the ‘if’ statement.

    }

In the above code, the CLR verifies whether o is compatible with the Employee type; if it is, as returns a non-null reference to the same object, and if it is not, as returns null, which is why e is tested against null. The object's type is checked only once. Like the is operator, the as operator never throws an exception.

    using directive

C# provides the using directive so that you do not have to write fully qualified, namespace-prefixed type names throughout a block of code. The following example shows how the using directive saves the developer from typing fully qualified type names and enhances readability:

    using System.IO;

    using System.Text;

    public sealed class Program {

    public static void Main() {

    FileStream fs = new FileStream(….);

    StringBuilder sb = new StringBuilder();

    }

    }

    The C# using directive instructs the compiler to try prepending different prefixes to a type name until a match is found.

In the above example, if the C# compiler cannot find the specified type in the source files or in any referenced assemblies as written, it prepends System.IO. to the type name and checks again; if that fails, it tries the next using directive, System.Text. In this way FileStream resolves to System.IO.FileStream and StringBuilder resolves to System.Text.StringBuilder, and the compiler expands the names to the correct fully qualified types during compilation. The using directive therefore saves a lot of typing and improves readability.

Generally, the types specified in source code resolve to the core types found in the Framework Class Library inside MSCorLib.dll.

Namespaces are also used to resolve the ambiguity that arises when two third-party vendors ship types with the same name; in that case we use the fully qualified type name, composed of the namespace and the type name, or a namespace alias, as sketched below.
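A hedged example (the vendor namespaces are made up for illustration): two identically named Widget types are disambiguated with using aliases, which is equivalent to spelling out the fully qualified names.

using WidgetA = CompanyA.Controls.Widget;
using WidgetB = CompanyB.Controls.Widget;

namespace CompanyA.Controls { public sealed class Widget { } }
namespace CompanyB.Controls { public sealed class Widget { } }

public sealed class Program
{
    public static void Main()
    {
        WidgetA first  = new WidgetA();   // resolves unambiguously to CompanyA.Controls.Widget
        WidgetB second = new WidgetB();   // resolves unambiguously to CompanyB.Controls.Widget
    }
}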

Creating a namespace is simply a matter of writing a namespace declaration in your code as follows (in C#):

    namespace CompanyName {

    public sealed class A {                           //typedef: CompanyName.A

    }

    namespace X {

public sealed class B { … }                   //typedef: CompanyName.X.B

    }

    }

    The comment  on the right of the class definitions above indicates the real name of the type the compiler will emit into the type definition metadata table; this is the real name of the type from the CLR’s point of view.

When the CLR starts running in a Windows process, it automatically creates a special type object for the System.Type type defined in MSCorLib.dll. User-defined type objects are instances of this type, and hence their type object pointer members are initialized to refer to the System.Type type object.

The System.Type type object is itself an object and also has a type object pointer member; that member refers to itself, because the System.Type type object is itself an "instance" of a type object. System.Object's GetType method returns the address stored in the specified object's type object pointer member. In other words, GetType returns a pointer to the object's type object, and this is how you can determine the true type of any object in the system.
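A small sketch of this behaviour (assuming a using System directive): every instance of a type shares one type object, and that type object is itself an instance of a System.Type-derived type.

object o1 = new object();
object o2 = new object();

Console.WriteLine(object.ReferenceEquals(o1.GetType(), o2.GetType()));   // True: one type object per type
Console.WriteLine(o1.GetType() == typeof(object));                       // True
Console.WriteLine(typeof(object).GetType().FullName);                    // "System.RuntimeType" on the Microsoft CLR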

    .NET

    C# Fundamentals

Annotations of C# Fundamentals: In this blog I have written briefly about each of the fundamental features of the C# programming language. I feel the annotations listed below highlight fundamental features that every .NET developer should know before starting to develop professional software. I have tried to keep them very concise, so that the developer does not have to spend more time reading them than using them in software development. These annotations help every developer get a broad picture of C#.NET from version 2.0 to 4.0.

This is one of the posts of the C# Annotation series being published and continued. I hope this blog makes your reading interesting and enjoyable. (Go directly to the first annotation.)

    Annotation 1. user-defined types .Net

    Annotation 2 the enumerations

    Annotation 3. Define Stream class in .Net Explain with an example using C#.Net.

    Annotation 4. Explain common stream types in .Net

    Annotation 5. Explain how to compress data with a compression Stream.

    Annotation 6. Explain how to Throw and Catch Exceptions in C#.Net

    Annotation 7.Explain how should we use StringBuilder class instead of the String class.

    Annotation 8. Explain why we should close and dispose resources in a finally block instead of a catch block?

Annotation 9. What is an interface in .NET? Explain with an example using C#.Net?

Annotation 10. Define most commonly used interfaces in the .NET Framework.

    Annotation 11. Explain with an example how to create and consume a Generic type.

    Annotation 12. Type Forwarding in .Net.

    Annotation 13. SOAPFormatter:

    Annotation 23. Explain XML Serialization.

    Annotation 24. Explain XML Serialization of an object in C#.

    Annotation 25. XML Deserialization of an object in C#.Net.

    Annotation 26.  Code access security. Describe the purpose of CAS.

    Annotation 27. What is Permission? What is a Permission Set?

    Annotation 28. comparison of strings in C#

    Annotation 30. The Access controlling options of a class member are as follows

    Annotation 31. characteristics of System.Object

    Annotation 32. the interoperability scenarios.

    Annotation 33. response files & its characteristics.

    Annotation 34. characteristics of an assembly

    Annotation 35. types of assemblies

    Annotation 36. the config file of assembly.

    Annotation 37. the publisher policy.

    Annotation 38. publisher Policy element

    ANNOTATION 39. System.Object

    Annotation 40. The CLR requires all objects to be created using the new operator.

    Annotation 41. Is operator.

    Annotation 42. using directive

    Annotation 43. namespace

    Annotation 44. The /checked+ compiler switch

    Annotation 45. checked and unchecked operators

Annotation 46. checked and unchecked statements

    Annotation 47. system.valuetype

    Annotation 48. differences between value type and reference type.

    Annotation 49. .NET type and its members.

Annotation 50. Member accessibility

    Annotation 51.Partial keyword

    Annotation 52. Explain the namespaces in which .NET has the data functionality class.

    Annotation 53. Overview of ADO.NET architecture.

    Annotation 54.  a dataset object.

    Annotation 55.  the ADO.NET architecture.

    Annotation 56. the steps to perform transactions in .NET

    Annotation 57. Define connection pooling

    Annotation 58. Steps to enable and disable connection pooling?

    Annotation 59. explain enabling and disabling connection pooling.

    Annotation 60. What is the relation between Classes and instances?

    Annotation 61.Difference between dataset and datareader.

    Annotation 62. What are command objects?

Annotation 63. the use of data adapter.

    Annotation 64.  The basic methods of Dataadapter

    Annotation 65. the steps involved to fill a dataset.

    Annotation 66. Identifying changes made to dataset since it was loaded.

    Annotation 67. Steps to add/remove row’s in “DataTable” object of “DataSet”

    Annotation 68. the basic use of “DataView” and its methods.

    Annotation 69. To load multiple tables in a DataSet.

    Annotation 70. applications of CommandBuilder

    Annotation 71. Define connected and disconnected data access in ADO.NET

    Annotation 72. Describe CommandType property of a SQLCommand in ADO.NET.

    Annotation 73. list the debugging windows available.

    Annotation 74. Break mode:

    Annotation 75. the options for stepping through code

    Annotation 76. define a Breakpoint

    Annotation 77. Define Debug and Trace Class.

    Annotation 78. What are Trace switches?

    Annotation 79. configuration of trace switches in the application’s .config file.

    Annotation 80. What is an Event?

    Annotation 81. Define Delegate.

    Annotation 82. What is the purpose of AddHandler keyword?

    Annotation 83. exceptions handling in CLR

    Annotation 84. create and throw a custom exception.

    Annotation 85. difference between Localization and Globalization

    Annotation 86. Define Unicode

    Annotation 87. Steps to generate a resource file

Annotation 88. Implementation of globalization and localization in the user interface in .NET.

    Annotation 89. the functions of the Resource Manager class

    Annotation 90. Explain preparation of culture-specific formatting in .NET.

    Annotation 91. Define XCopy

    Annotation 92. Explain visual INHERITANCE OF windows forms.

    Annotation 93. Explain the lifecycle of the form.

    Annotation 94. Explain the steps to create menus

    Annotation 95. Anchoring a control and Docking a control

    Annotation 96. Define ErrorProvider control.

    Annotation 97. Explain building a composite control

    Annotation 98. Explain the ways to deploy your windows application ?

    Annotation 99. Explain 3 types of configuration files in windows application in .NET?

    Annotation 100. What are the ways to optimize the performance of a windows application?

    Annotation 101. List out difference between the Debug class and Trace class.

    Annotation 102. Name three test cases you should use in unit testing?

    Annotation 103. Explain the finally statement in C#.NET.

    Annotation 104. the steps to create and implement Satellite Assemblies.

    Annotation 105. Explain the purpose of ResourceManager class. name the namespace that contains it.

    Annotation 106. Explain the purpose of CultureInfo class. What namespace contains it?

    Annotation 107. Explain steps to prepare culture-specific formatting.

    Annotation 108. the Steps to implement localizability to the user interface?

    Annotation 109. Define Trace Listeners and Trace Switches?

    Annotation 110. Explain tracing with an example using C#.NET.

    Annotation 111. Define CLR triggers.

    Annotation 112. Difference between an interface and abstract class

    Annotation 113. Difference between System.String and System.StringBuilder classes.

    Annotation 114. List different ways to deploy an assembly.

    Annotation 115. Define Satellite Assembly.

    Annotation 116. Declare a custom attribute for the entire assembly.

    Annotation 117. Explain abstraction in C#.NET.

    Annotation 118. Explain encapsulation usage in C#.

    Annotation 119. Differentiate between instance data and class data

    Annotation 120. the significance of static method

    Annotation 121. The application of boxing and unboxing.

    Annotation 122. Explain calling a native function exported from a DLL?

    Annotation 123. Simulation of optional parameters to COM functions.

    Annotation 124. Sealed class in C#.NET

    Annotation 125. generics in C#.NET

    Annotation 126. marking a method obsolete

    Annotation 127. System.Environment class in C#.NET.

    Annotation 128. implementation of synchronization in C#.

    Annotation 129. the advantages of CLR procedure over T-SQL procedure.

    Annotation 130. comparison of C# Generics and C++ Templates.

    Annotation 131. an object pool in .NET

    Annotation 132. Exceptions in .NET

    Annotation 133. Custom Exceptions in .NET

    Annotation 134. delegates and its application

    Annotation 135.Explain implementation of Delegates in C#

    Annotation 136. the difference between Finalize() and Dispose()

    Annotation 137. the XmlSerializer  and its use in ACL permissions.

    Annotation 138. circular references.

    Annotation 139. Explain steps to add controls dynamically to the form.

    Annotation 140. Extender provider components and its use.

    Annotation 141. the configuration files in .Net.

    Annotation 142. Describe the accessibility modifier “protected internal” in C#.

    Annotation 143. the difference between Debug.Write and Trace.Write

    Annotation 144. Explain the use of virtual, sealed, override, and abstract.

    Annotation 145. Benefits of a Primary Interops Assembly (PIA)

    Annotation 146. Explain the use of static members with example.

    Annotation 147. How to achieve polymorphism in C#.NET?

    Annotation 148. Define Code-Access security

    Annotation 149. Define Role-based security?

    Annotation 150. Explain steps to deploy an XML web service

    Annotation 151. Explain the namespaces in which .NET has the data functionality class.

    Annotations:

    Annotation 1. user-defined types .Net

public class Student
    {
    int age;
    string name;
    public Student(int _age, string _name)
    {
    age=_age;
    name=_name;
    }
    public int Age
    {
    get{return age;}
    set{age=value;}
    }
    public String Name
    {
    get{return name;}
    set{name=value;}
    }
    }

    Student is a user defined type which stores age and name of a student.

    Annotation 2 the enumerations

An enumeration is a special value type in the .NET Framework that defines a set of named constants.

    e.g.

    public enum IDSTablesType
    {
    SYSTEM, BOARD, CHIP, BLOCK, REGGROUP,REGISTER
    }

    Annotation 3. Define Stream class in .Net Explain with an example using C#.Net.

    The Stream class gives a view of various types of input and output. Streams involve three fundamental operations:

    a. Read: transfer of data from stream into a data structure.
    b. Write: transfer of data from a data structure into stream.
    c. Seeking: querying and updating the current position within the stream.

    Streams are a medium to read and write data to and from memory, file or other objects.

    e.g.:

System.IO.StreamReader file = new System.IO.StreamReader(@"abc.txt");
string temp = file.ReadToEnd();
file.Close();

    Annotation 4. Explain common stream types in .Net

a. FileStream: is used to read from, write to, open, and close files on a file system. FileStream objects support random access to files using the Seek method.

    b. MemoryStream: It creates streams that have memory as a backing store instead of a disk or a network connection. Memory streams can reduce the need for temporary buffers and files in an application.

    c. StreamReader: It is meant for character input , whereas the Stream class is meant to perform byte input and output. StreamReader is basically used for reading lines of text from a text file.

    d. StreamWriter: It is meant for character output instead of byte output.

    Annotation 5. Explain how to compress data with a compression Stream.

    Compression streams write to another stream. The compression streams take in data like any other stream. But then it writes it in compressed format to another stream.

    Steps to compress data using compression stream:

    a. Open the file and create a new file for the compressed version

FileStream orgFile = File.OpenRead(@"C:\abc.bak");
FileStream compFile = File.Create(@"C:\abc.gzip");

    b. Compression stream wraps the outgoing stream with the compression stream.

    GZipStream compStream = new GZipStream(compFile, CompressionMode.Compress);

    c. Write from the original file to compression stream

    int wrtTxt = orgFile.ReadByte();
    while (wrtTxt != -1)
    {
    compStream.WriteByte((byte) wrtTxt);
    wrtTxt = orgFile.ReadByte();
    }
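Going the other way is symmetrical. A hedged counterpart sketch (the restored file name is illustrative; assumes using System.IO and System.IO.Compression directives) that decompresses the .gzip file back out:

using (FileStream compFile = File.OpenRead(@"C:\abc.gzip"))
using (FileStream restored = File.Create(@"C:\abc_restored.bak"))
using (GZipStream decompStream = new GZipStream(compFile, CompressionMode.Decompress))
{
    int b = decompStream.ReadByte();            // read decompressed bytes one at a time
    while (b != -1)
    {
        restored.WriteByte((byte)b);
        b = decompStream.ReadByte();
    }
}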

    Annotation 6. Explain how to Throw and Catch Exceptions in C#.Net

    try
    {
throw new Exception("Caught Error");
    }
    catch (Exception e)
    {
    MessageBox.Show(e.Message);
    }

    Annotation 7.Explain how should we use StringBuilder class instead of the String class.

Whenever any of the methods of the String class is used to modify a string, a new object is created and memory is allocated for it. If there are repeated modifications to the string, performance suffers and the cost adds up. The StringBuilder class, by contrast, can be modified over and over again without creating a new object. So one situation where the StringBuilder class is useful is when we are manipulating a string in a loop, as in the sketch below.
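A minimal comparison sketch (the loop bound is arbitrary; assumes a using System.Text directive): the first loop allocates a new string on every iteration, while the second appends into a single buffer.

string s = "";
for (int i = 0; i < 1000; i++)
    s += i;                               // creates a brand-new string object each time

StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000; i++)
    sb.Append(i);                         // modifies the same StringBuilder buffer
string result = sb.ToString();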

    Annotation 8. Explain why we should close and dispose resources in a finally block instead of a catch block?

A catch block gets called only when an exception occurs or is explicitly thrown, but we need to release our resources in either case (task failure or success). A finally block gets called irrespective of what happens in the try block. For example, when we fetch data from a database, the first step is creating a connection in order to access the database; whether our work completes successfully or our logic crashes, we need to close the connection, so we do it in a finally block, as in the sketch below.
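A hedged illustration of that pattern (the connection string is a placeholder; assumes using System and System.Data.SqlClient directives):

SqlConnection conn = new SqlConnection("connection string");
try
{
    conn.Open();
    // ... execute commands against the database ...
}
catch (SqlException ex)
{
    Console.WriteLine(ex.Message);        // handle or log the failure
}
finally
{
    conn.Close();                         // runs whether or not an exception occurred
}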

Annotation 9. What is an interface in .NET? Explain with an example using C#.Net?

An interface contains only the declarations of its abstract members (events, methods, properties). Any implementation must be placed in a class that implements the interface. Interfaces are the only way C# lets us implement a form of multiple inheritance. An interface cannot contain constants, data fields, constructors, or destructors.

    interface ITest
    {
    string Text
    {
    get;
    set;
    }
string printString();
}

Annotation 10. Define most commonly used interfaces in the .NET Framework.

IComparable: It is implemented by types that support ordering and sorting. The implementing type must implement one single method, CompareTo (see the sketch after this list).

IDisposable: This is implemented to manage the release of unmanaged resources. The garbage collector runs on its own schedule and knows nothing about unmanaged resources, so the Dispose method of this interface is used to release them deterministically.

IConvertible: It is implemented to convert a value of the implementing type into another CLR-compatible type. If the conversion fails, an InvalidCastException is thrown.

    ICloneable: Allows creating objects of a class having same values as another instance using the Clone method.

IEquatable: Implemented to provide a type-specific method for determining equality between instances.

    IFormattable: Implemented to convert object into string representations. Objects can define their own specific string representation through this interface’s ToString() method.
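As a small sketch of the first interface in this list (the Person type is hypothetical), a type implements IComparable so that framework code such as Array.Sort can order its instances:

using System;

public class Person : IComparable
{
    public int Age;
    public string Name;

    public int CompareTo(object obj)      // the single method required by IComparable
    {
        Person other = (Person)obj;
        return Age.CompareTo(other.Age);  // order by age
    }
}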

    Annotation 11. Explain with an example how to create and consume a Generic type.

    public static object CreateMethod(Type generic, Type innerType, params object[] args)
    {
    System.Type type = generic.MakeGenericType(new System.Type[] { innerType });
    return Activator.CreateInstance(type, args);
    }
    To use it:
    CreateMethod(typeof(List<>), typeof(string));

    Annotation 12. Type Forwarding in .Net.

    Type forwarding is a technique to move types from one assembly to another without the clients needing to recompile the assemblies.

    Steps:

    Assuming we are moving a type called Student from Assembly A to Assembly B

a. Remove the definition of Student from Assembly A and replace it with the TypeForwardedTo attribute, e.g.: [assembly: TypeForwardedTo(typeof(AssemblyA.Student))]

    b. Put the definition of Student in AssemblyB.

    c. Rebuild both assemblies and deploy.

    Annotation 13. SOAPFormatter:

    It is an xml based serialization technique which is used to serialize and deserialize objects and data across networks.

    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Formatters.Soap;

[Serializable]
public class A : ISerializable
{
    // Fields referenced in GetObjectData below (example values).
    public string TestString = "Hello";
    public int object1 = 1;
    public int object2 = 2;

    public void Main()
    {
        A MyObjList = new A();
        FileStream fStream = new FileStream("test.xml", FileMode.Create);
        SoapFormatter serformatter = new SoapFormatter();
        serformatter.Serialize(fStream, MyObjList);
        fStream.Close();
    }

    public virtual void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Test", TestString);
        info.AddValue("Object1", object1);
        info.AddValue("Object2", object2);
    }
}

    Annotation 23. Explain XML Serialization.

Serialization allows persisting objects. XML serialization stores objects in the form of XML, which has become a storage standard. The main advantage of XML is that it is platform- and language-independent; practically any other software can read and write XML, so interoperability is an added advantage of XML serialization. XML can also express relationships between objects, which is advantageous when serializing an object along with other related objects.

    Annotation 24. Explain XML Serialization of an object in C#.

    a. Use the namespace “System.Xml.Serialization”.

    b. Create a class whose object is to be serialized

    c. Create an object of the class for example: Class1 c = new Class1();

    d. Set the properties of the Class1 using the object c.

c.name = "abc"; c.age = 10;

e. Create an instance of XmlSerializer: XmlSerializer x = new XmlSerializer(c.GetType()); TextWriter txtWrite = new StreamWriter(@"c:\test.xml");

    f. Use the xmlserializer instance to serialize the object to xml
    x.Serialize(txtWrite,c);

    g. Execute the project to verify

    Annotation 25. XML Deserialization of an object in C#.Net.

    XmlSerializer srl = new XmlSerializer(typeof(Class1));
    FileStream fStream = new FileStream(filename, FileMode.Open);
    XmlReader rdr = new XmlTextReader(fStream);
    Class1 cls1;
    cls1 = (Class1) srl.Deserialize(rdr);

    Annotation 26. Code access security. Describe the purpose of CAS.

Code access security is a mechanism that helps protect computer systems from malicious code and lets code from unknown origins run with protection. It allows code to be trusted to different levels based on where it is coming from and on its identity. It reduces the chances of your code being misused for performing malicious tasks or operations, and it reduces the security vulnerabilities that a piece of code may have. CAS defines the permissions and access rights the code has.

    Annotation 27. What is Permission? What is a Permission Set?

Permission is a rule that enforces restrictions on a piece of managed code, and the runtime uses it to implement its security mechanism. Code can request permissions, or the runtime can grant permissions on the basis of the characteristics of the code; it also depends on how much the code can be trusted. There are 3 types of permissions:

    a. Code access permissions
    b. Identity permissions
    c. Role-based security permissions

    Permission set is a set of all the permissions that can be assigned to a code group.
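As a hedged illustration of a code access permission (the path is a placeholder; the types live in System.Security.Permissions and belong to the .NET Framework's CAS model):

FileIOPermission readPermission =
    new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data");

readPermission.Demand();   // throws System.Security.SecurityException if the callers lack this permission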

    Annotation 28. comparison of strings in C#

    In the past, you had to call .ToString() on the strings when using the == or != operators to compare the strings’ values. That will still work, but the C# compiler now automatically compares the values instead of the references when the == or != operators are used on string types. If you actually do want to compare references, it can be done as follows: if ((object) str1 == (object) str2) { … } Here’s an example showing how string compares work: 
    using System;
    public class StringTest
    {
    public static void Main(string[] args)
    {
    Object nullObj = null; Object realObj = new StringTest();
    int i = 10;
Console.WriteLine("Null Object is [" + nullObj + "]\n"
+ "Real Object is [" + realObj + "]\n"
+ "i is [" + i + "]\n");
// Show string equality operators
string str1 = "foo";
string str2 = "bar";
string str3 = "bar";
Console.WriteLine("{0} == {1} ? {2}", str1, str2, str1 == str2);
Console.WriteLine("{0} == {1} ? {2}", str2, str3, str2 == str3);
    }
    }
    Output:

    Null Object is []
    Real Object is [StringTest]
    i is [10]
    foo == bar ? False
    bar == bar ? True

    Annotation 29.
Parts of a Managed Module

    1. PE32 or PE32+header

    2. CLR header

    3. Metadata

    4. IL code

Metadata is a set of data tables that describe what is defined in the module, such as types and their members. In addition, metadata also has tables indicating what the managed module references, such as imported types and their members. An assembly is a logical grouping of one or more modules or resource files; it is also the smallest unit of reuse, security, and versioning.

The C# compiler offers a /platform command-line switch. This switch allows you to specify whether the resulting assembly can run on x86 machines running 32-bit Windows versions only, x64 machines running 64-bit Windows only, or Intel Itanium machines running 64-bit Windows only. If you don't specify a platform, the default is anycpu, which indicates that the resulting assembly can run on any version of Windows.

Annotation 30. The access-control options for a class member are as follows


    A type that is visible to a caller can further restrict the ability of the caller to access the type’s members.

The following list shows the valid options for controlling access to a member (a C# sketch follows the list):

    1. Private The member is accessible only by other members in the same class type.

    2. Family The member is accessible by derived types, regardless of whether they are within the same assembly. Note that many languages (such as C++ and C#) refer to family as protected.

    3. Family and assembly The member is accessible by derived types, but only if the derived type is defined in the same assembly. Many languages (such as C# and Visual Basic) don’t offer this access control. Of course, IL Assembly language makes it available.

    4. Assembly The member is accessible by any code in the same assembly. Many languages refer to assembly as internal.

    5. Family or assembly The member is accessible by derived types in any assembly. The member is also accessible by any types in the same assembly. C# refers to family or assembly as protected internal.

    6. Public The member is accessible by any code in any assembly.
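A compact C# sketch (type and member names are hypothetical) mapping the options above onto C# accessibility keywords:

public class Widget
{
    private int id;                    // 1. Private
    protected string name;             // 2. Family
    // 3. Family and assembly: not offered by C# at the time (C# 7.2 later added private protected)
    internal int count;                // 4. Assembly
    protected internal bool cached;    // 5. Family or assembly
    public void Refresh() { }          // 6. Public
}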

    Annotation 31. characteristics of System.Object

    All types must (ultimately) inherit from a predefined type:

    System.Object. As you can see, Object is the name of a type defined in the System namespace. This Object is the root of all other types and therefore guarantees that every type instance has a minimum set of behaviors. Specifically, the System.Object type allows you to do the following:

    1. Compare two instances for equality.

    2. Obtain a hash code for the instance.

    3. Query the true type of an instance.

    4. Perform a shallow (bitwise) copy of the instance.

    5. Obtain a string representation of the instance object’s current state.

    Annotation 32. the interoperability scenarios.

    the CLR supports three interoperability scenarios:

1. Managed code can call an unmanaged function in a DLL. Managed code can easily call functions contained in DLLs by using a mechanism called P/Invoke.

2. Managed code can use an existing COM component (server). Using the type library from the already built component, a managed assembly can be created that describes the COM component; managed code can then access the types in that managed assembly just as it would any other managed type.

3. Unmanaged code can use a managed type (server). A lot of existing unmanaged code requires that you supply a COM component for the code to work correctly. It is much easier to implement these components by using managed code so that you can avoid all of the code having to do with reference counting and interfaces.

    Annotation 33. response files & its characteristics.

A response file is a text file that contains a set of compiler command-line switches. When you execute CSC.exe, the compiler opens the response files and uses any switches that are specified in them as though those switches had been passed to CSC.exe on the command line (see the example below).
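For illustration, a hypothetical response file and the command line that uses it might look like the following (the switches are standard CSC.exe switches; the file and assembly names are made up):

# MyProject.rsp: switches shared by every build
/out:MyApp.exe
/target:exe
/reference:System.Data.dll

csc.exe @MyProject.rsp Program.cs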

    The metadata is a block of binary data that consists of several tables. There are three categories of tables: definition tables, reference tables, and manifest tables.

    ModuleDef Always contains one entry that identifies the module. The entry includes the module’s file name and extension (without path) and a module version ID (in the form of a GUID created by the compiler). This allows the file to be renamed while keeping a record of its original name

    TypeDef Contains one entry for each type defined in the module. Each entry includes the type’s name, base type, and flags (public, private, etc.) and contains indexes to the methods it owns in the MethodDef table, the fields it owns in the FieldDef table, the properties it owns in the PropertyDef table, and the events it owns in the EventDef table.

    MethodDef Contains one entry for each method defined in the module. Each entry includes the method’s name, flags (private, public, virtual, abstract, static, final, etc.), signature, and offset within the module where its IL code can be found.

    FieldDef Contains one entry for every field defined in the module. Each entry includes flags (private, public, etc.), type, and name.

    ParamDef Contains one entry for each parameter defined in the module. Each entry includes flags (in, out, retval, etc.), type, and name.

    PropertyDef Contains one entry for each property defined in the module. Each entry includes flags, type, and name.

    EventDef Contains one entry for each event defined in the module. Each entry includes flags and name.

    Annotation 34. characteristics of an assembly

    An assembly is a collection of one or more files containing type definitions and resource files. One of the assembly’s files is chosen to hold a manifest. The manifest is another set of metadata tables that basically contain the names of the files that are part of the assembly. They also describe the assembly’s version, culture, publisher, publicly exported types, and all of the files that comprise the assembly.

    Here are some characteristics of assemblies that you should remember:

    a. An assembly defines the reusable types.

    b. An assembly is marked with a version number.

    c. An assembly can have security information associated with it.

    An assembly’s individual files don’t have these attributes—except for the file that contains the manifest metadata tables. Assemblies deployed to the same directory as the application are called privately deployed assemblies

    Annotation 35. types of assemblies

    The CLR supports two kinds of assemblies: weakly named assemblies and strongly named assemblies.

    An assembly can be deployed in two ways: privately or globally. A privately deployed assembly is an assembly that is deployed in the application’s base directory or one of its subdirectories. A weakly named assembly can be deployed only privately.

    A globally deployed assembly is an assembly that is deployed into some well-known location that the CLR looks in when it’s searching for the assembly. A strongly named assembly can be deployed privately or globally.

    A strongly named assembly consists of four attributes that uniquely identify the assembly: a file name (without an extension), a version number, a culture identity, and a public key.

    If the referenced assembly isn’t in the GAC, the CLR looks in the application’s base directory and then in any of the private paths identified in the application’s configuration file; then, if the application was installed using MSI, the CLR asks MSI to locate the assembly. If the assembly can’t be found in any of these locations, the bind fails, and a System.IO.FileNotFoundException is thrown.

    Any assembly that references .NET Framework assemblies always binds to the version that matches the CLR’s version. This is called unification, and Microsoft does this because they test all of the .NET Framework assemblies with a particular version of the CLR; therefore, unifying the code stack helps ensure that applications will work correctly.

    To the CLR, all assemblies are identified by name, version, culture, and public key. However, the GAC identifies assemblies using name, version, culture, public key, and CPU architecture.

    CLR uses the application’s XML configuration file to locate the moved files.

<?xml version="1.0"?>
    <configuration>
       <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">

             <probing privatePath="AuxFiles;bin\subdir" />

             <dependentAssembly>
                <assemblyIdentity name="JeffTypes" publicKeyToken="32ab4ba45e0a69a1" culture="neutral"/>
                <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
                <codeBase version="2.0.0.0" href="http://www.Wintellect.com/JeffTypes.dll" />
             </dependentAssembly>

             <dependentAssembly>
                <assemblyIdentity name="TypeLib" publicKeyToken="1f2e74e897abbcfe" culture="neutral"/>
                <bindingRedirect oldVersion="3.0.0.0-3.5.0.0" newVersion="4.0.0.0" />
                <publisherPolicy apply="no" />
             </dependentAssembly>

          </assemblyBinding>
       </runtime>
    </configuration>

    Annotation 36. the config file of assembly.

    The XML config file contains

-probing element : Look in the application base directory's AuxFiles and bin\subdir subdirectories when trying to find a weakly named assembly.

    -First dependentAssembly, assemblyIdentity, and bindingRedirect elements : When attempting to locate version 1.0.0.0 of the culture-neutral JeffTypes assembly published by the organization that controls the 32ab4ba45e0a69a1 public key token, locate version 2.0.0.0 of the same assembly instead.

    -codeBase element : When attempting to locate version 2.0.0.0 of the culture-neutral JeffTypes assembly published by the organization that controls the 32ab4ba45e0a69a1 public key token, try to find it at the following URL: www.Wintellect.com/JeffTypes.dll.

    -Second dependentAssembly, assemblyIdentity, and bindingRedirect elements : When attempting to locate version 3.0.0.0 through version 3.5.0.0 inclusive of the culture-neutral TypeLib assembly published by the organization that controls the 1f2e74e897abbcfe public key token, locate version 4.0.0.0 of the same assembly instead.

-publisherPolicy element : This applies if the organization that produces the TypeLib assembly has deployed a publisher policy file. If the publisherPolicy element's apply attribute is set to yes (or if the element is omitted), the CLR examines the GAC for the new assembly/version and applies any version number redirections that the publisher of the assembly feels are necessary; the CLR is then looking for this assembly/version. Because apply is set to no here, the CLR ignores the publisher's policy file for this assembly.

    Note that you can't specify the probing or publisherPolicy elements in a publisher policy configuration file.

    Annotation 37. the publisher policy.

You create the publisher policy assembly by running AL.exe as follows:

    AL.exe /out:Policy.1.0.JeffTypes.dll /version:1.0.0.0 /keyfile:MyCompany.snk /linkresource:JeffTypes.config

    Let me explain the meaning of AL.exe’s command-line switches:

    /out This switch tells AL.exe to create a new PE file, called Policy.1.0.JeffTypes.dll, which contains nothing but a manifest. The name of this assembly is very important. The first part of the name, Policy, tells the CLR that this assembly contains publisher policy information. The second and third parts of the name, 1.0, tell the CLR that this publisher policy assembly is for any version of the JeffTypes assembly that has a major and minor version of 1.0. Publisher policies apply to the major and minor version numbers of an assembly only; you can’t create a publisher policy that is specific to individual builds or revisions of an assembly. The fourth part of the name, JeffTypes, indicates the name of the assembly that this publisher policy corresponds to. The fifth and last part of the name, dll, is simply the extension given to the resulting assembly file.

    /version This switch identifies the version of the publisher policy assembly; this version number has nothing to do with the JeffTypes assembly itself. You see, publisher policy assemblies can also be versioned. Today, the publisher might create a publisher policy redirecting version 1.0.0.0 of JeffTypes to version 2.0.0.0. In the future, the publisher might want to direct version 1.0.0.0 of JeffTypes to version 2.5.0.0. The CLR uses this version number so that it knows to pick up the latest version of the publisher policy assembly.

    /keyfile This switch causes AL.exe to sign the publisher policy assembly by using the publisher’s public/private key pair. This key pair must also match the key pair used for all versions of the JeffTypes assembly. After all, this is how the CLR knows that the same publisher created both the JeffTypes assembly and this publisher policy file.

    /linkresource This switch tells AL.exe that the XML configuration file is to be considered a separate file of the assembly. The resulting assembly consists of two files, both of which must be packaged and deployed to the users along with the new version of the JeffTypes assembly.

By the way, you can't use AL.exe's /embedresource switch to embed the XML configuration file into the assembly file, making a single-file assembly, because the CLR requires the XML file to be contained in its own separate file.

    Annotation 38. publisher Policy element

    A publisher should create a publisher policy assembly only when deploying an update or a service pack version of an assembly. When doing a fresh install of an application, no publisher policy assemblies should be installed.

The administrator may want to tell the CLR to ignore the publisher policy assembly. To have the runtime do this, the administrator can edit the application's configuration file and add the following publisherPolicy element: <publisherPolicy apply="no"/>

This element can be placed as a child element of the <assemblyBinding> element in the application's configuration file so that it applies to all assemblies, or as a child element of the <dependentAssembly> element in the application's configuration file to have it apply to a specific assembly.

In general, use a publisher policy assembly when you build a new version of your assembly that fixes a bug. You should test the new version of the assembly for backward compatibility. On the other hand, if you're adding new features to your assembly, you should consider the assembly to have no relationship to a previous version, and you shouldn't ship a publisher policy assembly. In addition, there's no need to do any backward compatibility testing with such an assembly.

    Annotation 39. System.Object

The System.Object class offers the following public and protected instance methods:

    Public Method Description

    Equals : Returns true if two objects have the same value.

    GetHashCode : Returns a hash code for this object's value. A type should override this method if its objects are to be used as a key in a hash table collection. The method should provide a good distribution for its objects.

    ToString : Returns the full name of the type (this.GetType().FullName). However, it is common to override this method so that it returns a String object containing a representation of the object's state. For example, the core types, such as Boolean and Int32, override this method to return a string representation of their values. Note that ToString is expected to be aware of the CultureInfo associated with the calling thread.

    GetType : Returns an instance of a Type-derived object that identifies the type of the object used to call GetType. The returned Type object can be used with the reflection classes to obtain metadata information about the object's type. The GetType method is nonvirtual, which prevents a class from overriding this method and lying about its type, violating type safety.

    Protected Method Description

    MemberwiseClone : This nonvirtual method creates a new instance of the type and sets the new object's instance fields to be identical to the this object's instance fields. A reference to the new instance is returned.

    Finalize : This virtual method is called when the garbage collector determines that the object is garbage, before the memory for the object is reclaimed. Types that require cleanup when collected should override this method.
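
    As a rough illustration of overriding some of these methods, here is a minimal sketch using a hypothetical Point type; it is not a complete equality implementation:

    using System;

    internal sealed class Point {
       private readonly Int32 m_x, m_y;
       public Point(Int32 x, Int32 y) { m_x = x; m_y = y; }

       // Override ToString to return a representation of the object's state
       public override String ToString() {
          return String.Format("({0}, {1})", m_x, m_y);
       }

       // Types that override Equals should also override GetHashCode
       public override Int32 GetHashCode() {
          return m_x ^ m_y;
       }

       public override Boolean Equals(Object obj) {
          Point other = obj as Point;
          return other != null && other.m_x == m_x && other.m_y == m_y;
       }
    }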

    Annotation 40. The CLR requires all objects to be created using the new operator.

    Employee e = new Employee(“ConstructorParam1”);

    Here’s what the new operator does:

    1. It calculates the number of bytes required by all instance fields defined in the type and all of its base types up to and including System.Object (which defines no instance fields of its own). Every object on the heap requires some additional members—called the type object pointer and the sync block index—used by the CLR to manage the object. The bytes for these additional members are added to the size of the object.

    2. It allocates memory for the object by allocating the number of bytes required for the specified type from the managed heap; all of these bytes are then set to zero (0).

    3. It initializes the object’s type object pointer and sync block index members.

    4. The type’s instance constructor is called, passing it any arguments (the string “ConstructorParam1” in the preceding example) specified in the call to new. Most compilers automatically emit code in a constructor to call a base class’s constructor. Each constructor is responsible for initializing the instance fields defined by the type whose constructor is being called. Eventually, System.Object’s constructor is called, and this constructor method does nothing but return. You can verify this by using ILDasm.exe to load MSCorLib.dll and examine System.Object’s constructor method.

    Annotation 41. Is operator.

One way to cast in the C# language is to use the is operator. The is operator checks whether an object is compatible with a given type, and the result of the evaluation is a Boolean: true or false. The is operator will never throw an exception. The following code demonstrates:

    Object o = new Object();

    Boolean b1 = (o is Object); // b1 is true.

    Boolean b2 = (o is Employee); // b2 is false.

    If the object reference is null, the is operator always returns false because there is no object available to check its type.

    The is operator is typically used as follows:

    if (o is Employee) {

    Employee e = (Employee) o;

    // Use e within the remainder of the ‘if’ statement.

    }

C# offers a way to simplify this code and improve its performance by providing an as operator:

    Employee e = o as Employee;

    if (e != null) {

    // Use e within the ‘if’ statement.

    }

    In this code, the CLR checks if o is compatible with the Employee type, and if it is, as returns a non-null reference to the same object. If o is not compatible with the Employee type, the as operator returns null.

    The as operator works just as casting does except that the as operator will never throw an exception. Instead, if the object can’t be cast, the result is null.

    Annotation 42. using directive

    The C# using directive instructs the compiler to try prepending different prefixes to a type name until a match is found.

For example:

    using Microsoft;   // Try prepending "Microsoft."
    using Wintellect;  // Try prepending "Wintellect."

    public sealed class Program {
       public static void Main() {
          Wintellect.Widget w = new Wintellect.Widget(); // Not ambiguous
       }
    }

Another way to remove the ambiguity is to explicitly define an alias that tells the compiler which Widget you want to create:

using Microsoft;   // Try prepending "Microsoft."
    using Wintellect;  // Try prepending "Wintellect."

    // Define WintellectWidget symbol as an alias to Wintellect.Widget
    using WintellectWidget = Wintellect.Widget;

    public sealed class Program {
       public static void Main() {
          WintellectWidget w = new WintellectWidget(); // No error now
       }
    }

The C# compiler offers a feature called extern aliases that gives you a way to work around this rarely occurring problem. Extern aliases also give you a way to access a single type from two (or more) different versions of the same assembly.
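
    A minimal sketch of extern aliases, assuming two hypothetical assemblies, Widget1.dll and Widget2.dll, that both define a Wintellect.Widget type:

    // Compile with: csc.exe /r:V1=Widget1.dll /r:V2=Widget2.dll Program.cs

    extern alias V1; // Refers to the Widget1.dll assembly
    extern alias V2; // Refers to the Widget2.dll assembly

    public sealed class Program {
       public static void Main() {
          // Fully disambiguated: each alias maps to one of the referenced assemblies
          V1::Wintellect.Widget w1 = new V1::Wintellect.Widget();
          V2::Wintellect.Widget w2 = new V2::Wintellect.Widget();
       }
    }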

    Annotation 43. namespace

    Creating a namespace is simply a matter of writing a namespace declaration into your code as follows (in C#):

    namespace CompanyName {

    public sealed class A { // TypeDef: CompanyName.A

    }

    namespace X {

    public sealed class B { … } // TypeDef: CompanyName.X.B

    }

    }

    The comment on the right of the class definitions above indicates the real name of the type the compiler will emit into the type definition metadata table; this is the real name of the type from the CLR’s perspective.


    In C#, the namespace directive simply tells the compiler to prefix each type name that appears in source code with the namespace name so that programmers can do less typing.

    Any data types the compiler directly supports are called primitive types. Primitive types map directly to types existing in the Framework Class Library (FCL). For example, in C#, an int maps directly to the System.Int32 type.

Primitive Type    FCL Type    CLS-Compliant    Description
    sbyte    System.SByte    No    Signed 8-bit value
    byte    System.Byte    Yes    Unsigned 8-bit value
    short    System.Int16    Yes    Signed 16-bit value
    ushort    System.UInt16    No    Unsigned 16-bit value
    int    System.Int32    Yes    Signed 32-bit value
    uint    System.UInt32    No    Unsigned 32-bit value
    long    System.Int64    Yes    Signed 64-bit value
    ulong    System.UInt64    No    Unsigned 64-bit value
    char    System.Char    Yes    16-bit Unicode character (char never represents an 8-bit value as it would in unmanaged C++.)
    float    System.Single    Yes    IEEE 32-bit floating point value
    double    System.Double    Yes    IEEE 64-bit floating point value
    bool    System.Boolean    Yes    A true/false value
    decimal    System.Decimal    Yes    A 128-bit high-precision floating-point value commonly used for financial calculations in which rounding errors can't be tolerated. Of the 128 bits, 1 bit represents the sign of the value, 96 bits represent the value itself, and 8 bits represent the power of 10 to divide the 96-bit value by (can be anywhere from 0 to 28). The remaining bits are unused.
    string    System.String    Yes    An array of characters
    object    System.Object    Yes    Base type of all types
    dynamic    System.Object    Yes    To the common language runtime (CLR), dynamic is identical to object. However, the C# compiler allows dynamic variables to participate in dynamic dispatch using a simplified syntax.

    Specifically, the C# compiler supports patterns related to casting, literals, and operators, as shown in the following examples :

    First, the compiler is able to perform implicit or explicit casts between primitive types such as these:

    Int32 i = 5; // Implicit cast from Int32 to Int32

    Int64 l = i; // Implicit cast from Int32 to Int64

    Single s = i; // Implicit cast from Int32 to Single

    Byte b = (Byte) i; // Explicit cast from Int32 to Byte

    Int16 v = (Int16) s; // Explicit cast from Single to Int16

    Annotation 44. The /checked+ compiler switch

    One way to get the C# compiler to control overflows is to use the /checked+ compiler switch. This switch tells the compiler to generate code that has the overflow-checking versions of the add, subtract, multiply, and conversion IL instructions. The code executes a little slower because the CLR is checking these operations to determine whether an overflow occurred. If an overflow occurs, the CLR throws an OverflowException.

    Annotation 45. checked and unchecked operators


    C# allows this flexibility by offering checked and unchecked operators. Here’s an example that uses the unchecked operator:

    UInt32 invalid = unchecked((UInt32) (-1)); // OK

    And here is an example that uses the checked operator:

    Byte b = 100;

    b = checked((Byte) (b + 200)); // OverflowException is thrown

    b = (Byte) checked(b + 200); // b contains 44; no OverflowException

Annotation 46. checked and unchecked statements

    C# also offers checked and unchecked statements. The statements cause all expressions within a block to be checked or unchecked:

    checked { // Start of checked block

    Byte b = 100;

    b = (Byte) (b + 200); // This expression is checked for overflow.

    } // End of checked block

    In fact, if you use a checked statement block, you can now use the += operator with the Byte, which simplifies the code a bit:

    checked { // Start of checked block

    Byte b = 100;

    b += 200; // This expression is checked for overflow.

    } // End of checked block

Annotation 47. System.ValueType

    You need to bear in mind some performance considerations when you’re working with reference types.

    The memory must be allocated from the managed heap.

    Each object allocated on the heap has some additional overhead members associated with it that must be initialized.

    The other bytes in the object (for the fields) are always set to zero.

    Allocating an object from the managed heap could force a garbage collection to occur.

Any type declared as a class is a reference type, and every structure or enumeration is a value type. All value types must be derived from System.ValueType. All enumerations are derived from the System.Enum abstract type, which is itself derived from System.ValueType.

    In particular, you should declare a type as a value type if all the following statements are true:

    The type acts as a primitive type. Specifically, this means that it is a fairly simple type that has no members that modify any of its instance fields. When a type offers no members that alter its fields, we say that the type is immutable. In fact, it is recommended that many value types mark all their fields as readonly

    The type doesn’t need to inherit from any other type.

    The type won’t have any other types derived from it.

    So, in addition to the previous conditions, you should declare a type as a value type if one of the following statements is true:

    Instances of the type are small (approximately 16 bytes or less).

    Instances of the type are large (greater than 16 bytes) and are not passed as method parameters or returned from methods.
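
    A minimal sketch of a type that meets these guidelines might look like the following (the Money type and its fields are hypothetical):

    using System;

    // A small, immutable value type: roughly 8 bytes of instance fields,
    // no base type other than System.ValueType, and no derived types possible.
    internal struct Money {
       private readonly Int32 m_amount;        // readonly fields make the type immutable
       private readonly Int32 m_currencyCode;

       public Money(Int32 amount, Int32 currencyCode) {
          m_amount = amount;
          m_currencyCode = currencyCode;
       }

       public Int32 Amount { get { return m_amount; } }
       public Int32 CurrencyCode { get { return m_currencyCode; } }
    }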

    Annotation 48. differences between value type and reference type.

    Here are some of the ways in which value types and reference types differ:

    Value type objects have two representations: an unboxed form and a boxed form. Reference types are always in a boxed form.

    Value types are derived from System.ValueType. This type offers the same methods as defined by System.Object. However, System.ValueType overrides the Equals method so that it returns true if the values of the two objects’ fields match. In addition, System.ValueType overrides the GetHashCode method to produce a hash code value by using an algorithm that takes into account the values in the object’s instance fields.

    Because you can’t define a new value type or a new reference type by using a value type as a base class, you shouldn’t introduce any new virtual methods into a value type. No methods can be abstract, and all methods are implicitly sealed (can’t be overridden).

    Reference type variables contain the memory address of objects in the heap. By default, when a reference type variable is created, it is initialized to null, indicating that the reference type variable doesn’t currently point to a valid object. Attempting to use a null reference type variable causes a NullReferenceException to be thrown. By contrast, value type variables always contain a value of the underlying type, and all members of the value type are initialized to 0. Since a value type variable isn’t a pointer, it’s not possible to generate a NullReferenceException when accessing a value type. The CLR does offer a special feature that adds the notion of nullability to a value type.

    When you assign a value type variable to another value type variable, a field-by-field copy is made. When you assign a reference type variable to another reference type variable, only the memory address is copied.

    Because of the previous point, two or more reference type variables can refer to a single object in the heap, allowing operations on one variable to affect the object referenced by the other variable. On the other hand, value type variables are distinct objects, and it’s not possible for operations on one value type variable to affect another.

    Because unboxed value types aren’t allocated on the heap, the storage allocated for them is freed as soon as the method that defines an instance of the type is no longer active. This means that a value type instance doesn’t receive a notification (via a Finalize method) when its memory is reclaimed.
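
    To make the copy-semantics difference concrete, here is a minimal sketch (the PointVal and PointRef types are hypothetical):

    using System;

    internal struct PointVal { public Int32 X, Y; }        // value type
    internal sealed class PointRef { public Int32 X, Y; }  // reference type

    public static class CopySemanticsDemo {
       public static void Main() {
          PointVal v1 = new PointVal { X = 1, Y = 2 };
          PointVal v2 = v1;         // field-by-field copy
          v2.X = 100;               // does not affect v1
          Console.WriteLine(v1.X);  // 1

          PointRef r1 = new PointRef { X = 1, Y = 2 };
          PointRef r2 = r1;         // only the memory address is copied
          r2.X = 100;               // both variables refer to the same heap object
          Console.WriteLine(r1.X);  // 100
       }
    }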

You tell the CLR what to do by applying the System.Runtime.InteropServices.StructLayoutAttribute attribute on the class or structure you're defining. To this attribute's constructor, you can pass LayoutKind.Auto to have the CLR arrange the fields, LayoutKind.Sequential to have the CLR preserve your field layout, or LayoutKind.Explicit to explicitly arrange the fields in memory by using offsets. If you don't explicitly specify the StructLayoutAttribute on a type that you're defining, your compiler selects whatever layout it determines is best.

    You should be aware that Microsoft’s C# compiler selects LayoutKind.Auto for reference types (classes) and LayoutKind.Sequential for value types (structures).

    Here’s an example:

    using System;

    using System.Runtime.InteropServices;

    // Let the CLR arrange the fields to improve

    // performance for this value type.

    [StructLayout(LayoutKind.Auto)]

    internal struct SomeValType {

    private readonly Byte m_b;

    private readonly Int16 m_x;


    }

    using System;

    using System.Runtime.InteropServices;

    // The developer explicitly arranges the fields of this value type.

    [StructLayout(LayoutKind.Explicit)]

    internal struct SomeValType {

    [FieldOffset(0)]

    private readonly Byte m_b; // The m_b and m_x fields overlap each

    [FieldOffset(0)]

    private readonly Int16 m_x; // other in instances of this type

    }

    It should be noted that it is illegal to define a type in which a reference type and a value type overlap. It is possible to define a type in which multiple reference types overlap at the same starting offset; however, this is unverifiable. It is legal to define a type in which multiple value types overlap; however, all of the overlapping bytes must be accessible via public fields for the type to be verifiable.

    It’s possible to convert a value type to a reference type by using a mechanism called boxing.

    Internally, here’s what happens when an instance of a value type is boxed:

    1. Memory is allocated from the managed heap. The amount of memory allocated is the size required by the value type’s fields plus the two additional overhead members (the type object pointer and the sync block index) required by all objects on the managed heap.

    2. The value type’s fields are copied to the newly allocated heap memory.

    3. The address of the object is returned. This address is now a reference to an object; the value type is now a reference type.

    Note that the lifetime of the boxed value type extends beyond the lifetime of the unboxed value type.

    The CLR accomplishes this copying in two steps. First, the address of the Point fields in the boxed Point object is obtained. This process is called unboxing. Then, the values of these fields are copied from the heap to the stack-based value type instance. Unboxing is not the exact opposite of boxing. The unboxing operation is much less costly than boxing. Unboxing is really just the operation of obtaining a pointer to the raw value type (data fields) contained within an object.

    Internally, here’s exactly what happens when a boxed value type instance is unboxed:

    1. If the variable containing the reference to the boxed value type instance is null, a NullReferenceException is thrown.

    2. If the reference doesn’t refer to an object that is a boxed instance of the desired value type, an InvalidCastException is thrown.

    The second item above means that the following code will not work as you might expect:

    public static void Main() {

    Int32 x = 5;

    Object o = x; // Box x; o refers to the boxed object

    Int16 y = (Int16) o; // Throws an InvalidCastException

    }

    Logically, it makes sense to take the boxed Int32 that o refers to and cast it to an Int16. However, when unboxing an object, the cast must be to the exact unboxed value type—Int32 in this case. Here’s the correct way to write this code:

    public static void Main() {

    Int32 x = 5;

    Object o = x; // Box x; o refers to the boxed object

    Int16 y = (Int16)(Int32) o; // Unbox to the correct type and cast

    }

Passing a value type instance as an Object will cause boxing to occur, which will adversely affect performance. If you are defining your own class, you can define the methods in the class to be generic so that value type instances can be passed to them without boxing.

    // p1 DOES get boxed, and the reference is placed in c.

    IComparable c = p1;

Console.WriteLine(c.GetType()); // "Point"

    When casting p1 to a variable (c) that is of an interface type, p1 must be boxed because interfaces are reference types by definition. So p1 is boxed, and the pointer to this boxed object is stored in the variable c. The following call to GetType proves that c does refer to a boxed Point on the heap.

When overriding the Equals method, there are a few more things that you'll probably want to do:

    Have the type implement the System.IEquatable<T> interface's Equals method. This generic interface allows you to define a type-safe Equals method.

    Overload the == and != operator methods. Usually, you'll implement these operator methods to internally call the type-safe Equals method.

    If you define a type and override the Equals method, you should also override the GetHashCode method. The reason why a type that defines Equals must also define GetHashCode is that the implementation of the System.Collections.Hashtable type, the System.Collections.Generic.Dictionary type, and some other collections requires that any two objects that are equal must have the same hash code value.
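
    A minimal sketch that follows these recommendations, using a hypothetical Employee type identified by an Id field:

    using System;

    internal sealed class Employee : IEquatable<Employee> {
       private readonly Int32 m_id;
       public Employee(Int32 id) { m_id = id; }

       // Type-safe Equals from IEquatable<T>
       public Boolean Equals(Employee other) {
          return other != null && other.m_id == m_id;
       }

       public override Boolean Equals(Object obj) {
          return Equals(obj as Employee);
       }

       // Equal objects must return the same hash code
       public override Int32 GetHashCode() {
          return m_id;
       }

       // Operator overloads forward to the type-safe Equals
       public static Boolean operator ==(Employee a, Employee b) {
          return ReferenceEquals(a, b) || (!ReferenceEquals(a, null) && a.Equals(b));
       }
       public static Boolean operator !=(Employee a, Employee b) {
          return !(a == b);
       }
    }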

    When your code invokes a member using a dynamic expression/variable, the compiler generates special IL code that describes the desired operation. This special code is referred to as the payload. At runtime, the payload code determines the exact operation to execute based on the actual type of the object now referenced by the dynamic expression/variable.

When the type of a field, method parameter, method return type, or local variable is specified as dynamic, the compiler converts this type to the System.Object type and applies an instance of System.Runtime.CompilerServices.DynamicAttribute to the field, parameter, or return type in metadata.

    Note that the generic code that you are using has already been compiled and will consider the type to be Object; no dynamic dispatch will be performed because the compiler did not produce any payload code in the generic code.

    Do not confuse dynamic and var. Declaring a local variable using var is just a syntactical shortcut that has the compiler infer the specific data type from an expression. The var keyword can be used only for declaring local variables inside a method while the dynamic keyword can be used for local variables, fields, and arguments. You cannot cast an expression to var but you can cast an expression to dynamic. You must explicitly initialize a variable declared using var while you do not have to initialize a variable declared with dynamic.
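
    A minimal sketch contrasting var and dynamic (note that compiling code that uses dynamic requires a reference to Microsoft.CSharp.dll):

    using System;

    public static class DynamicDemo {
       public static void Main() {
          var s = "Hello";                 // compiler infers String; s is statically typed
          Console.WriteLine(s.Length);     // member access resolved at compile time

          dynamic d = "Hello";             // compiler emits payload code
          Console.WriteLine(d.Length);     // member access resolved at runtime by the runtime binder

          d = 123;                         // a dynamic variable may refer to a different type later
          Console.WriteLine(d + 1);        // operation dispatched at runtime; prints 124
       }
    }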

    The C# compiler emits payload code that, at runtime, figures out what operation to perform based on the actual type of an object. This payload code uses a class known as a runtime binder.

At runtime, the C# runtime binder resolves a dynamic operation according to the runtime type of the object. The binder first checks to see if the type implements the IDynamicMetaObjectProvider interface. If the object does implement this interface, then the interface's GetMetaObject method is called, which returns a DynamicMetaObject-derived type. This type can process all of the member, method, and operator bindings for the object. Both the IDynamicMetaObjectProvider interface and the DynamicMetaObject base class are defined in the System.Dynamic namespace, and both are in the System.Core.dll assembly.

    When accessing a COM component, the C# runtime binder will use a DynamicMetaObject-derived type that knows how to communicate with a COM component. The COM DynamicMetaObject-derived type is defined in the System.Dynamic.dll assembly.

    Annotation 49. .NET type and its members.

    A type can define zero or more of the following kinds of members:

    Constants: A constant is a symbol that identifies a never-changing data value.

    Fields: A field represents a read-only or read/write data value. A field can be static, in which case the field is considered part of the type’s state. A field can also be instance (nonstatic), in which case it’s considered part of an object’s state.

    Instance constructors: An instance constructor is a special method used to initialize a new object’s instance fields to a good initial state.

    Type constructors: A type constructor is a special method used to initialize a type’s static fields to a good initial state.

    Methods: A method is a function that performs operations that change or query the state of a type (static method) or an object (instance method). Methods typically read and write to the fields of the type or object.

    Operator overloads : An operator overload is a method that defines how an object should be manipulated when certain operators are applied to the object. Because not all programming languages support operator overloading, operator overload methods are not part of the Common Language Specification (CLS).

Conversion operators: A conversion operator is a method that defines how to implicitly or explicitly cast or convert an object from one type to another type. Conversion operators are not part of the CLS.

    Properties: A property is a mechanism that allows a simple, field-like syntax for setting or querying part of the logical state of a type (static property) or object (instance property) while ensuring that the state doesn’t become corrupt.

Events: A static event is a mechanism that allows a type to send a notification to one or more static or instance methods. An instance (nonstatic) event is a mechanism that allows an object to send a notification to one or more static or instance methods. Events are usually raised in response to a state change occurring in the type or object offering the event. An event consists of two methods that allow static or instance methods to register and unregister interest in the event. In addition to the two methods, events typically use a delegate field to maintain the set of registered methods.

    Types : A type can define other types nested within it. This approach is typically used to break a large, complex type down into smaller building blocks to simplify the implementation.

Regardless of the kind of member, the corresponding compiler must process your source code and produce metadata and Intermediate Language (IL) code for each kind of member in the preceding list. The format of the metadata is identical regardless of the source programming language you use, and this feature is what makes the CLR a common language runtime. The metadata is the key to the whole Microsoft .NET Framework development platform; it enables the seamless integration of languages, types, and objects.

Annotation 50. Member accessibility

    A public type is visible to all code within the defining assembly as well as all code written in other assemblies.

    An internal type is visible to all code within the defining assembly, and the type is not visible to code written in other assemblies.

    The C# compiler requires you to use the /out:<file> compiler switch when compiling the friend assembly (the assembly that does not contain the InternalsVisibleTo attribute). The switch is required

    because the compiler needs to know the name of the assembly being compiled in order to determine if the resulting assembly should be considered a friend assembly.

    Also, if you are compiling a module (as opposed to an assembly) using C#’s /t:module switch, and this module is going to become part of a friend assembly, you need to compile the module by using the C# compiler’s /moduleassemblyname:<string> switch as well. This tells the compiler what assembly the module will be a part of so the compiler can allow code in the module to access the other assembly’s internal types.
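
    A minimal sketch of a friend assembly, assuming hypothetical assemblies ServerLib.dll and ClientApp.exe (for strongly named assemblies the attribute must also specify the friend's public key):

    // Compiled to ServerLib.dll:
    using System;
    using System.Runtime.CompilerServices;

    // Grants the hypothetical "ClientApp" assembly access to this assembly's internal types
    [assembly: InternalsVisibleTo("ClientApp")]

    internal sealed class SecretType {
       internal static void DoWork() { Console.WriteLine("Internal member called"); }
    }

    // In the friend assembly, compiled with: csc.exe /out:ClientApp.exe /t:exe /r:ServerLib.dll Program.cs
    // public sealed class Program {
    //    public static void Main() { SecretType.DoWork(); } // Allowed because ClientApp is a friend
    // }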

Table 6-1 Member Accessibility

    CLR Term    C# Term    Description
    Private    private    The member is accessible only by methods in the defining type or any nested type.
    Family    protected    The member is accessible only by methods in the defining type, any nested type, or one of its derived types without regard to assembly.
    Family and Assembly    (not supported)    The member is accessible only by methods in the defining type, any nested type, or by any derived types defined in the same assembly.
    Assembly    internal    The member is accessible only by methods in the defining assembly.
    Family or Assembly    protected internal    The member is accessible by any nested type, any derived type (regardless of assembly), or any methods in the defining assembly.
    Public    public    The member is accessible to all methods in any assembly.

    The compiler enforces many restrictions on a static class:

    The class must be derived directly from System.Object because deriving from any other base class makes no sense since inheritance applies only to objects, and you cannot create an instance of a static class.

    The class must not implement any interfaces since interface methods are callable only when using an instance of a class.

    The class must define only static members (fields, methods, properties, and events). Any instance members cause the compiler to generate an error.

    The class cannot be used as a field, method parameter, or local variable because all of these would indicate a variable that refers to an instance, and this is not allowed. If the compiler detects any of these uses, the compiler issues an error.

    E.g.

    using System;

    public static class AStaticClass

    {

    public static void AStaticMethod() { }

    public static String AStaticProperty

    {

    get { return s_AStaticField; }

    set { s_AStaticField = value; }

    }

    private static String s_AStaticField;

    public static event EventHandler AStaticEvent;

    }

    Annotation 51.Partial keyword

    The partial keyword tells the C# compiler that the source code for a single class, structure, or interface definition may span one or more source code files.

    Using the partial keyword allows you to split the code for the type across multiple source code files, each of which can be checked out individually so that multiple programmers can edit the type at the same time.

Splitting a class or structure into distinct logical units within a single file is another use.

    Code splitters: Visual Studio creates two source code files: one for your code and the other for the code generated by the designer.
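
    A minimal sketch of a type split across two hypothetical source files that are compiled into the same assembly:

    // Widget.Part1.cs (hypothetical file)
    internal sealed partial class Widget {
       public void MethodA() { /* ... */ }
    }

    // Widget.Part2.cs (hypothetical file, compiled together with Widget.Part1.cs)
    internal sealed partial class Widget {
       public void MethodB() { /* ... */ }
    }

    // The compiler merges both parts into a single Widget type in the resulting assembly.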

    Annotation 52. Explain the namespaces in which .NET has the data functionality class.

System.Data contains the basic objects used for accessing and storing relational data. Each of these is independent of the type of data source and of the way we connect to it.

    These objects are:
    1.DataSet
    2.DataTable
    3.DataRelation.

System.Data.OleDb objects are used to connect to a data source via an OLE DB provider. These objects have the same properties, methods, and events as their SqlClient equivalents. A few of the provider objects are:

    1.OleDbConnection
    2.OleDbCommand

System.Data.SqlClient objects are used to connect to a data source via the Tabular Data Stream (TDS) interface of Microsoft SQL Server only. The intermediate layers required by an OLE DB connection are removed, which provides better performance. System.Xml contains the basic objects required to create, read, store, write, and manipulate XML documents according to W3C recommendations.

    Annotation 53. Overview of ADO.NET architecture.

A data provider supplies objects through which functionality such as opening and closing connections and retrieving and updating data can be used. It also provides access to data sources like SQL Server, Access, and Oracle. Some of the data provider objects are:

    1.Command object, which is used to execute queries and stored procedures.
    2.DataAdapter, which is a bridge between the data store and the dataset.
    3.DataReader, which reads data from the data store in forward-only mode.

    Annotation 54. a dataset object.

A dataset object is not directly connected to any data store. It represents disconnected and cached data. The dataset communicates with a data adapter that fills it. A dataset can have one or more DataTable objects and relations. The DataView object is used to sort and filter data in a DataTable.

    Annotation 55. the ADO.NET architecture.

ADO.NET provides access to all kinds of data sources such as Microsoft SQL Server, OLE DB, Oracle, and XML. ADO.NET separates the data access and data manipulation components. ADO.NET includes providers in the .NET Framework to connect to the database, to execute commands, and finally to retrieve results. Those results are either used directly or placed in a dataset and manipulated there.

    Annotation 56. the steps to perform transactions in .NET

Following are the general steps that are followed during a transaction (see the sketch after this list):
    1.Call BeginTransaction. This marks the beginning of the transaction.
    2.Assign the Transaction object returned by BeginTransaction to the Transaction property of the SqlCommand.
    3.Execute the command.
    4.Call the Commit method of the SqlTransaction object to save the changes made to the data through the transaction. Call Rollback to undo all the work that belongs to this transaction.
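
    A minimal sketch of these steps with SqlClient, assuming a hypothetical Accounts table and connection string:

    using System;
    using System.Data.SqlClient;

    public static class TransactionDemo {
       public static void Transfer(String connectionString) {
          using (SqlConnection con = new SqlConnection(connectionString)) {
             con.Open();
             SqlTransaction tran = con.BeginTransaction();   // step 1
             SqlCommand cmd = con.CreateCommand();
             cmd.Transaction = tran;                         // step 2
             try {
                cmd.CommandText = "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1";
                cmd.ExecuteNonQuery();                       // step 3
                cmd.CommandText = "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2";
                cmd.ExecuteNonQuery();
                tran.Commit();                               // step 4: save the changes
             }
             catch {
                tran.Rollback();                             // undo all work in this transaction
                throw;
             }
          }
       }
    }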

    Annotation 57. Define connection pooling

A connection pool is created when a connection is opened the first time. The next time a connection is opened, the connection string is matched and, if it is exactly equal, connection pooling is used. Otherwise, a new connection is opened, and connection pooling won't be used. Maximum pool size is the maximum number of connection objects to be pooled. If the maximum pool size is reached, the requests are queued until some connections are released back to the pool. It is therefore advisable to close a connection as soon as you are done with it.

    Connection pooling is a method of reusing active database connections instead of creating new ones every time the user requests one. The connection pool manager keeps track of all the open connections. When a new request comes in, the pool manager checks whether any unused connection exists and returns one if available. If all connections are busy and the maximum pool size has not been reached, a new connection is formed and added to the pool. If the max pool size is reached, the requests get queued up until a connection in the pool becomes available or the connection attempt times out.

    Connection pooling behavior is controlled by connection string parameters such as Pooling, Max Pool Size, Min Pool Size, and Connection Lifetime. The default Max Pool Size is 100.

    Annotation 58. Steps to enable and disable connection pooling?

Set Pooling=true in the connection string. However, pooling is enabled by default in .NET. To disable connection pooling, set Pooling=false in the connection string if it is an ADO.NET connection. If it is an OleDbConnection object, set OLE DB Services=-4 in the connection string.

    Annotation 59. explain enabling and disabling connection pooling.

To enable connection pooling (it is on by default):

    SqlConnection myConnection = new SqlConnection(@"Data Source=(local)\SQLEXPRESS;Initial Catalog=TEST;Integrated Security=SSPI;");

    To disable connection pooling:

    SqlConnection myConnection = new SqlConnection(@"Data Source=(local)\SQLEXPRESS;Initial Catalog=TEST;Integrated Security=SSPI;Pooling=false;");

    Annotation 60. What is the relation between Classes and instances?

A class is a template that describes the attributes and behavior of some entity. An object is a specific instance of a class. When a class is defined, objects of that class can be created. Example: we create a class "Food". This class has attributes like 'price' and 'quantity'. From this Food class, we create objects like "Spaghetti" and "Pasta".

    Annotation 61.Difference between dataset and datareader.


A dataset is:

    1.Disconnected.
    2.Able to traverse data in any order, forward or backward.
    3.Modifiable - data can be manipulated within the dataset.
    4.More expensive than a datareader because it stores multiple rows at the same time.

    A datareader is:

    1.Connected - the connection needs to be maintained all the time.
    2.Able to traverse only forward.
    3.Read only; therefore, data cannot be manipulated.
    4.Less costly because it stores one row at a time.

    Annotation 62. What are command objects?

Command objects are used to execute commands against the data source and return the results to a DataReader or fill a DataSet, with the help of the following methods (a short sketch of ExecuteScalar and ExecuteNonQuery appears at the end of this annotation):

    1.ExecuteNonQuery:

This method executes the command defined in the CommandText property. The connection used is defined in the Connection property. It returns an integer indicating the number of rows affected by the query.

    2.ExecuteReader:

This method executes the command defined in the CommandText property. The connection used is defined in the Connection property. It returns a reader object that is connected to the resulting rowset within the database, allowing the rows to be retrieved.

    3.ExecuteScalar:

    1.This method executes the command defined in the CommandText property.
    2.The connection used is defined in the Connection property.
    3.It returns a single value which is the first column of the first row of the resulting rowset.
    4.The rows of the rest of the result are discarded.
    5.It is fast and efficient in cases where a singleton value is required.

Command objects are used to execute queries, stored procedures, SQL statements, etc. They can also execute stored procedures or queries that use parameters.
    A command object works on the basis of properties like ActiveConnection, CommandText, CommandType, and Name. The Command object has three methods:

    1.Execute: executes the queries, stored procedures etc.
    2.Cancel: stops the method execution
    3.CreateParameter: to create a parameter object
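
    A rough sketch of ExecuteScalar and ExecuteNonQuery as described above (the connection string and the Customers table are hypothetical):

    using System;
    using System.Data.SqlClient;

    public static class CommandDemo {
       public static void Run(String connectionString) {
          using (SqlConnection con = new SqlConnection(connectionString)) {
             con.Open();

             // ExecuteScalar: returns the first column of the first row of the result
             SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Customers", con);
             Int32 customers = (Int32) countCmd.ExecuteScalar();
             Console.WriteLine(customers);

             // ExecuteNonQuery: returns the number of rows affected
             SqlCommand updateCmd = new SqlCommand(
                "UPDATE Customers SET Active = 1 WHERE Region = @region", con);
             updateCmd.Parameters.AddWithValue("@region", "West");
             Int32 rowsAffected = updateCmd.ExecuteNonQuery();
             Console.WriteLine(rowsAffected);
          }
       }
    }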

Annotation 63. the use of the data adapter.

The data adapter objects connect command objects to a DataSet object. They provide the means for the exchange of data between the data store and the tables in the DataSet. An OleDbDataAdapter object is used with an OLE DB provider. A SqlDataAdapter object uses Tabular Data Stream (TDS) with Microsoft SQL Server.
    Data adapters are the medium of communication between a data source, such as a database, and a dataset. They allow activities like reading and updating data.

    Annotation 64. The basic methods of Dataadapter

    The most commonly used methods of the DataAdapter are:

1.Fill:
    This method executes the SelectCommand to fill the DataSet object with data from the data source. Depending on whether there is a primary key in the DataSet, Fill can also be used to update an existing table in a DataSet with changes made to the data in the original data source.

    2.FillSchema:
    This method executes the SelectCommand to extract the schema of a table from the data source. It creates an empty table in the DataSet object with all the corresponding constraints.

    3.Update:
    This method executes the InsertCommand, UpdateCommand, or DeleteCommand to update the original data source with the changes made to the content of the DataSet.

    Annotation 65. the steps involved to fill a dataset.

The DataSet object is a disconnected store used for the manipulation of relational data. It is filled with data fetched from the data store. Once the work with the dataset is done, the connection is re-established and the changes are reflected back into the store.

Steps to fill a dataset in ADO.NET are (see the sketch after this list):
    1.Create a connection object.
    2.Create an adapter by passing the string query and the connection object as parameters.
    3.Create a new object of dataset.
    4.Call the Fill method of the adapter and pass the dataset object.
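
    A minimal sketch of these four steps, assuming a hypothetical Products table and connection string:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class FillDataSetDemo {
       public static DataSet Load(String connectionString) {
          SqlConnection con = new SqlConnection(connectionString);   // 1. connection object
          SqlDataAdapter adapter = new SqlDataAdapter(
             "SELECT Id, Name FROM Products", con);                  // 2. adapter from query + connection
          DataSet ds = new DataSet();                                // 3. new DataSet object
          adapter.Fill(ds, "Products");                              // 4. Fill opens and closes the connection itself
          return ds;
       }
    }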

    Annotation 66. Identifying changes made to dataset since it was loaded.

The changes made to a dataset can be tracked using the GetChanges and HasChanges methods. GetChanges returns a dataset containing only the rows that have changed since the dataset was loaded or since AcceptChanges was last executed. HasChanges indicates whether any changes were made to the dataset since it was loaded or since AcceptChanges was executed. RejectChanges can be used to revert the changes made to the dataset since it was loaded.

    Annotation 67. Steps to add/remove row’s in “DataTable” object of “DataSet”

The 'NewRow' method is provided by the 'DataTable' to add a new row to it. The 'DataTable' has a 'DataRowCollection' object which holds all the rows in a 'DataTable' object.
    The Add method of the DataRowCollection is used to add a new row to the DataTable. The Remove method of the DataRowCollection is used to remove a 'DataRow' object from the 'DataTable'. The RemoveAt method of the DataRowCollection is used to remove a 'DataRow' object from the 'DataTable' at the index specified.
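
    A minimal sketch of NewRow, Add, Remove, and RemoveAt, using a hypothetical Employees table:

    using System;
    using System.Data;

    public static class DataRowDemo {
       public static void Run() {
          DataTable table = new DataTable("Employees");
          table.Columns.Add("Id", typeof(Int32));
          table.Columns.Add("Name", typeof(String));

          DataRow row = table.NewRow();   // creates a row matching the table's schema
          row["Id"] = 1;
          row["Name"] = "Alice";
          table.Rows.Add(row);            // DataRowCollection.Add

          table.Rows.Remove(row);         // remove a specific DataRow object
          // table.Rows.RemoveAt(0);      // or remove by index
       }
    }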

    Annotation 68. the basic use of “DataView” and its methods.

A DataView is a representation of a full table or a small section of its rows. It is used to sort and find data within a DataTable. Following are the methods of a DataView (sketched below):
    1.Find : Parameter: an array of values; Value returned: the index of the matching row
    2.FindRows : Parameter: an array of values; Value returned: a collection of matching DataRowView objects
    3.AddNew : Adds a new row to the DataView object.
    4.Delete : Deletes the specified row from the DataView object.
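
    A minimal sketch of these DataView members, assuming the table passed in has a Name column:

    using System;
    using System.Data;

    public static class DataViewDemo {
       public static void Run(DataTable employees) {
          DataView view = new DataView(employees);
          view.Sort = "Name ASC";                        // Find/FindRows require a sort on the key column(s)

          Int32 index = view.Find("Alice");              // index of the matching row, or -1
          Console.WriteLine(index);

          DataRowView[] matches = view.FindRows("Alice");// all matching rows
          Console.WriteLine(matches.Length);

          DataRowView newRow = view.AddNew();            // add a new row through the view
          newRow["Name"] = "Bob";
          newRow.EndEdit();

          view.Delete(0);                                // delete the row at index 0 from the view
       }
    }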

    Annotation 69. To load multiple tables in a DataSet.

MyDataSet myds = new MyDataSet();
    SqlDataAdapter myda = new SqlDataAdapter("procId", this.Connection);
    myda.SelectCommand.CommandType = CommandType.StoredProcedure;
    myda.SelectCommand.Parameters.AddWithValue("@pId", pId);
    myda.TableMappings.Add("Table", myds.xval.TableName);
    myda.Fill(myds);

    ADO.NET Code showing Dataset storing multiple tables.
    DataSet ds = new DataSet();
    ds.Tables.Add(dt1);
    ds.Tables.Add(dt2);
    ds.Tables.Add(dtn);

    Annotation 70. applications of CommandBuilder

A CommandBuilder builds "Parameter" objects automatically and generates the INSERT, UPDATE, and DELETE commands for a DataAdapter based on its SELECT command and the results it returns. Using a CommandBuilder is less error prone and more readable than constructing these command objects by hand.
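
    A minimal sketch of the usual SqlCommandBuilder pattern (the Products table, its Id key column, and the connection string are hypothetical; the SELECT must return a key column for the builder to work):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class CommandBuilderDemo {
       public static void Update(String connectionString) {
          SqlDataAdapter adapter = new SqlDataAdapter(
             "SELECT Id, Name FROM Products", connectionString);

          // Derives InsertCommand, UpdateCommand, and DeleteCommand
          // (and their Parameter objects) from the adapter's SelectCommand.
          SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

          DataSet ds = new DataSet();
          adapter.Fill(ds, "Products");

          ds.Tables["Products"].Rows[0]["Name"] = "Renamed product";
          adapter.Update(ds, "Products");   // uses the generated UpdateCommand
       }
    }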

    Annotation 71. Define connected and disconnected data access in ADO.NET

The data reader is based on the connected architecture for data access and does not allow data manipulation. The dataset supports the disconnected data access architecture, which gives better results because the connection does not have to be kept open while the data is being used.

    Annotation 72. Describe CommandType property of a SQLCommand in ADO.NET.

CommandType is a property of the Command object which can be set to Text or StoredProcedure. If it is Text, the command executes the SQL statement in CommandText. When it is StoredProcedure, the command runs the stored procedure named in CommandText. A SqlCommand is an object that allows specifying what is to be performed in the database.

Access a database at runtime using ADO.NET:

    SqlConnection sqlCon = new SqlConnection(connectionString);
    sqlCon.Open();
    string strQuery = "select CategoryName from abcd";
    SqlCommand cmd = new SqlCommand(strQuery, sqlCon);
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine(reader[0]);
    }
    reader.Close();
    sqlCon.Close();

    Annotation 73. list the debugging windows available.     

    The windows which are available while debugging are:
Breakpoints, Output, Watch, Autos, Locals, Immediate, Call Stack, Threads, Modules, Processes, Memory, Disassembly, and Registers.

    Annotation 74. Break mode:

Break mode lets you observe how changes made to the code of an application alter the way it executes. In break mode, execution is paused and a snapshot of the running application is taken in which the status and values of all the variables are preserved.

    Annotation 75. the options for stepping through code

Applications consist of various activities which need to be performed during execution. Some of them are composite activities which need to be executed in parallel or conditionally. These activities are classified as ParallelActivity and ConditionalActivity.

    The two options of debugging handle these activities differently as follows:

    1. Branch stepping:
    In this, when the control gets transferred to another concurrent activity, it happens without being noticed. Only the activities in the currently selected branch are stepped through although other activities in the workflow may be executing concurrently. If you want to debug any concurrent activity, then a breakpoint needs to be placed appropriately. Stepping continues in that branch when the breakpoint is triggered.

    2. Instance stepping:
    In this, you can step through as well as debug the concurrent activities. You can even notice the change in control that occurs when concurrently executing activities get executed. Instance stepping option should be chosen while debugging state machine workflows.

    Annotation 76. define a Breakpoint

    Using Breakpoints you can break or pause the execution of an application at a certain point.
    A breakpoint with an action associated with it is called a ‘tracepoint’. Using tracepoints, the debugger can perform additional actions instead of having an application only enter a break mode.

    Annotation 77. Define Debug and Trace Class.

    1.Debug Class (System.Diagnostics)
    It provides a set of methods and properties that help debug your code. This class cannot be inherited.

    2.Trace Class (System.Diagnostics)
    It provides a set of methods and properties that help you trace the execution of your code. This class cannot be inherited. For a list of all members of this type, see Trace Members.

    Annotation 78. What are Trace switches?

Trace switches are used to enable, disable, and filter the tracing output. They are objects that can be configured through the .config file.

    Annotation 79. configuration of trace switches in the application’s .config file.

Switches are configured using the .config file, so the trace output can be enabled or disabled without recompiling the application. Configuring involves changing the value of a switch from an external source after it has been initialized; the values of the switch objects are changed through the .config file.
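
    A minimal sketch of a TraceSwitch whose level is set from the .config file (the switch name "General" and the config fragment are illustrative):

    using System;
    using System.Diagnostics;

    public static class TraceSwitchDemo {
       // The switch's value can be changed from the application's .config file, e.g.:
       // <configuration>
       //   <system.diagnostics>
       //     <switches>
       //       <add name="General" value="3" />   <!-- 3 corresponds to TraceLevel.Info -->
       //     </switches>
       //   </system.diagnostics>
       // </configuration>
       private static readonly TraceSwitch s_general =
          new TraceSwitch("General", "Application-wide trace switch");

       public static void Main() {
          if (s_general.TraceInfo)
             Trace.WriteLine("Informational tracing is enabled");

          Trace.WriteLineIf(s_general.TraceError, "Written only when the level is Error or above");
       }
    }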

    Annotation 80. What is an Event?

    When an action is performed, this action is noticed by the computer application based on which the output is displayed. These actions are called events. Examples of events are pressing of the keys on the keyboard, clicking of the mouse. Likewise, there are a number of events which capture your actions.

    Annotation 81. Define Delegate.     

Delegates are similar to function pointers, but they are secure and type-safe. A delegate instance encapsulates a static or an instance method.
    Declaring a delegate defines a reference type which can be used to encapsulate a method with a specific signature.
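
    A minimal sketch of declaring and invoking a delegate (the Notify delegate and its target methods are hypothetical):

    using System;

    // Declaring a delegate defines a reference type that can encapsulate
    // any method with this signature (void return, single String parameter).
    internal delegate void Notify(String message);

    public static class DelegateDemo {
       private static void WriteToConsole(String message) {
          Console.WriteLine(message);
       }

       public static void Main() {
          Notify n = new Notify(WriteToConsole);   // encapsulate a static method
          n("Hello");                              // invokes WriteToConsole("Hello")

          // Delegates are multicast: add a second (anonymous) method to the invocation list
          n += delegate(String m) { Console.WriteLine(m.ToUpper()); };
          n("World");                              // both methods are called
       }
    }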

    Annotation 82. What is the purpose of AddHandler keyword?

    The AddHandler statement allows you to specify an event handler. AddHandler has to be used to handle shared events or events from a structure.
The arguments passed to AddHandler are:
    1.The name of an event from an event sender, and
    2.An expression that evaluates to a delegate.

    Annotation 83. exceptions handling in CLR

Usually the exceptions that occur in the try block are caught in the catch block, and the finally block is used to do all the cleanup work. But exceptions can occur even in the finally block. When an exception is thrown, the CLR looks for an appropriate catch filter that can handle the exception, and it executes any intervening finally blocks before transferring control to that catch filter.

    The code in the finally always runs. If you return out of the try block, or even if you do a “goto” out of the try, the finally block always runs:

using System;

    class main
    {
        public static void Main()
        {
            try
            {
                Console.WriteLine("In Try block");
                return;
            }
            finally
            {
                Console.WriteLine("In Finally block");
            }
        }
    }

    Both “In Try block” and “In Finally block” will be displayed. Whether the return is in the try block or after the try-finally block, performance is not affected either way. The compiler treats it as if the return were outside the try block anyway. If it’s a return without an expression (as it is above), the IL emitted is identical whether the return is inside or outside of the try. If the return has an expression, there’s an extra store/load of the value of the expression (since it has to be computed within the try block).

    Annotation 84. create and throw a custom exception.     

The usual try-catch-finally format has to be followed. However, in this case, instead of using the preset exceptions from System.Exception, you define your own OwnException class and inherit from Exception or ApplicationException. You need to define three constructors in your OwnException class: one without parameters, another with a String parameter for the error message, and a third that takes a String and an inner exception object.

    Example:
    class OwnException : ApplicationException
    {
    public OwnException() : base() {}
    public OwnException(string s) : base(s) {}
    public OwnException(string s, Exception ae) : base(s, ae) {}
    }

    Annotation 85. difference between Localization and Globalization

In globalization, an application is developed to support various languages and cultures; its features and code are designed independently of any single language or locale. In localization, an application is adapted for a local market. This may include translating the UI to the local language and customizing its features if necessary.

    Annotation 86. Define Unicode

    Unicode is a character encoding standard that enables characters from virtually all writing systems to be represented; .NET strings store text as 16-bit UTF-16 code units. The characters cover Western European, Eastern European, Cyrillic, Greek, Arabic, Hebrew, Chinese, Japanese and many other scripts. The corresponding ISO standard is ISO/IEC 10646.

    Annotation 87. Steps to generate a resource file

    Resource files are used to separate the application's implementation from its user interface, which eliminates the need to change various sections of code in order to add another language. In practice, a .resx file containing key/value pairs is added to the project and is either embedded into the assembly at build time or compiled into a binary .resources file (for example with the resgen tool).

    Annotation 88. Implementation of globalization and localization in the use interface in .NET.

    Globalization is the process of making an application that supports multiple cultures without mixing up the business logic and the culture-related information of that application. Localization involves adapting a global application and applying culture-specific alterations to it. The classes and interfaces provided by the System.Resources namespace allow culture-specific resources to be stored.

    Annotation 89. The functions of the ResourceManager class

    The ResourceManager class performs:

    1.Look-up of culture-specific resources
    2.Resource fallback when a localized resource does not exist
    3.Support for resource serialization

    Annotation 90. Explain preparation of culture-specific formatting in .NET.

    The CultureInfo class can be used for this purpose. It represents information about a specific culture, including the culture names, the writing system, and the calendar. It also provides access to objects that provide information for common operations such as date formatting and string sorting.

    Annotation 91. Define XCopy

    The XCopy command is an advanced version of the copy command used to copy or move files or directories to another location (including locations across networks). By default it excludes hidden and system files.

    Annotation 92. Explain visual inheritance of Windows Forms.

    Steps:
    1.Create a Windows form called the base window
    2.Add a menu bar on this form
    3.Right click on the solution explorer and select Add New Item
    4.Select Inherited Form
    5.From the dialog box choose the base window form
    6.Name the new form as the child form
    7.Now the new child form has the menu from the base form
    8.We can place as many controls on the base form as we want to

    Annotation 93. Explain the lifecycle of the form.

    1.Load: fired when the form is first loaded in the application
    2.Activated: fired whenever the form gets the focus, i.e. when it is loaded the first time, restored from the minimized state, or brought to the front
    3.Deactivate: fired whenever the form loses focus, i.e. when the form is closed, minimized, or moved to the background
    4.Closing: triggered when the form is about to be closed
    5.Closed: triggered when the form has been closed
    6.Disposed: used for garbage collection (an event-wiring sketch follows this list)
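
    As an illustration, the sketch below wires the corresponding Windows Forms events (the event names follow the standard Form members; Console output is only for demonstration):

    using System;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        public MainForm()
        {
            Load        += (s, e) => Console.WriteLine("Load");
            Activated   += (s, e) => Console.WriteLine("Activated");
            Deactivate  += (s, e) => Console.WriteLine("Deactivate");
            FormClosing += (s, e) => Console.WriteLine("Closing");
            FormClosed  += (s, e) => Console.WriteLine("Closed");
            Disposed    += (s, e) => Console.WriteLine("Disposed");
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new MainForm());
        }
    }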

    Annotation 94. Explain the steps to create menus 

    1.MenuItem item1 = new MenuItem();
    2.item1.Text = "item1";
    3.item1.Value = "one";
    4.Menu1.Items.Add(item1);

    Annotation 95. Anchoring a control and Docking a control

    1. Anchoring: keeps a control at a fixed distance from the selected edges of its container, so it moves and resizes dynamically with the form.
    2. Docking: attaches a control to an edge of its container (or fills it) so that it adheres to that edge. A short sketch follows.
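
    A minimal Windows Forms sketch of both properties (control names are arbitrary):

    using System.Windows.Forms;

    public class LayoutForm : Form
    {
        public LayoutForm()
        {
            TextBox input = new TextBox();
            // Anchored to the top, left and right edges: the textbox stretches with the form's width.
            input.Anchor = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right;
            input.Width = ClientSize.Width;

            Panel statusPanel = new Panel();
            // Docked to the bottom edge: the panel always adheres to the bottom of the form.
            statusPanel.Dock = DockStyle.Bottom;
            statusPanel.Height = 24;

            Controls.Add(input);
            Controls.Add(statusPanel);
        }
    }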

    Annotation 96. Define ErrorProvider control.

    It is a component that provides feedback about another control that is in an error state. It allows the user to see where the error is by displaying a blinking icon next to the control; on mouse hover, a tooltip appears showing the error description.
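
    A minimal sketch (control and component names are arbitrary): the error is set when validation fails and cleared by passing an empty string.

    using System.Windows.Forms;

    public class NameForm : Form
    {
        private readonly TextBox txtName = new TextBox();
        private readonly ErrorProvider errorProvider = new ErrorProvider();

        public NameForm()
        {
            Controls.Add(txtName);
            txtName.Validating += (s, e) =>
            {
                // Blinking icon plus tooltip next to the control while the field is empty.
                errorProvider.SetError(txtName,
                    txtName.Text.Length == 0 ? "Name is required." : "");
            };
        }
    }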

    Annotation 97. Explain building a composite control 

    Steps to create a Composite control:

    1.Select a project
    2.Right click and add a new item (User Control – .ascx) to the selected project.
    3.Add @Control Directive
    4.Add all the controls that you want to be displayed on the User control as a part of one or more web pages.
    5.Write the code for all the tasks to be performed by the user control.
    6. Create accessor methods (properties) so that the outside world can interact with the user control.

    Annotation 98. Explain the ways to deploy your Windows application.

    The ways to deploy a Windows application are:
    1.Merge Module Project: Allows the package of components to be shared between multiple applications.
    2.Setup Project: Builds an .msi and .exe installer for a Windows application.
    3.Web Setup Project: Builds an installer for a Web application.
    4.Cab Project: Creates a cabinet file for downloading.

    Annotation 99. Explain the 3 types of configuration files for a Windows application in .NET.

    1.Application Configuration Files: They contain configuration settings specific to an application. These files provide a way of overriding the metadata in assemblies without having to rebuild the application (a minimal application configuration file is sketched below).

    2.Machine Configuration Files: Machine configuration allows settings to be provided for all the applications on a computer. The name of the machine configuration file is machine.config.

    3.Security Configuration Files: They contain information that describes permissions and rights. The information in these files relates to the code access security system.
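
    As an illustration, a minimal application configuration file might look like the sketch below (the key name and value are hypothetical); the setting can then be read with System.Configuration.ConfigurationManager.AppSettings["GreetingText"] after adding a reference to System.Configuration.

    <?xml version="1.0"?>
    <configuration>
      <appSettings>
        <!-- hypothetical application-specific setting -->
        <add key="GreetingText" value="Hello" />
      </appSettings>
    </configuration>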

    Annotation 100. What are the ways to optimize the performance of a windows application?

    1.Knowing when to use StringBuilder (see the sketch after this list)
    2.Comparing Non-Case-Sensitive Strings
    3.Use string.Empty
    4.Replace ArrayList with List
    5.Use && and || operators
    6.Smart Try-Catch
    7.Replace Divisions
    8.Code profiling
    9.Use performance monitor to observe counters written in the application
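
    To illustrate the first point, the sketch below builds a long string with StringBuilder instead of repeated concatenation, which would allocate a new string on every iteration:

    using System;
    using System.Text;

    class Program
    {
        static void Main()
        {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++)
            {
                sb.Append(i).Append(',');   // appends into an internal buffer, no new string per iteration
            }
            string result = sb.ToString();  // the string is materialized once, at the end
            Console.WriteLine(result.Length);
        }
    }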

    Annotation 101. List out difference between the Debug class and Trace class.  

    Use the Debug class for debug builds and the Trace class for both debug and release builds; calls to Debug are compiled out of release builds.

    Annotation 102. Name three test cases you should use in unit testing? 

    Positive test cases (correct data, correct output), negative test cases (broken or missing data, proper handling), exception test cases (exceptions are thrown and caught properly).
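
    A minimal sketch of a positive and an exception test case, written here with NUnit as an example framework (the Divider class and test names are made up):

    using System;
    using NUnit.Framework;

    public class Divider
    {
        public int Divide(int a, int b) { return a / b; }
    }

    [TestFixture]
    public class DividerTests
    {
        [Test]
        public void Divide_ValidInput_ReturnsQuotient()   // positive case: correct data, correct output
        {
            Assert.AreEqual(5, new Divider().Divide(10, 2));
        }

        [Test]
        public void Divide_ZeroDivisor_Throws()           // exception case: the exception is thrown and asserted
        {
            Assert.Throws<DivideByZeroException>(() => new Divider().Divide(10, 0));
        }
    }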

    Annotation 103. Explain the finally statement in C#.NET.

    The finally block is always executed, irrespective of any condition or error occurring in the method. It is typically used for clean-up code.
    E.g.:

    public void MyMethod()
    {
    try
    {
    //code to connect to database
    //code to perform action
    }
    catch (Exception ex)
    {
    //handle exception
    }
    finally
    {
    //code to disconnect database.
    }
    }

    Annotation 104. the steps to create and implement Satellite Assemblies.

    Satellite assemblies are resource assemblies specific to language/culture. Different resource files are created for different languages/cultures and then the needed one is loaded based on the user.

    1. Create a new web application.
    2. Drag 2 label controls, a dropdownlist and a button control. Add 2 items to the dropdownlist: en-US and fr-FR.
    3. Add a new folder called resources and add 2 resource files to it, e.g. res1.resx and res1.fr-FR.resx. In res1.resx, add a key "Welcome" with the value "Hello there". In res1.fr-FR.resx, add a key "Welcome" with the value "Bon Journe".
    4. Generate their resource files by using the resgen command for each of these resource files and keep them in the App_GlobalResources folder.
    5. In the button's click event, add the following code:
    6. Session["language"] = dropdownlist1.SelectedItem.Text;
    Response.Redirect("Webform2.aspx");
    7. Add a label in Webform2.aspx.
    When you execute Webform1.aspx, choose the English or French option from the dropdownlist and click the button. Webform2.aspx should display the welcome message based on the chosen option.

    Annotation 105. Explain the purpose of the ResourceManager class. Name the namespace that contains it.

    Use ResourceManager class to retrieve resources that exist in an assembly. Steps to do so are:

    1.Create a reference to the assembly that contains the resources.
    2.Create an instance of ResourceManager.
    3.Specify the base name of the resource file and provide the reference to the assembly that contains it.
    4.Use the ResourceManager's GetObject or GetString method to retrieve the resource.
    The System.Resources namespace contains the ResourceManager class. A usage sketch follows.
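
    A minimal usage sketch (the base name "MyApp.Strings" and the key "Welcome" are hypothetical):

    using System;
    using System.Reflection;
    using System.Resources;

    class ResourceDemo
    {
        static void Main()
        {
            // Base name of the embedded resource file plus the assembly that contains it.
            ResourceManager rm = new ResourceManager("MyApp.Strings", Assembly.GetExecutingAssembly());
            string welcome = rm.GetString("Welcome");   // falls back to the neutral culture if no localized value exists
            Console.WriteLine(welcome);
        }
    }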

    Annotation 106. Explain the purpose of CultureInfo class. What namespace contains it?

    System.Globalization namespace contains CultureInfo class. This class provides information about a specific culture, i.e. datetime format, currency, language etc.

    Annotation 107. Explain steps to prepare culture-specific formatting.

    The NumberFormatInfo class is used to define how symbols, currencies etc are formatted based on specific cultures.
    E.g.:

    using System;
    using System.Globalization;

    public class TestClass
    {
        public static void Main()
        {
            int i = 100;
            // Creates a CultureInfo for English in the U.S.
            CultureInfo us = new CultureInfo("en-US");
            // Display i formatted as currency for us.
            Console.WriteLine(i.ToString("c", us));

            // Creates a CultureInfo for French.
            CultureInfo fr = new CultureInfo("fr-FR");
            // Displays i formatted as currency for France.
            Console.WriteLine(i.ToString("c", fr));
        }
    }

    Annotation 108. The steps to implement localizability in the user interface

    Implementing localizability basically consists of translating the UI and setting the culture and UI culture. To set the culture and UI culture for an entire application, make a globalization entry in web.config; for setting the culture in individual pages, use the Culture and UICulture attributes of the @ Page directive, as sketched below.
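
    A plausible sketch of those entries (standard ASP.NET settings; "auto" picks up the browser's language preference):

    <configuration>
      <system.web>
        <globalization culture="auto" uiCulture="auto" />
      </system.web>
    </configuration>

    and at the page level:

    <%@ Page Language="C#" Culture="auto" UICulture="auto" %>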

    Annotation 109. Define Trace Listeners and Trace Switches?

    Trace listeners are objects that are used to receive, store and route tracing information. The trace listener determines the final destination to which the tracing information is routed. There are 3 built-in types:
    Default, TextWriter and EventLog listeners.
    Trace switches are used to enable, disable and filter the output produced by trace listeners.

    E.g.:

    In the configuration sketch below, both switches are off; to turn them on, replace 0 with 1.
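
    The switch names here are hypothetical; in code they would be read with, for example, new BooleanSwitch("DataSwitch", "Data access tracing").

    <configuration>
      <system.diagnostics>
        <switches>
          <!-- 0 = off, 1 = on for a BooleanSwitch -->
          <add name="DataSwitch" value="0" />
          <add name="UISwitch" value="0" />
        </switches>
      </system.diagnostics>
    </configuration>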

    Annotation 110. Explain tracing with an example using C#.NET.

    In web.config, enable tracing (for an ASP.NET application, the <trace enabled="true" /> element under <system.web>).

    System.Diagnostics.Trace.WriteLine("Error in Method1.");
    System.Diagnostics.Trace.WriteLineIf(variable, "Error in Method1."); // variable is a bool condition, e.g. a switch's Enabled property

    Annotation 111. Define CLR triggers.

    A CLR trigger can be a Data Definition Language (DDL) or Data Manipulation Language (DML) trigger, and can be an AFTER or INSTEAD OF trigger. Methods written in managed code that are members of an assembly can be executed as triggers, provided the assembly is deployed in SQL Server 2005 using the CREATE ASSEMBLY statement. The Microsoft.SqlServer.Server namespace contains the required classes and attributes for this purpose.

    Steps for creating CLR Trigger

    Follow these steps to create a CLR trigger of DML (after) type to perform an insert action (a minimal sketch of the managed class follows the list):
    1. Create a .NET class of triggering action
    2. Make an assembly (.DLL) from that Class
    3. Enable CLR environment in that database.
    4. Register the assembly in SQL Server
    5. Create CLR Trigger using that assembly
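
    A minimal sketch of step 1, the managed class behind the trigger (the trigger name and target table are made up; Name, Target and Event are properties of SqlTriggerAttribute):

    using Microsoft.SqlServer.Server;

    public class Triggers
    {
        [SqlTrigger(Name = "trgAuditInsert", Target = "dbo.Orders", Event = "FOR INSERT")]
        public static void AuditInsert()
        {
            // Sends a message back to the caller; a real trigger would typically inspect
            // SqlContext.TriggerContext and open a SqlConnection("context connection=true").
            SqlContext.Pipe.Send("A row was inserted into dbo.Orders.");
        }
    }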

    Annotation 112. Difference between an interface and abstract class 

    In an interface all members are implicitly abstract; in an abstract class some methods can be concrete.
    In an interface no accessibility modifiers are allowed (members are implicitly public), whereas accessibility modifiers are possible in abstract classes.

    Annotation 113. Difference between System.String and System.StringBuilder classes. 

    System.String is immutable; System.StringBuilder was designed as a mutable string on which a variety of operations can be performed efficiently.

    Annotation 114. List different ways to deploy an assembly. 

    MSI installer, a CAB archive, and XCOPY command.

    Annotation 115. Define Satellite Assembly. 

    When you write a multilingual or multi-cultural application in .NET, and want to distribute the core application separately from the localized modules, the localized assemblies that modify the core application are called satellite assemblies.

    Annotation 116. Declare a custom attribute for the entire assembly.

    Global attributes must appear after any top-level using clauses and before the first type or namespace declarations. An example of this is as follows:
    using System;
    [assembly : MyAttributeClass] class X {}
    Note that in an IDE-created project, by convention, these attributes are placed in AssemblyInfo.cs.

    Annotation 117. Explain abstraction in C#.NET.

    Abstraction is used to create a common set of methods that might have different specific implementations in subclasses. An abstract class cannot be instantiated; it declares abstract methods without implementations (and may also contain concrete members). Classes inheriting from an abstract class must implement all of its abstract methods.

    public abstract class Shape
    {
        private float _area;
        public float Area
        {
            get { return _area; }
            set { _area = value; }
        }
        public abstract void CalculateArea();
    }

    class Rect : Shape
    {
        private float _height;
        private float _width;
        public Rect(float height, float width)
        {
            _height = height;
            _width = width;
        }
        public float Height
        {
            get { return _height; }
            set { _height = value; }
        }
        public float Width
        {
            get { return _width; }
            set { _width = value; }
        }
        public override void CalculateArea()
        {
            this.Area = _height * _width;
        }
    }

    Annotation 118. Explain encapsulation usage in C#.

    Encapsulation hides the internal state and behavior of an object. It is applied using access modifiers such as private, public and protected, and provides a way to protect data.

    using System;

    public class MyClass
    {
        private string name;
        public string Name
        {
            get
            {
                return name;
            }
            set
            {
                name = value;
            }
        }
    }
    public class main
    {
        public static int Main(string[] args)
        {
            MyClass myclass = new MyClass();
            myclass.Name = "Communication";   // the private field is reachable only through the property
            Console.WriteLine("The name is :{0}", myclass.Name);
            return 0;
        }
    }

    Note the use of encapsulation: the private field is accessed and set only through the public property.

    Annotation 119. Differentiate between instance data and class data

    Class data (static data) is held by the class itself and there is a single copy shared by all of its instances, while instance data belongs to a particular object; different objects of the same class can hold different values in the same fields, each with its own storage on the managed heap (see the sketch below).
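
    A minimal sketch contrasting the two (class and field names are arbitrary):

    using System;

    class Counter
    {
        public static int TotalCreated;   // class (static) data: one copy shared by all instances
        public int Id;                    // instance data: one copy per object

        public Counter() { Id = ++TotalCreated; }
    }

    class Program
    {
        static void Main()
        {
            Counter a = new Counter();
            Counter b = new Counter();
            Console.WriteLine("{0} {1} {2}", a.Id, b.Id, Counter.TotalCreated); // prints 1 2 2
        }
    }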

    Annotation 120. the significance of static method

    Static methods are used when we want only one copy of that method to perform an action, independent of any particular instance. Imagine a scenario where you need a class that connects to the database once and is reused as-is for the entire application.

    public static class MyDbConnection
    {
        public static void ConnectToDb()
        {
            //code to connect to database
        }
    }
    public class Consumer   // some other class that needs the connection
    {
        public void SomeMethod()
        {
            MyDbConnection.ConnectToDb();
        }
    }
    You don't want different instances of this class to be created and connect to the database again and again; hence make a static method in a static class.

    Annotation 121. The application of boxing and unboxing.

    Boxing and unboxing are used to convert value types into reference types and vice versa. Developers often need to make some methods generic and hence create methods that accept objects rather than specific value types. The advantage of reference types is that they don't create a copy of the object in memory; instead, a reference to the data in memory is passed. This uses memory more efficiently, especially if the object being passed is large.

    public class MyClass
    {
        public MyClass()
        {
        }
        public void MyMethod()
        {
            int intVar1 = 1; // intVar1 is an integer, a value type variable
            object objectVar = intVar1;
            // boxing occurs: the integer value is wrapped in an object on the heap
            int intVar2 = (int)objectVar;
            // unboxing: the object is cast back to the value type
        }
    }

    Annotation 122. Explain calling a native function exported from a DLL?

    Here's a quick example of the DllImport attribute in action:
    using System.Runtime.InteropServices;
    class C
    {
        [DllImport("user32.dll")]
        public static extern int MessageBoxA(int h, string m, string c, int type);
        public static int Main()
        {
            return MessageBoxA(0, "Hello World!", "Caption", 0);
        }
    }
    This example shows the minimum requirements for declaring a C# method that is implemented in a native DLL. The method C.MessageBoxA() is declared with the static and external modifiers, and has the DllImport attribute, which tells the compiler that the implementation comes from the user32.dll, using the default name of MessageBoxA. For more information, look at the Platform Invoke tutorial in the documentation.

    Annotation 123. Simulation of optional parameters to COM functions.

    You must use the Missing class and pass Missing.Value (in System.Reflection) for any values that have optional parameters.

    Annotation 124. Sealed class in C#.NET

    The sealed modifier is used to prevent derivation from a class. An error occurs if a sealed class is specified as the base class of another class. A sealed class cannot also be an abstract class.

    Annotation 125. generics in C#.NET

    Generics maximize code reuse, type safety, and performance. They can be used to create collection classes. The generic collection classes in the System.Collections.Generic namespace should be used instead of classes such as ArrayList in the System.Collections namespace.

    Classes and methods can treat values of different types uniformly with the use of generics. The usage of generics is advantageous because:
    1.They facilitate type safety
    2.They facilitate improved performance
    3.They facilitate reduced code
    4.They promote the usage of parameterized types
    5.The CLR compiles and stores information related to the generic types when they are instantiated (for reference-type arguments, all instantiations share a single compiled version of the code)
    A minimal sketch follows the list.
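
    The sketch below shows a simple generic class and the type safety it gives (the TypedBag name is made up; List<T> is the standard generic collection):

    using System;
    using System.Collections.Generic;

    class TypedBag<T>
    {
        private readonly List<T> items = new List<T>();
        public void Add(T item) { items.Add(item); }
        public T Get(int index) { return items[index]; }
    }

    class Program
    {
        static void Main()
        {
            TypedBag<int> numbers = new TypedBag<int>();
            numbers.Add(42);              // no boxing, checked at compile time
            int first = numbers.Get(0);   // no cast required
            Console.WriteLine(first);
        }
    }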

    Annotation 126. marking a method obsolete

    [Obsolete] public int Foo() {...}
    or
    [Obsolete("This is a message describing why this method is obsolete")] public int Foo() {...}
    Note: The O in Obsolete is always capitalized.

    Annotation 127. System.Environment class in C#.NET.

    The System.Environment class can be used to retrieve information like the following (see the sketch after this list):
    1.command-line arguments
    2.the exit code
    3.environment variable settings
    4.contents of the call stack
    5.time since last system boot
    6.the version of the common language runtime.
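
    A minimal sketch reading a few of those values (the PATH variable is just an example):

    using System;

    class EnvironmentInfo
    {
        static void Main()
        {
            Console.WriteLine(Environment.CommandLine);                      // command line of the process
            Console.WriteLine(Environment.Version);                          // CLR version
            Console.WriteLine(Environment.TickCount);                        // milliseconds since the system started
            Console.WriteLine(Environment.GetEnvironmentVariable("PATH"));   // an environment variable
            Console.WriteLine(Environment.StackTrace);                       // contents of the call stack
            Environment.ExitCode = 0;                                        // exit code returned by the process
        }
    }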

    Annotation 128. implementation of synchronization in C#.

    Use the lock statement, which is equivalent to System.Threading.Monitor.Enter/Exit wrapped in a try/finally:
    lock(obj) { // code }
    translates to

    try
    {
        Monitor.Enter(obj);
        // code
    }
    finally
    {
        Monitor.Exit(obj);
    }

    Annotation 129. the advantages of CLR procedure over T-SQL procedure.

    The use of a CLR procedure makes it possible to perform complex database operations without in-depth knowledge of T-SQL. It also allows business logic to be moved closer to the database, which can improve server performance by avoiding unnecessary round trips. With the help of the .NET Base Class Library, complex logical operations can be performed effectively. It benefits from automatic garbage collection, memory management, exception handling, etc., due to which it is known as a managed procedure, and the concepts of OOP can also be applied. It also provides the ability to leverage .NET Code Access Security (CAS) to prevent assemblies from performing certain operations.

    Annotation 130. comparison of C# Generics and C++ Templates.

    1.C# generics and templates in C++ are more or less similar syntactically.
    2. C# generic types are strongly typed. C++ templates are loosely typed.
    3. C# Generic types are instantiated at the runtime. C++ templates are instantiated at the compile time.
    4. C# Generic types do not permit the type parameters to have default values. C++ templates do.

    Annotation 131. an object pool in .NET

    An object pool is a container that holds a list of objects ready to be used.
    It keeps track of:
    1.Objects that are currently in use
    2.The number of objects the pool holds
    3.Whether this number should be increased
    A request for the creation of an object is served by allocating an object from the pool.
    This reduces the overhead of creating and re-creating objects each time an object is required.


    Annotation 132. Exceptions in .NET

    An exception is a runtime error that occurs because of unexpected or invalid code execution. .NET has enhanced exception handling features. All exceptions inherit from System.Exception.

    Annotation 133. Custom Exceptions in .NET

    Custom exceptions are user-defined exceptions. There are error conditions beyond the predefined exceptions that need to be taken care of. For example, the rules for the minimum balance in a salary account would be different from those in a savings account, and such conditions need to be represented and handled explicitly in the implementation.

    Annotation 134. delegates and its application

    The delegates in .NET are like pointers to functions in C/C++; the difference is that they are type-safe, unlike the ones in C/C++. There are situations where a program needs to perform an action on a particular event, e.g. a user action like a click or a text change; when these actions occur, the delegates invoke the respective functions. Delegates are type-safe function pointers that can be used to write more generic yet type-safe code. A delegate encapsulates the memory address of a method. Events in .NET are created using delegates: when an event is published/raised, the framework examines the delegate behind the event and then calls the function that the delegate refers to.

    Annotation 135. Explain implementation of Delegates in C#

    Here is an implementation of a very simple delegate that accepts no parameters.

    using System;

    public delegate void MyDelegate(); // declaration

    class MyClass
    {
        public static void MyFunc()
        {
            Console.WriteLine("MyFunc Called from a Delegate");
        }
        public static void Main()
        {
            MyDelegate myDel = new MyDelegate(MyFunc);
            myDel();
        }
    }

    Delegate implementation
    using System;

    namespace Delegates
    {
        public delegate int DelegateToMethod(int x, int y);

        public class Math
        {
            public static int Add(int a, int b)
            {
                return a + b;
            }

            public static int Multiply(int a, int b)
            {
                return a * b;
            }

            public static int Divide(int a, int b)
            {
                return a / b;
            }
        }

        public class DelegateApp
        {
            public static void Main()
            {
                DelegateToMethod aDelegate = new DelegateToMethod(Math.Add);
                DelegateToMethod mDelegate = new DelegateToMethod(Math.Multiply);
                DelegateToMethod dDelegate = new DelegateToMethod(Math.Divide);
                Console.WriteLine("Calling the method Math.Add() through the aDelegate object");
                Console.WriteLine(aDelegate(5, 5));
                Console.WriteLine("Calling the method Math.Multiply() through the mDelegate object");
                Console.WriteLine(mDelegate(5, 5));
                Console.WriteLine("Calling the method Math.Divide() through the dDelegate object");
                Console.WriteLine(dDelegate(5, 5));
                Console.ReadLine();
            }
        }
    }

    Annotation 136. the difference between Finalize() and Dispose()

    Dispose() is called as an indication that an object should release any unmanaged resources it holds. Finalize() is used for the same purpose, but it is called by the garbage collector and there is no guarantee of when it will run. Because Dispose() operates deterministically, it is generally preferred.

    Annotation 137. The XmlSerializer and the ACL permissions it requires.

    The XmlSerializer constructor generates a pair of classes derived from XmlSerializationReader and XmlSerializationWriter by analysing the classes to be serialized using reflection. Temporary C# files are created, compiled into a temporary assembly, and loaded into the process. Because generating this code is expensive, the XmlSerializer caches the temporary assemblies on a per-type basis; the cached assembly is reused once it has been created for a class. Therefore the XmlSerializer requires full permissions on the temporary directory, which is the user profile's temp directory for Windows applications.

    Annotation 138. circular references.

    A circular reference is a situation in which two or more resources are interdependent on each other, rendering the entire chain of references unusable. There are quite a few ways of handling the problem of detecting and collecting cyclic references.

    1. A system may explicitly forbid reference cycles.
    2. Systems sometimes ignore cycles when they are short-lived and produce only a small amount of cyclic garbage; in this case a methodology of avoiding cyclic data structures is applied at the expense of efficiency.
    3. Another solution is to periodically run a tracing garbage collector to detect and collect cycles.
    Other types of methods to deal with cyclic references are:
    a. Weighted reference counting
    b. Indirect reference counting

    Annotation 139. Explain steps to add controls dynamically to the form.

    The following code can be called on some event like page load or onload of some image or even a user action like onclick.

    protected void add_button(Button button)
    {
    try
    {
    panel1.Controls.Add(button); // Add the control to the container on a page
    }
    catch (Exception ex)
    {
    label1.Text += ex.Message.ToString();
    }
    }

    Annotation 140. Extender provider components and its use.

    An extender provider is a component that provides properties to other components.

    Implementing an extender provider:

    1.Use the ProvidePropertyAttribute (which specifies the name of the property that an implementer of IExtenderProvider provides to other components) to declare the property supplied by your extender provider.
    2.Implement the provided property.
    3.Track which controls receive your provided property.
    4.Implement the IExtenderProvider interface, which defines the contract for extending properties to other components in a container.

    Annotation 141. the configuration files in .Net.

    The Machine.config file specifies the settings that are global to a particular machine.

    This file is located at the following path:
    WINNT\Microsoft.NET\Framework\[Framework Version]\CONFIG\machine.config

    The simplest way to add application-specific settings is to use an application configuration file.
    The file is an XML file and contains add elements with key and value attributes.

    The authentication section controls the type of authentication used within your Web application; Windows, Forms or Passport authentication can be defined.

    E.g. <allow> or <deny> tags can be used within the <authorization> section to allow or deny access to your web application for certain users or roles, as sketched below.
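
    A sketch of such a section (the role name and login page are hypothetical):

    <configuration>
      <system.web>
        <authentication mode="Forms">
          <forms loginUrl="Login.aspx" />
        </authentication>
        <authorization>
          <allow roles="Admins" />   <!-- allow members of a role -->
          <deny users="?" />         <!-- deny anonymous users -->
        </authorization>
      </system.web>
    </configuration>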

    Annotation 142. Describe the accessibility modifier “protected internal” in C#.

    A member declared protected internal can be accessed by:
    1.Any code in the same assembly
    2.Inheriting (derived) classes, including those in other assemblies
    3.The class itself

    Annotation 143. the difference between Debug.Write and Trace.Write

    1.Debug.Write: output is produced only in Debug builds (used while debugging a project)
    2.Trace.Write: output is produced in both Debug and Release builds (can be left in released versions of applications)

    Annotation 144. Explain the use of virtual, sealed, override, and abstract.

    The virtual keyword enables a member to be overridden in a derived class. If it has to be prevented from being overridden, the sealed keyword is used. If the virtual keyword is not used, a member cannot be overridden; it can only be hidden in a derived class using the new keyword. The override keyword is used to override a virtual (or abstract) method of the base class. The abstract keyword is used to modify a class, method or property declaration; you cannot instantiate an abstract class or call an abstract method directly. An abstract method is implicitly virtual and its definition must be given in the derived class.

    Annotation 145. Benefits of a Primary Interops Assembly (PIA)

    A primary interop assembly contains type definitions (as metadata) of types implemented with COM. Only a single PIA can exist for a given type library, and it must be signed with a strong name by the publisher of the COM type library. One PIA can wrap multiple versions of the same type library. A COM type library imported as an assembly can be a PIA only if it has been signed and published by the same publisher. Therefore, only the publisher of a type library can produce a true PIA, which can be considered the unit of an official type definition for interoperating with the underlying COM types.

    Annotation 146. Explain the use of static members with example.

    Static members are not associated with a particular instance of any class. They need to be qualified with the class name to be called. Since they are not associated with object instances, they do not have access to non-static members, i.e. "this", which represents the current object instance, cannot be used.

    Annotation 147. How to achieve polymorphism in C#.NET?

    Polymorphism is when a class can be used as more than one type through inheritance: it can be used as its own type, as any of its base types, or as any interface type it implements. It can be achieved in the following ways (see the sketch below):
    1.A derived class inherits from a base class and gains all the methods, fields, properties and events of the base class.
    2.To completely take over a class member from a base class, the base class has to declare that member as virtual and the derived class overrides it.
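
    A minimal sketch of point 2 (class names are arbitrary):

    using System;

    class Animal
    {
        public virtual string Speak() { return "..."; }
    }

    class Dog : Animal
    {
        public override string Speak() { return "Woof"; }
    }

    class Program
    {
        static void Main()
        {
            Animal a = new Dog();           // a Dog used through its base type
            Console.WriteLine(a.Speak());   // prints "Woof": the overridden method is called
        }
    }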

    Annotation 148. Define Code-Access security

    Code access security is a mechanism that helps limit the access that code has to protected resources. It defines permissions which describe the rights to access various resources, and it imposes restrictions on code at run time.
    Code:

    using System;
    namespace ConsoleReadRegistry
    {
        class SecurityClass
        {
            [STAThread]
            static void Main(string[] args)
            {
                Microsoft.Win32.RegistryKey regKey;
                try
                {
                    regKey = Microsoft.Win32.Registry.LocalMachine.OpenSubKey(
                        @"Software\Microsoft\.Net Framework", false);
                    string[] skNames = regKey.GetSubKeyNames();
                    for (int i = 0; i < skNames.Length; i++)
                    {
                        Console.WriteLine("Registry Key: {0}", skNames[i]);
                    }
                    regKey.Close();
                }
                catch (System.Security.SecurityException e)
                {
                    Console.WriteLine("Security Exception Encountered: {0}", e.Message);
                }
            }
        }
    }

    Annotation 149. Define Role-based security?

    Applications that provide access to their data based on a credentials check verify the user's role and grant access on the basis of that role. Managed code discovers the role of a principal through a Principal object, which in turn contains a reference to an Identity object.

    1.User accounts -> represent people
    2.Group accounts -> represent certain categories of users and the rights they own
    3.In the .NET Framework, Identity objects represent users, and roles represent memberships and security contexts

    A security principal represents a user and their roles, which determine their authority in the application. Role-based security is mostly used in custom authentication.
    A way to set the default principal policy for the application is:
    AppDomain appDomain = AppDomain.CreateDomain("test");
    appDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);

    Annotation 150. Explain steps to deploy an XML web service

    To deploy the XML Web service, one can add a Web Setup project using the project templates, or use xcopy to copy the files from the source to the destination server and then make the destination directory a virtual directory in IIS.

    Demonstrate how to access unmanaged code using Interop
    1.Use System.Runtime.InteropServices.
    2.Use DllImport to declare the unmanaged procedure.
    3.Map the data types of the procedure's parameters to equivalent .NET types.
    4.Call the unmanaged procedure.
    5.Test it

    Annotation 151. Explain the namespaces in which .NET has the data functionality class.

    1. System.Data

    The System.Data namespaces contain classes for accessing and managing data from diverse sources. The top-level namespace and a number of the child namespaces together form the ADO.NET architecture and ADO.NET data providers. For example, providers are available for SQL Server, Oracle, ODBC, and OleDB. Other child namespaces contain classes used by the ADO.NET Entity Data Model (EDM) and by WCF Data Services.

    The System.Data namespace provides access to classes that represent the ADO.NET architecture. ADO.NET lets you build components that efficiently manage data from multiple data sources.

    In a disconnected scenario such as the Internet, ADO.NET provides the tools to request, update, and reconcile data in multiple tier systems. The ADO.NET architecture is also implemented in client applications, such as Windows Forms, or HTML pages created by ASP.NET.

    The centerpiece of the ADO.NET architecture is the DataSet class. Each DataSet can contain multiple DataTable objects, with each DataTable containing data from a single data source, such as SQL Server.

    Each DataTable contains a DataColumnCollection–a collection of DataColumn objects–that determines the schema of each DataTable. The DataType property determines the type of data held by the DataColumn. The ReadOnly and AllowDBNull properties let you further guarantee data integrity. The Expression property lets you construct calculated columns.

    If a DataTable participates in a parent/child relationship with another DataTable, the relationship is constructed by adding a DataRelation to the DataRelationCollection of a DataSet object. When such a relation is added, a UniqueConstraint and a ForeignKeyConstraint are both created automatically, depending on the parameter settings for the constructor. The UniqueConstraint guarantees that values that are contained in a column are unique. The ForeignKeyConstraint determines what action will happen to the child row or column when a primary key value is changed or deleted.

    Using the System.Data.SqlClient namespace (the .NET Framework Data Provider for SQL Server), the System.Data.Odbc namespace (the .NET Framework Data Provider for ODBC), the System.Data.OleDb namespace (the .NET Framework Data Provider for OLE DB), or the System.Data.OracleClient namespace (the .NET Framework Data Provider for Oracle), you can access a data source to use together with a DataSet. Each .NET Framework data provider has a corresponding DataAdapter that you use as a bridge between a data source and a DataSet.

    Class - Description
    Constraint - Represents a constraint that can be enforced on one or more DataColumn objects.
    ConstraintCollection - Represents a collection of constraints for a DataTable.
    ConstraintException - Represents the exception that is thrown when attempting an action that violates a constraint.
    DataColumn - Represents the schema of a column in a DataTable.
    DataColumnChangeEventArgs - Provides data for the ColumnChanging event.
    DataColumnCollection - Represents a collection of DataColumn objects for a DataTable.
    DataException - Represents the exception that is thrown when errors are generated using ADO.NET components.
    DataRelation - Represents a parent/child relationship between two DataTable objects.
    DataRelationCollection - Represents the collection of DataRelation objects for this DataSet.
    DataRow - Represents a row of data in a DataTable.
    DataRowBuilder - Infrastructure. The DataRowBuilder type supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    DataRowChangeEventArgs - Provides data for the RowChanged, RowChanging, OnRowDeleting, and OnRowDeleted events.
    DataRowCollection - Represents a collection of rows for a DataTable.
    DataRowComparer - Returns a singleton instance of the DataRowComparer<TRow> class.
    DataRowComparer<TRow> - Compares two DataRow objects for equivalence by using value-based comparison.
    DataRowExtensions - Defines the extension methods to the DataRow class. This is a static class.
    DataRowView - Represents a customized view of a DataRow.
    DataSet - Represents an in-memory cache of data.
    DataSetSchemaImporterExtension - This member supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    DataSysDescriptionAttribute - Obsolete. Marks a property, event, or extender with a description. Visual designers can display this description when referencing the member.
    DataTable - Represents one table of in-memory data.
    DataTableClearEventArgs - Provides data for the Clear method.
    DataTableCollection - Represents the collection of tables for the DataSet.
    DataTableExtensions - Defines the extension methods to the DataTable class. DataTableExtensions is a static class.
    DataTableNewRowEventArgs - Provides data for the NewRow method.
    DataTableReader - The DataTableReader obtains the contents of one or more DataTable objects in the form of one or more read-only, forward-only result sets.
    DataView - Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation.
    DataViewManager - Contains a default DataViewSettingCollection for each DataTable in a DataSet.
    DataViewSetting - Represents the default settings for ApplyDefaultSort, DataViewManager, RowFilter, RowStateFilter, Sort, and Table for DataViews created from the DataViewManager.
    DataViewSettingCollection - Contains a read-only collection of DataViewSetting objects for each DataTable in a DataSet.
    DBConcurrencyException - The exception that is thrown by the DataAdapter during an insert, update, or delete operation if the number of rows affected equals zero.
    DeletedRowInaccessibleException - Represents the exception that is thrown when an action is tried on a DataRow that has been deleted.
    DuplicateNameException - Represents the exception that is thrown when a duplicate database object name is encountered during an add operation in a DataSet-related object.
    EntityCommandCompilationException - Represents errors that occur during command compilation; when a command tree could not be produced to represent the command text.
    EntityCommandExecutionException - Represents errors that occur when the underlying storage provider could not execute the specified command. This exception usually wraps a provider-specific exception.
    EntityException - Represents Entity Framework-related errors that occur in the EntityClient namespace. The EntityException is the base class for all Entity Framework exceptions thrown by the EntityClient.
    EntityKey - Provides a durable reference to an object that is an instance of an entity type.
    EntityKeyMember - Represents a key name and value pair that is part of an EntityKey.
    EntitySqlException - Represents errors that occur when parsing Entity SQL command text. This exception is thrown when syntactic or semantic rules are violated.
    EnumerableRowCollection - Represents a collection of DataRow objects returned from a LINQ to DataSet query. This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    EnumerableRowCollection<TRow> - Represents a collection of DataRow objects returned from a query. This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    EnumerableRowCollectionExtensions - Contains the extension methods for the data row collection classes. This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    EvaluateException - Represents the exception that is thrown when the Expression property of a DataColumn cannot be evaluated.
    FillErrorEventArgs - Provides data for the FillError event of a DbDataAdapter.
    ForeignKeyConstraint - Represents an action restriction enforced on a set of columns in a primary key/foreign key relationship when a value or row is either deleted or updated.
    InRowChangingEventException - Represents the exception that is thrown when you call the EndEdit method within the RowChanging event.
    InternalDataCollectionBase - Provides the base functionality for creating collections.
    InvalidCommandTreeException - The exception that is thrown to indicate that a command tree is invalid. This exception is currently not thrown anywhere in the Entity Framework.
    InvalidConstraintException - Represents the exception that is thrown when incorrectly trying to create or access a relation.
    InvalidExpressionException - Represents the exception that is thrown when you try to add a DataColumn that contains an invalid Expression to a DataColumnCollection.
    MappingException - The exception that is thrown when mapping related service requests fail.
    MergeFailedEventArgs - Occurs when a target and source DataRow have the same primary key value, and the EnforceConstraints property is set to true.
    MetadataException - The exception that is thrown when metadata related service requests fail.
    MissingPrimaryKeyException - Represents the exception that is thrown when you try to access a row in a table that has no primary key.
    NoNullAllowedException - Represents the exception that is thrown when you try to insert a null value into a column where AllowDBNull is set to false.
    ObjectNotFoundException - The exception that is thrown when an object is not present.
    OperationAbortedException - This exception is thrown when an ongoing operation is aborted by the user.
    OptimisticConcurrencyException - The exception that is thrown when an optimistic concurrency violation occurs.
    OrderedEnumerableRowCollection<TRow> - Represents a collection of ordered DataRow objects returned from a query. This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
    PropertyCollection - Represents a collection of properties that can be added to DataColumn, DataSet, or DataTable.
    ProviderIncompatibleException - The exception that is thrown when the underlying data provider is incompatible with the Entity Framework.
    ReadOnlyException - Represents the exception that is thrown when you try to change the value of a read-only column.
    RowNotInTableException - Represents the exception that is thrown when you try to perform an operation on a DataRow that is not in a DataTable.
    StateChangeEventArgs - Provides data for the state change event of a .NET Framework data provider.
    StatementCompletedEventArgs - Provides additional information for the StatementCompleted event.
    StrongTypingException - The exception that is thrown by a strongly typed DataSet when the user accesses a DBNull value.
    SyntaxErrorException - Represents the exception that is thrown when the Expression property of a DataColumn contains a syntax error.
    TypedDataSetGenerator - Obsolete. Used to create a strongly typed DataSet.
    TypedDataSetGeneratorException - The exception that is thrown when a name conflict occurs while generating a strongly typed DataSet.
    TypedTableBase<T> - This type is used as a base class for typed-DataTable object generation by Visual Studio and the XSD.exe .NET Framework tool, and is not intended to be used directly from your code.
    TypedTableBaseExtensions - Contains the extension methods for the TypedTableBase<T> class.
    UniqueConstraint - Represents a restriction on a set of columns in which all values must be unique.
    UpdateException - The exception that is thrown when modifications to object instances cannot be persisted to the data source.
    VersionNotFoundException - Represents the exception that is thrown when you try to return a version of a DataRow that has been deleted.
