Month: November 2017

.NET

CLR Fundamentals.

    1. Introduction

    2. The Common Language Runtime (CLR)

    3. How the Common Language Runtime Loads

    4. IL and Verification

    5. Unsafe Code

    6. The NGen Tool

    7. The Framework Class Library

    8. The Common Type System

    9. The Common Language Specification

Introduction

This is one of my initial blogs on the CLR overview and basics, which I believe every .NET developer must know. This topic is a prerequisite for starting anything related to .NET, whether it is a console application, a web page, or an application on Windows Phone. To start with, I will try to give you a broad overview of the Common Language Runtime (CLR).

The Common Language Runtime (CLR)

The CLR is a runtime that provides an environment for any programming language that targets it. The CLR has no idea which programming language the developer used for the source code. A developer can write code in any .NET language that targets the CLR, whether C#, VB, F#, or C++/CLI. Compilers act as syntax verifiers and perform code analysis; this allows developers to code in their preferred .NET language, making it easier to express one's ideas and develop software.

Fig 1.1
Environment of .NET Runtime.

Regardless of which compiler is used, the result is a managed module. A managed module is a standard 32-bit Windows (PE32) file or a standard 64-bit Windows (PE32+) file that requires the CLR to execute. Managed assemblies always take advantage of Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR); these two are security features of Windows.

Table 1-1 Parts of Managed Module

All CLR compilers generate IL code and emit full metadata into every managed module. Metadata is a superset of older technologies such as COM Type Libraries and Interface Definition Language (IDL), but CLR metadata is far more complete and is always associated with the file that contains the IL code. The metadata and IL code are embedded in the same EXE/DLL, making it impossible to separate the two. Because metadata and managed code are built at the same time and bound together into the resulting managed module, they are never out of sync with one another.

Metadata has many applications and benefits, namely:

  • Metadata removes the need for native header/library files during compilation, since all the information is available in the assembly (PE32(+)) file, which also contains the IL code that implements the types and members. The compiler can read the metadata directly from the managed module.
  • Visual Studio uses metadata to assist the developer in writing code: its IntelliSense feature parses the metadata tables to tell the coder what properties, methods, events, and fields a type offers and, in the case of methods, what parameters the method expects.
  • The CLR's code verification process uses metadata to ensure that your code performs only type-safe operations.
  • Metadata allows serialization of an object's state on a local machine and deserialization of the same state on a remote machine.
  • Metadata allows the garbage collector to track the lifetime of objects.
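As a small illustration of metadata at work, reflection reads these same metadata tables at runtime. A minimal sketch (the type chosen is just for illustration):

using System;
using System.Reflection;

static class MetadataDemo
{
    static void Main()
    {
        // Enumerate the public static methods that the metadata of
        // System.Console describes, much as IntelliSense does.
        foreach (MethodInfo m in typeof(Console).GetMethods(
                     BindingFlags.Public | BindingFlags.Static))
            Console.WriteLine(m.Name);
    }
}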

The C# compiler and the IL Assembler always produce modules that contain managed code and managed data, so end users must have the CLR installed on their machines to execute this managed code.

The C++/CLI compiler is an exception: by default it builds EXE/DLL modules that contain unmanaged code and manipulate unmanaged data at runtime. By adding the /CLR switch to the compiler options, the C++ compiler can produce modules that contain a hybrid of managed and unmanaged code; for these modules, the CLR is required for execution. The C++ compiler thus allows a developer to write both managed and unmanaged code and still emit a single module.

Merging Managed Modules into an Assembly:

Fig 1.2 Integrating managed modules into a single assembly

The CLR works with assemblies, which are logical groupings of one or more modules or resource files. An assembly is the smallest unit of versioning, reuse, and security. You can produce a single-file or a multi-file assembly. An assembly is similar to what we would call a component in the COM world.

An assembly's logical grouping of files is described by a manifest embedded in a PE32(+) file; the manifest is simply another set of metadata tables. These tables describe the files that make up the assembly, the public types implemented by those files, and the resource or data files associated with the assembly.

If you want to group a set of files into an assembly, you will have to be aware of additional tools and their command-line arguments. An assembly allows you to decompose the deployment of the files while still treating all of them as a single collection. An assembly's modules also include information about referenced assemblies, which makes assemblies "self-describing": an assembly's immediate dependencies can be identified and verified by the CLR.

How the Common Language Runtime Loads:

An assembly's execution is managed by the CLR, so the CLR needs to be loaded into the process first. You can determine whether the .NET Framework is installed on a particular machine by looking for MSCorEE.dll in the %SystemRoot%\System32 directory; the existence of this file confirms that the .NET Framework is installed. Several versions of .NET can be installed on one machine, and these can be identified by looking at the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP

The .NET Framework SDK includes a command-line tool, CLRVer.exe, to view the versions of the installed runtime. If an assembly contains only type-safe managed code, it should work on both 32-bit and 64-bit versions of Windows without any source code changes; the executable will run on any machine with a version of the .NET Framework installed. If a developer wants an assembly that works only on a specific flavor of Windows, the C# compiler's /platform command-line switch is used. This switch controls whether the assembly can be executed on x86 machines running 32-bit Windows, on x64 machines running 64-bit Windows, or on Intel Itanium machines running 64-bit Windows. The default value is anycpu, which lets the assembly execute on any version of Windows.

Depending on the /platform command-line option, the compiler generates an assembly that contains either a PE32 or PE32+ header, and the compiler also inserts the desired CPU architecture information into the header. Microsoft ships two tools with the SDK, DumpBin.exe and CorFlags.exe, which can be used to examine the header information contained in a managed module.
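For example (commands are illustrative; exact output varies by tool version):

csc /platform:x86 Program.cs      builds a PE32/x86 module
corflags Program.exe              shows the CLR header flags (ILONLY, 32BITREQ, ...)
dumpbin /headers Program.exe      shows the PE32/PE32+ header fields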

When executing the assembly, Windows determines from the file header whether to run the application in a 32-bit or 64-bit address space. An executable file with a PE32 header can run in a 32-bit or 64-bit address space, while an executable with a PE32+ header requires a 64-bit address space. Windows also verifies the CPU architecture to confirm that the machine has the required CPU. Lastly, 64-bit versions of Windows have a feature called WOW64 (Windows on Windows64) that allows 32-bit applications to run on them.

Table 1-2 Runtime state of modules based on the /platform switch

/platform switch | Type of managed module | x86 Windows                  | x64 Windows                  | IA64 Windows
anycpu           | PE32/agnostic          | Runs as a 32-bit application | Runs as a 64-bit application | Runs as a 64-bit application
x86              | PE32/x86               | Runs as a 32-bit application | Runs as a WOW64 application  | Runs as a WOW64 application
x64              | PE32+/x64              | Doesn't run                  | Runs as a 64-bit application | Doesn't run
Itanium          | PE32+/Itanium          | Doesn't run                  | Doesn't run                  | Runs as a 64-bit application

After Windows has examined the assembly header to determine whether to create a 32-bit process, a 64-bit process, or a WOW64 process, Windows loads the x86, x64, or IA64 version of MSCorEE.dll into the process's address space. Then the process's primary thread calls a method defined inside MSCorEE.dll; this method initializes the CLR, loads the EXE assembly, and then calls its entry point method (Main). When an unmanaged application loads a managed assembly, Windows loads and initializes the CLR in order to process the code contained within the assembly.

IL is a much higher-level language than most CPU machine languages. It can access and manipulate object types and has instructions to create and initialize objects, call virtual methods on objects, and manipulate array elements directly. IL can be written in assembly language using the IL Assembler, ILAsm.exe; Microsoft also provides an IL Disassembler, ILDasm.exe.

The IL assembly language gives a developer access to all of the CLR's facilities, some of which are hidden by the programming language you would normally use. You can also mix multiple CLR-supported languages to reach otherwise hidden CLR facilities; in fact, the level of integration between .NET programming languages inside the CLR makes mixed-language programming a big advantage for developers.

To execute a method its IL code is initially converted to native CPU instructions. This is the job of the CLR’s JIT compiler.

The figure below shows what happens the first time a method is called.

Just before Main executes, the CLR detects all of the types that are referenced by Main's code. This causes the CLR to allocate an internal data structure that is used to manage access to the referenced types. This internal data structure contains an entry for each method defined by the referenced type (for example, Console). Each entry holds the address where the method's implementation can be found. When initializing this structure, the CLR sets each entry to an internal, undocumented function contained inside the CLR itself; call this function JITCompiler.

When Main makes its first call to WriteLine, the JITCompiler function is called. The JIT Compiler function is responsible for compiling a method’s IL code into native CPU instructions. Because  the IL is being compiled “just in time” this component of the CLR is referred to as a JITter or a JIT Compiler.

The JIT Compiler function then searches the defining assembly’s metadata for the called method’s IL. JITCompiler next verifies and compiles the IL code into native CPU instructions. The native CPU instructions are saved in a dynamically allocated block of memory. Then, JITCompiler goes back to the entry for the called method in the type’s internal data structure created by the CLR and replaces the reference that called it in the first place with the address of the block of memory containing the native CPU instructions it just compiled. Finally, the JITCompiler function jumps to the code in the memory block. When this code returns, it returns to the code in Main which continues execution as normal.

Main now calls WriteLine a second time. This time, the code for WriteLine has already been verified and compiled, so the call goes directly to the block of memory, skipping the JITCompiler function entirely. After the WriteLine method executes, it returns to Main.

A performance  hit is incurred only the first time a method is called. All subsequent calls to method execute at the full speed of the native code because verification and compilation to native code don’t need to be performed again.
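A rough way to observe this first-call cost is to time two consecutive calls; a minimal sketch (numbers vary by machine, and for tiny methods the difference is small, so this only illustrates the idea):

using System;
using System.Diagnostics;

static class JitTiming
{
    static int Square(int x) { return x * x; }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Square(5);                      // first call: the IL is JIT-compiled here
        sw.Stop();
        long firstTicks = sw.ElapsedTicks;

        sw.Restart();
        Square(6);                      // second call: runs the already-compiled native code
        sw.Stop();

        Console.WriteLine("first: {0} ticks, second: {1} ticks", firstTicks, sw.ElapsedTicks);
    }
}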

The native CPU instructions live in dynamically allocated memory, so the compiled code is discarded when the application terminates; if you run the application again, the JIT compiler will have to compile the IL to native instructions again. It is also likely that more time is spent inside a method than calling the method. The CLR's JIT compiler also optimizes the native code: it may take more time to produce optimized code, but that code executes in less time and with better performance than non-optimized code.

The two C# compiler switches that impact code optimization are /optimize and /debug. The following table shows the impact on code quality based on these two switches:

Compiler switch settings            | C# IL code quality | JIT native code quality
/optimize- /debug-                  | Unoptimized        | Optimized
/optimize- /debug(+/full/pdbonly)   | Unoptimized        | Unoptimized
/optimize+ /debug(-/+/full/pdbonly) | Optimized          | Optimized

Unoptimized IL code contains many no-operation (NOP) instructions as well as branches that jump to the next line of code. These instructions are generated to enable the edit-and-continue feature of Visual Studio while debugging and to allow setting breakpoints on the code.

When producing optimized IL code, the C# compiler removes these extraneous NOP and branch instructions, making the code harder to single-step through in a debugger because the control flow is optimized. Furthermore, the compiler produces a Program Database (PDB) file only if you specify the /debug(+/full/pdbonly) switch; the PDB file helps the debugger find local variables and map the IL instructions to source code. The /debug:full switch additionally tells the JIT compiler to track which native code came from each IL instruction, which allows a developer to attach the Visual Studio debugger to an already running process and debug the code easily. Without the /debug:full switch, the JIT compiler does not track the IL-to-native mapping, which makes the JIT compiler run a little faster and use a little less memory. If you start a process with the Visual Studio debugger, it forces the JIT compiler to track the IL-to-native information unless you turn off the Suppress JIT Optimization On Module Load (Managed Only) option in Visual Studio.

In this managed environment, compiling the code is accomplished in two phases. First, the compiler parses the source code, doing as much work as possible in producing IL; but the IL itself must then be compiled into native CPU instructions at runtime, requiring more memory and more CPU time to complete the task.
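For example, typical command lines for the two configurations look like this (assuming the csc.exe command-line compiler):

csc /optimize- /debug:full Program.cs       debuggable: unoptimized IL, full IL-to-native tracking
csc /optimize+ /debug:pdbonly Program.cs    release-style: optimized IL, PDB kept for diagnostics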

The following points compare managed code to unmanaged code:

  1. A JIT compiler can determine if the application is running on an Intel Pentium 4 CPU and produce native code that takes advantage of any special instructions offered by the Pentium 4. Usually, unmanaged applications are compiled for the lowest-common-denominator CPU and avoid using special instructions that would give the application a performance boost.
  2. A JIT compiler can determine when a certain test is always false on the machine that it is running on. In those cases, the native code would be fine-tuned for the host machine; the resulting code is smaller and executes faster.
  3. The CLR could profile the code’s execution and recompile the IL into native code while the application runs. The recompiled code could be reorganized to reduce incorrect  branch predictions depending on the observed execution patterns.

The NGen.exe tool compiles all of an assembly's IL code into native code and saves the resulting native code in a file on disk. At runtime, when an assembly is loaded, the CLR automatically checks whether a precompiled native image exists; if it does, no compilation is required at runtime. Note, however, that the code produced by NGen.exe will not be as highly optimized as the JIT-compiler-produced code.

IL and Verification:

While compiling IL into native CPU instructions, the CLR performs a process called verification. Verification examines the high-level IL code and ensures that everything the code does is safe. For example, verification checks that every method is called with the correct number of parameters. The managed module's metadata includes all of the method and type information used by the verification process.

In Windows, each process has its own virtual address space. Separate address spaces are necessary because you can't trust an application's code: it is entirely possible that an application will read from or write to an invalid memory address. By placing each Windows process in a separate address space, you gain robustness and stability.

You can run multiple managed applications in a single Windows virtual address space. Reducing the number of processes by running multiple applications in a single  OS process can improve performance, require fewer resources and be just as robust as if each application had its own process.

The CLR does offer the ability to execute multiple managed applications in a single OS process. Each managed application executes in an AppDomain. By default, every managed EXE file runs in its own separate address space that has just the one AppDomain, but a process hosting the CLR can decide to run multiple AppDomains in a single OS process.

Unsafe Code

Safe code is code that is verifiably safe. Unsafe code is allowed to work directly with memory addresses and manipulate bytes at these addresses. This is a very powerful feature and is typically useful when interoperating with unmanaged code or when you want to improve the performance of a time-critical algorithm.

The C# compiler requires that all methods that contain unsafe code be marked with the unsafe keyword. In addition, the C# compiler requires you to compile the source code by using the /unsafe compiler switch.
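A minimal sketch of an unsafe method (compile with /unsafe; the names are illustrative):

using System;

static class UnsafeDemo
{
    // Works directly with a memory address through a pointer.
    static unsafe void Square(int* p)
    {
        *p = *p * *p;
    }

    static unsafe void Main()
    {
        int n = 5;
        Square(&n);
        Console.WriteLine(n); // 25
    }
}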

When the JIT compiler attempts to compile an unsafe method, it checks whether the assembly containing the method has been granted System.Security.Permissions.SecurityPermission with the System.Security.Permissions.SecurityPermissionFlag SkipVerification flag set. If the flag is set, the JIT compiler compiles the unsafe code and allows it to execute; the CLR is trusting this code and hoping that the direct address and byte manipulations do no harm. If the flag is not set, the JIT compiler throws either a System.InvalidProgramException or a System.Security.VerificationException, preventing the method from executing. In fact, the whole application will probably terminate at this point, but at least no harm can be done.

The PEVerify.exe tool examines all of an assembly's methods and notifies you of any method that contains unsafe code. When you use PEVerify to check an assembly, it must be able to locate and load all referenced assemblies; because PEVerify uses the CLR to locate the dependent assemblies, the assemblies are located using the same binding and probing rules that would normally be used when executing the assembly.

The NGen Tool

The NGen.exe tool compiles IL to machine code ahead of execution, typically at install time rather than at runtime, so it is interesting in two scenarios:

  • Improving an application's startup time: just-in-time compilation is avoided because the code will already have been compiled into native code, which improves startup time.
  • Reducing an application's working set: NGen.exe compiles the IL to native code and saves the output in a separate file. This file can be memory-mapped into multiple process address spaces simultaneously, allowing the code to be shared.

When a setup program invokes NGen.exe, a new assembly file containing only native code (no IL) is created by NGen.exe. This new file is placed in a folder under a directory with a name like C:\Windows\Assembly\NativeImages_v4.0.#####_64. The directory name includes the version of the CLR and information denoting whether the native code is compiled for x86, x64, or Itanium.
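For example, a setup program might run commands like the following (assembly names are illustrative):

ngen install MyApp.exe      compiles MyApp.exe and its dependencies to native images
ngen display MyApp          shows the native images generated for MyApp
ngen update                 regenerates native images that have become out of sync
ngen uninstall MyApp.exe    removes the native images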

Whenever the CLR loads an assembly file, it looks to see whether a corresponding NGen'd native file exists. There are drawbacks to NGen'd files:

  • No intellectual property protection: At runtime, the CLR requires that the assemblies containing IL and metadata still be shipped; if the CLR can't use the NGen'd file for some reason, it gracefully falls back to JIT compiling the assembly's IL code, which must therefore be available.
  • NGen'd files can get out of sync: When the CLR loads an NGen'd file, it compares a number of characteristics of the previously compiled code against the current execution environment. Here is a partial list of characteristics that must match:
    – CLR version: changes with patches or service packs
    – CPU type: changes if you upgrade your processor hardware
    – Windows OS version: changes with a new service pack update
    – Assembly's identity module version ID (MVID): changes when recompiling
    – Referenced assemblies' version IDs: change when you recompile a referenced assembly
    – Security: changes when you revoke permissions such as SkipVerification or UnmanagedCode that were once granted
    Whenever an end user installs a new service pack of the .NET Framework, the service pack's installation program runs NGen.exe in update mode automatically, so that NGen'd files are kept in sync with the version of the CLR installed.
  • Inferior execution-time performance: NGen can't make as many assumptions about the execution environment as the JIT compiler can, which causes NGen.exe to produce inferior code. Some NGen'd applications actually perform about 5% slower than their JIT-compiled counterparts. So if you're considering NGen.exe, you should compare NGen'd and non-NGen'd versions to be sure the NGen'd version doesn't actually run slower. If the reduction in working-set size improves performance, using NGen can be a net win.
  • For server applications, NGen.exe makes little or no sense, because only the first client request experiences a performance hit; future client requests run at full speed. In addition, for most server applications only one instance of the code is required, so there is no working-set benefit; and NGen'd images cannot be shared across AppDomains, so there is no benefit to NGen'ing an assembly that will be used in a cross-AppDomain scenario.

The Framework Class Library

The Framework Class Library (FCL) is a set of DLL assemblies that contain several thousand type definitions, where each type exposes some functionality. The following kinds of applications can be created/developed using the FCL:

  • Web services
  • Web Forms HTML-based applications (Web sites)
  • Rich Windows GUI applications
  • Rich Internet Applications (RIAs)
  • Windows console applications
  • Windows services
  • Database stored procedures
  • Component libraries

Below are the general Framework Class Library namespaces:

Namespace                      | Description of contents
System                         | All of the basic types used by every application
System.Data                    | Types for communicating with databases and processing data
System.IO                      | Types for doing stream I/O and walking directories and files
System.Net                     | Types that allow for low-level network communications
System.Runtime.InteropServices | Types that allow managed code to access unmanaged OS platform facilities such as DCOM and Win32 functions
System.Security                | Types used for protecting data and resources
System.Text                    | Types to work with text in different encodings
System.Threading               | Types used for asynchronous operations and synchronizing access to resources
System.Xml                     | Types used for processing Extensible Markup Language schemas and data

The Common Type System

Types are at the root of the CLR, so Microsoft created a formal specification, the Common Type System (CTS), that describes how types are defined and how they behave. The CTS specification states that a type can contain zero or more members:

  • Field: A data variable that is part of the object's state. Fields are identified by their name and type.
  • Method: A function that performs an operation on the object, often changing the object's state. Methods have a name, a signature, and modifiers.
  • Property: Properties allow an implementer to validate input parameters and object state before accessing the value, and/or to calculate a value only when necessary. They also allow a user of the type to use simplified syntax. Finally, properties allow you to create read-only or write-only "fields".
  • Event: An event provides a notification mechanism between an object and other interested objects.

The CTS also specifies the rules for type visibility and for access to the members of a type. Thus, the CTS establishes the rules by which assemblies form a boundary of visibility for a type, and the CLR enforces the visibility rules.

A type that is visible to a caller can further restrict the ability of the caller to access the type’s members. The following list shows the valid options for controlling access to a member:

Private: The member is accessible only by other members in the same class type.

Family: The member is accessible by derived types, regardless of whether they are within the same assembly. Many languages, such as C#, refer to family as protected.

Family and assembly: The member is accessible by derived types, but only if the derived type is defined in the same assembly.

Assembly: The member is accessible by any code in the same assembly. Many languages refer to assembly as internal.

Family or assembly: The member is accessible by derived types in any assembly, and by any code in the same assembly. C# refers to family or assembly as protected internal.

Public: The member is accessible by any code in any assembly.

The CTS defines the rules governing type inheritance, virtual methods, object lifetime, and so on. The compiler maps the language-specific syntax into IL, the "language" of the CLR, when it emits the assembly during compilation. The CTS allows a type to derive from only one base class; to help the developer, Microsoft's C++/CLI compiler reports an error if it detects that you are attempting to create managed code that includes a type deriving from multiple base types.

All types must inherit from a predefined type: System.Object. This type is the root of all other types and therefore guarantees that every type instance has a minimum set of behaviours. Specifically, the System.Object type allows you to do the following:

– Compare two instances for equality

– Obtain a hash code for the instance

– Query the true type of an instance

– Perform a shallow copy of the instance

– Obtain a string representation of the instance object's current state
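A quick sketch of these behaviours using a trivial type (names are illustrative):

using System;

class Point
{
    public int X, Y;
}

static class ObjectDemo
{
    static void Main()
    {
        var a = new Point { X = 1, Y = 2 };
        var b = a;

        Console.WriteLine(a.Equals(b));      // compare two instances for equality
        Console.WriteLine(a.GetHashCode());  // obtain a hash code for the instance
        Console.WriteLine(a.GetType());      // query the true type of an instance
        Console.WriteLine(a.ToString());     // string representation of current state
        // MemberwiseClone (a protected method) performs the shallow copy.
    }
}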

The Common Language Specification:

Microsoft has defined a Common Language Specification (CLS) that details for compiler vendors the minimum set of features their compiler must support if these compilers are to generate types compatible with other components written by other CLS-compliant languages on top of the CLR.

The CLS defines rules that externally visible types and methods must adhere to if they are to be accessible from any CLS-compliant programming language. Note that the CLS rules don't apply to code that is accessible only within the defining assembly. Most languages, such as C#, Visual Basic, and Fortran, expose a subset of the CLR/CTS features to the programmer; the CLS defines the minimum set of features that all languages must support. If you want your type to be usable from other languages, you shouldn't take advantage of any features outside the CLS in its public and protected members. Doing so would mean that your type's members might not be accessible by programmers writing code in other programming languages.

The [assembly: CLSCompliant(true)] attribute is applied to the assembly. This attribute tells the compiler to ensure that any publicly exposed type has no construct that would prevent the type from being accessed from another programming language. (In the original example, the SomeLibraryTypeXX type would default to internal and would therefore no longer be exposed outside of the assembly.)
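For example, a minimal sketch (the type is illustrative; the compiler raises a CLS-compliance warning on the publicly exposed uint member):

using System;

[assembly: CLSCompliant(true)]

public sealed class Calculator
{
    // uint is not a CLS-compliant type, so exposing it publicly
    // draws a compiler warning under [assembly: CLSCompliant(true)].
    public uint Add(uint a, uint b) { return a + b; }
}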

The table below shows how programming language constructs map to the equivalent CLR fields and methods:

Type member    | Member type | Equivalent programming language construct
AnEvent        | Field       | Event; the name of the field is AnEvent and its type is System.EventHandler
.ctor          | Method      | Constructor
Finalize       | Method      | Finalizer
add_AnEvent    | Method      | Event add accessor method
get_AProperty  | Method      | Property get accessor method
get_Item       | Method      | Indexer get accessor method
op_Addition    | Method      | + operator
op_Equality    | Method      | == operator
op_Inequality  | Method      | != operator
remove_AnEvent | Method      | Event remove accessor method
set_AProperty  | Method      | Property set accessor method
set_Item       | Method      | Indexer set accessor method

Interoperability with Unmanaged Code: the CLR supports three interoperability scenarios:

  • Managed code can call an unmanaged function in a DLL.
  • Managed code can use an existing COM component (server).
  • Unmanaged code can use a managed type (server).
.NET

C# 4.0 New Features

Dynamic Language Runtime

Dynamic Lookup

dynamic keyword: the object's type need not be known until runtime, and a member's signature is not known until the call is executed.

Typical scenarios for dynamic lookup:

  • System.Reflection
  • programming against COM IDispatch
  • programming against the XML or HTML DOM

The Dynamic Language Runtime (DLR) lets C# behave more like dynamic languages such as Python or Ruby.

dynamic in C# is a type, e.g.:

dynamic WildThings(dynamic beast, string name)
{
    dynamic whatsit = beast.Wildness(name);
    // ...
    return whatsit;
}

dynamic is statically declared on the object: when an object is marked dynamic, the compiler recognizes it and emits metadata describing the call for use at runtime; the runtime then resolves the call, either dispatching it dynamically or throwing a runtime error.

dynamic != var

The var keyword is used for type inference, and a compile-time check is made.

The dynamic keyword is used for objects whose type is unknown during compilation, and hence no compile-time check is made.

dynamic cannot be used with extension methods.

dynamic method invocations cannot take anonymous methods (or lambdas) as parameters:

dynamic heisenberg;

void LocationObserver(float x, float t) {}

heisenberg.Observer(LocationObserver);               // right way of making the call
heisenberg.Observer(delegate (float y, float t){});  // wrong: anonymous method
heisenberg.Observer((x, t) => x + t);                // wrong: lambda

dynamic objects cannot be used in LINQ:

dynamic collection = new[] { 1, 2, 4, 5, 6, 7, 8 };

var result = collection.Select(e => e.Size > 25);   // fails, because:

  1. Select is an extension method
  2. the selector is a lambda

The Dynamic Language Runtime is loaded the first time dynamic objects are executed.

The efficiency cost is paid only the first time, when the call site is resolved and cached; subsequent executions perform like normal calls because no further resolution is required.

The DLR is a normal assembly, part of System.Core; dynamic objects implement the IDispatch or IDynamicObject interface. Using dynamic XML, we can now shorten invocations, e.g. element.LastName instead of element.Attribute["LastName"].

COM support in C# 4.0

COM interop is a feature where COM interface methods are used to interact with automation objects, like Office automation. The ref keyword can now be omitted when using COM interop and PIA objects.

Previously, the publisher of a COM component released a Primary Interop Assembly (PIA) that every developer had to deploy. With the latest release of C#, deploying the PIA is no longer necessary: interop code is generated or embedded only for the COM interface methods that the application actually uses.

Named Parameters and Optional Parameters

Optional parameters set a default value for a parameter. Optional parameters keep the C# syntax consistent, and an optional parameter takes its default value if no argument is passed for it in the method invocation.

static void Entrée(string name, decimal price = 10.0M, int servers = 1, bool vegan = false) { }

static void Main()
{
    Entrée("Linuine Prime", 10.25M, 2, true);  // overrides all default values
    Entrée("Lover", 11.5M, 2);                 // vegan keeps its default
    Entrée("Spaghetti", 8.5M);                 // servers and vegan keep their defaults
    Entrée("Baked Ziu");                       // price, servers and vegan keep their defaults
}

Named parameters bind argument values to parameters by name. E.g.:

using Microsoft.Office.Tools.Word;

Document doc;
object fileName = "MyDoc.docx";
object missing = System.Reflection.Missing.Value;

doc.SaveAs(ref fileName, ref missing, ref missing, … ref embeddedTTFS, …);

Now it can be written as:

doc.SaveAs(FileName: ref fileName, embeddedTTFS: ref embedTTFS);

The method invocation contains only the parameters that are mentioned; the other, missing parameters take their default values.

e.g.

static void Thing(string color = "white", string texture = "smooth", string shape = "square", string emotion = "calm", int quantity = 1) { }

public static void Things()
{
    Thing("blue", "bumpy", "oval", "shaken", 17);
    Thing("blue", "bumpy", "oval", "shaken");
    Thing(texture: "Furry", shape: "triangular");
    Thing(emotion: "happy", quantity: 4);
}

Benefits: no longer creating overloads simply for the convenience of omitting parameters.

Office automation COM interop uses optional parameters.

No longer have to scorn the VB language for supporting them.

It follows the principle of least surprise when mapping the method call.

Liabilities: optional parameters complicate overload resolution.

Events in C# 4.0 

Syntax for events:

public event EventHandler<TickEventArgs> Tick;

public void OnTick(TickEventArgs e) { Tick(this, e); }

public class TickEventArgs : EventArgs
{
    public string Symbol { get; private set; }
    public decimal Price { get; private set; }

    public TickEventArgs(string symbol, decimal price)
    {
        Symbol = symbol;
        Price = price;
    }
}

In C# 4.0, events are now implemented based on a lock-free compare-and-swap technique.

Events now work for both static and instance members, and for both reference and value types.

Covariance and Contravariance:

Covariance: the out modifier on a generic interface or delegate, e.g. IEnumerable<out T>.

The type parameter T can occur only in output positions; using it in an input position is a compile-time error. With covariance, an instance constructed with a more derived type argument can be used where a less derived one is expected.

An enumeration of giraffes is also an enumeration of animals.

Contravariance: the in modifier on a generic interface or delegate, e.g. IComparable<in T>.

T can occur only in input positions, and the compiler will generate contravariant conversions: an instance constructed with a less derived type argument can be used where a more derived one is expected.

So variance can be used for comparison and enumeration of collections in a type-safe manner.
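A small sketch of both conversions (the types are illustrative):

using System;
using System.Collections.Generic;

class Animal { }
class Giraffe : Animal { }

static class VarianceDemo
{
    static void Main()
    {
        // Covariance: IEnumerable<out T>
        IEnumerable<Giraffe> giraffes = new List<Giraffe>();
        IEnumerable<Animal> animals = giraffes;   // an enumeration of giraffes is an enumeration of animals

        // Contravariance: Action<in T>
        Action<Animal> feedAnimal = a => { };
        Action<Giraffe> feedGiraffe = feedAnimal; // a method that feeds any animal can feed a giraffe
    }
}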

AutoProperties in C# 

C# now allows a developer to declare properties whose accessor (get) and mutator (set) methods are generated by the compiler by default. For example:

public class Pt
{
    public int X { get; set; }
    public int Y { get; set; }
}

The compiler generates the backing fields, which are inaccessible by name. This kind of property is known as an auto property.

Implicitly typed local variables (var) can occur:

1. inside a foreach statement
2. in the initialization of a for statement
3. in a using statement
4. in a local variable declaration

Object initializers specify values for fields and properties in a single statement:

var p1 = new Point { X = 1, Y = 2 };
var p2 = new Point(1) { Y = 2 };

Collection Initializers:

The class should have a public Add method taking one key parameter and one value parameter; then we can use collection initializers as follows:

public class Dictionary<TKey, TValue> : IEnumerable
{
    public void Add(TKey key, TValue value) { ... }
    ...
}

var namedCircles = new Dictionary<string, Circle>
{
    { "aa", new Circle { Origin = new Pt { X = 1, Y = 2 }, Radius = 2 } },
    { "ab", new Circle { Origin = new Pt { X = 2, Y = 5 }, Radius = 3 } }
};

Lambda in C#

An anonymous method is a delegate whose function body is inlined as a block of code.

A lambda is a functional, declarative syntax for writing an anonymous method as a single statement.

The lambda operator "=>" is read as "goes to".

delegate int SomeDelegate(int i);

SomeDelegate squareint = x => x * x;
int j = squareint(5); // 25

(x, y) => x == y;                  // types inferred
(int x, string s) => s.Length > x; // types declared
() => Console.WriteLine("Hi");     // no args

Statement lambda, e.g.:

delegate void AnotherDelegate(string s);

AnotherDelegate hello = a =>
{
    string w = String.Format("Hello, {0}", a);
    Console.WriteLine(w);
};

hello("world");  // prints "Hello, world"

Extension Methods:

Extension methods are static methods that can be invoked using instance-method syntax. Extension methods are less discoverable and offer less functionality than instance methods. An extension method is a static method whose first parameter is marked with the this modifier.

Using Extension Methods

  • Must be defined inside a non-generic static class
  • Extension methods are still external static methods
  • Cannot hide, replace, or override instance methods
  • Must import the namespace containing the extension method

System.Linq defines extension methods for IEnumerable<T> and IQueryable<T>.

Shrinking delegates using a lambda expression: Func<int, int> sqr = x => x * x;

What if the entries are not in memory? Then use an expression tree; for that we need System.Linq.Expressions.

Lambda functions compiled as delegates become opaque code; the alternative is to treat them as the special type Expression<TDelegate>. Expression trees are used for runtime analysis.

e.g.

int[] digits = { 0, 1, 2, 3, 4, 5, 6 };

int[] a = digits.Slice(4, 3).Double();

is the same as the static syntax, i.e.:

int[] a = Extension.Double(Extension.Slice(digits, 4, 3));
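For reference, a minimal sketch of what this hypothetical Extension class might look like (Slice and Double are illustrative helpers, not framework methods):

using System.Linq;

public static class Extension
{
    // Returns 'count' elements starting at index 'start'.
    public static int[] Slice(this int[] source, int start, int count)
    {
        return source.Skip(start).Take(count).ToArray();
    }

    // Doubles every element.
    public static int[] Double(this int[] source)
    {
        return source.Select(x => x * 2).ToArray();
    }
}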

LINQ to XML

Introduction: the W3C-compliant DOM (a.k.a. XmlDocument) and XmlReader & XmlWriter live in System.Xml; LINQ to XML lives in the System.Xml.Linq namespace.

What is a DOM: declarations, elements, attribute values, and text content can each be represented with a class; this tree of objects fully describes a document. It is called a document object model, or DOM.

The LINQ to XML DOM: XDocument, XElement, and XAttribute. The X-DOM is LINQ-friendly: it has methods that emit useful IEnumerable sequences upon which you can query, and its constructors are designed so that you can create an X-DOM tree through a LINQ projection.

XDOM Overview:

XObject is the root of the inheritance hierarchy; XElement and XDocument are roots of the containership hierarchy.

XObject is the abstract base class of all X-DOM types, including XDocument.

XNode is the base class for all nodes (excluding attributes); the children of a node container form an ordered collection of mixed node types:

<data>
  Hello world        → XText
  <subelement1/>     → XElement
  <!-- comment -->   → XComment
  <subelement2/>     → XElement
</data>

XContainer is the abstract base of both XElement and XDocument.

XDocument is the root of an XML tree; it wraps the root XElement, adding an XDeclaration.

Loading and Parsing: XElement and XDocument provide Load and Parse methods to build an X-DOM tree from an existing source:

–          Load builds an X-DOM from a file, URI, Stream, TextReader, or XmlReader

–          Parse builds an X-DOM from a string

–          An XNode is created via XNode.ReadFrom() from an XmlReader

–          An XmlReader/XmlWriter reads or writes an XNode via CreateReader() or CreateWriter()

Saving and Serializing: an X-DOM is saved using the Save method, to a file or to a Stream/TextWriter/XmlWriter.

Instantiating an X-DOM using the Add method of XContainer, for e.g.:

XElement lastName = new XElement("lastName", "Bloggs");
lastName.Add(new XComment("nice name"));

Functional Construction: the X-DOM supports functional construction, a mode of instantiation in which you build an entire tree in a single expression.

Automatic Deep Cloning: when a node that already has a parent is added to a second parent, it is deep-cloned. This automatic duplication keeps X-DOM object instantiation free of side effects.
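A small sketch of this behaviour (assuming using System.Xml.Linq):

var child = new XElement("child", "data");
var parent1 = new XElement("parent1", child);
var parent2 = new XElement("parent2", child);   // child already has a parent, so a deep clone is added here

Console.WriteLine(child.Parent.Name);                                // parent1
Console.WriteLine(ReferenceEquals(child, parent2.Element("child"))); // False: parent2 holds a copy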

Navigating and Querying:

The X-DOM returns a single value, or a sequence implementing IEnumerable, when a LINQ query is executed.

FirstNode and LastNode return the first and last child node.

Nodes() returns all children; Elements() returns the child nodes of XElement type.

Elements() over a sequence is effectively a SelectMany query: it is also an extension method on IEnumerable<XContainer>.

Element() is the same as Elements().FirstOrDefault().

Recursive functions: Descendants/DescendantNodes recursively return child elements/nodes.

Parent Navigation: XNodes have a Parent property and AncestorXXX methods. A parent is always an XElement; to access the XDocument we use the Document property. The Ancestors method returns an XElement collection whose first element is the Parent.

XElement customer =
    new XElement("Customer",
        new XAttribute("id", 12),
        new XElement("firstname", "joe"),
        new XElement("lastname", "Bloggs"),
        new XComment("nice name")
    );

The advantages of functional construction are:

–          The code resembles the shape of the XML.

–          It can be incorporated into the select clause of a LINQ query.

Specific Content: XElement's constructors are overloaded to take a params object array: public XElement(XName name, params object[] content). Here are the decisions made by XContainer for each content object:

Diagram:

Attribute Navigation: XAttribute defines PreviousAttribute() and NextAttribute().

Updating an XDOM:

Most convenient methods to update elements and attributes are as follows

–          SetValue, or reassigning the Value property

–          SetElementValue / SetAttributeValue

–          RemoveXXX

–          AddXXX / ReplaceXXX

Add → appends a child node

AddFirst → adds at the beginning of the collection

RemoveAll → RemoveAttributes() plus RemoveNodes()

ReplaceXXX → removing, then adding

AddBeforeSelf, AddAfterSelf, Remove, and ReplaceWith operate on the node itself; Remove can also be applied to collections.

Remove() → removes the current node from its parent

ReplaceWith → removes the node and then inserts other content at the same position

E.g., remove all contacts that feature the comment "confidential" anywhere in their tree:

contacts.Elements().Where(e => e.DescendantNodes()
    .OfType<XComment>()
    .Any(c => c.Value == "confidential")).Remove();

Internally, Remove() copies the matching nodes to a temporary list, then enumerates the temporary list to perform the deletions; this avoids errors that would arise from deleting and querying at the same time.

XElement's Value property returns the text content of that node.

Setting Values: use SetValue, or assign the Value property; it accepts any simple data type.

Explicit casts are defined on XElement and XAttribute for:

–          all standard numeric types

–          string, bool, DateTime, DateTimeOffset, TimeSpan, and Guid

–          Nullable<> versions of the aforementioned value types

Casting to a nullable int avoids a NullReferenceException; alternatively, add a predicate to the where clause:

where cust.Attributes("Credit").Any() && (int)cust.Attribute
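For example:

var cust = new XElement("customer");             // no Credit attribute
// int credit = (int)cust.Attribute("Credit");   // would throw: the attribute is null
int? credit = (int?)cust.Attribute("Credit");    // null instead of an exception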

Automatic XText Concatenation: if you specifically create XText nodes, you can end up with multiple text children:

var e = new XElement("test", new XText("Hello"), new XText("World"));

e.Value            → HelloWorld
e.Nodes().Count()  → 2

XDocument: it wraps a root XElement and adds an XDeclaration. It is based on XContainer, so it supports AddXXX, RemoveXXX, and ReplaceXXX.

XDocument can accept only limited content:

– a single XElement object (the "root")

– a single XDeclaration object

– a single XDocumentType object

– any number of XProcessingInstruction objects

– any number of XComment objects

The simplest valid XDocument has just a root element:

var doc = new XDocument(new XElement("test", "data"));

XDeclaration is not an XNode and does not appear in the document's Nodes collection.

XElement and XDocument follow these rules in emitting XML declarations:

–          Calling Save with a filename always writes a declaration.

–          Calling Save with an XmlWriter writes a declaration unless the XmlWriter is instructed otherwise.

–          ToString() never emits an XML declaration.

An XmlWriter can be configured to produce XML without a declaration via its OmitXmlDeclaration and ConformanceLevel settings.

The purpose of XDeclaration is to state:

– what text encoding to use

– what to put in the XML declaration's encoding/standalone attributes

The XDeclaration constructor parameters are:

  1. version
  2. encoding
  3. standalone

var doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), new XElement("test", "data"));

File.WriteAllText → encodes using UTF-8

Namespaces in XML: a customer element in the namespace OReilly.Nutshell.CSharp is defined as:

<customer xmlns="OReilly.Nutshell.CSharp"/>

Attributes: namespaces can be assigned to attributes too. Using the standard xsi namespace (http://www.w3.org/2001/XMLSchema-instance):

<customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <firstname>Joe</firstname>
  <lastname xsi:nil="true"/>
</customer>

Unambiguously, the xsi:nil attribute informs us that lastname is nil.

Specifying Namespaces in the X-DOM

  1. var e = new XElement("{http://domain.com/xmlspace}customer", "Bloggs");
  2. Use the XNamespace and XName types:

public sealed class XNamespace
{
    public string NamespaceName { get; }
}

public sealed class XName
{
    public string LocalName { get; }
    public XNamespace Namespace { get; }
}

Both types define implicit casts from string, so the following is legal:

XNamespace ns = "http://domain.com/xmlspace";
XName localName = "customer";
XName fullName = "{http://domain.com/xmlspace}customer";

XNamespace overloads the + operator, combining a namespace and a local name into an XName.

With XElement, the namespace must be explicitly given to each element; it is not inherited from the parent:

XNamespace ns = "http://domain.com/xmlspace";

var data = new XElement(ns + "data",
    new XElement(ns + "customer", "Bloggs"),
    new XElement(ns + "purchase", "Bicycle"));

Output:

<data xmlns="http://domain.com/xmlspace">
  <customer>Bloggs</customer>
  <purchase>Bicycle</purchase>
</data>

For a nil attribute we write <element xsi:nil="true"/> (with the xsi prefix bound as above).

Annotations: annotations are intended for your own private use and are treated as black boxes by the X-DOM. The following XObject members add and remove annotations:

public void AddAnnotation(object annotation)
public void RemoveAnnotations<T>() where T : class

The Annotation<T>() and Annotations<T>() methods retrieve a single match or a sequence of matches.
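For example (CustomTag is an illustrative type):

class CustomTag { public string Text; }

var e = new XElement("test");
e.AddAnnotation(new CustomTag { Text = "private note" });

CustomTag tag = e.Annotation<CustomTag>();   // retrieves the annotation
e.RemoveAnnotations<CustomTag>();            // removes it again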

Projecting into the X-DOM: the source can be anything over which LINQ can query, such as:

– LINQ to SQL or Entity Framework queries

– local collections

– another X-DOM

Regardless of the source, the strategy is the same in using LINQ to emit an X-DOM.

For e.g., retrieve customers from a database into XML of this shape:

<customers>
  <customer id="1">
    <name>sue</name>
    <buys>3</buys>
  </customer>
</customers>

We start by writing a functional construction expression for the X-DOM:

var customers = new XElement("customers",
    new XElement("customer", new XAttribute("id", 1),
        new XElement("name", "sue"),
        new XElement("buys", 3)));

We then turn this into a projection and build a LINQ query around it:

var customers = new XElement("customers",
    from c in dataContext.Customers
    select new XElement("customer",
        new XAttribute("id", c.ID),
        new XElement("name", c.Name),
        new XElement("buys", c.Purchases.Count)
    ));

IQueryable<T> is the interface in play when enumerating a database query, and its enumeration triggers execution of an SQL statement. XStreamingElement is a cut-down version of XElement that applies deferred-evaluation semantics to its child content: queries passed into an XStreamingElement constructor are not enumerated until you call Save, ToString, or WriteTo on the element. This avoids loading the whole X-DOM into memory at once.

XStreamingElement doesn't expose methods such as Elements or Attributes, and it is not based on XObject.
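A sketch of streaming output (dataContext.Customers stands in for any enumerable source):

var streamed = new XStreamingElement("customers",
    from c in dataContext.Customers
    select new XElement("customer", new XAttribute("id", c.ID)));

streamed.Save("customers.xml");   // the query is enumerated only at this point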

The Concat operator preserves order, so all elements/nodes remain in their arranged (e.g., alphabetical) order.

The System.Xml.* namespaces:

System.Xml
  – XmlReader & XmlWriter
  – XmlDocument
  – XmlConvert: a static class for parsing and formatting XML strings

System.Xml.XPath
  – XPathNavigator: information and API

System.Xml.Schema

System.Xml.Serialization

System.Xml.Linq
  – the LINQ-centric version of XmlDocument

XmlReader is a high-performance class for reading an XML stream in a low-level, forward-only manner.

XmlReader is instantiated using the static Create method:

XmlReader rdr = XmlReader.Create(new System.IO.StringReader(myString));

An XmlReaderSettings object is used to control parsing and validation options:

XmlReaderSettings settings = new XmlReaderSettings();
settings.IgnoreComments = true;
settings.IgnoreProcessingInstructions = true;
settings.IgnoreWhitespace = true;

using (XmlReader reader = XmlReader.Create("customer.xml", settings)) ...

Set XmlReaderSettings.CloseInput to close the underlying stream when the reader is closed. The default value for CloseInput (and for CloseOutput on XmlWriterSettings) is false.

The units of an XML stream are XML nodes; the reader traverses the stream in depth-first order. The Depth property returns the current depth of the cursor.

The most primitive way of reading is Read(); the first call positions the cursor at the first node. When Read() returns false, the cursor has gone past the last node. Attributes are not included in Read-based traversal.

NodeType is of type XmlNodeType, an enum with the following members:

None, XmlDeclaration, Element, EndElement, Text, Attribute, CDATA, Comment, Entity, EndEntity, EntityReference, ProcessingInstruction, DocumentType, Document, DocumentFragment, Notation, Whitespace, SignificantWhitespace

The string properties of the reader are Name and Value:

switch (r.NodeType)
{
    ...
    case XmlNodeType.XmlDeclaration:
        Console.WriteLine(r.Value);
        break;
    case XmlNodeType.DocumentType:
        Console.WriteLine(r.Name + " - " + r.Value);
        break;
}

An entity is like a macro; a CDATA section is like a verbatim string (@"...") in C#.

Reading Elements: XmlReader provides several methods to read an XML document and throws an XmlException if any validation fails; XmlException has LineNumber and LinePosition properties.

ReadStartElement() verifies that the current NodeType is Element and then calls Read().
ReadEndElement() verifies that the current NodeType is EndElement and then calls Read().

reader.ReadStartElement("firstname");
Console.WriteLine(reader.Value);
reader.Read();
reader.ReadEndElement();

ReadElementContentAsString reads a start element, a text node, and an end element, returning the content as a string; similarly, ReadElementContentAsInt reads the element's content as an int.

MoveToContent() skips over all the fluff: XML declarations, whitespace, comments, and processing instructions.

<customer/> → ReadEndElement throws an exception, because as far as the reader is concerned there is no separate end element.

The workaround for this scenario is:

bool isEmpty = reader.IsEmptyElement;
reader.ReadStartElement("customerList");
if (!isEmpty) reader.ReadEndElement();

The ReadElementXXX() methods handle both kinds of empty elements.

ReadContentAsXXX parses a text node into type XXX using the XmlConvert class.

The ReadElementContentAsXXX methods apply to element nodes rather than to the text node enclosed by the element.

ReadInnerXml returns an element and all its descendants; when used on an attribute, it returns the value of the attribute.

ReadOuterXml includes the element at the cursor position and all its descendants.

ReadSubtree returns a proxy reader that provides a view over just the current element.

ReadToDescendant moves the cursor to the start of the first descendant with the specified name/namespace.

ReadToFollowing moves the cursor to the start of the first node with the specified name/namespace.

ReadToNextSibling moves the cursor to the start of the first sibling node with the specified name/namespace.

ReadString and ReadElementString behave like ReadContentAsString, except that these methods throw an exception if there is more than a single text node within the element (for example, if it also contains a comment).

To make attribute access easy, the forward-only rule is relaxed during attribute traversal: you can jump to any attribute by calling MoveToAttribute().

MoveToElement() returns to the start element from any place within the attribute-node diversion.

reader.MoveToAttribute("xxx") returns false if the specified attribute doesn't exist.
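For example:

if (reader.MoveToAttribute("id"))
    Console.WriteLine(reader.Value);   // the attribute's value
reader.MoveToElement();                // return to the owning start element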

Namespaces and Prefixes:

XmlReader provides two parallel systems for referring to element and attribute names:

– Name

– NamespaceURI and LocalName

With the first system, for <c:customer ...> the Name is c:customer, so:

reader.ReadStartElement("c:customer");

The second system uses the two namespace-aware properties, NamespaceURI and LocalName.

E.g., reading logentry elements out of a log file via XNode.ReadFrom:

using (XmlReader r = XmlReader.Create("logfile.xml", settings))
{
    r.ReadStartElement("log");
    while (r.Name == "logentry")
    {
        XElement logEntry = (XElement)XNode.ReadFrom(r);
        int id = (int)logEntry.Attribute("id");
        DateTime dt = (DateTime)logEntry.Element("date");
        string source = (string)logEntry.Element("source");
    }
    r.ReadEndElement();
}

By implementing it as shown above, you can slot an XElement into a custom type's ReadXml or WriteXml method without the caller ever knowing you've cheated. XElement collaborates with XmlReader to ensure that namespaces are kept intact and prefixes are properly expanded. Similarly, you can use XmlWriter with XElement to write inner elements into an XmlWriter. The following code writes one million logentry elements to an XML file using XElement, without storing the whole tree in memory:

using (XmlWriter w = XmlWriter.Create("log.xml"))
{
    w.WriteStartElement("log");

    for (int i = 0; i < 1000000; i++)
    {
        XElement e = new XElement("logentry",
            new XAttribute("id", i),
            new XElement("source", "test"));
        e.WriteTo(w);
    }

    w.WriteEndElement();
}

Using XElement incurs minimal execution overhead.

XmlDocument: an in-memory representation of an XML document. Its object model and methods conform to a pattern defined by the W3C.

The base type for all objects in an XmlDocument tree is XmlNode. The following types derive from XmlNode:

XmlNode
  XmlDocument
  XmlDocumentFragment
  XmlEntity
  XmlNotation
  XmlLinkedNode → exposes NextSibling and PreviousSibling

XmlLinkedNode is an abstract base for the following subtypes:

XmlLinkedNode
  XmlCharacterData
  XmlDeclaration
  XmlDocumentType
  XmlElement
  XmlEntityReference
  XmlProcessingInstruction

Loading and Saving the XmlDocument: instantiate an XmlDocument and invoke Load() or LoadXml():

–          Load accepts a filename, Stream, TextReader, or XmlReader

–          LoadXml accepts a literal XML string

e.g.

XmlDocument doc = new XmlDocument();
doc.Load("customer1.xml");
doc.Save("customer2.xml");

Using the ParentNode property, you can ascend back up the tree:

Console.WriteLine (doc.DocumentElement.ChildNodes [1].ParentNode.Name);

The following properties also help traverse the document

FirstChild LastChild NextSibling PreviousSibling

XmlNode exposes an Attributes property for accessing attributes either by name or by ordinal position:

Console.WriteLine (doc.DocumentElement.Attributes[“id”].Value);

InnerText property represents the concatenation of all child text nodes

Console.WriteLine (doc.DocumentElement.ChildNodes[1].ParentNode.InnerText);

Console.WriteLine (doc.DocumentElement.ChildNodes[1].FirstChild.Value);

Setting the InnerText property replaces all child nodes with a single text node, for example:

Wrong way: doc.DocumentElement.ChildNodes[0].InnerText = "Jo";

Right way: doc.DocumentElement.ChildNodes[0].FirstChild.InnerText = "Jo";

The InnerXml property represents the XML fragment within the current node:

Console.WriteLine(doc.DocumentElement.InnerXml);

// Output: <firstname>Jim</firstname><lastname>Bo</lastname>

InnerXml throws an exception if the node type cannot have children.

Creating and Manipulating Nodes:

  1. Call one of the CreateXXX methods on XmlDocument.
  2. Add the new node into the tree by calling AppendChild, PrependChild, InsertBefore, or InsertAfter on the desired parent node.

To remove a node, invoke RemoveChild, ReplaceChild, or RemoveAll.
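For example:

XmlDocument doc = new XmlDocument();
XmlElement root = doc.CreateElement("customers");
doc.AppendChild(root);

XmlElement customer = doc.CreateElement("customer");
customer.SetAttribute("id", "1");
root.AppendChild(customer);

doc.Save("customers.xml");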

Namespaces: CreateElement and CreateAttribute are overloaded to let you specify a namespace and prefix:

CreateXXX(string name);
CreateXXX(string name, string namespaceURI);
CreateXXX(string prefix, string localName, string namespaceURI);

E.g. XmlElement customer = doc.CreateElement("o", "customer", "http://oreilly.com");

XPath: both the DOM and the XPath data model represent an XML document as a tree.

The XPath data model is purely data-centric, abstracting away the formatting aspects of XML text. For example, CDATA sections are not required in the XPath data model.

Given an XML document, you can run XPath queries in code in the following ways:

–          Call one of the SelectXXX methods on an XmlDocument or XmlNode

–          Spawn an XPathNavigator from either an XmlDocument or an XPathDocument

–          Call an XPathXXX extension method on an XNode

The SelectXXX methods accept an XPath query string:

XmlNode n = doc.SelectSingleNode("customers/customer[firstname='Jim']");
Console.WriteLine(n.InnerText); // JimBo

The SelectXXX methods delegate their implementation to XPathNavigator, which can also be used directly over an XmlDocument or a read-only XPathDocument:

XElement result = e.XPathSelectElement("customers/customer[firstname='Jim']");

The extension methods used with XNodes are CreateNavigator(), XPathEvaluate(), XPathSelectElement(), and XPathSelectElements().

Common XPath operators are as follows:

Operator | Description
/        | Children
//       | Recursively descends
.        | Current node
..       | Parent node
*        | Wildcard
@        | Attribute
[]       | Filter
:        | Namespace separator

XPathNavigator: a cursor over the XPath data-model representation of an XML document. It is loaded with primitive methods that move the cursor around the tree, and its Select* methods take XPath strings/queries and return more complex navigations or multiple nodes.

E.g.:

XPathNavigator nav = doc.CreateNavigator();
XPathNavigator jim = nav.SelectSingleNode("customers/customer[firstname='Jim']");
Console.WriteLine(jim.Value);

The SelectSingleNode method returns a single XPathNavigator. The Select method returns an XPathNodeIterator, which iterates over multiple XPathNavigators:

XPathNavigator nav = doc.CreateNavigator();
string xPath = "customers/customer/firstname/text()";
foreach (XPathNavigator result in nav.Select(xPath))
    Console.WriteLine(result.Value);

For faster queries, compile the XPath into an XPathExpression, then pass it to a Select* method:

XPathNavigator nav = doc.CreateNavigator();
XPathExpression expr = nav.Compile("customers/customer/firstname");
foreach (XPathNavigator a in nav.Select(expr))
    Console.WriteLine(a.Value);

Output: Jim Thomas.

Querying with Namespaces:

XmlDocument doc = new XmlDocument();
doc.Load("customers.xml");
XmlNamespaceManager xnm = new XmlNamespaceManager(doc.NameTable);

We can add prefix/namespace pairs to it as follows:

xnm.AddNamespace("o", "http://oreilly.com");

The Select* methods on XmlDocument and XPathNavigator have overloads that accept an XmlNamespaceManager:

XmlNode n = doc.SelectSingleNode("o:customers/o:customer", xnm);

XPathDocument: An XPathNavigator backed by an XPathDocument is faster than one backed by an XmlDocument, but it cannot make changes to the underlying document:

XPathDocument doc = new XPathDocument("customers.xml");
XPathNavigator nav = doc.CreateNavigator();
foreach (XPathNavigator a in nav.Select("customers/customer/firstname"))
    Console.WriteLine(a.Value);

XSD and Schema Validation: For each domain, an XML file conforms to a pattern or schema, which standardizes and automates the interpretation and validation of XML documents. The most widely used standard is XSD (XML Schema Definition), which is supported in System.Xml.

Performing Schema Validation: You can validate an XML file against one or more schemas before processing it. Validation is worthwhile for the following reasons:

–          You can get away with less error checking and exception handling.

–          Schema validation picks up errors you might otherwise overlook.

–          Error messages are detailed and informative.

When an XML file is loaded into an XmlReader configured with a schema, validation happens automatically:

XmlReaderSettings settings = new XmlReaderSettings();
settings.ValidationType = ValidationType.Schema;
settings.Schemas.Add(null, "customers.xsd");
using (XmlReader r = XmlReader.Create("customers.xml", settings))

To also validate against any inline schemas:

settings.ValidationFlags |= XmlSchemaValidationFlags.ProcessInlineSchema;

If schema validation fails, an XmlSchemaValidationException is thrown, e.g.:

try
{
    while (r.Read()) ;
}
catch (XmlSchemaValidationException ex)
{
}

If you want to report on all errors in the document, you must handle the ValidationEventHandler event:

settings.ValidationEventHandler += ValidationHandler;

static void ValidationHandler(object sender, ValidationEventArgs e)
{
    Console.WriteLine("Error: " + e.Exception.Message);
}

The Exception property of ValidationEventArgs contains the XmlSchemaValidationException that would otherwise have been thrown. You can also validate an XDocument or XElement that is already in memory by calling extension methods in System.Xml.Schema. These methods accept an XmlSchemaSet and a validation handler:

e.g.

XmlSchemaSet set = new XmlSchemaSet();
set.Add(null, "customers.xsd");
doc.Validate(set, (sender, args) => { errors.AppendLine(args.Exception.Message); });

LINQ Queries:

LINQ is a set of language and framework features for constructing type-safe queries over in-memory collections and remote data sources. It enables us to query any collection implementing IEnumerable<T>. LINQ offers both compile-time and run-time error checking.

The basic units of data in LINQ are sequences and elements. A sequence is any object that implements IEnumerable<T> and an element is each item in the sequence.

Query operators are methods that transform or project a sequence. The Enumerable class in System.Linq contains around 40 query operators, all implemented as extension methods; these are called the standard query operators.

Queries over in-memory local objects are known as LINQ-to-Objects queries. LINQ also supports sequences implementing the IQueryable<T> interface, which are served by the standard query operators in the Queryable class.

A query is an expression that transforms sequences with query operators, e.g.:

string[] names = { "Tom", "Dick", "Harry" };
IEnumerable<string> filteredNames = names.Where(n => n.Length >= 4);
foreach (string name in filteredNames)
    Console.WriteLine(name);

Most query operators accept a lambda expression as an argument. Here is the signature of the Where query operator:

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

C# also provides another syntax for writing queries, called query expression syntax:

IEnumerable<string> filteredNames = from n in names where n.Contains("a") select n;

Chaining Query Operators: To build more complex queries, you append additional query operators to the expression, creating a chain. E.g.:

IEnumerable<string> query = names.Where(n => n.Contains("a"))
    .OrderBy(n => n.Length)
    .Select(n => n.ToUpper());

Where, OrderBy, and Select are standard query operators that resolve to extension methods in the Enumerable class.

The Where operator emits a filtered version of the input sequence.

The OrderBy operator emits a sorted version of the input sequence.

The Select operator emits a sequence where each input element is transformed or projected with a given lambda expression.

The following are the signatures of the above three operators:

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)

public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector)

Without extension methods, the query loses its fluency, as shown below:

IEnumerable<string> query = Enumerable.Select(Enumerable.OrderBy(Enumerable.Where(names, n => n.Contains("a")), n => n.Length), n => n.ToUpper());

Whereas with extension methods we get a natural linear shape, reflecting the left-to-right flow of data and keeping each lambda expression alongside its query operator:

IEnumerable<string> query = names.Where(n => n.Contains("a")).OrderBy(n => n.Length).Select(n => n.ToUpper());

The purpose of the lambda expression depends on the particular query operator. A lambda expression returning a bool value is called a predicate. A lambda expression in a query operator always works on individual elements in the input sequence, not on the sequence as a whole.

Lambda expressions and Func signatures: The standard query operators utilize generic Func delegates. Func is a family of general-purpose generic delegates in the System namespace, defined with the following intent: the type arguments in Func appear in the same order as they do in the lambda expression. Hence Func<TSource, bool> matches a TSource => bool lambda, and Func<TSource, TResult> matches a TSource => TResult lambda.

The standard query operators use the following generic type names:

TSource    Element type for the input sequence
TResult    Element type for the output sequence, if different from TSource
TKey       Element type for the key used in sorting, grouping, or joining

TSource is determined by the input sequence, while TResult and TKey are inferred from your lambda expression. Func<TSource, TResult> corresponds to a TSource => TResult lambda expression. Because TSource and TResult are different types, the lambda expression can change the type of each element; furthermore, the lambda expression determines the output sequence type.

The Where query operator is simpler: it requires no type inference for the output, because the operator merely filters elements, it does not transform them.

The OrderBy query operator takes a key selector, Func<TSource, TKey>, which maps an input element to a sorting key. TKey is inferred from the lambda expression and is separate from the input and output element types.
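For instance, sorting strings by length: the key selector maps string => int, so TKey is inferred as int (a minimal sketch):

string[] names = { "Tom", "Dick", "Harry" };
IEnumerable<string> byLength = names.OrderBy(n => n.Length); // Tom, Dick, Harry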

Query operators in the Enumerable class accept delegates (compiled methods), whereas query operators in the Queryable class accept lambda expressions wrapped in Expression<TDelegate>, from which the compiler emits expression trees.
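The difference is visible in ordinary code: the same lambda text becomes compiled code or inspectable data depending on the declared type (a minimal sketch, assuming using System.Linq.Expressions):

Func<int, bool> compiled = n => n > 5;         // compiled delegate (IL code)
Expression<Func<int, bool>> tree = n => n > 5; // expression tree (data)
Console.WriteLine(tree.Body);                  // (n > 5)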

Natural Ordering: the original ordering of elements in the input sequence is significant in LINQ. Operators such as Where and Select preserve the original ordering of the input sequence; LINQ preserves ordering wherever possible.

Some operators return a single element rather than a sequence:

int[] numbers = { 10, 9, 8, 7, 6 };
int firstNumber = numbers.First();
int lastNumber = numbers.Last();
int secondNumber = numbers.ElementAt(1);
int lowestNumber = numbers.OrderBy(n => n).First();

The aggregation operators return a scalar value

int count = numbers.Count();
int min = numbers.Min();

The quantifiers return a bool value

bool hasTheNumberNine = numbers.Contains(9);
bool hasMoreThanZeroElements = numbers.Any();
bool hasAnOddElement = numbers.Any(n => n % 2 == 1);

Some query operators accept two input sequences, for e.g.:

int[] seq1 = { 1, 2, 3 };
int[] seq2 = { 3, 4, 5 };
IEnumerable<int> concat = seq1.Concat(seq2);
IEnumerable<int> union = seq1.Union(seq2);

C# provides a syntactic shortcut for writing LINQ queries, called query expressions. A query expression always starts with a from clause and ends with either a select or a group clause. The from clause declares a range variable, which can be thought of as traversing the input sequence.

e.g. IEnumerable<string> query = from n in names where n.Contains("a") orderby n.Length select n.ToUpper();

Range Variables: The identifier immediately following the from keyword is called the range variable; it refers to the current element in the sequence.

Query expressions also let you introduce new range variables via the following clauses: let, into, and an additional from clause.

Query Syntax vs Fluent Syntax

Query syntax is simpler for queries that involve any of the following

  1. A let clause for introducing a new variable alongside the range variable.
  2. SelectMany, Join or GroupJoin, followed by an outer range variable reference.

Finally, there are many operators that have no keyword in query syntax; these require fluent syntax. This means any operator outside of the following: Where, Select, SelectMany, OrderBy, ThenBy, OrderByDescending, ThenByDescending, GroupBy, Join, GroupJoin.

Mixed Syntax Queries: If a query operator has no query syntax support, you can mix query syntax and fluent syntax. The only constraint is that each query syntax component must be complete.
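For example, Count has no query syntax keyword, so a mixed query might look like this sketch:

string[] names = { "Tom", "Dick", "Harry" };
int matches = (from n in names where n.Contains("a") select n).Count(); // 1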

Deferred Execution: An important feature of most query operators is that they execute not when constructed, but when enumerated. E.g.:

int[] numbers = { 1, 2 };
IEnumerable<int> query = numbers.Select(n => n * 10);
foreach (int n in query)
    Console.Write(n + "/"); // 10/20/

All standard query operators provide deferred execution, with the following exceptions:

–          Operators that return a single element or scalar value, such as First or Count.

–          The conversion operators ToArray, ToList, ToDictionary, and ToLookup. These cause immediate query execution because their result types have no mechanism for providing deferred execution.

Deferred execution is important because it decouples query construction from query execution. This allows you to construct a query in several steps, and it is also what makes database queries possible.

A deferred-execution query is re-evaluated when you re-enumerate:

var numbers = new List<int> { 1, 2 };
IEnumerable<int> query = numbers.Select(n => n * 10);
foreach (int n in query) Console.Write(n + "/"); // 10/20/
numbers.Clear();
foreach (int n in query) Console.Write(n + "/"); // nothing

There are a couple of disadvantages:

–          Sometimes you want to freeze or cache the results at a certain point in time.

–          Some queries are computationally intensive, so you don't want to repeat them unnecessarily.
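Both problems can be addressed with a conversion operator; a minimal sketch of freezing results with ToList:

var numbers = new List<int> { 1, 2 };
List<int> frozen = numbers.Select(n => n * 10).ToList(); // executes immediately
numbers.Clear();
Console.WriteLine(frozen.Count); // 2: the cached results survive the Clear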

Captured Variables: If a query's lambda expressions reference local variables, those variables are subject to captured-variable semantics. This means that if you later change their value, the query changes as well:

int[] numbers = { 1, 2 };
int factor = 10;
IEnumerable<int> query = numbers.Select(n => n * factor);
factor = 20;
foreach (int n in query) Console.Write(n + "|"); // 20|40|

A decorator sequence has no backing structure of its own to store elements. Instead, it wraps another sequence that you supply at runtime, to which it maintains a permanent dependency. Whenever you request data from a decorator, it in turn must request data from the wrapped input sequence.

Hence, when you call an operator such as Select or Where, you are doing nothing more than instantiating an enumerable class that decorates the input sequence.

Chaining query operators creates a layering of decorators. When you enumerate the query, you are querying the original array, transformed through the chain of decorators.
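To make the decorator idea concrete, here is a hypothetical hand-rolled Select written as a C# iterator; this is a sketch of the mechanism, not the actual Enumerable.Select source:

// assumes: using System; using System.Collections.Generic;
static IEnumerable<TResult> MySelect<TSource, TResult>(
    IEnumerable<TSource> source, Func<TSource, TResult> selector)
{
    // nothing runs until the caller enumerates; each request is
    // forwarded to the wrapped input sequence, one element at a time
    foreach (TSource element in source)
        yield return selector(element);
}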

Subqueries: A subquery is a query contained within another query's lambda expression. E.g.:

string[] musos = { "David", "Roger", "Rick" };
IEnumerable<string> query = musos.OrderBy(m => m.Split().Last());

m.Split() converts each string into a collection of words, upon which we then call the Last query operator. m.Split().Last() is the subquery; musos.OrderBy(...) is the outer query.

Subqueries are permitted because you can put any valid C# expression on the right-hand side of a lambda. In a query expression, a subquery amounts to a query referenced from an expression in any clause except the from clause.

A subquery is primarily scoped to the enclosing expression and can reference the outer lambda argument (or the range variable in a query expression). A subquery executes whenever the enclosing lambda expression is evaluated; local queries follow this model literally, and interpreted queries follow it conceptually. The subquery executes as and when required to feed the outer query.

An exception is when the subquery is correlated, meaning that it references the outer range variable.

Subqueries are called indirectly: through a delegate in the case of a local query, or through an expression tree in the case of an interpreted query.

Composition Strategies: there are three strategies for building more complex queries:

–          Progressive query construction

–          Using the into keyword

–          Wrapping queries

There are a couple of potential benefits to building queries progressively:

It can make queries easier to write.

You can add query operators conditionally, for e.g.:

if (includeFilter) query = query.Where(...);

This is more efficient than:

query = query.Where(n => !includeFilter || <expression>)

because it avoids adding an extra query operator if includeFilter is false. A progressive approach is often useful in query comprehensions. In fluent syntax we could write the query as a single expression:

IEnumerable<string> query = names
    .Select(n => n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", ""))
    .Where(n => n.Length > 2)
    .OrderBy(n => n);

RESULT: { "Dck", "Hrry", "Mry" }

We can rewrite the query in a progressive manner as follows:

IEnumerable<string> query = from n in names
select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "");

query = from n in query where n.Length > 2 orderby n select n;

RESULT: { "Dck", "Hrry", "Mry" }

The into keyword: The into keyword lets you continue a query after a projection; it is a shortcut for querying progressively. With into, we can rewrite the preceding query as:

IEnumerable<string> query = from n in names
select n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
into noVowel where noVowel.Length > 2 orderby noVowel select noVowel;

The only place you can use into is after a select or group clause. into restarts the query, allowing you to introduce fresh where, orderby, and select clauses.

Scoping rules: All query variables are out of scope following an into keyword. The following will not compile:

var query = from n1 in names select n1.ToUpper() into n2 where n1.Contains("x") select n2;

Here n1 is not in scope, so the statement is illegal. To see why, consider its fluent translation:

var query = names.Select(n1 => n1.ToUpper())
    .Where(n2 => n1.Contains("x")); // error: n1 is out of scope

Wrapping queries: A query built progressively can be formulated into a single statement by wrapping one query around another. In general terms:

var tempQuery = tempQueryExpr;
var finalQuery = from ... in tempQuery ...;

can be reformulated as:

var finalQuery = from ... in (tempQueryExpr) ...;

Reformulated in wrapped form, the preceding query becomes:

IEnumerable<string> query = from n1 in
(
    from n2 in names
    select n2.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
)
where n1.Length > 2 orderby n1 select n1;

Projection Strategies: All our select clauses so far have projected scalar element types. With C# object initializers you can project into complex types. For e.g., we can write the following class to assist:

class TempProjectionItem
{
    public string Original;
    public string Vowelless;
}

And then project into it with object initializers:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<TempProjectionItem> temp =
    from n in names
    select new TempProjectionItem
    {
        Original = n,
        Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
    };

The result is of type IEnumerable<TempProjectionItem>, which we can subsequently query:

IEnumerable<string> query = from item in temp where item.Vowelless.Length > 2 select item.Original;

Anonymous types give the same result without the need to write a one-off class; the compiler does the job instead, writing a temporary class with fields that match the structure of our projection. This means, however, that the intermediate query has the following type:

IEnumerable<random-compiler-produced-name>

We can write the whole query more succinctly with the var keyword:

var query = from n in names
select new
{
    Original = n,
    Vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
}
into temp where temp.Vowelless.Length > 2 select temp.Original;

The let keyword introduces a new variable alongside the range variable. With let, we can write the query as follows:

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<string> query = from n in names
let vowelless = n.Replace("a", "").Replace("e", "").Replace("i", "").Replace("o", "").Replace("u", "")
where vowelless.Length > 2 orderby vowelless select n;

The compiler resolves a let clause by projecting into a temporary anonymous type that contains both the range variable and the new expression variable.

let accomplishes two things:

–          It projects new elements alongside existing elements.

–          It allows an expression to be used repeatedly in a query without being rewritten.

The let approach is particularly advantageous in this example because it allows the select clause to project either the original name (n) or its vowel-removed version (vowelless).

You can have any number of let statements, and a let statement can reference variables introduced in earlier let statements; let reprojects all existing variables transparently.
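A small sketch with two let clauses, where the second references the first; names is the same sample array as before:

IEnumerable<string> query =
    from n in names
    let upper = n.ToUpper()
    let firstThree = upper.Substring(0, 3) // references the earlier let variable
    where firstThree.EndsWith("R")
    select n; // Harry, Mary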

Interpreted Queries: LINQ provides two parallel architectures: local queries for local object collections, and interpreted queries for remote data sources. Local queries resolve to query operators in the Enumerable class, which in turn resolve to chains of decorator sequences. The delegates they accept, whether expressed in query syntax, fluent syntax, or as traditional delegates, are compiled to IL code.

By contrast, interpreted queries are descriptive. They operate over sequences that implement IQueryable<T>, and they resolve to the query operators in the Queryable class, which emit expression trees that are interpreted at runtime.

There are two IQueryable<T> implementations in the .NET Framework:

LINQ to SQL

Entity Framework (EF)

Consider the following table and rows:

CREATE TABLE Customer
(
    ID int NOT NULL PRIMARY KEY,
    Name varchar(30)
)

INSERT INTO Customer VALUES (1, 'Tom')
INSERT INTO Customer VALUES (2, 'Dick')
INSERT INTO Customer VALUES (3, 'Harry')
INSERT INTO Customer VALUES (4, 'Mary')
INSERT INTO Customer VALUES (5, 'Jay')

We can write an interpreted query to retrieve customers whose name contains the letter "a" as follows:

using System;
using System.Linq;
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table] public class Customer
{
    [Column(IsPrimaryKey = true)] public int ID;
    [Column] public string Name;
}

class Test
{
    static void Main()
    {
        DataContext dataContext = new DataContext("connection string");
        Table<Customer> customers = dataContext.GetTable<Customer>();
        IQueryable<string> query = from c in customers
                                   where c.Name.Contains("a")
                                   orderby c.Name.Length
                                   select c.Name.ToUpper();
        foreach (string name in query) Console.WriteLine(name);
    }
}

The SQL generated by LINQ to SQL would be as follows:

SELECT UPPER([t0].[Name]) AS [value] FROM [Customer] AS [t0] WHERE [t0].[Name] LIKE @p0 ORDER BY LEN([t0].[Name])

Here, customers is of type Table<Customer>, which implements IQueryable<T>. This means the compiler has a choice in resolving Where: it could call the extension method in Enumerable, or the following extension method in Queryable:

public static IQueryable<TSource> Where<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)

The compiler chooses Queryable.Where because its signature is a more specific match.

Queryable.Where accepts a predicate wrapped in an Expression<TDelegate> type. This instructs the compiler to translate the supplied lambda expression, in other words c => c.Name.Contains("a"), to an expression tree rather than a compiled delegate. An expression tree is an object model, based on the types in System.Linq.Expressions, that can be inspected at runtime.
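A minimal sketch of inspecting such a tree at runtime, assuming using System.Linq.Expressions:

Expression<Func<string, bool>> expr = s => s.Contains("a");
var call = (MethodCallExpression)expr.Body;
Console.WriteLine(call.Method.Name);   // Contains
Console.WriteLine(expr.Parameters[0]); // s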

When you enumerate an interpreted query, the outermost sequence runs a program that traverses the entire expression tree, processing it as a unit. In our example, LINQ to SQL translates the expression tree to a SQL statement, which it then executes, yielding the results as a sequence.

A query can include both interpreted and local operators. A typical pattern is to have the local operators on the outside and the interpreted components on the inside; this pattern works well with LINQ-to-database queries.

AsEnumerable: Enumerable.AsEnumerable is the simplest of all query operators. Here is its complete definition:

public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)
{
    return source;
}

Its purpose is to cast an IQueryable<T> sequence to IEnumerable<T>, forcing subsequent query operators to bind to Enumerable operators instead of Queryable operators. This causes the remainder of the query to execute locally.

E.g.:

Regex wordCounter = new Regex(@"\b(\w|[-'])+\b");

var query = dataContext.MedicalArticles
    .Where(article => article.Topic == "influenza")
    .AsEnumerable()
    .Where(article => wordCounter.Matches(article.Abstract).Count < 100);

An alternative to calling AsEnumerable is to call ToArray or ToList. The advantage of AsEnumerable is deferred execution.

Programming

AJAX and RIA are the two parents of Web 2.0 technologies.

Silverlight: Microsoft's RIA offering.

AJAX Software Requirements :

Textpad or Visual Studio

Browser with JavaScript capability

Web server, e.g. IIS; FTP client

Some html code in text format

AJAX applications can be written using JavaScript, jQuery, and PHP.

The following JavaScript snippets, which can be embedded in a static HTML page, illustrate the kind of calls involved:

document.getElementById("DIVID").show()

var page = eval(result)

AJAX is the core technology used in developing Rich Internet Applications, a.k.a. RIAs.

At the beginning of the Web, we had only a client and a server, where the client displayed the content and the server contained all the business logic. This later evolved into a thin client at the browser and a thick server at the web server side; those apps were developed using CGI, ASP, JSP, PHP, and other lightweight HTML clients. Now we have RIAs, composed of a rich client and a rich server. An RIA's interface is similar to that of a desktop app, offering options rather than just clicks. An RIA is an internet-based application whose web page interacts with the server, allowing data to be transferred and parts of the page to be updated without delay in rendering.

Business drivers for RIA:

Improve the web user experience
  – Fewer steps to complete tasks
  – Fewer errors by users
  – Less time consumed
  – Similar web and desktop user interfaces
  – Lower training costs

Improve application responsiveness
  – Uninterrupted workflow and less waiting
  – Run-time error checking on the fly

Reduce network demands
  – Less information is sent between client and server
  – Only relevant information is transmitted

Business costs of RIA:

  – Finding trusted technologies that are scalable and effective; much RIA technology exists outside of AJAX, including third-party JavaScript libraries.

  – Getting the development team up to current web standards.

  – Differing browser capabilities, with browsers continually updating.

Moving standards:

1. The technology surrounding RIAs is continuously changing, with improved features being added.

2. Implementations of the technology are continuously changing, with improved features being added.

User expectations of RIA: RIA can be too advanced for some users.

Legal risks: RIAs should adhere to Section 508, the Telecom Act, and the Americans with Disabilities Act.

Search engine optimization is another consideration.

Some of the potential areas where RIAs have been developed and implemented are as follows:

Chat

Collaboration Documents

E-Commerce

Education

Games

Mail Utilities

Mapping Software

Search Utilities

Spell Checking etc.

Scripting and AJAX:

ASP.NET AJAX Library

JavaScript debugging and IntelliSense

jQuery integration

The following resources are available to get started with developing RIAs using AJAX:

  1. http://www.asp.net/ajax
  2. CodePlex
  3. jQuery
  4. MSDN, which supports the jQuery CDN and the AJAX CDN
  5. The AJAX libraries toolkit

The AJAX toolkit provided by Microsoft contains a rich set of controls that integrate seamlessly with jQuery; this toolkit is part of the ASP.NET toolbox. For AJAX to be implemented and enabled in a web site, one has to embed a script tag between the head tags.

AJAX can be debugged using the IE9 developer tools and Visual Studio IntelliSense.

Nature and Culture

Blind Eye

[Image: Global Warming Effects, created by Anagha Agile Systems]

This is my first blog post, so I thought of writing it to create awareness about our environment and to help stop GLOBAL WARMING.

I wonder whether we will live for another 100 years. I wonder whether our children will have properly grown limbs and organs in the next few decades. Everyone is turning a blind eye towards these dangerous occurrences on our holy planet EARTH.

As days go by, we find many kinds of unpredictable hurricanes in the Atlantic Ocean due to ice melting in the polar regions. We have unpredictable monsoons and unscheduled rains on the Indian plateau, prolonged droughts on the African continent, huge floods in China, earthquakes in Iran and Central Asia, the extinction of birds and butterflies in Central America, and the extinction of reptiles, snakes, and cats in South America.

Recently we had pandemics like bird flu and swine flu globally. A fact observed, but not documented or acknowledged internationally, is the quality of the vegetables, fruits, and cereals we have now compared to what we had 10 years ago. Fast foods are rapidly creating diabetic and cardiac patients. Pollution from vehicle emissions in tier-1 and tier-2 cities has made bronchitis and other lung diseases as common as headaches and colds. People are becoming less immune to normal water, and most want to drink mineral water, which is destroying the water table as well as the environment.

All of this is due not only to the increase in CO2 in the atmosphere BUT also to many factors listed below: The laziness imbibed in us from our childhood.

The fast lifestyle we have got into due to a rat race for nothing.

The greed of the human mind, leading to information overload, leading to stress, which is at the core of the major issues of human life.

The facilities we have in our cities now for e.g.

1. Lifts in two-to-five-storey buildings.

2. EPABX systems and messengers within our offices, when we could walk a few steps and talk in person.

3. Two-wheelers instead of bicycles to travel less than 2 km.

4. Drinking and eating packaged foods instead of fruits and green vegetables.

5. Most importantly at work, sitting continuously in front of a computer in white-collar jobs, and insufficient air circulation in non-white-collar jobs.

6. Usage of non-biodegradable plastics and cosmetics.

7. Totally unscientific disposal of non-biodegradable waste and drugs.

8. Usage of alternative and recycled resources being considered a stigma in most elite societies.

9. The biggest culprit of all: the laziness and attitude among us human beings towards flora and fauna.

Turning a blind eye towards flora and fauna is not just killing the environment around us; we are turning a blind eye towards the murderers of our children and grandchildren.

Do we need to have this BLIND EYE ?

The time has come for us humans to keep this eye open, every second, every minute, every day. Put some effort into your lifestyle; enjoy your days with God-given natural things.

Legs are meant to be used for long walks.

Hands are also meant for preparing delicious foods.

Eyes are meant not only for reading computer screens and television, but also for seeing natural beauty, like the sunrise early at dawn.

Ears are not just for hearing honks on the road; they are also meant for hearing birds chirp in the morning.

The nose is meant for smelling the aroma of good natural food, not the artificial smell of fast food.

The nose is for smelling flowers.

The tongue is for tasting natural honey and nectar, not only for chatting.

Why this BLIND EYE towards these beautiful things, and an open eye for travelling through traffic jams and staring at computer screens?

YOU DECIDE DO WE NEED THIS BLIND EYE.


About

“if U CAN dream it, U CAN do it”  – WALT DISNEY

 

This is a humble and truthful attempt to share our knowledge of the latest technology with society.

AAS Startup:

AAS is a small AI startup working very hard on developing next-generation ML and cognitive solutions for our esteemed clients. AAS was founded in September 2011. It is a privately owned IT professional business firm catering to the cloud expertise and other IT needs of local and international clients.

AAS has been in business for the past 7 years, helping local businesses and international organisations provide solutions to their international clientele. AAS has been successful in delivering high-quality solutions and services under the able leadership of Jayaprakash.

AAS Office :

[Image: AAS Agile Scrum board]

AAS is located in Bangalore, India, the IT startup hub of Asia and back office of the world. We specialise in cloud hosting, big data analytics, and enterprise software. Of late, we have successfully provided technical expertise in Deep Learning algorithm design and the development of AI-based apps for our international clientele.

AAS Team :

[Image: AAS development team]

We have more than 20 years of professional experience, mostly in high-end, hardcore technologies of fintech and the data analytics of bioscience. We have around 7 years of mobile app development experience. We develop mobile apps as part of technical consultancy to international companies who provide technical expertise for their Fortune 100 clientele. Our team consists of very hard-working, highly skilled geeks. We have been working on open source, helping and collaborating with many open-platform users.

 

AAS is based on 8 FUNDAMENTAL pillars:

TEST AUTOMATION : INDUSTRY-STANDARD TEST PRACTICES,

INNOVATION : SOFTWARE DEVELOPMENT THROUGH EMERGING TECHNOLOGIES,

SCALABILITY : OPTIMISATION, REFACTORING & REUSABILITY,

CREATIVITY : RESPONSIVE UX DESIGN THROUGH DESIGN THINKING,

SECURITY : EFFECTIVE SECURE PROGRAMMING,

PRODUCTIVITY : STREAMLINED PROCESS, FAST DELIVERY, NO REWORK,

RELIABILITY : TRANSPARENT, EFFECTIVE COMMUNICATIONS, and

TRUE VALUE : VALUE FOR MONEY

AAS Development principles:

IMAGINE IMPLEMENT INSPIRE !!!!