Corey Coogan

Python, .Net, C#, ASP.NET MVC, Architecture and Design

Archive for the ‘Architecture and Design’ Category

Initializing NHProf in MSTest

Posted by coreycoogan on September 4, 2010


An alternate title for this post could be Running initialization code before an MSTest Test Run.

I’m not at all a fan of MSTest for various reasons. My preference is NUnit, though xUnit, mbUnit or any other framework would serve just as well. There are often cases when a client wants to use MSTest, usually for its Visual Studio integration. That’s a compelling argument for a shop new to TDD or unit testing that wants the least amount of friction and doesn’t want to invest in TestDriven.Net or R#.

Integration tests written with MSTest are a great place to profile your NHibernate data access and catch any gotchas early in the development cycle. When the client is willing to pay for it, there’s no better tool for this than Ayende’s NHProf. NHProf can easily be initialized in code, but this should be done only once per test run, just as you would bootstrap NHibernate itself only once. That led me on a search for how to make MSTest run initialization code once per test run. There are well-known attributes for executing setup code when a test class loads ([ClassInitialize]) and before each test fires ([TestInitialize]), but the answer to my requirement was the lesser-known [AssemblyInitialize] attribute.

Just throw that attribute on a static void method that takes a TestContext argument and it will execute once per test run, before any tests fire. The class that houses the initialization method must also be decorated with the [TestClass] attribute.

[TestClass]
public class AssemblyInit
{
	[AssemblyInitialize]
	public static void Init(TestContext testContext)
	{
		HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();
	}
}

Posted in Architecture and Design, C#, NHibernate, TDD | Tagged: , , | Leave a Comment »

The Composite Key Conundrum

Posted by coreycoogan on June 2, 2010


Preface

This is an edited version of an argument I wrote in hopes of convincing some DBAs to start adopting surrogate keys. It was written for an Oracle shop, hence the heavy use of Sequence speak, but the arguments are much the same for any database. We’re also using NHibernate, so that tool is discussed here as well, though other popular ORM frameworks benefit from the same arguments.

Overview

There’s long been a debate amongst practitioners as to which is better: a natural key, which is often a composite, or a surrogate key. Application and database developers tend to favor surrogate keys for their simplicity and ease of use, while DBAs often favor natural keys for the same reasons. There are many arguments on both sides of the debate, each having validity. As with any decision, the “right” choice depends on the level of risk, cost and ROI.

Surrogate Key

A surrogate key, or artificial key, is a database field that acts as the primary key for a table but has no real meaning in the problem domain. Surrogate keys are typically in the form of an auto-incrementing integer (Sequence/Identity) or a UUID/GUID.
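To make the definition concrete, here’s a minimal sketch of an entity whose identity rests entirely on a surrogate key. The Entity and Customer classes, and the choice of a Guid over a Sequence/Identity integer, are purely illustrative and not tied to any particular framework:

```csharp
using System;

// Illustrative base class: identity lives in the surrogate Id alone,
// so two rows with identical business data remain distinct and the
// key never changes out from under a foreign key.
public abstract class Entity
{
    protected Entity()
    {
        Id = Guid.NewGuid();
    }

    // Surrogate key: no business meaning, just a stable identifier.
    public Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        return other != null && other.GetType() == GetType() && other.Id == Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}

public class Customer : Entity
{
    public string LastName { get; set; }
    public string Phone { get; set; }
}
```

Note how equality is defined once in the base class, regardless of which business columns each table carries; that consistency is exactly what the pros below are getting at.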

Pros:

  • The primary key values never change, so foreign key values are stable.
  • All of the primary key and foreign key indexes are compact, which is very good for join performance.
  • The SQL code expression of a relationship between any two tables is simple and very consistent.
  • Data retrieval code can be written faster and with fewer bugs due to the consistency and simplicity provided by a surrogate key.
  • With surrogate keys there is only one column from each table involved in most joins, and that column is obvious.
  • Object Relational Mapper (ORM) frameworks, such as NHibernate, SubSonic, LLBLGEN and others are designed to work optimally with surrogate keys, offering much simpler implementations over composite keys.
  • Allows for a higher degree of normalization since key data doesn’t need to be repeated.

Cons:

  • In order to guarantee data integrity, a unique index must be created for the fields that would have made up the composite key. This can increase administrative overhead.
  • If a sequence/identity is used for the surrogate key, it must be created, which can increase administrative overhead.
  • In Oracle, Sequences can have slight performance penalties, typically realized only under very heavy load, when a proper caching strategy is not utilized.
  • Tables can be perceived by some as “cluttered” when an extra column of meaningless data is added.
  • The primary key of the table has no real business meaning.
  • The natural key values are often included in the WHERE clause, although not part of the join semantics.
  • Extra disk space is needed to store the key values, although a sequence will account for only 8 bytes.

Natural Key

A natural key is the true unique identifier of a database record. It is this value, or combination of values, that has business meaning and allows applications to distinguish one row from another. Unlike a surrogate key, a natural key can be one or more columns of any type. Examples include [social security number] and [last name, date of birth, phone number].

Pros:

  • Natural keys have real business meaning and identify unique records by nature.
  • When there is no surrogate key, there is no need to create unique indexes or sequences, thereby reducing administrative overhead.
  • Fewer sequences and database objects give DBAs less to worry about.
  • Reduced performance concerns that could result from mismanaged sequences.
  • Reduced disk space usage.

Cons:

  • Querying with joins can become more complicated as multiple columns are involved.
  • The use of date fields in keys often requires error-prone casting when writing queries.
  • The keys can change, which can cause a ripple effect of broken queries and require participating tables to be updated.
  • Reduced normalization, since key values will be duplicated throughout tables.
  • Key names and types are inconsistent, which may require developers to visually inspect table definitions to understand how to query.
  • Makes application development that interacts with the database more complex and time consuming due to the semantics of the keys and how they join to other tables.
  • Makes using ORM frameworks very difficult and time consuming because they are designed to work best with surrogate keys.
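The last two cons are worth illustrating. NHibernate, for example, requires a composite-id class to override Equals and GetHashCode with value semantics so it can track row identity in its session cache. A hypothetical sketch (the PersonKey class and its columns are invented for illustration) of the boilerplate a surrogate key avoids:

```csharp
using System;

// Hypothetical composite key for a record identified by
// [last name, date of birth, phone number].
public class PersonKey
{
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string Phone { get; set; }

    // Value-based equality: two keys are equal only when every
    // component matches.
    public override bool Equals(object obj)
    {
        var other = obj as PersonKey;
        return other != null
            && other.LastName == LastName
            && other.DateOfBirth == DateOfBirth
            && other.Phone == Phone;
    }

    // Combine the hash of every component so equal keys hash equally.
    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (LastName ?? "").GetHashCode();
            hash = hash * 31 + DateOfBirth.GetHashCode();
            hash = hash * 31 + (Phone ?? "").GetHashCode();
            return hash;
        }
    }
}
```

Every composite-keyed table needs its own version of this, and a subtle bug in any one of them corrupts the ORM’s identity tracking; a single-column surrogate key sidesteps the problem entirely.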

Common Arguments against Surrogate Keys

  • Using a sequence in Oracle will decrease performance.

    It is true that using a sequence for a surrogate key comes with some overhead. This overhead is very minimal and apparent only under heavy load. Using a sequence caching strategy, however, can greatly improve performance issues associated with sequence generation.
  • Using a surrogate key means more data has to be stored, which will require more disk space.

    Disk space is cheap these days, but the cost of a software developer is not. Hardware costs will continue to decrease over time while the cost of developer staff will continue to rise.
  • The record already has a meaningful, natural key.

    That key will still be maintained, in the form of a unique index, which combines the benefits of the natural key with those of a surrogate key.
  • Software development tools, such as ORMs, shouldn’t dictate database architecture.

    Although using surrogate keys opens many doors for the use of an ORM, there are also significant benefits beyond it (see the list of Pros). A database is a place to store data used by applications; without applications, the database offers little value. Because the database exists solely to support applications, it stands to reason that there is great benefit in optimizing the database to work with the applications that offer the real value to the business.

Object Relational Mapping

Why use an ORM

Using an ORM, such as NHibernate, provides significant value to applications and application developers. It removes the need to write tedious and time consuming database access code. It also optimizes queries and makes database query code more consistent and easy to read and write. In addition, an ORM manages database sessions and provides a consistent way for reusing connections while employing data caching and lazy loading to reduce unnecessary traffic. When implemented correctly, an ORM can save a tremendous amount of developer time and remove countless database related concerns during application development.

ORM’s with a Natural Key

Because ORMs are designed to work with surrogate keys, it takes substantial effort and testing to “fit the square peg in the round hole”. Below is an outline of the ramifications of using natural keys in NHibernate (NH) and how to work around them.

Issue: Because natural keys are assigned by the application, NH has no way to know whether a record is new or existing (insert vs. update).
Impact: Upon saving an object or group of objects, NH must first query the database to determine how the record should be persisted.
Work-around: Adding a version column to each table, such as a timestamp that changes automatically with each update, can eliminate this. Without a version column, the extra trip to the database can’t be avoided.

Issue: NH defaults its queries to use a parent’s primary key as the foreign key into related records, so loading related tables doesn’t “just work” out of the box.
Impact: Lazy loading is lost and the developer must explicitly retrieve child objects with hand-written code.
Work-around: It may be possible to force developers to use hand-written XML configuration files and write queries that take the expected parameters but use them in an unexpected way to get the desired result. For example, when a needless date parameter is passed, the clause may contain a statement like “where ‘1/1/1800’ NOT EQUAL :PolicyDate”. Where this isn’t an option, hand-written retrieval code will have to be written and called.
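For reference, the version-column work-around would map in NHibernate’s hbm.xml along these lines; the entity, table and column names here are illustrative only:

```xml
<class name="Policy" table="POLICY">
  <composite-id>
    <key-property name="PolicyNumber" column="POLICY_NO" />
    <key-property name="EffectiveDate" column="EFF_DATE" />
  </composite-id>
  <!-- lets NH decide insert vs. update without a pre-query -->
  <version name="Version" column="ROW_VERSION" />
</class>
```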

Conclusion

Surrogate keys offer many benefits when a database is used for software development. Aside from simplicity, consistency and stability, they also make the use of an ORM extremely viable. That’s not to say they come without a price, but that price falls mainly in the area of database administration and is relatively low, especially when weighed against the cost benefits of using them. Since the database exists to support applications, optimizing it for that purpose seems like a sensible choice. Using an ORM in software development can have an extremely positive impact on not only development time but also quality. Both of these reasons lead to an increased bottom line in the form of lower development costs and decreased cost of ownership.

“Humans should use natural keys, and computers should use machine-generated surrogate keys”
- Jeffrey Palermo, CIO HeadSpring Systems, Austin TX

Posted in Architecture and Design, NHibernate | Tagged: , , , , , | 5 Comments »

StructureMap + WCF + NHibernate Part 1

Posted by coreycoogan on May 26, 2010


WCF + IoC

I’m a big fan of TDD/BDD and of leaning on my IoC container of choice, StructureMap (SM), to sort things out at runtime. What I’m not a huge fan of is WCF. The .config spaghetti can be a pain to work with, and WCF adds complexity to any design that could do without it. Still, there are places where services are warranted, and in those cases WCF is usually the prescribed tool.

When working with WCF, I typically relied on Poor Man’s DI to attain a testable service implementation while adhering to WCF’s no-arg constructor requirement. I knew there were ways to DI-enable WCF, but never had the time to get it set up. That changed recently when I had the opportunity to create a library that lets SM manage all my service dependencies, with the help of this post from Jimmy Bogard.

WCF + IoC + NHibernate

I’m also a fan of NHibernate and the Session per Request pattern. This makes things relatively easy and safe – the biggest benefit being that you let your web application open an ISession in the beginning of a request and flush and dispose of the ISession at the end of the request. The Repositories and Application Services need only be written to accept an ISession (constructor injection is my preference) and leave the plumbing to the application. Of course this requires configuring your application to manage the ISession and setting up your Container to create the required instances.

Applying this pattern to WCF is a little trickier. Where a web application can manage this from an HttpModule or the HttpApplication (Global.asax), a WCF service may not necessarily be accessed via Http or Https. In this case, we have to manage our Sessions and dependency resolution from a totally different pipeline. For this reason we change our nomenclature from Session/Request to Session/Call – as in a WCF call that could be Net.TCP, HTTP, Named Pipes or MSMQ. To achieve this, I will build on top of the WCF IoC library described above and utilize the contextual Session extension point in NHibernate and the techniques outlined by this post on the Real Fiction blog.

Two Part Series

The solutions will be laid out in 2 blog posts. The first is this one, where I’ll build on Jimmy’s example by providing a base for the WCF NHibernate Session per Call library. The second post, StructureMap + WCF + NHibernate Part 2, is where I’ll actually extend the WCF IoC library to allow for NHibernate Session management.

Extending Jimmy’s Example

First let me say, many thanks to you Jimmy for this post. I always enjoy everything Jimmy writes – he has a great writing style that I find easy to follow. If you aren’t subscribed to Los Techies, where Jimmy blogs, I highly recommend adding it to your feeds and checking out MVC 2 in Action from Manning Press.

I extended Jimmy’s solution only slightly. Because I knew I would be adding NHibernate support, I gave the WCF extension points in this library the ability to work standalone or to be subclassed. That way, services that only need IoC support aren’t coupled to NHibernate.

Instance Provider

In WCF, an Instance Provider is a factory that is called upon to create the instance of the service class. WCF gives us the IInstanceProvider interface for creating our own Instance Provider. Nothing too magical here, but this is where we wire in SM to create our service instance by accessing it as a Service Locator. Besides the name change, this code is directly from Jimmy’s post.


public class IocInstanceProvider : IInstanceProvider
{
	private readonly Type _serviceType;

	public IocInstanceProvider(Type serviceType)
	{
		_serviceType = serviceType;
	}

	public virtual object GetInstance(InstanceContext instanceContext)
	{
		return GetInstance(instanceContext, null);
	}

	public virtual object GetInstance(InstanceContext instanceContext, Message message)
	{
		//assumes that SM has been configured already
		return ObjectFactory.GetInstance(_serviceType);
	}

	public virtual void ReleaseInstance(InstanceContext instanceContext, object instance)
	{
	}
}

Service Behavior

Service Behaviors in WCF do exactly what the name implies – add behavior to services. In this case, we need a service behavior to tell WCF that we want to use our own IInstanceProvider implementation. The bulk of the heavy lifting is done in the ApplyDispatchBehavior method, where we set our custom IInstanceProvider to every End Point on our service.

The main difference here between my implementation and Jimmy’s is my virtual InstanceProviderCreator method used to instantiate my custom IInstanceProvider. I default to the IocInstanceProvider class shown above, but the method is virtual and can therefore be overridden by a derived Service Behavior.


public class IocServiceBehavior : IServiceBehavior
{

	public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
	{
		foreach (ChannelDispatcherBase cdb in serviceHostBase.ChannelDispatchers)
		{
			var cd = cdb as ChannelDispatcher;
			if (cd != null)
			{
				foreach (EndpointDispatcher ed in cd.Endpoints)
				{
					ed.DispatchRuntime.InstanceProvider =
						InstanceProviderCreator(serviceDescription.ServiceType);
				}
			}
		}
	}

	public virtual void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
	{
	}

	public virtual void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
	{
	}

	/// <summary>
	/// A Func that takes the ServiceType in the constructor and instantiates a new IInstanceProvider.
	/// Defaults to an IocInstanceProvider
	/// </summary>
	public virtual Func<Type,IocInstanceProvider> InstanceProviderCreator
	{
		get
		{
			return (type) => new IocInstanceProvider(type);
		}
	}
}

Service Host

Using a custom Service Host will allow us to add the IocServiceBehavior shown above into the pipeline without needing to configure it in the .config file. In my opinion, that is a major pain and adding the custom Service Host pays for itself with that feature alone.

Once again, my implementation is almost identical to Jimmy’s with the exception of the virtual ServiceBehavior getter. This allows me to derive custom Service Hosts that may need further modifications and add them to the DI-enabled pipeline.

public class IocServiceHost : ServiceHost
{
	public IocServiceHost(Type serviceType, params Uri[] baseAddresses)
		: base(serviceType, baseAddresses)
	{
	}

	protected override void OnOpening()
	{
		Description.Behaviors.Add(ServiceBehavior);
		base.OnOpening();
	}

	public virtual IocServiceBehavior ServiceBehavior
	{
		get
		{
			return new IocServiceBehavior();
		}
	}
}

Service Host Factory

The Service Host Factory is responsible for creating the instance of the Service Host that will eventually put an instance of the service on the wire when it is called. A custom ServiceHostFactory is used as the starting point of our WCF call pipeline extension. To sum it up as simply as possible, it looks like this:

*.SVC -> ServiceHostFactory -> ServiceHost -> ServiceBehavior -> IInstanceProvider -> Service

My implementation is once again very similar to Jimmy’s with a couple exceptions. First, I offer the virtual SvcHost property for creating an instance of a custom ServiceHost. This will again allow me to plug in a different implementation of a ServiceFactory, which I’ll utilize in my NHibernate solution. Another difference is in how I’m initializing SM. Where Jimmy shows this being done in the constructor of the ServiceHostFactory, I keep Container initialization out of the Factory and allow the application to handle it during startup.


public class IocServiceHostFactory : ServiceHostFactory
{
	/// <summary>
	/// Depending on the app servicehost type, the Container should be initialized during
	/// startup, i.e. HttpApplication.Application_Start or
	/// AppInitialize() (http://consultingblogs.emc.com/matthall/archive/2009/08/18/castle-windsor-and-non-http-protocol-wcf-services.aspx)
	/// </summary>
	public IocServiceHostFactory()
	{
	}

	protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
	{
		return SvcHost(serviceType, baseAddresses);
	}

	/// <summary>
	/// Override to create a custom ServiceHost implementation. Defaults to an IocServiceHost
	/// </summary>
	protected virtual Func<Type, Uri[], IocServiceHost> SvcHost
	{
		get
		{
			return (t, u) => new IocServiceHost(t, u);
		}
	}
}

The ServiceHostFactory is referenced in code or in the markup of an .SVC file, which would look something like this:

<%@ ServiceHost
Language="C#"
Service="Service.DoSomethingService"
CodeBehind="Service.DoSomethingService.svc.cs"
Factory="Wcf.IoC.IocServiceHostFactory, Wcf.IoC, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
%>

Configuring StructureMap

I mentioned that I’m letting the application initialize StructureMap during application start up. I also mentioned that I stuck to the WCF pipeline to ensure that this solution would work for Net.TCP or HTTP hosted web sites. This raised the question – where is a common application startup? That answer came from this post from Matt Hall. If you are using IIS 7, you can utilize a static AppInitialize() method in the root App_Code directory. This method gets called at startup from services hosted in IIS or WAS.

A few caveats worth mentioning:

  • The class that holds the static AppInitialize() method must have a Build Action of “content”. In other words, the class should be deployed to the server as a .cs file and not compiled into the assembly. If it is compiled, I found it wouldn’t work.
  • If you choose to configure StructureMap from within this method, be careful with the TheCallingAssembly() method when telling the scanner where to look for types. It won’t work here, because the class is JIT-compiled into its own assembly that won’t contain the other types in your service.
  • For these reasons, I find it preferable to create a Bootstrapper class within my Service project and simply call its Initialize() method from within AppInitialize().

A simple Bootstrapper example:

public static class Bootstrapper
{
	public static void Initialize()
	{

		ObjectFactory.Initialize(cfg =>
		{
		    cfg.AddRegistry<ProfileRegistry>();
		});

	}
}

A typical AppInitialize() method located in the App_Code root directory:

public static class AppInitializer
{

	public static void AppInitialize()
	{
	    Bootstrapper.Initialize();
	}

}

Conclusion

We’ve now seen how to hook WCF services up to StructureMap so that it will work with any type of hosting strategy – Net.TCP to HTTP and everything in between. We’ve also seen the extension points where we can add behavior to the IoC-enabled WCF pipeline.

In Part 2, I’ll show you how to build upon the solution provided in this post to create a second library that will utilize StructureMap and NHibernate to implement a Session per Call pattern that can be easily reused – allowing developers to focus on the requirements of the application, not the Data Access plumbing.

Posted in Alt.Net, Architecture and Design, IoC, NHibernate, StructureMap, WCF | Tagged: , , , , | 2 Comments »

Using StructureMap to Configure Applications and Components

Posted by coreycoogan on May 24, 2010


By now it’s probably a pretty safe assumption that any developer who’s worth their salt has at least heard of Inversion of Control (IoC) or Dependency Injection (DI), if not used it at least once. IoC as an architectural concept has many benefits, including testability and helping with Open/Closed Principle compliance. The specifics of DI have been discussed many times over, so consult your friendly neighborhood Google for more on the topic. What I’d like to talk about is the tooling that helps developers achieve IoC in their own systems. The tool is known as a Container and there are many good options to choose from in the .NET space. Top picks include Castle, StructureMap (SM), AutoFac, Ninject and even the very sad entry from Microsoft – Unity.

Why StructureMap?

I’ve had to use Unity at a current gig because it was approved and didn’t require .NET 3.5, but I’ll say right off that Unity is a very simple Container that lacks the features that make working with a Container fun. Granted, we’re using version 1.1, so I can’t speak to some of the enhancements found in 2.0.

I’ve blogged about Castle in the past, which I was using because it came standard in the S#arpArchitecture. Castle is a very good Container, but it has its downfalls. The biggest, in my opinion, is that its fluency is not very intuitive. Despite how many times I’ve used it, I can’t configure an application without looking at a previous example. It just doesn’t seem to flow and feels a bit clunky. It’s still a great tool, so I don’t want to take anything away from all the hard work that has gone into it.

Then there’s StructureMap. What can I say – SM rocks and I actually have fun using it. The fluent DSL flows easily and it is capable of doing everything I’ve ever really wanted it to. Some of my favorite and most used features include the default convention, the ease of adding custom conventions, registering open generics and showing what’s been configured. If you haven’t worked with it, have a look and play around with it. Like anything, it will take a bit to learn the ins and outs, but after that it’s pure joy. (NOTE: some of the docs I’m linking to are a bit out of date – the only downside to SM – but there is plenty more on CodeBetter.)

Define Components

It is common practice to break different layers into separate components/assemblies/VisualStudio.Net Projects, such as Core, Data, Infrastructure, etc. It is also very common to have components shared throughout the enterprise, such as Inventory, SalesProcessing, Products, etc. This is what I’m referring to as components. Since each component may have its own unique DI requirements, configuring applications to initialize each of those components, as well as itself, can become challenging and when done incorrectly can make applications brittle.

Configuring the Application

The application, in the context of this post, refers to the actual executable – web site, Windows Forms app, WPF, Console, Windows Service, WCF. When using an IoC Container, the application should always have the responsibility of configuring and initializing the Container. It’s not uncommon to see developers configure the Container at the component level instead, because that’s where the knowledge of the component’s dependencies exists – after all, how would the application know about special conventions or nuances that the Container should be handling? The problem with this is the case where the application wants to leverage the Container to swap out one dependency for another.

Equally as painful is the practice of configuring all the components directly from the application. Once a developer gets everything working, the configuration is typically copied/pasted to other applications that use the same components. Now when something changes, someone has the joy of [hopefully] finding all the occurrences of the code in question and pasting a new version on top of it.

Configuring Components

So we don’t want the components to configure themselves in the Container and we don’t want to copy/paste the component configuration in all the consuming applications. How do we do it then? Well I’m glad you asked. The answer is found in StructureMap’s Registry facility.

A Registry in SM is a custom class, derived from the StructureMap.Configuration.DSL.Registry class. This is where the nuts and bolts of an SM configuration should exist. A component can have one or more registries, broken down into any sized unit that makes sense. Maybe the Inventory application has a registry for some common domain services and another for specific communication mechanisms that may vary depending on the type of application.

Here’s an example of a simple component Registry that has a couple specific needs and then relies on scanning and default conventions for the rest.


public class ProfileRegistry : Registry
{
	public ProfileRegistry()
	{
	    ForSingletonOf<IProfileValidator>()
		.Use<ProfileValidator>();

	    Scan(scanner =>
		     {
			 scanner.AssemblyContainingType<IProfileRepository>();

			 //use the default convention
			 scanner.WithDefaultConventions();

		     });
	}
}

Initializing the Application

Now that our components have their own default configurations broken into one or more Registry classes, we need to get that into our Container. As previously mentioned, it is the responsibility of the application to configure and initialize the Container. This is something we generally want to happen only once. This can be accomplished by putting the code to handle the initialization of SM, NHibernate, Logging, whatever, in a BootStrapper that gets called at the application’s entry point. This can be the Global.Application_Start in a web application or inside the Program class of a console or WinForm application.

Initializing Component Registries

So how does the BootStrapper initialize SM with each component registry? With StructureMap, there are actually three common techniques.

By Type

This technique is pretty straight forward and probably suits the majority of cases. In this example, we know what the Registries are and simply add them by type to SM’s configuration during initialization.


public static class Bootstrapper
{
	public static void Initialize()
	{

		ObjectFactory.Initialize(cfg =>
		{
		    cfg.AddRegistry<ProfileRegistry>();
		});

	}
}

By Instance

SM also offers the ability to add a registry to the SM configuration as an instance. This comes in very handy when you have a Registry that requires an instance of some type for its own configuration.

A real-life example of this is when using an NHibernate Session/Call pattern in a StructureMap enabled WCF service (details available in this post). In this case, I want the Wcf.NHibernate registry to tell SM how it should resolve an ISession. In order for this to happen, the Registry will need an initialized ISessionFactory instance, which will be passed to it by the application’s BootStrapper. Once the registry is instantiated, just add it to the SM’s configuration.


public static class Bootstrapper
{
	public static void Initialize()
	{
		ObjectFactory.Initialize(cfg =>
		{
		    cfg.AddRegistry<ProfileRegistry>();

			//get the sessionfactory from another method
		    ISessionFactory sessionFactory = InitFactory();

		    //create the WCF NH Registry
		    var nhWcfRegistry = new WcfNHibernateRegistry(sessionFactory);

			//add the Registry as an instance
		    cfg.AddRegistry(nhWcfRegistry);
		});
	}
}

By Scanning

Last but not least, SM Registries can be added to the configuration via scanning. I really like SM’s ability to scan the specified types and apply common and custom conventions for type registration. I won’t go into details here, as the documentation is pretty good in this area.

During a scan, SM can be told to look for all Registry implementations and automatically add them to the configuration using the IAssemblyScanner.LookForRegistries() method. This can come in handy if you are scanning your components from your application. If you have a large number of components, it may be tempting to scan all the application’s assemblies to find registries, but be warned that this can make for a very long startup when you consider the number of types in your third-party assemblies, like Log4Net, NHibernate, Castle, etc. Of course, you can always apply a convention to limit which assemblies get scanned, but it’s definitely something to be aware of. Just put some thought into your use of IAssemblyScanner.AssembliesFromApplicationBaseDirectory() and IAssemblyScanner.AssembliesFromPath(string path).


Scan(scanner =>
{
	 //include this
	scanner.TheCallingAssembly();

	//include the Data component
	scanner.AssemblyContainingType<IProfileRepository>();

	//include all the registries
	scanner.LookForRegistries();

});

Conclusion

Using Dependency Injection/IoC is a great way to build loosely coupled, composeable applications. An IoC Container is a tool that is used to tell your application how it should compose your types by defining the “what” and the “how” of your application’s dependency resolution. StructureMap is one of many available choices of IoC Containers for .NET, but its rich feature set and intuitive fluent DSL have put it at the top of my list.

Many examples and tutorials for StructureMap show how to configure a trivial application where all the types exist within the application or in application-specific components. When dealing with components that are shared across the enterprise, it is important to give every consuming application a way to configure SM with their default configuration without copying the code into every application’s startup and without putting the configuration decisions solely in the hands of the component. This can be achieved through the use of the StructureMap Registry, which can be defined as coarsely or granularly as required at the component level.

Posted in Alt.Net, Architecture and Design, ASP.NET MVC, C#, IoC, NHibernate, StructureMap, TDD, Uncategorized | Tagged: , , , , , | 2 Comments »

Holy Over Mocking Batman: A Natural Progression

Posted by coreycoogan on January 28, 2010


Uncle Bob has been on an interesting kick lately, writing thought-provoking posts with some controversial undertones. A few days ago, he posted "Mocking Mocking and Testing Outcomes", where he mocked the [over?]use of mocking frameworks. It's an interesting article and worth a read.

I can certainly see where he is coming from with this post. The part that really grabbed me was the last paragraph:

“Also, why do you need to verify that “verify(manager).createCredential” was called? If you really get the credentials you’re expecting from the Authenticator, why would you need to check where they come from? Isn’t it one more way to couple your test to the implementation?”

I think this is the single most common pitfall for anyone who has been doing TDD with mocking for only a short time.  The initial tendency is to test each and every interaction between the objects.  It usually starts out with large tests that make too many assertions.  It's not uncommon in these tests to see the developer assert that:

  1. The application service layer gets called
  2. A transaction is started
  3. The appropriate repository method[s] are called, which can be multiple repositories
  4. Some business logic is executed
  5. The right value is returned

Such a test is going to be very brittle and will break under any serious refactoring.  As Uncle Bob points out, do we really care about each of these interactions?  Sure, in some cases it may be important to make sure a transaction is started, but in most cases all you care about is that the right value is returned.  This is the natural progression of implementing TDD, especially once mocking gets introduced.  I think it takes the pain of these brittle tests to discover that asserting all those interactions is really not necessary.
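To make the contrast concrete, here is a sketch using a hand-rolled fake rather than a mocking framework. The OrderService, IOrderRepository and the 1000 credit limit are all hypothetical; the point is the difference between asserting the outcome and asserting the interaction:

```csharp
using System;

public interface IOrderRepository { decimal GetBalance(int customerId); }

public class OrderService
{
    private readonly IOrderRepository _repo;
    public OrderService(IOrderRepository repo) { _repo = repo; }

    // Business rule (hypothetical): balance plus new order must stay under 1000.
    public bool CanPlaceOrder(int customerId, decimal orderTotal)
        => _repo.GetBalance(customerId) + orderTotal <= 1000m;
}

// Hand-rolled fake: returns canned data and records how it was called.
public class FakeOrderRepository : IOrderRepository
{
    public int Calls;
    public decimal GetBalance(int customerId) { Calls++; return 200m; }
}

public class Program
{
    public static void Main()
    {
        var fake = new FakeOrderRepository();
        var service = new OrderService(fake);

        // Outcome-based assertion: this is usually all we care about.
        if (!service.CanPlaceOrder(42, 500m)) throw new Exception("expected order to be allowed");

        // Interaction-based assertion: couples the test to the implementation.
        // It breaks the moment CanPlaceOrder caches the balance or reaches
        // the same answer by another path.
        if (fake.Calls != 1) throw new Exception("expected exactly one repository call");
    }
}
```

The first assertion survives refactoring; the second is exactly the kind of check Uncle Bob is questioning.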

The next progression starts when the developer begins to understand what one assertion, or one logical assertion group, per test really means.  This phase is a little less painful because at least now there is a separate test for each interaction, even though testing each interaction is still typically unnecessary.  Now that the tests are broken up into logical units, they are easier to fix when refactoring breaks the interactions.

Now the final progression: the pain of these fragile tests leads to the realization that most of them really don't matter.  They don't prove anything about the quality of the software.  The tests mirror the implementation, and the system ends up depending on a set of very brittle tests that can make refactoring too painful.  That's when the tests get simpler.  That's when it sinks in that a mock isn't needed for everything and every interaction doesn't need to be tested.

If you are reading this and it sounds like you are in phase 1 or 2, I highly recommend you read the post “Three simple Rhino Mocks rules” by Jimmy Bogard.  It really influenced the way I write tests.

Posted in Alt.Net, Architecture and Design, Rhino Mocks, SOLID, TDD | Tagged: , , | 2 Comments »

ALT.NET Content in MSDN

Posted by coreycoogan on December 1, 2009


Microsoft has been doing a decent job of including Alt.Net-ish articles in each issue of MSDN Magazine.  Most notable are the Jeremy D. Miller articles that have appeared throughout 2009.  The December 2009 issue follows the trend with not one, but two articles from the community.

First, an article on NHibernate from Ayende (Oren Eini).  It’s great to see MS embracing NHibernate and this looks to be a great article (although I haven’t read it yet).

The second article comes from David Laribee, who coined the term Alt.Net.  He's written an article on paying back technical debt with Agile techniques, which I'm also looking forward to reading.

Go check it out!

Posted in Agile, Alt.Net, Architecture and Design, NHibernate | Tagged: , , , | Leave a Comment »

Command Query Responsibility Separation (CQRS)

Posted by coreycoogan on November 18, 2009


CQRS is a relatively new way of architecting systems and the topic of much discussion on the DDD lists.  Its biggest proponents (creators?), Udi Dahan and Greg Young, are constantly answering questions, giving suggestions and presenting the architecture at conferences around the world.  It's fascinating and I really like what I've read and watched, but I've always had trouble grasping the ideas as a whole, especially Event Sourcing, without something concrete to look at.

UPDATE: As pointed out by Mark Nijhof in the comments, Greg and Udi have slightly different philosophies.  We’re talking about Greg’s flavor here.

The gist of CQRS is that you update your domain model via commands, through a service that is separate from reads (queries) and reporting.  Reads happen without going through the domain layer and can be performed against a denormalized database, passing simple, complete DTOs to the UI.  Updates to the domain eventually update the read-only database when the published event messages get picked up and acted upon.
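To make the command/query split concrete, here is a minimal self-contained sketch. All type names are hypothetical, and a plain delegate stands in for the message bus:

```csharp
using System;
using System.Collections.Generic;

// Command side: the domain is only reached through commands,
// and changes leave the write side as published events.
public class RenameCustomerCommand { public int Id; public string NewName; }
public class CustomerRenamedEvent  { public int Id; public string Name; }

public class CustomerCommandHandler
{
    private readonly Action<CustomerRenamedEvent> _publish;
    public CustomerCommandHandler(Action<CustomerRenamedEvent> publish) { _publish = publish; }

    public void Handle(RenameCustomerCommand cmd)
    {
        // ...domain validation and behavior would run here...
        _publish(new CustomerRenamedEvent { Id = cmd.Id, Name = cmd.NewName });
    }
}

// Read side: a denormalized store kept up to date by event handlers,
// queried directly without ever touching the domain model.
public class CustomerReadModel
{
    private readonly Dictionary<int, string> _names = new Dictionary<int, string>();
    public void When(CustomerRenamedEvent e) => _names[e.Id] = e.Name;
    public string GetName(int id) => _names[id];
}

public class Program
{
    public static void Main()
    {
        var readModel = new CustomerReadModel();
        var handler = new CustomerCommandHandler(readModel.When);

        handler.Handle(new RenameCustomerCommand { Id = 1, NewName = "Acme Corp" });

        Console.WriteLine(readModel.GetName(1)); // the query bypasses the domain
    }
}
```

In a real system the event would travel over a bus and the read model would be a separate, eventually consistent database, but the shape of the split is the same.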

Thanks to Mark Nijhof, I now have something more in-depth that I can read and a code base I can examine.  Anyone interested in this architecture should read his post now.  It is well written and really helps solidify what CQRS looks like and the problem it is trying to solve.

Posted in Architecture and Design, Domain Driven Design | Tagged: , , , | 2 Comments »

Castle Windsor Tutorial in Asp.Net MVC

Posted by coreycoogan on November 6, 2009


Castle Windsor is one of the more popular IoC containers in the .NET space today.  Others include StructureMap, Ninject, Autofac and Unity.  My top choices are StructureMap and Castle, though I've never really used Ninject or Autofac, and it's my opinion that Unity is the weakest of them all and hardly worth mentioning.  I'll show some of the basics of Castle Windsor – enough to get you set up in your ASP.NET MVC, or any other .NET, application, and enough to handle 90%+ of the most common IoC needs. Many of my examples come from the S#arp Architecture, which I'm using in my current project.

Castle Windsor Configuration Options

Windsor offers two configuration options – .config file or code.  Like many others, I have moved away from trying to do everything in a .config file and now do more in code, practicing Convention over Configuration (CoC).  Because the novelty of .config files is so early-2000s, I'll focus on configuring Castle using good ol' C# and some conventions I follow in my applications.

Common Conventions

Nothing groundbreaking here, but I like to keep my controllers as light as possible.  Therefore, I keep my application logic in an application service layer.  My app services have one or more repositories injected into them, from which domain objects can be retrieved for performing operations.  My repositories, application services and interfaces all reside in different layers, which in my case means separate physical assemblies.  Some folks prefer to inject repositories directly into the controller, which works as well, but using services works better for me because I feel I get better separation and it simplifies the controller's constructor, which is how I handle dependency injection.

So here’s the breakdown of my layers (assemblies/projects):

Application Layer:
Application Services, Application Service Interfaces

Data Layer:
Repository Implementations

Domain Layer (Core):
Repository Interfaces

UI Layer:
Controllers

Configuring Castle to Handle My Conventions

All of my dependency injection is through the object’s constructor.  As long as Windsor can resolve all the dependencies required by the constructors, it will be able to create and resolve the dependent objects as well.  IoC configuration is typically left to the application (MVC, WinForms, WPF, etc.), so you would bootstrap the configuration in some sort of Application Start event, which in the case of ASP.NET is available from the Global.asax.  All the code you’re about to see will exist in a class responsible for the IoC configuration that gets called from my Application_Start event.
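What the container is doing under the covers is just walking the constructor chain. Written out by hand with hypothetical types, the resolution of a controller looks like this:

```csharp
using System;

public interface IProfileRepository { string Load(int id); }
public interface IProfileService    { string GetDisplayName(int id); }

public class ProfileRepository : IProfileRepository
{
    public string Load(int id) => "user-" + id;
}

public class ProfileService : IProfileService
{
    private readonly IProfileRepository _repo;
    public ProfileService(IProfileRepository repo) { _repo = repo; }
    public string GetDisplayName(int id) => _repo.Load(id).ToUpperInvariant();
}

public class ProfileController
{
    private readonly IProfileService _service;
    public ProfileController(IProfileService service) { _service = service; }
    public string Index(int id) => _service.GetDisplayName(id);
}

public class Program
{
    public static void Main()
    {
        // What Windsor does for us automatically, written out by hand:
        // satisfy each constructor from the bottom of the chain up.
        var controller = new ProfileController(new ProfileService(new ProfileRepository()));
        Console.WriteLine(controller.Index(7));
    }
}
```

The container's job is to perform that bottom-up wiring for every registered type, which is why it only needs to know the interface-to-implementation mappings.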

First, a sample of a repository class and its interface, then how to automatically register all repositories in one fell swoop.


//my repository class from the Data assembly
namespace S2sol.Rpo.Data
{
    public class ClassroomRepository : S2sol.Rpo.Core.DataInterfaces.IClassroomRepository
    {
    }
}

//my repository interface from the Core assembly
namespace S2sol.Rpo.Core.DataInterfaces
{
    public interface IClassroomRepository
    {
    }
}

//this is how I would resolve an IClassroomRepository to its implementation from Castle
IClassroomRepository repo = container.Resolve<IClassroomRepository>();

To make things simple, I'll use Castle's AllTypes.Pick() method, which effectively scans the types in an assembly. In the example below, I'm scanning my Data assembly, picking up classes whose first interface is a non-generic interface defined in my Core assembly, and registering those classes with the container.

private static void AddRepositoriesTo(IWindsorContainer container)
{
    container.Register(
    AllTypes.Pick()
    .FromAssembly(typeof(UserRepository).Assembly) //get the assembly where this repository lives
    .WithService.FirstNonGenericCoreInterface("S2sol.Rpo.Core") //look for interfaces from this assembly
    );
}

I’m going to want to automatically register all my Application Services as well so they can be injected into my controllers. This syntax is a little simpler because those interfaces and implementations are in the same assembly.

private static void AddApplicationServicesTo(IWindsorContainer container)
{
      container.Register(
        AllTypes.Pick()
        .FromAssembly(typeof(ProfileService).Assembly)
        .WithService.FirstInterface());
}

Now I'll want to make sure that all my controllers are registered. This is done using the RegisterControllers extension method from the MvcContrib.Castle library.

private static void AddControllersTo(IWindsorContainer container)
{
	container.RegisterControllers(typeof(HomeController).Assembly);
}

Now all that’s left is to show the simple part, and that’s how to register any one-offs that may not fit into your conventions. For example, I have an IValidator interface that I want to resolve to the Validator implementation I’m using in this project.

container.AddComponent<IValidator,Validator>();

It’s as simple as that. Once this has been put in place, I can just continue to develop repositories, application services, controllers and their respective interfaces and never have to remember to register any of them as long as I follow my conventions.

Castle’s Factory Facility

Facilities are how Castle handles extensibility. These are plugins for Castle that can be used for just about anything. Some of the more popular ones support NHibernate, WCF and logging. The one that comes in handy for my needs is the FactorySupportFacility. This facility allows me to configure a factory method in the container and control how objects get resolved.

The RoomParentsOnline MVC application makes use of a custom IPrincipal object that gets injected into my UserSession class, along with an HttpSessionStateBase implementation. The UserSession class is used for interacting with the current user, and by passing it an IPrincipal and HttpSessionStateBase, I have a testable design that I can develop using TDD.

//constructor for the UserSession implementation
public UserSession(IProfileService profileService,
            HttpSessionStateBase session, IPrincipal principal)

The first thing to do is make sure that Castle knows about the Factory Facility that I wish to use. To do this, you can either register the facility in the .config file or in code. I’ll show you how to add it in code. This would be done in your registrar class’s constructor to make sure it’s available right away.

container.AddFacility<FactorySupportFacility>();

Now that Castle knows I'm using the factory facility, I can tell it how to resolve the IPrincipal and HttpSessionStateBase. I also have to tell it how to resolve an IIdentity because of the way my code accesses it (for testability). In the code below, I'm telling Windsor to keep the registered objects alive for the scope of a web request. I then pass it a factory method – a lambda expression that creates each object from the current HttpContext.

private static void AddSecurityConcernsTo(IWindsorContainer container)
{
	container.Register(Component.For<IIdentity>()
	  .LifeStyle.PerWebRequest
	  .UsingFactoryMethod(() => HttpContext.Current.User.Identity));

	container.Register(Component.For<IPrincipal>()
	  .LifeStyle.PerWebRequest
	  .UsingFactoryMethod(() => HttpContext.Current.User));
	
	container.Register(Component.For<HttpSessionStateBase>()
		.LifeStyle.PerWebRequest
		.UsingFactoryMethod(() => new HttpSessionStateWrapper(HttpContext.Current.Session)));


}

I’m sure you’ll agree that this code makes it very simple to invert some of those pesky dependencies that come from the core ASP.NET plumbing. This technique is very effective for designing testable classes that need to interact with some of the “ugly stuff”.
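As a sketch of why this design pays off in tests: because the dependencies arrive through the constructor, a unit test can hand in the BCL's GenericIdentity and GenericPrincipal instead of touching HttpContext at all. The trimmed-down UserSession below is hypothetical – it drops the session and profile-service dependencies so the example stays self-contained:

```csharp
using System;
using System.Security.Principal;

// A trimmed-down stand-in for the UserSession described above, taking
// only the IPrincipal dependency (hypothetical, for illustration).
public class UserSession
{
    private readonly IPrincipal _principal;
    public UserSession(IPrincipal principal) { _principal = principal; }

    public string CurrentUserName => _principal.Identity.Name;
    public bool IsParent => _principal.IsInRole("Parent");
}

public class Program
{
    public static void Main()
    {
        // GenericIdentity/GenericPrincipal ship with the BCL, so the test
        // needs no HttpContext and no mocking framework.
        var principal = new GenericPrincipal(new GenericIdentity("alice"), new[] { "Parent" });
        var session = new UserSession(principal);

        Console.WriteLine(session.CurrentUserName);
        Console.WriteLine(session.IsParent);
    }
}
```

In production, Windsor's factory registrations above supply HttpContext.Current.User; in tests, you supply whatever principal the scenario calls for.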

The MvcContrib WindsorControllerFactory

Now that our Windsor container is configured to resolve our controllers, why not let the MVC framework use that container to create them? This can be done quite easily with the WindsorControllerFactory from the MvcContrib project, an implementation of ASP.NET MVC's IControllerFactory interface. Using it is simple – just create an instance, give it your container and register the factory with MVC. This needs to happen during Application_Start.

ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));

Common Service Locator

The last thing that I’ll mention is the CommonServiceLocator project. If you already have your IoC configured, you might as well make it available to all your code that may need to get object implementations without dependency injection. The CommonServiceLocator makes this easy by adapting all the major IoC containers to work under a common interface with a few key static methods. This is something that should also happen in the Application_Start.

ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
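Under the hood the pattern is tiny: a static gateway holding a provider delegate. Here is a minimal sketch of the idea in plain C# – the Locator and IClock types are hypothetical stand-ins to show the shape, not the actual CommonServiceLocator API:

```csharp
using System;

// A minimal service-locator sketch: the application sets a provider
// delegate once at startup, and any code can then resolve services
// through the static gateway without referencing the container directly.
public static class Locator
{
    private static Func<Type, object> _provider;

    public static void SetProvider(Func<Type, object> provider) => _provider = provider;

    public static T GetInstance<T>() => (T)_provider(typeof(T));
}

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

public class Program
{
    public static void Main()
    {
        // At Application_Start: point the locator at the container.
        // Here a trivial factory stands in for WindsorServiceLocator.
        Locator.SetProvider(t => t == typeof(IClock) ? new SystemClock() : null);

        // Anywhere else in the code base:
        IClock clock = Locator.GetInstance<IClock>();
        Console.WriteLine(clock.Now.Kind);
    }
}
```

Constructor injection should remain the default; the locator is the escape hatch for code that cannot receive its dependencies any other way.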

Bringing it all Together

Now I’ll just put everything together for your copy/paste pleasure.

Global.asax

protected void Application_Start()
{
	InitializeServiceLocator();

	//do everything else here
}

/// <summary>
/// Instantiate the container and add all Controllers that derive from 
/// WindsorController to the container.  Also associate the Controller 
/// with the WindsorContainer ControllerFactory.
/// </summary>
protected virtual void InitializeServiceLocator()
{
	//create the container
	IWindsorContainer container = new WindsorContainer();
	//set the controller factory
	ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));
	//configure the container
	ComponentRegistrar.AddComponentsTo(container);
	//setup the common service locator
	ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
}

ComponentRegistrar.cs

public class ComponentRegistrar
{
    public static void AddComponentsTo(IWindsorContainer container)
    {
        container.AddFacility<FactorySupportFacility>();

        AddControllersTo(container);
        AddGenericRepositoriesTo(container);
        AddCustomRepositoriesTo(container);
        AddApplicationServicesTo(container);
        AddOneOffs(container);
        AddSecurityConcernsTo(container);
    }

    //add all my controllers
    private static void AddControllersTo(IWindsorContainer container)
    {
        container.RegisterControllers(typeof(HomeController).Assembly);
    }

    //handle any one-off registrations that aren't convention based
    private static void AddOneOffs(IWindsorContainer container)
    {
        container.AddComponent<SharpArch.Core.CommonValidator.IValidator, Validator>("validator");
    }

    //handle registrations for my security classes
    private static void AddSecurityConcernsTo(IWindsorContainer container)
    {
        container.Register(Component.For<IIdentity>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => HttpContext.Current.User.Identity));

        container.Register(Component.For<IPrincipal>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => HttpContext.Current.User));

        container.Register(Component.For<HttpSessionStateBase>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => new HttpSessionStateWrapper(HttpContext.Current.Session)));
    }

    //register my application services
    private static void AddApplicationServicesTo(IWindsorContainer container)
    {
        container.Register(
            AllTypes.Pick()
            .FromAssembly(typeof(ProfileService).Assembly)
            .WithService.FirstInterface());
    }

    //register all custom repositories (not generic)
    private static void AddCustomRepositoriesTo(IWindsorContainer container)
    {
        container.Register(
            AllTypes.Pick()
            .FromAssembly(typeof(UserRepository).Assembly)
            .WithService.FirstNonGenericCoreInterface("S2sol.Rpo.Core"));
    }

    //register all my SharpArch generic repos
    private static void AddGenericRepositoriesTo(IWindsorContainer container)
    {
        container.AddComponent("entityDuplicateChecker",
            typeof(IEntityDuplicateChecker), typeof(EntityDuplicateChecker));
        container.AddComponent("repositoryType",
            typeof(IRepository<>), typeof(Repository<>));
        container.AddComponent("nhibernateRepositoryType",
            typeof(INHibernateRepository<>), typeof(NHibernateRepository<>));
        container.AddComponent("repositoryWithTypedId",
            typeof(IRepositoryWithTypedId<,>), typeof(RepositoryWithTypedId<,>));
        container.AddComponent("nhibernateRepositoryWithTypedId",
            typeof(INHibernateRepositoryWithTypedId<,>), typeof(NHibernateRepositoryWithTypedId<,>));
    }
}

Conclusion

This post ended up being longer than I originally intended, but hopefully you gleaned some nice little gems from it. Castle Windsor is really easy to set up and use, and there are many contributions out there that add more great functionality. Sometimes it's hard to know how to use these kinds of tools without concrete examples, and I hope to have shown you some useful ones here.

Posted in Alt.Net, Architecture and Design, ASP.NET, ASP.NET MVC, Design Patterns, IoC, Uncategorized | Tagged: , , , , , , , , , , , , , | 23 Comments »

NServiceBus and Event Driven Architecture (EDA)

Posted by coreycoogan on November 6, 2009


At the October 2009 ALT.NET Northeast Wisconsin user group, Scott Felder put together a great demonstration of NServiceBus and how it actually works. He has since followed up with a well written article on his new blog that talks about Domain Driven Design and when/where NServiceBus, or some other message bus/EDA, might fit in. It’s definitely worth a read.

Posted in Alt.Net, Architecture and Design, SOA | Tagged: , , , , | 1 Comment »

ASP.NET MVC and S#arp Architecture

Posted by coreycoogan on August 21, 2009


It's been a while since I've posted.  I've been very busy with my paying job and my side project, Room Parents Online (RPO).  For RPO, I'm using the S#arp Architecture, developed by Billy McCafferty.  It's a great framework for getting a new MVC project started, but one of the real benefits of using it for me was the initial setup and configuration of Fluent NHibernate.  I have never written a real production app with NHibernate, so this was something I've been wanting to do for a long time.  There's been plenty of pain on the bleeding edge, but more on that in future posts.

Now that I’ve laid the ground work, expect to see more posts coming from the RPO project.  I’ve been working on this project for about a month, so I have a good bit of material already, and will find the time to blog about the project regularly.  Topics to come will include:

- Automatic Model validation with xVal (this post was started 10 days ago)
- Fluent NHibernate
- NHibernate
- The pains of running with scissors
- ASP.NET MVC
- Google integration
- JQuery
- S#arp Architecture

So stay tuned for some good Alt.net stuff.

Posted in Alt.Net, Architecture and Design, ASP.NET MVC, S#arp Architecture | Tagged: , , , , , | 1 Comment »

 