Monday, November 28, 2011

Automapping with Fluent NHibernate: ShouldMap() implementation

When you use the automapping feature of Fluent NHibernate you will very quickly encounter a class called DefaultAutomappingConfiguration. This is the base class for your own automapping configuration class, which you implement to control certain aspects of the automapping feature.

During startup of your application you have to provide NHibernate with information about how to map your domain classes to the database schema and back. Fluent NHibernate allows doing this automatically. Well, most of the time. There are cases when you want to be a little more specific with regards to those automatically created mappings.

The code you want to execute during startup looks similar to the following:

private static ISessionFactory CreateSessionFactory() {
  var rawConfig = new Configuration();
  rawConfig.SetNamingStrategy(new PostgresNamingStrategy());
  var sessionFactory = Fluently.Configure(rawConfig)
    .Database(PostgreSQLConfiguration.PostgreSQL82
      .ConnectionString(Common.DatabaseConnectionString))
    .Mappings(m => m.AutoMappings
      .Add(AutoMap.AssemblyOf<Customer>(new DbMappingConfiguration())
        .Conventions.Add(ForeignKey.EndsWith("Id"))));
  return sessionFactory.BuildSessionFactory();
}

The interesting piece in this case is the call to AssemblyOf&lt;Customer&gt;(), where you provide the assembly that contains all the domain classes you want to map. You only need to name one such class and Fluent NHibernate will also try to automatically map all other types that it can find in that assembly. Sometimes this is exactly what you want. In other cases you may not want to map all of the classes. Enter the DefaultAutomappingConfiguration class.
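As an aside, the PostgresNamingStrategy passed to SetNamingStrategy() above is a custom class whose implementation is not shown in this post. A minimal sketch of the name conversion such a strategy might perform, assuming the goal is lower_case identifiers so that PostgreSQL does not require quoting, could look like the following. PostgresNames and ToSnakeCase are made-up names; in a real application this logic would live in a class implementing NHibernate's INamingStrategy interface:

```csharp
using System;
using System.Text;

// Sketch only: converts PascalCase identifiers such as "FirstName"
// into lower_case identifiers such as "first_name", which PostgreSQL
// accepts without quoting.
public static class PostgresNames {
   public static string ToSnakeCase(string name) {
      var builder = new StringBuilder();
      for (var i = 0; i < name.Length; i++) {
         // Insert an underscore before every upper-case letter
         // except the first character.
         if (char.IsUpper(name[i]) && i > 0) {
            builder.Append('_');
         }
         builder.Append(char.ToLowerInvariant(name[i]));
      }
      return builder.ToString();
   }
}
```

A naming strategy class would then delegate its TableName() and ColumnName() style methods to this helper.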

You can implement your own mapping configuration by deriving from DefaultAutomappingConfiguration. You can override just the aspects that you want to change; everything else is taken care of by the default implementation.

In this post I want to show one way of controlling which classes are mapped. A simplistic approach would be to implement your custom mapping configuration as follows:

public class DbMappingConfiguration 
    : DefaultAutomappingConfiguration {
  public override bool ShouldMap(Type type) {
    return type.Equals(typeof(Customer))
      || type.Equals(typeof(Order))
      || type.Equals(typeof(Address))
      || type.Equals(typeof(Invoice));
  }
}

This works. However, as you can imagine, this is not very intuitive and you will have to maintain this code as you modify your domain. Each time you add, remove or rename any of your domain classes you will have to update this method (admittedly renaming is not a problem if you use a refactoring tool).

So let’s think of a better solution. The one I want to present here is based on a marker attribute. The implementation of PersistentDomainClassAttribute is very simple:

[AttributeUsage(AttributeTargets.Class, 
                AllowMultiple = false, 
                Inherited = false)]
public class PersistentDomainClassAttribute 
  : Attribute {
}

The usage is fairly simple, too. Here is an example:

[PersistentDomainClass]
public class Customer {
  public virtual Guid Id { get; private set; }
  public virtual string FirstName { get; set; }
  public virtual string LastName { get; set; }
}

With this attribute in place on all domain classes, the custom mapping configuration can be simplified into an implementation that works for any domain class:

public class DbMappingConfiguration 
   : DefaultAutomappingConfiguration {
  public override bool ShouldMap(Type type) {
    var attr = type.GetCustomAttributes(
                  typeof(PersistentDomainClassAttribute),
                  true);
    return attr.Length > 0;
  }
}

As a result you now have a solution that lets you choose which domain classes to make persistent. At the same time you no longer have to maintain the ShouldMap() implementation of your custom mapping configuration either.

Sunday, November 27, 2011

Behavior of DirectoryInfo.Delete and Directory.Exists: Directories reappear!?

Please note that the following may be a corner case or an isolated incident. If not, then something is not quite right with running NUnit based test suites from within Visual Studio 2010 using ReSharper 6. Unfortunately I don’t have proof either way, but I still wanted to share my observation in case you have encountered a similar issue.

A couple of weeks ago I wrote some test code that cleaned up folders after tests were executed. The code looked as follows:

public static void CleanUpFolders(
                    List<DirectoryInfo> directoryInfos) {
   foreach (var directoryInfo in directoryInfos) {
      foreach (var file in directoryInfo.GetFiles("*", 
                            SearchOption.AllDirectories)) {
         file.IsReadOnly = false;
      }
      directoryInfo.Delete(true);
   }

   foreach (var directoryInfo in directoryInfos) {
      Assert.IsFalse(Directory.Exists(directoryInfo.FullName));
   }

   directoryInfos.Clear();
}

For each directoryInfo in the collection the code iterates through all files and sets the read-only attribute to false for each of them. Then the directory is deleted with all its contents.

Then I executed the test suite from within Visual Studio 2010 using ReSharper 6. It turned out that in some cases the assertion in the above code would fail. Despite the directories having been deleted, the call to Directory.Exists() would return true! So for some reason the directory was deleted, but then it wasn’t. When I reran the tests, sometimes the assertion would fail and sometimes it wouldn’t. There were days when it was fine and days when the assertion would fail in 90% of the cases.

In addition to this the assertions would only fail on two particular directories but not on any other. I couldn’t identify the commonality between the directories on which the assertion would fail and the directories that were successfully deleted.

Initially I thought that maybe a different process or a different thread had a handle to the directory. In that case my test would just ask the operating system (Windows 7, 64 bit) to mark the directory as deleted, and as soon as the last handle was closed it would be removed. However, the tests generated the directory names and then created directories with those names. No other thread or process knew about the names. I didn’t have any thread in my system under test that would scan the parent directory and thus ‘learn’ about the generated directory names.

To diagnose this issue I tried various things, including restarting Visual Studio and rebooting the workstation. I also tried some of Sysinternals’ tools.

The biggest surprise was an observation I made when running the tests under the debugger. I set a breakpoint just after the loop that deleted the directories. At that point Windows Explorer would report those directories as gone. Also, when continuing the execution, the assertions would not fail. However, once the entire suite was complete, two directories would reappear in Windows Explorer! So despite Directory.Exists() stating the directory to be non-existent, it reappeared! This is repeatable.

To take elements out of the equation I got the latest revision of csUnit’s source code and upgraded it to VS2010 and .NET 4.0 (these changes are available at SourceForge). Then I executed the same test suite without any modification using csUnit. In this case the directories were properly deleted and did not reappear.

Where does that leave me? I don’t know. All I can say is this: my test suite creates a number of directories with generated names. When I execute this suite from within VS2010 using ReSharper 6, two folders reappear despite DirectoryInfo.Delete() executing successfully and Directory.Exists() confirming the deletion. However, when I execute the same test suite from csUnitRunner, the code behaves as expected and the folders remain deleted. Despite searching for a long time I have not been able to find the reason for this difference in behavior.
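If the cause is a delayed delete — on Windows, deleting a directory while another process still holds a handle only marks it for deletion — one possible mitigation (not a fix for the root cause) is to poll until the directory is actually gone. The following is a sketch under that assumption; TryDeleteAndWait is a made-up name and the timeout is arbitrary:

```csharp
using System;
using System.IO;
using System.Threading;

public static class DirectoryDeleter {
   // Clears read-only flags, deletes the tree, then polls until the
   // directory has actually disappeared or the timeout expires.
   // Returns false if the directory still exists after the timeout.
   public static bool TryDeleteAndWait(DirectoryInfo directory,
                                       int timeoutMilliseconds = 5000) {
      foreach (var file in directory.GetFiles(
               "*", SearchOption.AllDirectories)) {
         file.IsReadOnly = false;
      }
      directory.Delete(true);

      var waited = 0;
      while (Directory.Exists(directory.FullName)) {
         if (waited >= timeoutMilliseconds) {
            return false;
         }
         Thread.Sleep(100);
         waited += 100;
      }
      return true;
   }
}
```

In a test clean-up method you would assert on the boolean result instead of asserting Directory.Exists() immediately after Delete().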

The only reasonable conclusion at this stage seems to be that as developers we always need to be suspicious of the tools we use. While they may work in almost all cases, sometimes they may be the cause of something that doesn’t work.

Tuesday, November 22, 2011

DirectoryInfo.Delete() when files are read-only

DirectoryInfo.Delete() will fail with UnauthorizedAccessException if that directory or any of its subdirectories contains a file that is read-only.

One solution is to remove the read-only attribute from all files. You can do so while recursively deleting directories. This approach is mentioned in some blogs and it looks as follows:

public static void RecursivelyDeleteDirectory(
                   DirectoryInfo currentDirectory) {
   try {
      currentDirectory.Attributes = FileAttributes.Normal;
      foreach (var childDirectory in 
                        currentDirectory.GetDirectories()) {
         RecursivelyDeleteDirectory(childDirectory);
      }

      foreach (var file in currentDirectory.GetFiles()) {
         file.IsReadOnly = false;
      }

      currentDirectory.Delete(true);
   }
   catch (Exception ex) {
      Console.WriteLine(ex); // Better option: Use log4net
   }
}

While this works, some people do not like recursion. So here is how the same can be achieved without recursion:

public static void DeleteDirectory(
                   DirectoryInfo currentDirectory) {
   try {
      foreach (var file in currentDirectory.GetFiles(
               "*", SearchOption.AllDirectories)) {
         file.IsReadOnly = false;
      }
      currentDirectory.Delete(true);
   }
   catch (Exception ex) {
      Console.WriteLine(ex); // Better option: Use log4net
   }
}

Please note that these implementations report exceptions at the console. A better option would be to use a standard logging framework like log4net.

Tuesday, November 08, 2011

ReSharper Not Executing Your Tests?

Today I ran into a small issue: ReSharper was not willing to execute my tests. Generally it does, but today was different. Let me explain.

I have two assemblies, one containing the unit tests and one containing the code under test. In ReSharper’s session window the unit tests were properly listed. I was even able to “execute” them. However, it would always show them as not executed, with a gray bullet in front of each. This was the behavior for both “Run Test” and “Debug Test”. The Output window was empty and didn’t show anything that would indicate what was wrong.

I also checked my unit tests, but all attributes were properly in place, both the class and the methods were public, and the methods had the correct signature. So what was going on?

To diagnose the issue I launched the unit test tool as a stand-alone application. When I tried to load the assembly with the unit tests it was immediately clear what was wrong: a message box displayed the words BadImageFormatException.

It turned out that the assembly with the unit tests was set to build for “Any CPU” while the assembly under test was built for “x86”. Since I am using a 64 bit machine the two assemblies were compiled according to those targets: the unit tests as 64 bit, the assembly under test as 32 bit. No wonder it didn’t work.

This is easy to fix, in my case by setting the target for the assembly with the unit tests to “x86” as well. It would, however, be nice if ReSharper had reported the exception in some form, as that would have saved time. Well, maybe in the next version? Maybe this post helps you save some time.
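One way to check assembly bitness without launching a runner at all is the CorFlags tool that ships with the Windows SDK. The paths below are placeholders for your own build output:

```shell
# Run from a Visual Studio command prompt so corflags.exe is on the path.
# Paths are placeholders for your own build output.
corflags MyApp.Tests\bin\Debug\MyApp.Tests.dll
corflags MyApp\bin\Debug\MyApp.dll
# Depending on the SDK version the relevant flag in the output is named
# 32BIT or 32BITREQ: a value of 1 means the image requires the 32 bit
# CLR ("x86"), 0 means "Any CPU".
```

Comparing the flag for the two assemblies would have revealed the mismatch immediately.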

(Note: I’m using ReSharper version 6.0.2202.688 with Visual Studio 2010 SP1 on a 64 bit Windows 7 Enterprise Edition)

Friday, November 04, 2011

Fluent NHibernate and Fluent Migrator

A question at Stack Overflow about whether NHibernate and migratordotnet play nicely together caught my interest. So I started a little experiment to find out myself. Instead of migratordotnet, however, I wanted to use Fluent Migrator because one of the teams I’ve been working with recently used it as well.

The first challenge I ran into was figuring out how to invoke and use Fluent Migrator from within an assembly. When you look at the sources of Fluent Migrator you will notice that it has several runners that can be used from the command line and from NAnt and MSBuild. None of these was what I was looking for. Instead I wanted an API. I couldn’t find one, so I decided to implement my own.

One interface, two classes and about 150 lines of code later I had it running. During startup, e.g. in a static initializer in the assembly, the migrations are executed using the following statement:

new Migrator(new MigratorContext(Console.Out) {
   Database = "postgres",
   Connection = ConnectionString,
   MigrationsAssembly = typeof(Global).Assembly
}).MigrateUp();

Migrator is one of the classes I implemented. It serves as the façade to Fluent Migrator. Within my assembly I can now write and implement regular migrations, e.g. like the following:
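The façade’s source is not shown in this post, so the following is only a rough reconstruction of what such a class might look like on top of Fluent Migrator’s runner types of that era (MigrationRunner, RunnerContext, TextWriterAnnouncer, PostgresProcessorFactory). The MigratorContext shape mirrors the usage snippet above; everything else here is an assumption, not the original code:

```csharp
using System;
using System.IO;
using System.Reflection;
using FluentMigrator.Runner;
using FluentMigrator.Runner.Announcers;
using FluentMigrator.Runner.Initialization;
using FluentMigrator.Runner.Processors;
using FluentMigrator.Runner.Processors.Postgres;

// Hypothetical reconstruction of the façade described in the post.
public class MigratorContext {
   public MigratorContext(TextWriter output) { Output = output; }
   public TextWriter Output { get; private set; }
   public string Database { get; set; }
   public string Connection { get; set; }
   public Assembly MigrationsAssembly { get; set; }
}

public class Migrator {
   private readonly MigratorContext _context;

   public Migrator(MigratorContext context) { _context = context; }

   public void MigrateUp() {
      // Announce progress to the TextWriter given in the context,
      // e.g. Console.Out.
      var announcer = new TextWriterAnnouncer(_context.Output);
      // Hard-wired to PostgreSQL here; a fuller implementation would
      // pick the processor factory based on _context.Database.
      var factory = new PostgresProcessorFactory();
      var processor = factory.Create(_context.Connection, announcer,
                                     new ProcessorOptions());
      var runner = new MigrationRunner(_context.MigrationsAssembly,
                                       new RunnerContext(announcer),
                                       processor);
      runner.MigrateUp();
   }
}
```

The point of the façade is simply to hide these runner details behind a single MigrateUp() call that can run from a static initializer.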

[Migration(201110181840)]
public class CreateUserTable : Migration {
   public override void Up() {
      Create.Table("Users");
   }

   public override void Down() {
      Delete.Table("Users");
   }
}

As a result I can now write additional migrations as needed. This approach helps with development as we no longer need to run a separate tool. Each time the startup sequence is executed, e.g. when running developer tests that load the assembly, the database schema is brought up to date. And since I used the automapping feature of Fluent NHibernate I didn’t have to write mappings either.

What I noticed, though, is that although Fluent NHibernate and Fluent Migrator now play together nicely, there appears to be duplication between them. Take the following example of a domain class.

public class InputFile {
   public virtual Guid Id { get; private set; }
   public virtual Job Job { get; set; }
   public virtual string FileName { get; set; }
}

To have a table for this I also have the following Migration in the code:

[Migration(201111040531)]
public class CreateInputFilesTable : Migration {
   public override void Up() {
      Create.Table(TableName)
         .WithColumn("Id").AsGuid().PrimaryKey()
         .WithColumn("JobId").AsGuid().Indexed()
         .WithColumn("FileName").AsString();
   }

   public override void Down() {
      Delete.Table(TableName);
   }

   private const string TableName = "InputFile";
}

As you can see there are things that should not be required. For example the domain class already specifies that the property Id is of type Guid. This is equivalent to saying that the table should have a column named “Id” of type Guid.

An additional issue can arise when you add a column/property in one place but forget to add it in the other. Of course this will be reported via an exception the next time the code runs. However, it would be nice if I had to change it in a single place only.

So there appears to be an opportunity to simplify the code. Maybe I could rewrite the migration using reflection? And maybe I could rewrite that migration in a kind of generic way so I could reuse at least parts of it for other migrations.

Next I’ll be looking into what an improved solution might look like: ideally generic, or at the very least with much less duplication. Being able to avoid changing two files when modifying one domain class would be a nice first step.
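As a first sketch in that direction, a reflection-based migration base class might look like the following. This is an illustration under assumed conventions, not working code from the post: the rules (a Guid property named Id becomes the primary key, references to other entity classes become indexed Guid columns suffixed with “Id”) are made up, and only Guid, string and entity references are handled:

```csharp
using System;
using System.Reflection;
using FluentMigrator;

// Sketch only: derives the table schema from the domain class via
// reflection, so adding a property to the class also adds a column.
public abstract class TableFromClassMigration<T> : Migration {
   public override void Up() {
      var table = Create.Table(typeof(T).Name);
      foreach (var property in typeof(T).GetProperties()) {
         var type = property.PropertyType;
         if (type == typeof(Guid)) {
            var column = table.WithColumn(property.Name).AsGuid();
            // Assumed convention: "Id" is the primary key.
            if (property.Name == "Id") {
               column.PrimaryKey();
            }
         } else if (type == typeof(string)) {
            table.WithColumn(property.Name).AsString();
         } else if (type.IsClass) {
            // Assumed convention: a reference to another entity is
            // stored as an indexed foreign key column, e.g. "JobId".
            table.WithColumn(property.Name + "Id").AsGuid().Indexed();
         }
      }
   }

   public override void Down() {
      Delete.Table(typeof(T).Name);
   }
}

// With this in place the earlier migration could shrink to:
// [Migration(201111040531)]
// public class CreateInputFilesTable
//    : TableFromClassMigration<InputFile> { }
```

Renaming columns and evolving existing tables would still need hand-written migrations; this only removes the duplication for the initial create.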

Update 13 Nov 2011: I have created a fork of Fluent Migrator on GitHub. The address of the fork is https://github.com/ManfredLange/fluentmigrator. In that fork I have added a new project, FluentMigrator.InProc, that contains the sources mentioned in this post.