Thursday, June 04, 2009

WPF and Silverlight: Sharing User Controls

Both WPF and Silverlight share XAML as the language to describe the layout, look and feel of a user interface. So I thought. There are differences, though, at least for .NET 3.5 and Silverlight 2. And the differences are not limited to the XAML code but extend to the code-behind, regardless of whether you use C# or any other .NET language.

Sure, I knew that Silverlight doesn't support 3D graphics (at the moment), and so I didn't expect those to work. I'm also aware that Silverlight has a Visual State Manager that is not available in WPF. And the root element in Silverlight is a Page, while in WPF it is a Window. And Silverlight is more than just the UI part, since it also represents a minimal CLR runtime environment rather than a full .NET 3.5 implementation. So my expectations were along those lines. But I discovered more beyond my expectations: small, yet sufficient differences that make it hard to share UI components between WPF-based and Silverlight-based applications. And that's what I'd like to do, since I want to leave the choice between a native user interface and a rich internet application (RIA) to my customers. To reduce the gap I'm using components from the Silverlight Toolkit that are in the "stable" band. Otherwise I wouldn't have components such as TreeView available.

One example of a noticeable difference is that in WPF a TextBlock can have an attribute named "Background", while in Silverlight you get a parse error if you try to render the same XAML code. There are also inconsistencies (or differences) in how mouse events are handled. For instance, the set of PreviewMouse* events that is available in WPF doesn't exist in Silverlight 2. If you want to use HitTest() in your code-behind it won't work (or even compile) in Silverlight. Instead you have to use the functionality of the VisualTreeHelper class. There are quite a few of these differences at the moment, and I'll provide more details as I discover them, along with code examples.
My hope, though, is that over time the differences between the two are reduced at least for the features they have in common. It's pretty annoying that setting the background color is different between WPF and Silverlight. It's also pretty annoying that finding a component at a mouse cursor position is different. Maybe .NET 4.0 and Silverlight 3 are a step towards easier sharing of user controls. The community would love to see that!
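To illustrate the TextBlock background difference, one workaround sketch that should work in both frameworks is to wrap the TextBlock in a Border, since Border exposes a Background property in both WPF and Silverlight 2 (the names and the color choice here are my own):

```csharp
using System.Windows.Controls;
using System.Windows.Media;

// Sketch: TextBlock.Background exists in WPF only. Instead of setting
// it directly, wrap the TextBlock in a Border - Border.Background is
// available in both WPF and Silverlight 2.
var textBlock = new TextBlock { Text = "Shared user control" };
var border = new Border {
   Background = new SolidColorBrush(Colors.LightGray),
   Child = textBlock
};
```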
Check out my blog on Agile Leadership.

Friday, May 22, 2009

ReSharper 4.5 Memory Consumption

Just tried ReSharper on a fairly large solution with about 38 projects (13 C# and 25 C++). With the ReSharper addin enabled the memory consumption is sitting at about 1.5 GByte for the solution. With the ReSharper addin disabled the memory consumption is sitting at about 300 MByte for the solution. The solution itself has over 90% of its source code in C++ (mix of managed and unmanaged) and only a smaller portion in C#.

JetBrains claim they have worked on the memory consumption of ReSharper 4.5. Looks to me as if more work is needed ....

On my 32 bit box I have to reboot once in a while because I'm running out of memory when the addin is re-enabled ... (Vista 32 bit, VS 2008, 4 GB RAM)



Update on 04 June 09: It looks as if the memory consumption goes up to the same value with R# disabled. This happens when IntelliSense creates/updates its database. However, with R# disabled the memory drops back to normal once IntelliSense has finished that activity. With R# enabled it appears as if IntelliSense doesn't free up the memory. So it looks as if the problem is caused by the combination of the two. A solution with just C# projects (and hence without IntelliSense for C++) doesn't seem to have that issue either.
Since I am in contact with JetBrains at the moment let's see what they can find based on the info I can provide.


Update on 02 February 2010: I had some conversations with JetBrains and they have asked me to provide an example that shows the behavior. The challenge is that it seems to happen (or become apparent) only when there is a large number of C++ projects in the solution and the entire C++ code base is significant as well. While I do have an example that demonstrates the behavior, I'm a bit challenged to provide that very example. Would you send your entire code base? At the moment some members of my team have switched off ReSharper when they are working with the solution that also contains C++ projects.



Monday, May 18, 2009

A comment on McAfee

Right now McAfee is running a full system scan on my computer. Not only is it consuming a large amount of system resources - sometimes to the point of the system being unusable while waiting for some file operation to complete - there is also the following interesting 'feature'. Let's assume a full system scan is running. That seems to make sense, since there are many different ways viruses can find their way onto your computer. So that system scan is running, and you know that it will take an hour or more to complete. Now suppose you have downloaded a file that you'd like to scan for viruses before you use it. Can you do that? Nope, not with the McAfee version I am currently using. It will display a dialog box telling me that a different scan is running and that I have to cancel it first before I can start a new one. My options are:
  1. Wait until McAfee has finished scanning my system.
  2. Cancel the system scan so I can scan the downloaded file
Option one makes me less productive. Option two is a potential security issue. Neither option meets my requirements. How about being able to explicitly scan files, e.g. from the context menu in the file explorer, regardless of whether other scans are running? I'll check whether other virus scanners work the same way or whether they behave differently. (Before someone asks: automatic updates to keep McAfee up-to-date are enabled.)

Friday, May 15, 2009

I couldn't resist... - a quote from Subversion's web site

When assessing the memory consumption of the Subversion (SVN) 1.6.1 client, I also found the following on SVN's web site:
"all the memory we allocate will be cleaned up eventually" (Source: "Hacker's Guide To Subversion", retrieved 15 May 2009)
I like that quote since I think it is true for all software! "Eventually" all memory will be cleaned up - if necessary, when the process terminates. On second thought, though, here might be an opportunity for a new business model! What if we had one-way memory? You can allocate it once, and once a process has consumed it you need to buy new DRAM (maybe it could be called OW-DRAM, as in "One Way"-DRAM). I'm sure Intel and other chip vendors would love it!

But seriously (and I'm sure I've got some funny things somewhere in my published text and code as well - tell me!): a memory leak is a memory leak is a memory leak. Using add and commit for adding large numbers of files consumes about 1 KByte per file. In one case I tried to add about 13,000 files, and the process fell over when it reached 1.3 GByte, having added only 10% of the files. So this approach doesn't work in version 1.6.1. All indications are that this is a memory leak: it is proportional to the number of files you add and try to commit, OS tools show how the process grows in size, and it never shrinks (unless the process terminates one way or the other). Admittedly I didn't use a memory profiler, but what do you think the issue is when an error is reported on the command line saying "out of memory"?

The better option for getting large sets of files into a repository - and that's what I have learned by now - is to use SVN's import functionality. I have made several tests with up to 65,000 files in one batch (several hundred MB) and the memory consumption of the client process grew only very slowly - from about 10 MB to 58 MB at most. This growth - I suspect - is probably related to memory fragmentation, but it is definitely within acceptable limits. So the recommendation is: don't use svn add followed by svn commit. Instead use svn import if you have large sets of files to import.
If they go into the wrong place you can always move them later using a repository browser such as the one that comes as part of TortoiseSVN.
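To illustrate the recommendation, a command-line sketch (the repository URL and local path are placeholders, not from my setup):

```shell
# Import a large tree directly into the repository; this avoids the
# per-file memory growth seen with 'svn add' followed by 'svn commit'.
svn import ./large-tree \
    http://svn.example.com/repos/project/trunk/large-tree \
    -m "Import large file set"

# Then check out (or update) to get a working copy of the imported files.
svn checkout http://svn.example.com/repos/project/trunk/large-tree
```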

Saturday, May 09, 2009

Handling of middle mouse click in Firefox 3.0

Just had an "experience" of the undesired kind. Normally when you use the middle mouse button on a link in Firefox, it opens the page in a new tab. Well, when you use the middle mouse button on the tab itself (even without hitting the little close button), it closes the tab! Not quite what I expected. I thought it would take the link of what is displayed in that tab and open a new tab with that link. I just lost quite a lot of work that I had entered into a web form. :-( Oh, well...

Friday, May 08, 2009

SVN Client: Out Of Memory

With the SVN 1.6.1 client (32 bit on Windows) I ran into an issue today. I tried to commit 13,000 files of about 100 MBytes total size. Most of them are text files, and just a very small number of them are binary files. No matter what, this commit didn't work. I actually had to take small portions of it and commit one portion at a time. Admittedly this is not a typical change set, and so I don't want to complain too much. There is one observation, though, that made me think: when the client starts to send the content (file data), the memory consumption goes up. I'm not sure why that is, because the file is still available locally (since we are uploading the file), and hence it's not quite clear to me why the consumption needs to go up. The client sends one file at a time anyway. If a buffer is used to get the data into the appropriate format for the wire transmission, I can understand that. But can't that buffer be reused once one file has been transmitted and the next one is started? Maybe I'm overlooking something about the inner workings of Subversion. In that case please comment on this post. Otherwise I have a gut feeling that there might be a memory leak in the current implementation (SVN client 1.6.1). --- An update: There are a few open bugs related to memory leaks, even one with regards to committing large numbers of added files. That one is from 2004 and still open.

Sunday, April 26, 2009

ListView.GetItemAt() in WPF Application

At present (.NET 3.5 SP 1) a GetItemAt() method doesn't exist for the ListView class that is part of WPF. At least I couldn't find it. I always found it handy in Forms-based development, for instance when handling mouse events. Please note that the WPF class ListView is in System.Windows.Controls, while the Forms-based class of the same name is in System.Windows.Forms. Don't confuse the two! In this post I'm referring to the WPF class ListView. I wanted behavior similar to the method of the same name in the Forms-based library. As a result I want the client code to look like this:
      private void _replacementsListView_MouseDoubleClick(object sender, MouseButtonEventArgs e) {
         var mousePosition = e.GetPosition(_replacementsListView);
         var item = _replacementsListView.GetItemAt(mousePosition);
         if (item != null
            && item.Content != null) {
            EditTokenReplacement((TokenReplacement) item.Content);
         }
      }
Since the WPF class ListView doesn't come with a built-in method I decided to implement an extension method as follows:
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

namespace Foo.Gui {
   internal static class Extensions {
      public static ListViewItem GetItemAt(this ListView listView, Point clientRelativePosition) {
         var hitTestResult = VisualTreeHelper.HitTest(listView, clientRelativePosition);
         if (hitTestResult == null) {
            return null; // position is not over any visual in the list view
         }
         var selectedItem = hitTestResult.VisualHit;
         while (selectedItem != null) {
            if (selectedItem is ListViewItem) {
               break;
            }
            // walk up the visual tree until a ListViewItem (or the root) is found
            selectedItem = VisualTreeHelper.GetParent(selectedItem);
         }
         return (ListViewItem) selectedItem;
      }
   }
}
Please note that the position is passed into the method as an object of class Point. The coordinates need to be relative to the list view object. In an event handler for a mouse button event, e.g. a double click, they can be calculated by using the GetPosition() method on the MouseButtonEventArgs object that is passed into the mouse event handler. It's already shown in the first code snippet above, but just to be on the safe side here are the relevant lines again:
      private void _replacementsListView_MouseDoubleClick(object sender, MouseButtonEventArgs e) {
         var mousePosition = e.GetPosition(_replacementsListView); // gets position relative to ListView
         var item = _replacementsListView.GetItemAt(mousePosition);
         ...
      }

Saturday, April 25, 2009

Getting Parent Window for UserControl in WPF

Again a small item that may take you some time to search for and find on the internet. Suppose you are working on a WPF-based application and you have a UserControl that you want to host inside of a window. In addition you need the window within which the UserControl is hosted. To find that parent window you might wonder whether you could use the property 'Parent'. Well, that might work in some cases, but in other cases it might not. For example, the parent might be a Canvas, and then it starts to become tricky. So here is a solution that should make it a bit easier. It uses the static method Window.GetWindow() and passes a reference to the UserControl instance as a parameter:
   public partial class FooControl : UserControl {
      public FooControl() {
         InitializeComponent();
      }

      private void _barButton_Click(object sender, RoutedEventArgs e) {
         var window = new MyWindow {
            WindowStartupLocation = WindowStartupLocation.CenterOwner,
            Owner = Window.GetWindow(this)
         };
         if( window.ShowDialog() == true) {
            // Ok button clicked, do something useful here...
         }
      }
   }

Sunday, April 19, 2009

OpenFileDialog in .NET on Vista

I'm working on a WPF-based application and, as many applications do, I need to display an OpenFileDialog as well. Trying to be compliant with what .NET 3.5 SP 1 is offering, I tried Microsoft.Win32.OpenFileDialog first. I ran into two issues. The first issue I noticed was that the dialog box positioned itself just about anywhere but not centered on (or at least on the same screen as) the owner window, even if I set the owner window. I did some research and found that despite .NET having been around for years (I started using it in 2001), there is still no OpenFileDialog available for .NET that inherits from Window (or Form previously) and can be easily customized. And therefore - as an example - there is no WindowStartupLocation property. Oh, well ... The second issue: even with the latest WPF version running on Vista, the old XP-style dialog box is displayed. In addition, even when I passed the owner as a parameter to ShowDialog(), it wouldn't center relative to that owner window.

Then I tried the older System.Windows.Forms.OpenFileDialog. Although it is not recommended to use that namespace within WPF, I thought it would still be worth a try. And voila! Under Vista it not only displayed the correct dialog box, its location centered on its owner is correct as well. To achieve this you have to fiddle a bit with the namespaces, as there are a few classes that have the same name in both Microsoft.Win32 and System.Windows.Forms. Since I wanted the client code to look as simple as possible, I created two very small wrapper classes. The first wrapper class is basically a replication of the Microsoft.Win32.OpenFileDialog interface (I left out most of the code since it is simply forwarding):
using SWF = System.Windows.Forms;

namespace FooApp.Presentation {
   internal class OpenFileDialog {
      public OpenFileDialog() {
         _dialog = new SWF.OpenFileDialog();
      }

      public bool? ShowDialog(System.Windows.Window window) {
         return _dialog.ShowDialog(new WindowWrapper(window)) == SWF.DialogResult.OK;
      }

      public string DefaultExt {
         get {
            return _dialog.DefaultExt;
         }
         set {
            _dialog.DefaultExt = value;
         }
      }
      public string FileName {
         get {
            return _dialog.FileName;
         }
         set {
            _dialog.FileName = value;
         }
      }
      public string Filter {
         get {
            return _dialog.Filter;
         }
         set {
            _dialog.Filter = value;
         }
      }
      public string Title {
         get {
            return _dialog.Title;
         }
         set {
            _dialog.Title = value;
         }
      }

      private readonly SWF.OpenFileDialog _dialog;
   }
}
The only really interesting piece is the ShowDialog() method, which takes a WPF window. System.Windows.Forms.OpenFileDialog.ShowDialog() requires an IWin32Window instead (for easy access to the window handle), so we need a second wrapper class that looks as follows:
using System;
using System.Windows;
using System.Windows.Interop;

namespace FooApp.Presentation {
   /// <summary>WindowWrapper is an IWin32Window wrapper around a WPF window.
   /// </summary>
   /// <remarks>Add System.Windows.Forms to the references of your project to
   /// use this class in a WPF application. You don't need this class in a
   /// Forms based application.</remarks>
   internal class WindowWrapper : System.Windows.Forms.IWin32Window {
      /// <summary>
      /// Construct a new wrapper taking a WPF window.
      /// </summary>
      /// <param name="window">The WPF window to wrap.</param>
      public WindowWrapper(Window window) {
         _hWnd = new WindowInteropHelper(window).Handle;
      }

      /// <summary>Gets the handle to the window represented by the implementer.
      /// </summary>
      /// <returns>A handle to the window represented by the implementer.
      /// </returns>
      public IntPtr Handle {
         get { return _hWnd; }
      }

      private readonly IntPtr _hWnd;
   }
}
With this machinery in place the client code in a WPF based application looks very familiar and simple:
var dialog = new OpenFileDialog {
   DefaultExt = ".dll",
   Filter = "Assemblies (*.dll; *.exe)|*.dll;*.exe",
   Title = "Select Assembly"
};
if( dialog.ShowDialog(this) == true ) {
   // File is selected, do something useful...
}
As a result you have it all: simple client code, the correct OpenFileDialog on Vista, and the dialog centered properly on its owner. The only thing that I still can't get my head around: after 8 years of .NET we still don't have a class available that is based on Window (or Form) and can be customized. It seems as if this is a puzzle that the guys in Redmond haven't figured out yet...

Wednesday, April 15, 2009

Separator for Menu Item in XAML (WPF, Silverlight)

Trivial task and yet worth mentioning: you want a separator in a menu of your XAML-based application (WPF or Silverlight)? Use System.Windows.Controls.Separator. In XAML write it as follows:
<Menu ...>
  <MenuItem ...>
    <MenuItem ... />
    <Separator />
    <MenuItem ... />
  </MenuItem>
</Menu>
In C# use:
using System.Windows.Controls;

Menu menu = new Menu();
...
menu.Items.Add(new Separator());

Tuesday, April 14, 2009

LinearGradientBrush with more than two GradientStop's in XAML

Using Expression Blend 2 you may believe that you can have only two GradientStops for your LinearGradientBrush, in particular if you are as UI-addicted as I am: the UI of Expression Blend 2 doesn't offer adding another GradientStop. But funny enough, it supports them! So just go to the XAML code (e.g. select "View XAML" from the context menu) and add another GradientStop tag to the XAML code for the brush. Once you do that, you will notice that the Expression Blend 2 user interface will happily display 3 (or more) sliders, as demonstrated in the screenshot. Here is an example:
<LinearGradientBrush x:Key="WindowBackgroundBrush"
  SpreadMethod="Pad" StartPoint="0.5,0" EndPoint="0.5,1">
  <GradientStop Color="#FFFFDCDC" Offset="0.56" />
  <GradientStop Color="#FFFF6F6F" Offset="0.77" />
  <GradientStop Color="#FFFFDCDC" Offset="1" />
</LinearGradientBrush>

Monday, April 13, 2009

TextBox and other Controls with Transparent Background in XAML

Again a small item, but something you may have searched for quite a while: in a XAML-based user interface (WPF, Silverlight), if you want your control (e.g. a TextBox) to NOT have a background, that is, you want the background to be transparent, you can do one of the following:
  1. Set its Background property to 'Transparent'
  2. In Expression Blend select the Background brush and set the alpha channel to zero
Sounds simple, but it might take you longer than expected to find this information on the internet (unless I'm completely hopeless at entering the appropriate keywords). Most sources explain for the 100th time how to set the background color.
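Option 1 can also be done in code-behind; here is a minimal sketch:

```csharp
using System.Windows.Controls;
using System.Windows.Media;

// Give a TextBox a transparent background, i.e. no background at all.
var textBox = new TextBox { Text = "No background" };
textBox.Background = new SolidColorBrush(Colors.Transparent);
// In WPF you could also write: textBox.Background = Brushes.Transparent;
// (the Brushes class is not available in Silverlight 2).
```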

Sunday, April 12, 2009

"The file 'x' is not part of the project or its 'Build Action' property is not set to 'Resource'"

When working on a WPF based application and using the Image control you may encounter the following error message when trying to enter the source file name for the image:
The file 'x' is not part of the project or its 'Build Action' property is not set to 'Resource'
It's not quite clear to me what causes this, and it's not quite clear why Microsoft didn't fix it in Service Pack 1 for Visual Studio 2008, but here is a solution that may work for you:
  1. Add the file to your solution.
  2. Set its 'Build Action' to 'Resource' (in my case the drop-down also offers 'Embedded Resource', but that's not what you want).
  3. Select the Image control.
  4. Set its Source property to the image file name. It should show up in the drop-down list. If it still displays the error message mentioned above, rename the image file to a shorter filename and try setting the Source property of the Image control again. It should be fine now.
In case you want to try editing the XAML code then here is an example of what you may want set the source property to:
<Image ... Source="/Foo.Gui;component/pictures/myLogo.jpg" ... />
The assumptions for this example are
  • Your project is named 'Foo.Gui'
  • That project contains a folder named pictures
  • Your picture is in that folder and also included in the project. The name of the picture is 'myLogo.jpg'
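Under the same assumptions, here is a sketch of referencing the image from code-behind via a pack URI (the Image control name '_logoImage' is my own):

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;

// Load the embedded resource via a pack URI and assign it to an
// Image control. '_logoImage' is a hypothetical Image control.
var bitmap = new BitmapImage(
   new Uri("pack://application:,,,/Foo.Gui;component/pictures/myLogo.jpg"));
_logoImage.Source = bitmap;
```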

Saturday, April 04, 2009

"Configuration system failed to initialize"

If you see a ConfigurationErrorsException along with the information "Configuration system failed to initialize" in your QuickWatch window, then in all likelihood your app.config (or web.config) file is not correct. In my case I simply forgot to surround the membership provider section with <system.web></system.web>. Once I added those, it worked like a breeze. Also, in case your custom provider cannot be found, make sure you have added the proper assembly name to the 'type' attribute of the <add> element for your provider in the <providers> section. And yes, you can test custom providers without having to deploy to or run in a web server. Just ensure your app.config file contains the bare minimum by copying some content from web.config and you should be fine. For my scenario it was sufficient to copy the config section for NHibernate, the hibernate configuration, and the declaration of my custom membership provider.
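A minimal app.config along these lines might look like the following sketch; the provider name, type, and assembly name are placeholders of mine, not the actual configuration from this project:

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <membership defaultProvider="CustomMembershipProvider">
      <providers>
        <!-- The assembly name after the comma in 'type' is what is
             easily forgotten and causes "provider not found" errors. -->
        <add name="CustomMembershipProvider"
             type="Foo.Security.CustomMembershipProvider, Foo.Security" />
      </providers>
    </membership>
  </system.web>
</configuration>
```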

Friday, April 03, 2009

Membership Provider Implementation using NHibernate

ASP.NET allows selectively replacing provider implementations with your own custom implementation, e.g. when you want to store membership information in a database other than aspnetdb.mdf. More details are provided on MSDN. Microsoft provides one such example implementation for ODBC in C# here. For VB.NET a sample membership provider implementation can be found here. The provider concept is essentially nothing more than Microsoft's flavor of service trays, so it is not a surprise that by default Microsoft has a preference for their own products, in particular Microsoft SQL Server. And as long as you don't have a good reason to get into the details of a custom implementation, it probably is a good choice to go with what comes out of the box. However, I (and my customers) would like a little more flexibility. Since I'm also experimenting with Fluent NHibernate, I am attempting to implement a custom membership provider for ASP.NET based on NHibernate. The idea is that I could use in-memory databases for testing and SQL Server or PostgreSQL for production without having to change a single line of code. All I would need to change is four lines in web.config. So goes the theory; let's see whether practice proves it right. So far the code looks much simpler than the ODBC sample implementation and yet is easier to read and understand. I'll keep you posted.

Saturday, March 28, 2009

csUnit 2.6 Released

csUnit 2.6 has been released and is available for download. More information is available here. The major points of interest are:
  • csUnit is now based on .NET 3.5 SP 1
  • Parameterized testing moved out of experimental
  • Basic support for Microsoft unit testing.
  • Several bug fixes.
csUnit 2.6 supports the following unit testing frameworks:
  • csUnit (no surprise!)
  • NUnit 2.4.7 (.NET 2.0)
  • Microsoft Unit Testing (basic support)
When executing MSFT-based tests, no files (except the XML results file where applicable) are generated. This means improved performance in some cases and reduced disk space requirements. Please note that csUnit does not ship with the framework assemblies for NUnit and MSFT Unit Testing. csUnit, however, does not require any of those assemblies to run. So if you use only the csUnit testing framework, you can safely ignore that csUnit supports other frameworks as well. There is increasing interest in a 64 bit version and we are looking into that option. We also have some more ideas with regards to integrating csUnit with other tools. In addition we are taking a very close look at the usability of the tool, since we feel that there are opportunities for improvement in this area as well.

Friday, March 20, 2009

"Failed to create a service in configuration file"

You may encounter this error message when you try to add a WCF-based service to your web service project. The additional text of this error message is: "A child element named 'service' with same key already exists at the same configuration scope. Collection elements must be unique within the same configuration scope (e.g. the same application.config file). Duplicate key value: 'FooProject.BarService'." The cause for this error message might be that you previously added a service with the same name to your project and then deleted it again. The remnants of that deleted service prevent the new service with the same name from being created. To resolve the problem do this:
  1. In the web.config file locate the <behavior> node within the <serviceBehaviors> node. For instance, in the above example locate the <behavior> node that describes the 'BarService'. Delete that node, but leave <serviceBehaviors> in the file.
  2. In the same file locate the <service> node whose 'behaviorConfiguration' attribute refers to the 'BarService'. This node also contains the endpoints for the service. Remove this node.
  3. Try again, and this time it should work.
This is not a major problem, but it shows that a wizard for adding a service isn't necessarily coupled with a corresponding wizard to remove the service in case you change your mind. Removing means deleting the BarService.svc and its implementation file, e.g. BarService.svc.cs. It doesn't, however, delete the related entries from the web.config file.
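For orientation, the leftover entries in web.config typically look like the following sketch; the behavior name, bindings, and contract name are illustrative, not taken from an actual project:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <!-- Step 1: remove this leftover behavior, keep <serviceBehaviors> -->
      <behavior name="FooProject.BarServiceBehavior">
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <services>
    <!-- Step 2: remove this leftover service entry with its endpoints -->
    <service behaviorConfiguration="FooProject.BarServiceBehavior"
             name="FooProject.BarService">
      <endpoint address="" binding="wsHttpBinding"
                contract="FooProject.IBarService" />
      <endpoint address="mex" binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
</system.serviceModel>
```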

Thursday, March 19, 2009

Updating Silverlight User Interface From Timer

Updating a Silverlight based user interface from a timer may not work as expected. The reason is that the timer thread is different from the user interface thread. Only the user interface thread can update a Silverlight based user interface. So for instance the following code will not work in a Silverlight application:
// _statusMessage is a System.Windows.Controls.TextBlock object
// _statusMessageTimer is a System.Threading.Timer object
private void DisplayStatusMessage(string message) {
  _statusMessage.Text = message;
  _statusMessageTimer = new Timer(ResetStatusMessageCallback,
                            /* params left out for brevity */);
}

private void ResetStatusMessageCallback(object stateInfo) {
  _statusMessage.Text = "";
}
When the timer fires, the callback is executed on a thread different from the user interface thread. The UI will not update. In my installation the Silverlight interface would then simply disappear! To fix this you need a way to execute the actual update on the user interface thread. One way is to use the Dispatcher object (namespace: System.Windows.Threading) of the XAML page. The code of the callback implementation then looks as follows:
private void ResetStatusMessageCallback(object stateInfo) {
  Dispatcher.BeginInvoke(() => {
                         _statusMessage.Text = "";
                         });
}
Another solution would be to use the DispatcherTimer. I'll spare you the details, but you can check here for an example.
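A sketch of the DispatcherTimer variant; the interval is an assumption of mine, and _statusMessage is the TextBlock from the example above:

```csharp
using System;
using System.Windows.Threading;

// DispatcherTimer raises its Tick event on the UI thread, so the
// TextBlock can be updated directly, without Dispatcher.BeginInvoke.
private void DisplayStatusMessage(string message) {
  _statusMessage.Text = message;
  var timer = new DispatcherTimer {
    Interval = TimeSpan.FromSeconds(5) // hypothetical delay
  };
  timer.Tick += (sender, args) => {
    _statusMessage.Text = "";
    timer.Stop(); // one-shot behavior: stop after the first tick
  };
  timer.Start();
}
```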

Monday, March 16, 2009

ClassInitialize Method Can Be 'static'

A few days ago I wrote about adding support for MS Unit Testing to csUnit (see here). Since I am also using ReSharper, it sometimes suggests declaring methods as 'static' when the code of the method doesn't access any instance members. So just now it happened that a method marked with the 'ClassInitializeAttribute' was declared static as well. When executing the set of tests in this class, the method with the 'ClassInitializeAttribute' wasn't executed at all because apparently csUnit wasn't picking it up. So I went and checked the csUnit code and found that the equivalent FixtureSetupAttribute was also found only on non-static methods. So at least that was consistent. Giving it a second thought, I decided that it definitely makes sense to allow the FixtureSetupAttribute (and consequently the 'ClassInitializeAttribute') on static methods as well. However, that isn't quite as straightforward the way the scanners in csUnit are implemented. So I'll have to do some refactoring before support for static fixture setup methods becomes available in trunk (let alone in a future release). That would bring the list of supported MS Unit Testing attributes to the following:
  • TestClassAttribute
  • TestMethodAttribute
  • ExpectedExceptionAttribute
  • ClassInitializeAttribute
Assertions are supported anyways. The support of MS Unit Testing in csUnit is making progress.
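For reference, this is what a class initialize method looks like in MS Unit Testing, where it must be static and take a TestContext parameter (class and method names here are made up):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FooTests {
  // Runs once before any test in this class; MS Unit Testing
  // requires this method to be static.
  [ClassInitialize]
  public static void InitializeFixture(TestContext context) {
    // one-time setup for all tests in this class...
  }

  [TestMethod]
  public void SomeTest() {
    Assert.IsTrue(true);
  }
}
```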

Sunday, March 15, 2009

Implicit Typing Can Make TDD Harder

Starting with C# 3.0, the language supports implicit typing at the method level using 'var'. While this new keyword has its benefits, such as the possibility to change the return type of a method without having to change the type of all local variables that the return value is assigned to, there is also a drawback to this new language feature. If you are using strict TDD, you write your tests to drive your design. That means it is not uncommon that a new method on a class doesn't exist yet, though you can write your test, including assertions on the return type. Example:
public class Result {
   public long Status { get; set; }
}

public class Foo {
}
Given this starting point you may want to write the following unit test to ensure that class Foo requires a method 'Result NewMethod()':
[TestMethod]
public void TestNewMethod() {
   var foo = new Foo();
   var result = foo.NewMethod();
   Assert.AreEqual(0, result.Status);
}
As you type you will notice that after entering the dot following 'result' you are not offered the member list of Result. The variable 'result' cannot be implicitly typed since the method 'NewMethod()' doesn't exist yet. As a consequence, writing tests in a TDD approach is slowed down when using 'var' instead of explicit types. Here is another view you may take: writing tests for 'NewMethod()' should include all specifications, including the type of the return value. If you agree with that view you may want to avoid using 'var' in your unit tests. This certainly doesn't apply to people who create their unit tests after the method has been added to the class. I personally wouldn't call that test-first development, let alone test-driven development (or test-driven design, as some people argue). Bottom line: it depends on where you are coming from. 'var' might not always be the best choice even if it is a new 'cool' feature in C# 3.0.
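For comparison, here is the same test with explicit types, plus just enough surrounding code to make it compile. The body of NewMethod() is assumed; the point is only the declaration of 'result':

```csharp
using System;

public class Result {
    public long Status { get; set; }
}

public class Foo {
    // Assumed implementation, added after the test drove its design.
    public Result NewMethod() {
        return new Result();
    }
}

static class Demo {
    public static long Run() {
        Foo foo = new Foo();
        // Explicit typing states the expected return type up front; with
        // 'var' this line could not be completed before NewMethod() exists.
        Result result = foo.NewMethod();
        return result.Status;
    }
}
```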

Saturday, March 14, 2009

Passing Serialized Exceptions as Service Faults

One way to pass errors from methods back to a caller is to use exceptions. Unfortunately that doesn't work for services, since a caller might be anything, and you shouldn't assume that the client understands the .NET platform (and in particular WCF). Therefore, in a service oriented world, operations return faults. When implementing a service with WCF you can use a FaultContract attribute on your service operation. In addition you can use the ExceptionShielding attribute on your service implementation. However, ExceptionShielding along with includeExceptionDetailInFaults in the service configuration covers unknown and unhandled exceptions only. Other exceptions are mapped to faults, and that's where your responsibility comes in. Whatever you return to a caller, provide as little information about what happened as possible. For instance, you may log an exception to a log file on the server hosting the service and attach a case id to it. Then return that case id as part of the service fault. To get more details about a fault, the case id can be used to locate the detailed information in the log file. One thing you definitely shouldn't do is pass the entire exception, including the call stack, in textual or serialized form to the caller. The reason is that an attacker might be able to use those details for future attacks; you don't want to present that information on a silver platter. So, for instance, you could use the following class for representing faults:
using System.Runtime.Serialization;

namespace AgileTraxx.Services {
 [DataContract]
 public class ServiceResult {
    public ServiceResult(string message, long incidentId) {
       Message = message;
       IncidentId = incidentId;
    }

    [DataMember]
    public string Message { get; set; }

    [DataMember]
    public long IncidentId { get; set; }
 }
}
This class just carries the incident id plus a message. The message could contain information about how to contact support and a reminder to note down the incident id. As you can see, there is a lot to consider when designing a service interface, including security-related factors.
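To illustrate the logging side, here is a sketch of how a service implementation might turn a caught exception into such a fault. The FaultFactory class and its names are mine, not part of WCF or the post's actual service code, and ServiceResult is repeated (minus the DataContract attributes) so the sketch is self-contained:

```csharp
using System;

// Repeated here without the DataContract attributes so the sketch compiles on its own.
public class ServiceResult {
    public ServiceResult(string message, long incidentId) {
        Message = message;
        IncidentId = incidentId;
    }
    public string Message { get; set; }
    public long IncidentId { get; set; }
}

public static class FaultFactory {
    private static long _nextIncidentId;

    // Log the full exception server-side, hand the caller only an incident
    // id and a generic message - no call stack, no exception type.
    public static ServiceResult FromException(Exception ex) {
        long id = ++_nextIncidentId;
        Console.Error.WriteLine("Incident {0}: {1}", id, ex); // server-side log only
        return new ServiceResult(
            "The operation failed. Please contact support quoting incident id " + id + ".",
            id);
    }
}
```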

Sunday, March 08, 2009

Service Reference for method returning void and out parameter

The title is quite lengthy, but I couldn't find a better one. In essence I'd like to describe a little catch you might experience when generating a service reference within Visual Studio 2008 (it might apply to other versions as well). Suppose you have a service implemented in WCF (Windows Communication Foundation). The service exposes an operation as follows: void UpdateItem(ItemData data, out ServiceFault fault); (Yes, I know that faults should be handled differently, but if you want to support Silverlight there are not too many alternatives at present, since we are running inside a browser. I wrote about this before, so I won't go into details here.) Note that the class ServiceFault is a very simple class; its details are not relevant here. The point I want to make is this: when you create a service reference to the service that provides the above operation, Visual Studio will generate the signature in the service client as follows: public ServiceFault UpdateItem(ItemData data); You will notice that the return value has changed from void to ServiceFault, and that the new operation takes only one parameter and has no out parameter. While it is generally probably a smart assumption that for a void operation the first (or only) out parameter is turned into the return value, you may not want this in all cases. For some users this behavior might even be surprising; one might argue it violates the principle of least surprise. In my particular case I wanted the signature to be consistent with the signatures of the other operations in the same service. So I changed the service interface to: long UpdateItem(ItemData data, out ServiceFault fault); The implementation always returns 0 as a result. And now, when I update the service reference, I get the expected matching signature generated, and all signatures for the operations within my service are consistent. On second thought, though, I might actually try a different approach. What if the return value becomes of type ServiceResult? And if I actually have to return some values, these can always become an out parameter. I'll give that thought a try and keep you posted.

Thursday, March 05, 2009

Domain Objects with Validation in Fluent NHibernate

Here is an issue that took me quite some time to resolve. I am experimenting with Fluent NHibernate. My starting point was that I wanted the code of my domain classes squeaky clean: not a single hint that they may become persistent. Why? I wanted the domain free of anything that has nothing to do with the domain. At the same time I wanted the domain model to contain the validation code. OK, I know the way I implemented validation is not necessarily in line with the usual approach in NHibernate. But let's have a look at my domain class:
   internal class WorkItem {
      public WorkItem () {
      }

      public virtual long Id {
         get {
            return _id;
         }
         set {
            _id = value;
         }
      }

      public virtual string Title {
         get {
            return _title;
         }
         set {
            _title = Validation.EnsureNonEmptyString(value, "Title");
         }
      }

      private long _id;
      private string _title = "";
   }
I left most of it out. For now let's look at just the id and the title, since those two demonstrate the issue sufficiently. What you will notice is that the setter for the title contains validation code ("Validation.EnsureNonEmptyString(...)"). The problem starts when you query for one or more WorkItems. NHibernate will then use the property setters to initialize the instance of WorkItem. For strings the default value is null (Nothing in VB.NET). With the given code, however, the validation will throw an exception, since that is what it is designed to do. It doesn't care whether the setter is called by NHibernate or by anything else. So next I tried to figure out what alternatives I had for validation, and I found NHibernate.Validator. Although a step in the right direction, I didn't like that the client code for the domain objects would have to call the validation explicitly. Alternatively the validation would have to be invoked via the event callbacks from NHibernate. In both cases the domain class would only work properly if something else collaborated. I didn't like that concept and started to look for an alternative. And there is a quite simple solution: change the configuration for NHibernate so that it doesn't use the properties to initialize the domain objects. This configuration change can be done via Fluent NHibernate as follows:
_hibernateConfig = new Configuration();
AutoPersistenceModel persistenceModel =
   AutoPersistenceModel
      // The generic type arguments were lost in the original formatting;
      // WorkItem is assumed here, and the base type in
      // ForTypesThatDeriveFrom is a placeholder for your own.
      .MapEntitiesFromAssemblyOf<WorkItem>()
      .Where(TypeIsIncluded)
      .ForTypesThatDeriveFrom<object>(
         map => map
            .DefaultAccess
            .AsCamelCaseField(Prefix.Underscore));
persistenceModel.Configure(_hibernateConfig);
Depending on your naming conventions you may want to use a different access strategy or a different Prefix value. In my case it was camel casing with an underscore as a prefix. After I finally found this solution I was able to keep the domain classes squeaky clean, stay with the Fluent NHibernate interface, and avoid exceptions during the initialization of domain class instances. Of course I'm not sure whether this is the only or best option. If you have other options that are even better, please drop me a note. I'm always keen to learn more!
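For reference, the Validation.EnsureNonEmptyString(...) helper called in the Title setter above is never shown in the post. This is a guess at a matching implementation, not the author's actual code:

```csharp
using System;

public static class Validation {
    // Returns the value unchanged if valid; throws otherwise, so a setter
    // can write: _title = Validation.EnsureNonEmptyString(value, "Title");
    public static string EnsureNonEmptyString(string value, string propertyName) {
        if (string.IsNullOrEmpty(value)) {
            throw new ArgumentException(propertyName + " must be a non-empty string.",
                                        propertyName);
        }
        return value;
    }
}
```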

Tuesday, March 03, 2009

Executing MS Unit Tests in csUnit

I have made a little progress on the MS Unit Test support in csUnit. So far I managed to create support for:
  • TestClass
  • TestMethod
  • ExpectedException
Admittedly, these are just the most basic ones but it is a start. Next, I'll have to do some refactoring to clean up the code. As of now, csUnit can execute tests implemented using the unit testing frameworks of either csUnit, NUnit, or MS Unit Test. (Note: This is in trunk as of now. No release is available yet.)

Monday, March 02, 2009

Another Tool for Silverlight Unit Testing

Just came across another unit testing tool for Silverlight. It's called SilverUnit. I haven't tried it out but I certainly will have a closer look. It will be interesting to see how this compares to Jeff Wilcox's approach and how it integrates with established unit testing tools. I'll keep you posted.

Sunday, March 01, 2009

csUnit migrated to .NET 3.5 and VS 2008

Finally I have found some time again to do a few things on csUnit. Actually the main driver was that I tried out the unit testing features that come out of the box with Visual Studio 2008, and I found them a little too cumbersome for my taste. I'm sure there are scenarios, teams, and people who are looking for exactly what VS's unit testing provides, including the ability to look at old test runs. But overall it felt a little too heavy. One example: a test fails. The results view lists all tests and you can click on the one that failed. But it doesn't bring you straight to the failed test, which was my expectation. Instead it brings you to a page with the result details of that test, and only there do you find a link to the actual implementation of the test. Conceptually that's probably what MSFT wanted. For me it felt like being slowed down. So now I've moved csUnit to .NET 3.5 and migrated the solution and all projects within it to VS 2008. And I'm looking into making it possible for csUnit to run tests implemented using MSFT's unit testing framework. Let's see how that goes. One difficulty I already discovered: counting assertions. I don't have a good solution for that yet, but if you do, please let me know!

Saturday, February 28, 2009

Designing Service Interfaces For Silverlight Clients

When designing a service interface based on WCF you might consider indicating service errors via service faults. By and large that might be a good choice, but in the case of Silverlight clients consuming the service you may want to read Eugene's blog first. Eugene describes in great detail the technical background for why a Silverlight client running in a web browser may not be able to see the fault with all its details. He also provides a few suggestions for how to get around that limitation - which is not Silverlight's fault! - including code examples. In some cases having a separate set of services for Silverlight clients might be an option worth exploring as well. That way you can give service clients that are not hampered by the browser's 'filtering' the best possible experience.

Saturday, February 14, 2009

Parser Error Message: Could not load type 'Global'

If you get an error as follows: "Parser Error Message: Could not load type 'Global'" then you may be able to fix this issue by doing the following:
  1. Open the file containing your class Global, typically located in the file Global.asax.cs.
  2. Note the namespace used in that file.
  3. Open the file Global.asax.
  4. Locate the line that contains the element "... Application Codebehind=...".
  5. In that line ensure that the attribute "Inherits=..." includes the namespace of your Global class, e.g. Inherits="MyWebSite.Global". Replace MyWebSite with the namespace noted in step 2.
  6. Recompile and redeploy.
The error should then be gone.
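For reference, the relevant line in Global.asax might look like this after the fix (the namespace MyWebSite is a placeholder):

```
<%@ Application Codebehind="Global.asax.cs" Inherits="MyWebSite.Global" Language="C#" %>
```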
Note that this post applies to VS2008 and ASP.NET on .NET 3.5. The web site in question hosted web services only. These steps may not help in other scenarios or for other root causes.

Tuesday, January 13, 2009

Unit Testing for Silverlight

Looking for instructions for Unit Testing for Silverlight 2? Jeff Wilcox posted an excellent tutorial in March 2008 here. Since then he's also updated his testing framework. In addition he has posted required changes to make the tutorial work with the final release of Silverlight 2. Although the latter post refers to RC0 of the testing framework the information still applies to the December 2008 binaries. The December 2008 release of the binaries of Jeff's unit testing framework can be downloaded from here. The project's homepage is here.

Tuesday, June 24, 2008

Watch Out: Window.ObjectKind is UPPER CASE!

Using Windows2.CreateToolWindow2() to create a Visual Studio addin tool window? If you do, then maybe you want to check how often it is called during the life cycle of your addin. If you notice that you create the window twice, you may want to avoid that by checking the collection EnvDTE80.Windows: if your tool window is already contained in it, you don't want to create a second instance. There is a possible surprise, though. Windows2.CreateToolWindow2() expects a guid as its fifth parameter. The online documentation (see here) calls it the "GuidPosition". This is a little misleading since it doesn't really refer to a position. In reality it identifies the type of the tool window, e.g. the Solution Explorer. Now, when you create the boilerplate code you may just use the example given there. If you do, change the content of the variable guidpos given in the code to all upper case. The important bit is the following change:
// before (as in the MSDN example; note the lower-case "fc"):
string guidpos = "{426E8D27-3D33-4fc8-B3E9-9883AADC679F}";
// after (all upper case):
string guidpos = "{426E8D27-3D33-4FC8-B3E9-9883AADC679F}";

The difference is in the segment "4fc8" versus "4FC8". Why is this change important? Assume you iterate over the collection as follows:
foreach( Window2 toolWin in toolWins ) {
   string toolWinKind = toolWin.ObjectKind;
   if( toolWinKind.Equals(guidpos) ) {
      _toolWindow = toolWin;
      break;
   }
}
The Equals() call may always return 'false' even if you think you are looking at the correct tool window. The reason is that ObjectKind returns the guid in all upper case, while the example code has two lower-case characters in the guidpos variable. The same can also happen when you use "Create GUID" from the "Tools" menu in Visual Studio: it may generate a guid with one or more lower-case digits ('a' through 'f') as well. It's unfortunate that the online documentation doesn't mention that somewhere between calling CreateToolWindow2() and reading ObjectKind everything is made upper case. The example code can therefore lead to incorrect behavior of your addin. This post gives you the heads-up. It took me quite some time to spot this little difference. In my case it was 'f' versus 'F', and only when Equals() insisted on returning false did I take a closer look. Maybe this helps you save some development time. And maybe someone from Microsoft is reading this. I tried to add it as Community Content at MSDN; it wasn't possible. So I rated the article and left a comment with the suggestions for improvement. This would have been ideal for "Community Content". MSFT, you mind updating the online material? Thank you!
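A way to sidestep the casing trap altogether is to compare the guids case-insensitively. A small sketch; the class and method names are mine:

```csharp
using System;

static class ToolWindowGuids {
    // ObjectKind returns the guid upper-cased, so an ordinal,
    // case-insensitive comparison is immune to how the source
    // guid happened to be written.
    public static bool SameKind(string objectKind, string guid) {
        return string.Equals(objectKind, guid, StringComparison.OrdinalIgnoreCase);
    }
}
```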

Thursday, June 19, 2008

csUnit: What's Next?

csUnit 2.5 is very well received, but since we are always trying to find improvements I'd like to share a couple of items we plan for the next version. For one, performance has slightly slipped. Some new features resulted in a runtime penalty which we believe has become too high. So we did some performance analysis and have found ways to make the next version about 10% to 30% faster than version 2.5. Both the command line version and the GUI version benefit from this improvement. The other area we didn't like too much was the slightly cluttered user interface on the test hierarchy tab, which is the most frequently used view in the tool. In addition, the search feature on the test hierarchy tab works for the test hierarchy only. This is a limitation. So we decided to revisit the search feature. The next version will therefore have a new search feature that works across all tabs. At the same time we were able to remove the related buttons from the test hierarchy page and replace them with a single button in the already existing toolbar. This freed up screen real estate for the important information and also simplified the appearance. These are just two of the improvements you will see in the upcoming version 2.6 of csUnit, which we are planning for August. Stay posted. If you'd like to participate in determining the future of csUnit, please don't hesitate to contact me or anybody else on the csUnit team.

Saturday, June 14, 2008

Vista: Finding Encrypted Files

To find encrypted files on Windows Vista do this:
  1. Open a command prompt and switch to a directory in which you have write permissions.
  2. Run the command "cipher /s:c:\ /N /U > filelist.txt" and wait until it finishes (this example searches the entire volume c:).
  3. When finished, open the file filelist.txt. It contains a list of all files that are encrypted.
  4. In an Explorer window navigate to each file and through "Properties" -> "Advanced..." go to the "Advanced Attributes" page and remove the checkmark from "Encrypt contents to secure data".

Tuesday, June 10, 2008

Addin Command Names

What is the name of a command when you register it for an addin? Typically you use Commands.AddNamedCommand() to register a command. The second parameter is the command name. Now, when you want to look up a command you can use _DTE.Commands with the name you used for AddNamedCommand() and you're done. Right? Wrong! Let's take the csUnit addin for Visual Studio as an example. The addin has the name csUnit.csUnit4VS2005.Connect. This is the class that implements the interface Extensibility.IDTExtensibility2, which includes members such as OnConnection(), OnDisconnection(), etc. This full name - 'csUnit.csUnit4VS2005.Connect' - is also used in the addin file (see node Extensibility / Addin / FullClassName). What VS 2005 and VS 2008 do is this: when you register a command, the name of the command is automatically prefixed with the name of the addin. So in this case a csUnit command named Foo would turn into csUnit.csUnit4VS2005.Connect.Foo. Be aware of this. Otherwise your code may not be able to find your command in _DTE.Commands.
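The prefixing rule itself is simple enough to state in code (a sketch; the names are taken from the example above):

```csharp
using System;

static class AddinCommands {
    // VS 2005/2008 prefixes a registered command's name with the full
    // name of the addin's Connect class.
    public static string RegisteredName(string connectClassFullName,
                                        string commandName) {
        return connectClassFullName + "." + commandName;
    }
}
```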

Saturday, May 31, 2008

Custom Installer Creating Folder, Setting Permissions

When implementing custom installers you sometimes want to create directories while the setup executes. Creating them may not be enough, though: you may also want to set sufficient permissions on those new directories so that users have access to them and to the files in them.

The first thing to know is that an installer typically runs with elevated rights, as the discussion on Chris Jackson's blog indicates. This means that your installer has sufficient rights not only to create directories but also to set appropriate permissions.

And here is the code for creating a directory and setting permissions:

// Requires: System.IO, System.Security.AccessControl,
//           System.Security.Principal, System.Diagnostics

public static void CreateWithReadAccess(string targetDirectory) {
   try {
      if( !Directory.Exists(targetDirectory) ) {
         Directory.CreateDirectory(targetDirectory);
      }
      DirectoryInfo info = new DirectoryInfo(targetDirectory);
      SecurityIdentifier allUsersSid =
         new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
      DirectorySecurity security = info.GetAccessControl();
      // Note: to propagate the rule to files created inside the directory,
      // use the FileSystemAccessRule overload that takes InheritanceFlags
      // (ContainerInherit | ObjectInherit).
      security.AddAccessRule(
         new FileSystemAccessRule(allUsersSid,
                                  FileSystemRights.Read,
                                  AccessControlType.Allow));
      info.SetAccessControl(security);
   }
   catch( Exception ex ) {
      Debug.WriteLine(ex.ToString());
   }
}

This code checks for existence of the directory first. If the directory doesn't exist yet it is created. Then the security settings are applied. In this case the Read permissions are granted to all members of the group BUILTIN\Users.

By selecting another member of the WellKnownSidType enumeration you can grant permissions to a different group. Alternatively, if you'd like to grant permissions to a specific user, have a look at the NTAccount class. An instance of it can be passed into the FileSystemAccessRule constructor as a first parameter as well.

Tuesday, May 27, 2008

ReSharper 3.1 and Empty Constructor

ReSharper has this nice feature of making you aware of code that is not strictly required. I like this feature since it reduces the number of characters I have to read; as a consequence I can process more of them at a time and work faster. However, I just came across a case where this feature doesn't work as I hoped. I have a class that is publicly visible from outside the assembly. For the constructor I have added a number of XML documentation tags. The constructor itself doesn't take parameters and doesn't do anything, but I want to keep it for the XML documentation. Since the compiler automatically generates the same constructor, ReSharper emits a warning and suggests removing the constructor. If I followed that suggestion it would also remove the XML documentation along with it. Alternatively ReSharper offers to suppress the warning, so I did that. ReSharper is now happy. But my compiler isn't, since it sees the following (the pragma was added by ReSharper):
#pragma warning disable EmptyConstructor
The compiler now complains about 'EmptyConstructor' not being a valid warning number. Fair enough, it is not. So I removed the pragma, since any other pragma would get ReSharper complaining again. To keep ReSharper quiet the constructor now looks as follows:
public class Foo {
  /// ... the XML markup for documentation
  public Foo() {
    ; // to shut up ReSharper
  }
}
Now both the compiler and ReSharper are happy! :-) No big issue, but the better solution would have been for ReSharper to modify the code only in ways that don't cause additional warnings. Update: This applies to essentially all such pragmas ReSharper inserts, including but not limited to:
  • UnusedMemberInPrivateClass
  • PossibleNullReferenceException

As stated before: Best option would be if the code inserted by ReSharper would not cause additional warnings from the C# compiler. Or in other words: The added code should be "compatible" with the compiler. One option could be to use comments instead. The compiler ignores them.

The C# preprocessor is not as powerful and flexible as the C/C++ one. The latter allows your own extensions by simply ignoring unknown directives. The C# version doesn't have that "back door" but checks pragmas for validity as well. In essence you can't define your own. (Or at least you shouldn't if you don't want to cause compiler warnings.)
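For comparison, the pragma form the C# compiler actually accepts takes numeric warning codes. Warning 1591 ("missing XML comment for publicly visible type or member") is used here for illustration:

```csharp
// The C# compiler only recognizes numeric warning codes in #pragma.
#pragma warning disable 1591 // CS1591: missing XML comment
public class Documented {
    public void UndocumentedMember() { }
}
#pragma warning restore 1591
```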

Thursday, May 15, 2008

Outlining/collapsing Comments in VS2005/VS2008

A minor issue, but still it might bother you while working with your code. The C# editor (and possibly others) of Visual Studio offers collapsing and expanding of classes, namespaces, methods, blocks, etc. And comments. Most of the time this works as expected. But then it doesn't work for comments at the beginning of a file. There it works often, and sometimes it doesn't. I can't reproduce it, but it usually doesn't work when I want to collapse the comment, and it works when I don't need to collapse it. Why is this annoying? Well, at the beginning of files I add my company's copyright statement by default. It has about 10 lines. Not much, but if you consider that I can generally see about 30 lines on my laptop display, it means that 33% of the real estate on the screen is consumed by a piece of text that has no value to me while editing code (rather than comments). You still think 10 lines is not much? If you really want to optimize the use of your time you are happy about every single contribution. A mouse click here, a key press there, some scrolling, waiting time, etc. - it all adds up. I don't know whether this exists in other versions as well; I have observed the issue in both Visual Studio 2005 and Visual Studio 2008. It's not really an issue; more like a nuisance. So this is another item I'd like to ask to get fixed, just in case someone from Redmond is reading this. The workaround I'm using is to surround those comments with #region/#endregion. This can be reliably collapsed and expanded even at the beginning of a source file.
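The workaround looks like this (the copyright text is a placeholder):

```csharp
#region Copyright notice
// Copyright (c) Example Corp. All rights reserved.
// ... about ten more lines of boilerplate ...
#endregion

public class SomeClass {
}
```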

Tuesday, May 13, 2008

SerializationException - "The constructor to deserialize an object of type 'foo' was not found."

Some classes need to be serializable. Even if you mark a class with the SerializableAttribute, you may run into a SerializationException being thrown. To fix it you need to put the following things in place. As a running example I'll use a class named Foo. The items you need to put in place are:
  1. Add the SerializableAttribute to the class.
  2. Have the class implement the ISerializable interface.
  3. Request sufficient privileges for your GetObjectData() implementation.

In essence you are rolling custom serialization. And here is how Foo would look with all these in place:

using System.Runtime.Serialization;
using System.Security.Permissions;
...
[Serializable]
public class Foo : ISerializable {
  protected Foo(SerializationInfo info,
                StreamingContext context) {
    _barString = info.GetString("_barString");
    _barInteger = info.GetInt32("_barInteger");
  }

  [SecurityPermissionAttribute(
                SecurityAction.Demand,
                SerializationFormatter = true)]
  public virtual void GetObjectData(SerializationInfo info,
                                    StreamingContext context) {
    info.AddValue("_barString", _barString);
    info.AddValue("_barInteger", _barInteger);
  }
  private string _barString;
  private int _barInteger;
}
Note that the constructor is protected. This way it cannot be called except by the .NET serialization infrastructure and by code in subclasses. If your class is derived from a class that implements ISerializable, then you need to call the base class in both the constructor and the GetObjectData() implementation, as follows:
[Serializable]
public class Foo : Base { // Base implements ISerializable
  protected Foo(SerializationInfo info,
                StreamingContext context)
    : base(info, context) {
    // your deserialization code
  }

  [SecurityPermissionAttribute(
                SecurityAction.Demand,
                SerializationFormatter = true)]
  public virtual void GetObjectData(SerializationInfo info,
                                    StreamingContext context) {
    base.GetObjectData(info, context);
    // your serialization code
  }
}
For more information please check MSFT's web site here.
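To see the two pieces work together without involving a formatter, one can drive the protected constructor and GetObjectData() by hand. The FooProxy subclass and the helper are mine, purely for demonstration; in production the runtime's serialization infrastructure plays this role:

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Foo : ISerializable {
    private string _barString;
    private int _barInteger;

    public Foo(string barString, int barInteger) {
        _barString = barString;
        _barInteger = barInteger;
    }

    // Deserialization constructor, as in the post.
    protected Foo(SerializationInfo info, StreamingContext context) {
        _barString = info.GetString("_barString");
        _barInteger = info.GetInt32("_barInteger");
    }

    public virtual void GetObjectData(SerializationInfo info, StreamingContext context) {
        info.AddValue("_barString", _barString);
        info.AddValue("_barInteger", _barInteger);
    }

    public string BarString { get { return _barString; } }
    public int BarInteger { get { return _barInteger; } }
}

// Demo-only subclass: exposes the protected deserialization constructor.
public class FooProxy : Foo {
    public FooProxy(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}

public static class SerializationDemo {
    // Serializes into a SerializationInfo and reconstructs from it,
    // mimicking what a formatter does behind the scenes.
    public static Foo RoundTrip(Foo original) {
        var info = new SerializationInfo(typeof(Foo), new FormatterConverter());
        var context = new StreamingContext(StreamingContextStates.All);
        original.GetObjectData(info, context);
        return new FooProxy(info, context);
    }
}
```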

Sunday, May 11, 2008

Weakness in VS Debugger

This time I'd like to make you aware of a wee weakness of the Visual Studio debugger. Suppose you have overridden the Equals() method for one of your classes, and it looks similar to this:
public class ProjectElement {
  ...
  public override bool Equals(object obj) {
    if( obj != null
      && obj.GetType().Equals(GetType())) {
      ProjectElement otherObject = 
                          (ProjectElement)obj;
      return _assemblyPathName.Equals(
                otherObject._assemblyPathName);
    }
    return false; // missing in the original; without it the method doesn't compile
  }
  ...
}
During a debugging session you can use QuickWatch as a convenient way to examine variables. However, be aware of the following: when you use QuickWatch on '_assemblyPathName' in the above example, QuickWatch does not consider the context. Even if you hover over the '_assemblyPathName' part of the expression 'otherObject._assemblyPathName', QuickWatch will just use the member variable of 'this', which will not in all cases have the same value. If you want to be on the safe side, select the entire expression you would like to inspect, e.g. all of 'otherObject._assemblyPathName', and then select QuickWatch. I think this is a shortcoming of the QuickWatch feature, since it displays the content of a variable you didn't want to look at, and you might not even notice that you are looking at something different with a similar name. If one of you Microsofties is reading this, maybe you could consider this as an improvement for the next release of Visual Studio? Thank you!

Thursday, May 08, 2008

Implementing Properties: Basic Considerations

Not too long ago I wrote about property getters with bad manners (here). I suggested ways of doing better, but I would like to share more of how I generally implement properties in C# (also applicable to other .NET languages). I'm not claiming that my style is the only one or the best one. However, it works well for me, and it is based on 7 years of .NET development experience and over 15 years of experience with C++. Where to start? For this post I would like to focus on very simple things. Suppose you have a class Foo that has a field named _bar (note that I prefix all fields with an underscore; not necessarily what MS recommends, but I think it improves the readability of the code). The code would look as follows:
public class Foo {
   private string _bar;
}
Now we want to add an accessor aka the getter:
public class Foo {
   public string Bar {
      get {
         return _bar;
      }
   }
   private string _bar;
}
This has been the easy part. There is not really a lot that can go wrong.

It starts to become more interesting when you add a modifier, aka a setter. In its simplest form you could write:

public class Foo {
   public string Bar {
      get {
         return _bar;
      }
      set {
         _bar = value;
      }
   }
   private string _bar;
}
Easy, you think. But hang on, there is more to it. You may want to decide whether it is acceptable to pass in a null reference or not. Whether you allow null is a decision that should be based on a number of factors. One way to find out is to look at all the places in your code where the getter is used. What would happen to that code if null were returned? For example:
Foo foo = new Foo();
...
if( foo.Bar.Length > 25 ) {
   ...
}
...
In that case if null was returned this piece of code would crash. You could fix this issue by checking for nullness:
Foo foo = new Foo();
...
if(   foo.Bar != null
   && foo.Bar.Length > 25 ) {
   ...
}
...
This certainly works, but you pay the price of slightly less readable code. In addition you may have to repeat this check in a lot of places. So you may decide that the property Foo.Bar doesn't allow null values at all. The code for class Foo would then look like this:
public class Foo {
   public string Bar {
      get {
         return _bar;
      }
      set {
         if( value != null ) {
            _bar = value;
         }
      }
   }
   private string _bar = "";
}
This provides the benefit of Foo.Bar never being null, provided that _bar is initialized with string.Empty (or "") upon construction.

But again this comes at a price: the setter simply swallows the attempt to set Foo.Bar to null. This might be desirable. Personally, I prefer that a class doesn't silently swallow incorrect input but instead fails fast. In this particular case I would want my code to indicate the error by throwing an exception:

public class Foo {
   public string Bar {
      get {
         return _bar;
      }
      set {
         if( value != null ) {
            _bar = value;
         }
         else {
            throw new ArgumentNullException("value");
         }
      }
   }
   private string _bar = string.Empty;
}
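To illustrate the fail-fast behavior, here is a small usage sketch (the surrounding code is illustrative, not part of the class above):

```csharp
Foo foo = new Foo();
try {
   foo.Bar = null;  // violates the setter's contract
}
catch( ArgumentNullException ex ) {
   // Fail fast: the incorrect assignment is reported at the point of
   // the mistake, instead of surfacing much later as a
   // NullReferenceException somewhere in unrelated code.
   Console.WriteLine("Rejected parameter: " + ex.ParamName);
}
```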
You see that although this is a simple property implementation, it already requires quite a few decisions to be made and aspects to be considered.

To close off this particular post, I'd like to also bring performance considerations into the picture. What if the setter needs to validate any new value against a remote system such as a service? Let's look at the possible code:

public class Foo {
   public string Bar {
      get {
         return _bar;
      }
      set {
         if( value != null ) {
            if( _validationService.IsPermitted(value) ) {
               _bar = value;
            }
            else {
               throw new
                  ArgumentOutOfRangeException("value");
            }
         }
         else {
            throw new ArgumentNullException("value");
         }
      }
   }
   private string _bar = string.Empty;
   private ValidationService _validationService = 
                                 new ValidationService(...);
}
Calling IsPermitted() can be quite expensive. So how to avoid this? Here is one possible solution:
public class Foo {
  public string Bar {
    get {
      return _bar;
    }
    set {
      if( _bar != value ) {
        if( value != null ) {
          if( _validationService.IsPermitted(value) ) {
            _bar = value;
          }
          else {
            throw new
                    ArgumentOutOfRangeException("value");
          }
        }
        else {
          throw new ArgumentNullException("value");
        }
      }
    }
  }
  private string _bar = string.Empty;
  private ValidationService _validationService = 
                                 new ValidationService(...);
}
With this implementation the validation service is called only if the value has actually changed. Of course, if the set of permitted values is dynamic, this implementation would not make the cut. With this post I want to demonstrate that even a property that looks like an easy thing to implement already requires a lot of consideration. We even touched on performance briefly. It is important to be aware of all of these aspects when implementing and testing such a property. There are more aspects to this, but I think I've made my point: even with simple things like properties, there are quite a few aspects to consider.

Sunday, April 27, 2008

Codeplex, IE, and Firefox

Ok, maybe my system was completely screwed up and that's why it didn't work. But to the best of my knowledge I have a Vista Ultimate installation including IE 7, maintained by Microsoft Update, with the latest and greatest patches installed. Today I went to codeplex.com - Microsoft's answer/contribution to the open-source community - and tried to download a few Sandcastle bits. I used Internet Explorer since I thought "It's a Microsoft site, so the Microsoft browser should be fine." Well, it wasn't. When I clicked one of the two download links for Sandcastle I was presented with an overlay window (no pop-up) containing some licensing information. Before I was able to click the "Accept" button the overlay disappeared, and .... nothing happened. I restarted IE and tried multiple times; eventually I got fast enough that I could actually click the "Accept" button. Still it didn't work. Despite my recent experience with a different Microsoft site that serves IE only, I thought it was at least worth a try to use Firefox. From there it was a breeze. Everything was straightforward and worked flawlessly. Which leaves me wondering whether the guys who maintain the Codeplex site test mainly on Firefox.... (Onto my soap box: Microsoft - in my opinion - has started to move in the right direction. At the same time I believe it still has a long way to go until it competes on merits and great products and services only.)

Saturday, April 26, 2008

Accessing Installation Properties in Custom Actions

Assume you have a custom action for your installer and you'd like to access a parameter (call it a variable or property) from within your custom action. That property might have been set by another part of the installer, e.g. a file search, or it could be one of the properties that are standard during installation, e.g. "TargetDir". Let's take TargetDir as an example. When you work on your setup project within Visual Studio (2005) it offers a number of macros for properties. E.g. when you want to specify the location for a folder that you'd like to create during install, you can use [PersonalFolder] and others to specify the actual location. In order to pass such a variable to your custom action, simply open the Custom Actions editor for your setup project, select the custom action, and then add the parameter in the "CustomActionData" property using the following syntax:
/TargetDir="[TARGETDIR]\"
Please note the trailing backslash, which you only need when you surround the property value in double quotes. However, as a safety measure I suggest making it a habit to always add both the double quotes and the backslash. Remember, paths can and will contain spaces! In some examples on the internet the backslash may be mentioned in the text but not included in the code sample. If you leave the backslash out you'll see a message box containing "error code 2869". This article makes it clear that you must add the trailing backslash if you use the double quotes. Also, if you want to pass more than one property, separate them by a single space, e.g.
/TargetDir="[TARGETDIR]\" /UserDir="[PersonalFolder]\"
Then in your custom action implementation - a class derived from System.Configuration.Install.Installer - you can access it in the following way (C# here, but similar in other .NET languages):
string targetDir = Context.Parameters["TargetDir"];
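Putting the pieces together, a minimal custom action class might look like this (the class name and what you do with the value are illustrative):

```csharp
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

[RunInstaller(true)]
public class MyCustomAction : Installer {
   public override void Install(IDictionary stateSaver) {
      base.Install(stateSaver);

      // "TargetDir" matches the name used in CustomActionData:
      //    /TargetDir="[TARGETDIR]\"
      string targetDir = Context.Parameters["TargetDir"];

      // ... use targetDir, e.g. to write a configuration file
      //     into the installation folder ...
   }
}
```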
That's all.

Two More Tips for Debugging Custom Actions

In my previous post I mentioned one of the options you have to ensure sufficient rights to attach to a process. Here I'd like to share two more suggestions for how you can debug custom actions in your setup project. At various places you will find the tip to use MessageBox.Show() to stop execution of your custom action; when the message box is displayed you can attach the debugger. When I tried this, the list of processes showed me three instances of msiexec.exe and I couldn't identify which one I should connect to. So I decided to select all of them and attach to all of them. Then I set a break point in a line after the message box and hit F5 to continue execution. Result: setup continued happily and the break point was ignored. So this didn't help, but here are two simple suggestions that should make your life easier: Tip 1: For the message box make sure you give it a title or caption. For instance, if you use MessageBox.Show(), use the overload that takes two strings as parameters. The first is the message in the message box. The second parameter is the caption. That's the important one, since this is the string that you will find in the list of processes when you want to identify the right process. Tip 2: In the dialog box for attaching the debugger to a process make sure you tick "Show processes from all users", as it might be that the msiexec.exe instance you are interested in is not displayed.
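As a sketch of tip 1, the two-string overload looks like this (the message and caption strings are of course your choice):

```csharp
// The second argument becomes the window caption. That caption is what
// you can search for when identifying the right msiexec.exe instance
// in the debugger's "Attach to Process" dialog.
System.Windows.Forms.MessageBox.Show(
   "Attach the debugger now, then click OK.",   // message
   "MyCustomAction - waiting for debugger");    // caption
```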

Visual Studio: Access denied when attaching debugger to process

The other day, when I was working with Visual Studio 2005 on Vista, I encountered the problem that I couldn't attach the debugger to some of the processes that I wanted to run in the debugger. The solution that worked for me was pretty simple: I closed Visual Studio and then selected "Run as administrator" from the context menu of the Visual Studio shortcut. This is certainly not the recommended way of bypassing security, but on the other hand it may come in handy when needed. The prerequisite is that you have access to an account with administrative rights. If you don't, you may need to have a nice chat with your system administrator...

Thursday, April 17, 2008

MySQL closed-source: What's the alternative?

An anonymous writer posted on Slashdot that Sun Microsystems has announced that parts of MySQL will be closed-source, e.g. backup solutions and some more advanced features. The post refers to Jeremy Cole's blog entry on the subject. The decision is certainly Sun Microsystems' business, and I'm sure it makes a lot of sense from their executive view. However, the users have an opportunity to help Sun Microsystems understand the consequences of the decision. For instance, if you want to continue using an open-source database, PostgreSQL might be an alternative worth looking at. If you develop for the Microsoft platform, it offers native interfaces for C, C++, and .NET, and even the "good old" (?) ODBC is available.

Wednesday, April 16, 2008

ReSharper 3.1: More Details

I promised to provide more details about my ReSharper experience as I go. So here is an item that I perceive as a nuisance. Sometimes, when you type in code, it displays information above the line of code and parameter info below it. As a result, in some cases almost all of the code is covered by those nice pop-ups. That can be very challenging at times. What I definitely like are the light bulbs ReSharper shows in the left margin. Their color indicates whether a suggestion is more serious (red) or more optional (yellow). When you click on a light bulb, a menu with useful next actions is displayed, e.g. "Optimize Usings", "Qualifier is redundant", or "Make readonly". Although the fun stops after a while because you have used up all the light bulbs in the code base, it definitely helps improve the code, in particular making it more readable by removing all the "noise" from the stream of information that hits your eyes. Well done! I do have the impression, though, that Visual Studio responds slightly slower. This is not really surprising given the functionality ReSharper provides and given that a lot of elements need to be updated as you edit your code. I think, though, that the performance hit is a good investment since you get improved code in return. Please be aware that this applies to version 3.1; JetBrains may release newer versions that behave differently. Note: I paid in full for my license, and I have no financial or other interest in JetBrains, the company behind ReSharper.

Tuesday, April 15, 2008

What's next for csUnit?

Since the March '08 release of csUnit we have thought about which area we should focus on next. We have improved the performance, we have added data-driven testing, and there have always been a few features that csUnit provided first. The next area we are looking at is usability, which we'd like to improve significantly. For instance, as test suites grow it becomes more and more time consuming to locate a particular test or test fixture. The next version of csUnit will therefore feature an easy-to-use search capability, available in both the stand-alone GUI and the add-in for Visual Studio. We are also putting quite some effort into improving the test tree view. We believe that all users will benefit from improvements in this area. And finally, csUnit has become very feature rich over time, and some people prefer a simpler UI. We want to serve both groups of users: the ones who prefer the rich functionality and the ones who prefer to keep it simple. The idea is to add some configurability to the UI so that, depending on your preference, more or fewer elements become visible. Stay tuned! And if there is a feature you'd like to see, don't hesitate to visit csUnit's web site and follow the link to "Suggest a feature" that you can find on most pages.

ReSharper 3.1 in Visual Studio

For quite some time I have been using ReSharper now, currently version 3.1. Bottom line: it helps create better code and it has features beyond Visual Studio's capabilities. This is mostly good but has some side effects, too. I like, for instance, all the suggestions with regard to improving the code. It's a bit odd, though, when you insert some code, e.g. a declaration of a variable of a type that is part of System, and ReSharper adds the fully qualified name including the namespace. A few seconds later it may suggest removing the namespace qualifier it thinks is unnecessary. Feels a bit like the fireman who wanted to be a hero, so he set a house on fire in order to be called to fight the fire.... One thing that can be challenging is the formatting features. ReSharper may decide to format items differently than Visual Studio, so depending on whether you have the settings for the two in sync, you may end up with code that looks the same or that looks inconsistent. It certainly would be great if ReSharper did some form of synchronization. There are other minor subtleties about which I'll blog as I encounter them. But by and large the product is a good complement to Visual Studio's feature set. Note: I paid in full for my license, and I have no financial or other interest in JetBrains, the company behind ReSharper.

Tuesday, April 08, 2008

ControlPaint and Vista

What's ControlPaint for? Well, you can use it for different purposes, e.g. for printing a form including the controls on it, properly rendered. Another reason to use ControlPaint is when you want to customize controls that support owner-draw, e.g. the TreeView control. Be aware, however, that ControlPaint doesn't support visual styles. For instance, if you use ControlPaint.DrawCheckBox you will notice that under Vista it renders the checkboxes with the XP style. That's not what you want. In those cases where you want support for visual styles, use classes such as CheckBoxRenderer and similar classes in System.Windows.Forms. So for the checkbox you would use CheckBoxRenderer.DrawCheckBox, which does support visual styles. Your checkboxes will then look as expected on Vista as well. I have tested this, and it works on Vista with .NET 2.0. Don't rely solely on the online information: give it a try on your development and your target platform, and on the .NET version you are using. The described technique should be available in all versions between 2.0 and 3.5, although the online documentation is contradictory on this. It doesn't seem to be supported on the Compact Framework. But who runs Vista on his portable device....
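A minimal sketch of the visual-styles-aware approach (the control class, position, and check box state chosen here are just examples):

```csharp
using System.Drawing;
using System.Windows.Forms;
using System.Windows.Forms.VisualStyles;

public class MyOwnerDrawnControl : Control {
   protected override void OnPaint(PaintEventArgs e) {
      base.OnPaint(e);
      // CheckBoxRenderer honors visual styles, so on Vista this draws
      // the themed check box. ControlPaint.DrawCheckBox in the same
      // place would fall back to the classic XP look.
      CheckBoxRenderer.DrawCheckBox(e.Graphics,
         new Point(4, 4), CheckBoxState.CheckedNormal);
   }
}
```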

Wednesday, April 02, 2008

Property Getters with Bad Manners

In the sources of a 3rd party library that I'm using I found the following code:
public static int Counter {
   get {
      int cnt = counter;
      counter = 0;
      return cnt;
   }
}
Ok, the documentation says that calling this getter will reset the counter. Still, in my opinion this is bad coding practice: getters shouldn't modify the object. One reason is that some IDEs use getters to display object data in debugging sessions. As a consequence, merely looking at this member in the debugger influences the outcome of the debugging session. I call these bad manners of that getter. The better approach would have been a method with a more intuitive name, for example "ReadAndResetCounter()". That would a) avoid accidental modification during debugging sessions, b) follow the good coding practice of getters being non-modifying accessors, and c) express the actual functionality of the code in a more understandable way. Some recommendations as a take-away:
  • Don't implement getters that modify the object
  • Instead use a method with a descriptive name

That way you will make it easier for other engineers to use your code and/or library.
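A sketch of the recommended alternative, keeping the field name and static modifiers from the snippet above:

```csharp
private static int counter;

// Read-only getter: inspecting it in a debugger has no side effects.
public static int Counter {
   get { return counter; }
}

// The modifying behavior moves into a method whose name
// says exactly what it does.
public static int ReadAndResetCounter() {
   int cnt = counter;
   counter = 0;
   return cnt;
}
```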

(Now I'm wondering how long it will take until Charlie reads this. I'm sure he will agree with me!)

Monday, March 31, 2008

"Location is not available" on Vista

Ok, this is not exactly a .NET issue but I still thought it is worth mentioning it here since I assume there is a sufficient number of people currently using Vista for development who may have encountered this issue. You may have experienced one or more of the following:
  • "Preparing your desktop..." for an extended period of time
  • "Location is not available" after you log in
  • Desktop looks more like XP and has lost the Aero look-and-feel

I don't know what the reason was. Maybe it was related to my Vodem not surviving stand-by and/or hibernate. Maybe it was because Vista decided to run some lengthy checks, like CHKDSK. The event logs don't give a clue.

Resolution: Reboot another time.

Monday, March 24, 2008

csUnit 2.3 available

Just in case you are a regular reader of this blog: csUnit 2.3 has just been made available. With this release we have focused on improving the quality by fixing a few defects. On behalf of the csUnit team: Happy testing!

unresolved external symbol ?.cctor@@$$FYMXXZ

This error is usually related to upgrading a Managed C++ project from Visual Studio 2003 to Visual Studio 2005. It is easy to resolve by removing a few options from both the compiler and the linker and then recompiling the project. For the compiler, remove the /ZI flag (not to be confused with /Zi): in the project property pages choose "Configuration Properties", then "C/C++", then "Command Line". In the bottom part of the page you'll see "Additional Options"; remove /ZI from there if present. Next, go to "Linker", then "Command Line", and remove /NOENTRY and /NODEFAULTLIB from there if present. In all cases make sure you do this for all configurations, not just the debug or the release configuration. More details about the context and background are available here.

Monday, March 17, 2008

csUnit, Visual Studio 2008, Vista, .NET 3.x

Maybe you have wondered whether csUnit is supported on Vista and/or Visual Studio 2008. In this post I'd like to give a few details for csUnit 2.2. Microsoft Windows Vista is not an issue: csUnit installs and runs just fine. The same is true for Visual Studio 2005 on Vista; the add-in registers properly and the context menus work, too. Visual Studio 2008 is a different story: the add-in doesn't install and neither do the context menus. This is an item that we'll address in one of the next csUnit releases. If you are building for .NET 2.0 in VS 2008 you can still use csUnitRunner as a stand-alone application and run all tests. This mode works on both XP and Vista. If you are building an application for .NET 3.0 or .NET 3.5, that's fine, too. Simply reference the csUnit 2.2 runtime assemblies as before and run your tests within csUnitRunner (or the add-in in VS 2005, if you managed to modify VS 2005 to use .NET 3.x). If you encounter an issue with any of the above combinations, or if you find a combination that doesn't work, we would be very interested to hear about it. Please log a bug report at csUnit's bug tracker at SourceForge and include as many details about your specific configuration as possible. Happy testing! (Note: All of the tests mentioned above were conducted using the English language versions of the mentioned software components, all of them at the latest patch level as of the date of publication.)

Thursday, March 13, 2008

csUnit Sources About To Be Moved to Subversion

Since the beginning, csUnit has used SourceForge's CVS repository. In the meantime Subversion has become very stable and provides a number of benefits over CVS. As a consequence we will stop using the CVS repository. All source code previously committed to CVS will continue to be available there, including via the internet and ViewVC. As of the conversion date all new commits will go into the Subversion repository. We have no plans to migrate old commits to the new repository. Details for accessing the new repository are available here.

Visual Studio: "Unable to find manifest signing certificate in the certificate store"

I just moved a Visual Studio project from one computer to a different one. When I then tried to rebuild the solution I received the following error:
"Unable to find manifest signing certificate in the certificate store"
As I was sure that I wasn't using any certificate to sign the assembly, I couldn't understand the reason for this error message, and the integrated help system for Visual Studio wasn't a big help either. It turned out that I had to manually go into the *.csproj file and remove the following four lines, which were apparently left over from some past experiments with signing using a certificate:
<ManifestCertificateThumbprint>...</ManifestCertificateThumbprint>
<ManifestKeyFile>...</ManifestKeyFile>
<GenerateManifests>...</GenerateManifests>
<SignManifests>...</SignManifests>
After I had removed those lines I reloaded the project and the solution rebuilt just fine. There is more information on this subject at a Microsoft Forum.