Sync Framework 2 CTP2 – no SqlExpressClientSyncProvider yet

Ok, I’ve finally gotten around to installing the CTP2 of the Sync framework. Let's see what new and interesting stuff I got with it:

1. A headache.
2. Erm... anything else?

All witticisms aside, a lot of details have probably changed, but my main gripe still stands: there is no support for hub-and-spoke replication between two SQL Servers and the like. (Oh, and the designer is still unusable… make that two gripes.)

As for the first thing, I had hoped to finally get rid of my debugged SqlExpressClientSyncProvider demo, but no such luck. It turns out that nothing of the sort is (yet?) included in the Sync Framework. Judging by a forum post, v2 is due to be released soon, but any question regarding the SQL Express provider is met with dead silence. It doesn't look like it will be included this time (9200 downloads of the SqlExpressClientSyncProvider demo are obviously not significant enough for these guys). You almost literally have to read between the lines: the announcement for this CTP was very sparse, and somewhat misleading at that; and since it's the only information you get, any ambiguity can easily lead you in the wrong direction.

The CTP2 release announcement said:

# New database providers (SqlSyncProvider and SqlCeSyncProvider)
Enable hub-and-spoke and peer-to-peer synchronization for SQL Server, SQL Server Express, and SQL Server Compact.

So, does SqlSyncProvider work in a hub-and-spoke scenario? I thought it did. How would you interpret the sentence above?

The truth is that SqlSyncProvider cannot be used as the local sync provider in a SyncAgent (that is, in a hub-and-spoke scenario as I know it) because it is not derived from ClientSyncProvider. The SyncAgent explicitly forbids it and throws a very unhelpful exception whose message is, essentially, “ClientSyncProvider”… Translated, this means “use Reflector to see what happened”, which I did. The code in the SyncAgent looks like this:

public SyncProvider LocalProvider 
{ 
    get 
    { 
        return this._localProvider; 
    } 
    set 
    { 
        ClientSyncProvider provider = value as ClientSyncProvider; 
        if ((value != null) && (provider == null)) 
        { 
            throw new InvalidCastException(typeof(ClientSyncProvider).ToString()); 
        } 
        this._localProvider = provider; 
    } 
}

(Someone was too lazy to write a meaningful error message… How much effort does it take? I know I wouldn’t tolerate this kind of behavior in my company.)
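Just to be clear about what I mean by hub-and-spoke wiring, here's a minimal sketch (mine, not from the CTP2 docs) of the usual SyncAgent setup; the connection string is a placeholder and the server-side adapters and anchor commands are left out. Assigning the new provider as the LocalProvider is exactly what the setter above rejects:

using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;
using Microsoft.Synchronization.Data.SqlServerCe;

static SyncAgent CreateHubAndSpokeAgent()
{
    SyncAgent agent = new SyncAgent();

    // This works: SqlCeClientSyncProvider derives from ClientSyncProvider.
    agent.LocalProvider = new SqlCeClientSyncProvider("Data Source=client.sdf");

    // Server side (sync adapters, anchor commands etc. omitted for brevity).
    agent.RemoteProvider = new DbServerSyncProvider();

    // This is what I wanted to do with the new provider, and what the setter rejects:
    //     agent.LocalProvider = new SqlSyncProvider(...);
    // -> InvalidCastException: "Microsoft.Synchronization.Data.ClientSyncProvider"

    return agent;
}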

So there’s no chance of it working (unless I’m somehow using an old version of the SyncAgent). Is there any other way to do a hub-and-spoke sync without a SyncAgent? Not that I know of. But the docs for the CTP2 say:

SqlSyncProvider and SqlCeSyncProvider can be used for client-server, peer-to-peer, and mixed topologies, whereas DbServerSyncProvider and SqlCeClientSyncProvider are appropriate only for client-server topologies.

I thought that client-server was the same thing as hub-and-spoke; now I’m not so sure… In the end, after hours of research, I still don’t know what to think.

A macro to find missing files in Visual Studio Solutions

Or: how to solve the setup project error message “ERROR: An error occurred while validating.  HRESULT = '80004005'”.

It seems that for a large part of the features in Visual Studio .NET, development stops at the point where they are mostly usable and effectively demo-able. The Microsoft people are very eager to show you how easy it is to solve a trivial problem with a couple of clicks, but they become very reserved once something really serious has to be done. A case in point: the Setup/Deployment projects in Visual Studio. (Yes, those - “Zero Click Deployment”, they make it sound like it reads your thoughts, and the like.)

If you have a missing file in your solution, and the file is of a non-critical type (e.g. a resource) so that the solution still compiles, you have a big problem, because the setup won’t. It fails with the moronic message “ERROR: An error occurred while validating.  HRESULT = '80004005'”. Which file is missing? Well, if you really, really care, go through all the files in your projects and check (don’t forget to expand the controls, some RESX child file may be the culprit!).

Another thing that can happen is that a reference in one of the projects is bad. For example, a project referenced another project and you later moved that other project out of the solution… The first project doesn’t actually use anything from it, so the build passes, but the setup is not as forgiving. In this case, you need to check all references.

If, like me, you have a solution that consists of thirty projects and thousands of files, this means a major headache. One way to solve it is to create a new, temporary setup project and add the outputs from the original setup to it one by one, building after each step: when the error appears, you’ll know which project is at fault.

Now, if the first case applies (a missing source file), one possible solution is to automate the manual search with a macro. Below you’ll find one, modified from the example on the excellent MZ Tools site. As for a missing reference, it is somewhat easier to find since there are considerably fewer references in a solution than project files (you can detect these using the one-by-one method described above). I’m leaving it to you as a TODO: improve this script to detect missing references, post it on your blog and let me know so I can add a link to it.

Option Strict Off
Option Explicit Off
Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics
Imports System.Windows.Forms

Public Module Module2

    Sub FindMissingFiles()

        Dim objProject As EnvDTE.Project

        Try
            If Not DTE.Solution.IsOpen Then
                MessageBox.Show("Please load or create a solution")
            Else
                For Each objProject In DTE.Solution.Projects
                    NavigateProject(objProject)
                Next
            End If
        Catch objException As System.Exception
            MessageBox.Show(objException.ToString)
        End Try

    End Sub

    Private Sub NavigateProject(ByVal objProject As Project)

        Dim objParentProjectItem As ProjectItem

        Try
            objParentProjectItem = objProject.ParentProjectItem
        Catch
        End Try

        NavigateProjectItems(objProject.ProjectItems)

    End Sub

    Private Sub NavigateProjectItems(ByVal colProjectItems As ProjectItems)

        Dim objProjectItem As EnvDTE.ProjectItem

        If Not (colProjectItems Is Nothing) Then
            For Each objProjectItem In colProjectItems
                For i As Integer = 1 To objProjectItem.FileCount
                    Dim fileName As String = objProjectItem.FileNames(i)

                    If fileName <> "" And Not System.IO.File.Exists(fileName) _
                            And Not System.IO.Directory.Exists(fileName) Then
                        MessageBox.Show("File missing: " + fileName)
                    End If
                Next

                If Not (objProjectItem.SubProject Is Nothing) Then
                    ' We navigate recursively because it can be:
                    ' - An Enterprise project in Visual Studio .NET 2002/2003
                    ' - A solution folder in VS 2005
                    NavigateProject(objProjectItem.SubProject)
                Else
                    ' We navigate recursively because it can be:
                    ' - An folder inside a project
                    ' - A project item with nested project items (code-behind files, etc.)
                    NavigateProjectItems(objProjectItem.ProjectItems)
                End If
            Next
        End If

    End Sub

End Module
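To run the macro, paste the module into the Visual Studio Macros IDE (reachable through Tools > Macros in VS 2005/2008) and start FindMissingFiles from the Macro Explorer: it will show a message box for every project file it cannot find on disk.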

NHibernate queued adds on lazy-load collections

I don’t know if this behaviour is documented (well, yeah, the (N)Hibernate documentation is pretty thin, but it’s improving); I wasn’t fully aware of it and it caused a bug… I’m posting this in the hope that it may save someone else the time I lost today :).

In our software we use a custom NHibernate collection type that is derived from AbstractPersistentCollection and keeps track of “back references”: that is, references to the record that owns the collection. It does this automatically when an object is added to the collection.

Now, on collections mapped as lazy-loading, Add() operations are allowed even if the collection is not initialized (i.e. while it still acts only as a proxy). When an object is added to a collection in this state, a QueueAdd() method is called that stores the added object in a secondary collection. Once lazy initialization is performed, this secondary collection is merged into the main one (I believe it’s the DelayedAddAll() method that does this). This can be hard to debug because lazy loading is transparently triggered as soon as you touch the collection with the debugger (provided the session is connected at that moment), and then everything gets initialized properly.

Our back reference was initialized at the moment the object was actually added to the main collection. But this is not enough: we also had to support queued adds, that is, the cases where QueueAdd returns true. The alternative would have been to disable delayed adds by commenting out the places where QueueAdd is called; I don’t know if that is feasible, although there seems to be some code that supports it. We decided to support delayed adds, and it seems to work. The modification looks something like this (this is the PersistentXyz class):

int IList.Add(object value) 
{ 
    if (!QueueAdd(value)) 
    { 
        Write(); 
        return ((IList) bag).Add(value); 
    } 
    else 
    { 
        // if the add was queued, we must set the back reference explicitly 
        if (BackReferenceController != null) 
        { 
            BackReferenceController.SetBackReference(value); 
        } 
        return -1; 
    } 
}

Debugging SQL Express Client Sync Provider

How do you finally get the SQL Express Client Sync Provider to work correctly? It’s been almost a year since it was released, and it still has documented bugs. One was detected by Microsoft more than a month after release and documented on the forum, but the fix was never included in the released version. We could analyze this kind of shameless negligence in the context of Microsoft's overall quality policies, but that’s a broad (and also well documented) topic, so we’ll leave it at that. It wouldn’t be such a problem if nobody were interested in using it, but people are, very much so. So, what else is there to do than try to fix what we can ourselves…

You can find the source for the class here. To use it, you may also want to download (if you don’t already have it) the original SQL Express provider source, which contains the solution and project files I didn’t include. (UPDATE: the original source seems to have been removed from the MSDN site, and my code was updated - see the comments for this post to download the latest version.)

The first (and solved, albeit only on the forum) problem was that the provider was reversing the sync direction. This happens because the client provider basically simulates client behaviour by internally using a server provider. In hub-and-spoke replication, the distinction between client and server is important since only the client databases keep track of synchronization anchors (that is, remember what was replicated and when). I also incorporated support for the datetime anchors I proposed in the mentioned forum post, which wasn’t present in the original source.

But that is not all that’s wrong with the provider: it seems that it also swaps client and server anchors, and that is a very serious blunder because it’s very hard to detect. It effectively uses client time/timestamps to detect changes on the server and vice versa. I tested it using datetime anchors, and this is the most dangerous situation because if the server clocks aren’t perfectly synchronized, data can be lost. (It might behave differently with timestamps, but I doubt it.)

The obvious solution for the anchors is to swap them back before and after running synchronization. This can be done by modifying the ApplyChanges method like this:

foreach (SyncTableMetadata metaTable in groupMetadata.TablesMetadata)
{
    SyncAnchor temp = metaTable.LastReceivedAnchor;
    metaTable.LastReceivedAnchor = metaTable.LastSentAnchor;
    metaTable.LastSentAnchor = temp;
} 

// this is the original line
SyncContext syncContext = _dbSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession); 

foreach (SyncTableMetadata metaTable in groupMetadata.TablesMetadata)
{
    SyncAnchor temp = metaTable.LastReceivedAnchor;
    metaTable.LastReceivedAnchor = metaTable.LastSentAnchor;
    metaTable.LastSentAnchor = temp;
}

This seems to correct the anchor confusion, but for some reason the @sync_new_received_anchor parameter still receives an invalid value in the update/insert/delete stage, so it shouldn’t be used there. The reason could be that both the client and the server use the same sync metadata, and the server sync provider posing as a client probably doesn’t think it is required to leave valid anchor values behind when it’s finished. I promise to post some more information I gathered while poking around the Sync Framework innards in a future post.

Note that this version is by no means fully tested nor without issues, but its basic functionality seems correct. You have to be careful to use @sync_new_anchor only in queries that select changes (either that, or modify the provider further to correct this behaviour: I think it could be done by storing and restoring the anchors in the metadata during ApplyChanges, but I’m not sure whether this is compatible with the provider’s internal logic). Another minor issue I found is that the trace log reports both the client and the server provider as servers.

If you find and/or fix another issue with the provider, please post a comment here so that we can one day have a fully functional provider.

Who cares about C# interfaces?

I remember that when the first version of Java was released, one of the frequent questions was: why doesn’t it have multiple inheritance? The same thing happened later with C#. The answer in both cases was: you can use interfaces instead. It is not the same, since you don’t inherit the logic, but you can at least simulate multiple inheritance by implementing multiple interfaces.

Well, this was obviously forgotten by the people who designed the C# class libraries. Even worse, they seem not to have been aware of it at all, because most of the classes from v1.0 onward don’t have equivalent interfaces that would allow this.

There’s a real-life example (or should I say an everyday one? I’ve encountered it a million times, and you probably have too) that perfectly illustrates the point: why isn’t there an IControl interface that represents a (Windows, web, whichever) control? The logical answer (and probably the real reason the framework designers had) would be: it would be too difficult to implement, since anything that is supposed to work as a control really has to derive from Control. True: but implementing them is not the only purpose of interfaces.

Let’s say I want to create my own control type that implements some interface, IMyInterface. How do I reference this component: through a variable of type Control, or of type IMyInterface? Whichever I use is incomplete; I have to cast to the other one and I lose compile-time type safety. I could create a ControlWithMyInterface base class that implements the interface and use that instead of both, but then I’d have to inherit everything from it, which in most cases is not possible. No, the only real solution would be to follow the “interfaces instead of multiple inheritance” principle and derive IMyInterface from IControl, but IControl doesn’t exist. It wouldn’t even be hard to create: one could simply put all of Control’s public members into an interface.
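To make the dilemma concrete, here is a small made-up example (IMyInterface, MyControl and the consumer code are all hypothetical):

using System.Windows.Forms;

public interface IMyInterface
{
    void LoadData(object data);
}

// A custom control that also implements my interface.
public class MyControl : Control, IMyInterface
{
    public void LoadData(object data) { /* populate the control */ }
}

public static class Consumer
{
    public static void Use(object someData)
    {
        // Whichever static type I choose, half of the members need a cast:
        Control asControl = new MyControl();
        ((IMyInterface)asControl).LoadData(someData);   // cast to reach my interface

        IMyInterface asInterface = new MyControl();
        ((Control)asInterface).Visible = false;         // cast to reach Control members

        // If an IControl interface existed, IMyInterface could simply derive from it
        // and one variable of type IMyInterface would expose both, with full type safety.
    }
}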

There could really be a coding rule enforcing the existence of an equivalent interface for every class, or even something that generates them on the fly from a class’s public members. I wonder if there is a pre-processor that could help with this?
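As a rough illustration of what such a generator might look like (this is just my sketch, not an existing tool), reflection can be used to emit an interface declaration from a type’s public instance methods:

using System;
using System.Linq;
using System.Reflection;

public static class InterfaceGenerator
{
    // Emits the C# source of an interface equivalent to the type's public instance
    // methods. Properties, events, indexers and generics are left out to keep it short.
    public static string Generate(Type type)
    {
        var methods = type
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .Where(m => !m.IsSpecialName);   // skip property/event accessor methods

        var members = methods.Select(m => string.Format("    {0} {1}({2});",
            m.ReturnType == typeof(void) ? "void" : m.ReturnType.Name,
            m.Name,
            string.Join(", ", m.GetParameters()
                               .Select(p => p.ParameterType.Name + " " + p.Name))));

        return "public interface I" + type.Name + Environment.NewLine
             + "{" + Environment.NewLine
             + string.Join(Environment.NewLine, members) + Environment.NewLine
             + "}";
    }
}

// Example: Console.WriteLine(InterfaceGenerator.Generate(typeof(System.Windows.Forms.Control)));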

Supporting triggers that occasionally generate values with NHibernate

NHibernate supports fields that are generated by the database, but in a limited way. You can mark a field as generated on insert, on update, or both. In this case, NHibernate doesn’t write the field’s value to the database, but creates a select statement that retrieves its value after update or insert.

Ok, but what if you have a trigger that updates this field in some cases and not in others? For example, you may have a document number that is generated for some types of documents and set by the user for other types. You cannot do this with NHibernate in a regular way, but there is a workaround…

It is possible to map multiple properties to the same database column. So, if you make a non-generated property that is writable, and a generated read-only property, this works. You have to be careful, though, because the non-generated property’s value won’t be refreshed after database writes.

A safer solution is to make one of the properties non-public and implement the other one to support both behaviours. Like this:

//
// This field is used only to send a value to the database trigger:
// the value set here will be written to the database table and can be consumed by
// the trigger. But it will not be refreshed if the value was changed by the trigger.
// 
private int? setDocumentNumber;	

private int? _documentnumber;

//
// The public property that works as expected: generated, but not read-only
//
public int? DocumentNumber
{
    get { return _documentnumber; }
    set
    {
        // NHibernate is indifferent to this property's value (it will not
        // be written to the database), so we have to update the setDocumentNumber
        // field which is regularly mapped
        _documentnumber = value;
        setDocumentNumber = value;
    }
}

Here’s the NHibernate mapping for these two:

<property name="DocumentNumber" generated="always" insert="false" update="false"/>
<property name="setDocumentNumber" column="DocumentNumber" access="field"/>

Model-Controller pattern for business rules

The model-view-controller (MVC) pattern has an interesting, let’s call it sub-pattern, that could be more broadly used. The basic purpose of MVC is to “clean up” the data (model) and interface (view) by delegating all the in-between “dirty” logic to the controller. The controller is aware of both the data and the interface and knows how to handle both: it controls what is happening in them, but from the outside. The controller, if written correctly, can be reused for multiple types of model or view. If we go a step further, we could program multiple controllers that know how to handle different aspects of the functionality: in this way we very much get add-ons that we can attach onto non-intelligent components and make them smarter. By choosing which controllers we attach, we can customize the behaviour of our application. And because the controller is made to work “from the outside” it is isolated from the internal implementation of the components: this could make it instantly reusable.

But it could be interesting to generalize the “add-on” controller pattern and make it work in other situations as well. One interesting area of application could be controlling data: if you use an object-relational mapper (ORM) to read your data (and if you don’t, you should), you already have your data in the perfect form for this - that is, as objects. The same goes if you use a traditional data-access layer and then build business entities as object-oriented structures. Usually, these classes contain much of the business logic, which is quite natural because business logic represents the behaviour of the data. But it’s not good for reusability: whether you combine data access and business entities or implement them separately, the business logic is tightly coupled with (or hardcoded inside) the data, so it’s not easy to customize. It would be nice if we could externalize at least some parts of it, while keeping it as close to the data as possible. Even better would be if we could split the logic and implement different aspects as different components.

If we attached controller objects to our data, they could take on the responsibility of providing its behaviour. We could have different controller types for different kinds of behaviour (status changes, automatic calculations, etc.), and if we instantiate them through some kind of configuration engine (or dependency injection framework), they will be easy to replace, which makes the application highly customizable.
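To make this a bit less abstract, here is a minimal sketch of what such a data controller could look like; all the types and names (IDataController, Invoice, StatusChangeController) are hypothetical, invented just for the example:

using System;
using System.Collections.Generic;

// One controller interface per entity type; a controller implements a single aspect of behaviour.
public interface IDataController<T>
{
    void Attach(T entity);
}

public class Invoice
{
    public string Status { get; set; }

    // An event the ORM layer (or the entity itself) would raise before persisting.
    public event Action<Invoice> Saving;
    public void RaiseSaving() { if (Saving != null) Saving(this); }
}

// This controller implements one aspect: automatic status changes.
public class StatusChangeController : IDataController<Invoice>
{
    public void Attach(Invoice invoice)
    {
        invoice.Saving += i => { if (i.Status == "New") i.Status = "Pending"; };
    }
}

public static class ControllerWiring
{
    // Called e.g. from an ORM "entity loaded/created" event; the controller list
    // would come from configuration or a dependency injection container.
    public static void AttachAll(Invoice entity, IEnumerable<IDataController<Invoice>> controllers)
    {
        foreach (var controller in controllers)
            controller.Attach(entity);
    }
}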

Of course, these “data controllers” sound similar to what rules engines do, only more lightweight. But how much overhead would they add? As always, it depends on what we do, but I would say not too much. They would contain the code we already have, with perhaps one extra layer of indirection because it would be split into distinct classes. Would the instantiated rules waste memory? Probably some: but consider that the ORM instantiates each record as an object and has a complex infrastructure for tracking its changes; that for each data binding there are a couple of objects created in the interface; that there can be a control (which is quite a large object) bound to each field… It seems that it shouldn’t make a big difference.

The most sensitive part is collections: there we can have hundreds of records with a single control (a data grid) bound to them. In this case, the overhead of the rules engine could be significant compared to the overhead of the interface, but not in comparison to the ORM overhead. Again, this depends on how we implement the rules: a rule could be optimized to use a single instance for the whole collection. This can be done with an IBindingList implementation, where the collection transmits events for everything happening inside it… Of course, in that case it is the collection that incurs the overhead, but let’s say we would use it anyway for data-binding purposes.

Obviously, there are a lot of technical issues here. This is partly because the base framework doesn’t have built-in support for what we need. But, as one would expect, having an ORM (which can be interrogated for data structures and from which we can receive notifications when data is loaded or saved) generally helps.

Of course, I’m still talking hypothetically here: but I’ve already made a couple of small steps towards a living prototype (in fact, I’m testing a minimal working something as we, uhm, speak). More on that in another post.

Common Table Expression For Sorting Hierarchical Records By Depth

I tried to find this on the net and was unable to find a satisfyingly simple solution. How do you retrieve hierarchical records (say, from a self-referencing table) sorted by their depth in the hierarchy? Microsoft SQL Server 2005/2008 has a new SQL construct, the common table expression (CTE), that makes this fairly easy. Here’s an example: a Category table with an ID (primary key) and a ParentID (pointing to the hierarchical parent).

WITH CategoryCTE(ID, ParentID, Depth) 
AS 
( 
  SELECT ID, ParentID, 0 
  FROM Category 
  WHERE ParentID IS NULL -- root records

  UNION ALL 

  SELECT cRecursive.ID, cRecursive.ParentID, cCte.Depth+1 
  FROM Category AS cRecursive JOIN CategoryCTE AS cCte 
      ON cRecursive.ParentID = cCte.ID 
) 
SELECT * 
FROM CategoryCTE 
ORDER BY Depth