27 December 2016

An XmlResult for ASP.NET MVC


ActionResult methods in a Controller allow developers to easily return HTML results and JSON results via this.View(...) which creates a ViewResult and this.Json(...) which creates a JsonResult. [See the following MSDN resources for more information: Controller.View method and Controller.Json method.]

Unfortunately, there is no XmlResult type that derives from ActionResult.

Although there are WCF ways of returning XML in an ASP.NET MVC application, there are use cases where returning simple POX (plain old XML) from a call to an MVC Controller ActionResult is needed.

In this article, I'll show a simple way to create an XmlResult.

How to create an XmlResult


An XmlResult should

  • Inherit from ViewResult,
  • Override the ExecuteResult method to write the XML,
  • Expose the object to serialize as a gettable property, and
  • Allow calling code to customize the XML produced.

The ExecuteResult override should

  • Do nothing if there is no object to serialize,
  • Use the XmlAttributeOverrides if there are any,
  • Set the ContentType of the Response to application/xml, and
  • Write the XML-serialized version of the object to serialize to the Output stream of the Response.

XmlResult Code

Below is code that meets these requirements:
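One possible sketch (member names beyond the requirements above are illustrative):

```csharp
using System.Web.Mvc;
using System.Xml.Serialization;

public class XmlResult : ViewResult
{
    public XmlResult(object objectToSerialize, XmlAttributeOverrides overrides = null)
    {
        this.ObjectToSerialize = objectToSerialize;
        this.Overrides = overrides;
    }

    // The object that will be serialized to XML.
    public object ObjectToSerialize { get; private set; }

    // Optional serialization customizations.
    public XmlAttributeOverrides Overrides { get; private set; }

    public override void ExecuteResult(ControllerContext context)
    {
        if (this.ObjectToSerialize == null)
        {
            return; // nothing to serialize, do nothing
        }

        var serializer = this.Overrides == null
            ? new XmlSerializer(this.ObjectToSerialize.GetType())
            : new XmlSerializer(this.ObjectToSerialize.GetType(), this.Overrides);

        var response = context.HttpContext.Response;
        response.ContentType = "application/xml";
        serializer.Serialize(response.Output, this.ObjectToSerialize);
    }
}
```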

How to extend Controller to easily return an XmlResult

It is desirable for an ActionResult method in a Controller to be able to obtain an XmlResult via a this.XML(...) statement.


This can be accomplished via an Extension Method.


The Controller.XML extension method should

  • Accept an object to serialize,
  • Optionally accept XmlAttributeOverrides, and
  • Return an XmlResult.


Below is code that meets these requirements:
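A sketch of such an extension method (it assumes an XmlResult class shaped like the one described earlier in this article):

```csharp
using System.Web.Mvc;
using System.Xml.Serialization;

public static class ControllerExtensions
{
    // Allows any Controller to write "return this.XML(model);"
    public static XmlResult XML(this Controller controller,
                                object objectToSerialize,
                                XmlAttributeOverrides overrides = null)
    {
        return new XmlResult(objectToSerialize, overrides);
    }
}
```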


This article shows a simple way to create an XmlResult to return XML that can be used within an ASP.NET MVC Controller, and a way to extend Controller to allow for easy use.

This approach is appropriate for simple needs and eliminates the need to write custom code each time XML is needed.

For more complicated needs, using a ContentResult offers a feasible approach.

23 December 2016

How to unit test an interface to make certain that it does not get changed

When and why Interface invariance matters

Agile principles teach us that program code should rely on and hold references to abstractions. In C#, this often means declaring a field, a property, an argument or a return type as an interface.

Agile also teaches us when building packages and multi-tier applications to let the client/consumer dictate the interface (logic-to-interface in SOA terms).

If an interface is only consumed within a single application, invariance isn't such a big concern. When interfaces are used by other applications or other packages, however, we must consider them as "published" and treat them as unchanging contracts (see Martin Fowler's article in IEEE Software March/April 2002 for more on this at http://martinfowler.com/ieeeSoftware/published.pdf).

It is important to note that not all interfaces need to be invariant.

Unit Testing for Interface Invariance

Using NUnit's ability to run the same suite of tests on multiple types via its TestFixture with Type and constructor parameters, it is fairly straightforward to construct a unit test that ensures that an interface only has certain properties and methods.

Our approach will still be the traditional "Arrange/Act/Assert" unit testing pattern, but to eliminate repetitive code, the arrange and act steps will happen in the constructor for the test fixture.

This yields a test fixture ctor with a signature like the following:
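The signature might look like the following (parameter names here are illustrative, apart from ignoreSpecialNames, which is referenced below):

```csharp
public InterfaceContractTests(
    string[] expectedMethodSignatures,
    string[] expectedPropertySignatures,
    Type expectedBaseInterface = null,
    bool ignoreSpecialNames = true)
```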

There are many ways to approach interface testing. The approach that we favor is to simply test the signatures of properties and methods, optionally ignoring "Special Name" methods, which excludes "get" and "set" methods that properties generate behind the scenes. If you need to test for a read-only property, simply set the constructor parameter ignoreSpecialNames to false.

Arrange & Act


We are going to perform the "arrange" part of the unit tests by using NUnit's injection feature via the TestFixture.

To accomplish this, first declare the test fixture class like this:

public class InterfaceContractTests<T> : AssertionHelper where T : class

Next, add test fixture attributes similar to the following (the first argument sets the type T; the remaining are the constructor arguments):

  • Testing for just a method

      new string[] { "Void Write(System.String)" },
      new string[] { },

  • Testing for properties, get/set and inheritance

      new string[]
      {
        "System.String get_Color()",
        "Void set_Color(System.String)",
        "System.String get_BackgroundColor()"
      },
      new string[]
      {
        "System.String Color",
        "System.String BackgroundColor"
      }


Our code needs to test each property and method to ensure that it is declared by the type we are testing. This is done by checking the DeclaringType property of the MethodInfo and PropertyInfo objects that are returned when our code calls the methods to get the public properties and public methods of the interface.

If our tests are not checking for read-only properties, we can exclude the "get" and "set" methods by testing if the IsSpecialName property of the MethodInfo object is true.

The code for getting the actual method and property signatures is shown below.
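A sketch of that code follows; note that ToString() on MethodInfo and PropertyInfo yields strings in exactly the format used in the TestFixture arguments above (e.g. "Void Write(System.String)" and "System.String Color"):

```csharp
// Gather the actual signatures via reflection, keeping only members
// declared by the interface under test.
Type interfaceType = typeof(T);

IEnumerable<string> actualMethodSignatures = interfaceType
    .GetMethods()
    .Where(m => m.DeclaringType == interfaceType)
    .Where(m => !ignoreSpecialNames || !m.IsSpecialName)
    .Select(m => m.ToString())
    .ToList();

IEnumerable<string> actualPropertySignatures = interfaceType
    .GetProperties()
    .Where(p => p.DeclaringType == interfaceType)
    .Select(p => p.ToString())
    .ToList();
```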

Full Constructor Code

Below is the full code for the resulting constructor.
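A sketch of the full fixture and its constructor (field names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class InterfaceContractTests<T> : AssertionHelper where T : class
{
    private readonly IEnumerable<string> expectedMethodSignatures;
    private readonly IEnumerable<string> expectedPropertySignatures;
    private readonly IEnumerable<string> actualMethodSignatures;
    private readonly IEnumerable<string> actualPropertySignatures;
    private readonly Type expectedBaseInterface;

    public InterfaceContractTests(
        string[] expectedMethodSignatures,
        string[] expectedPropertySignatures,
        Type expectedBaseInterface = null,
        bool ignoreSpecialNames = true)
    {
        // Arrange: record expectations.
        this.expectedMethodSignatures = expectedMethodSignatures;
        this.expectedPropertySignatures = expectedPropertySignatures;
        this.expectedBaseInterface = expectedBaseInterface;

        // Act: reflect over the interface under test.
        Type interfaceType = typeof(T);

        this.actualMethodSignatures = interfaceType
            .GetMethods()
            .Where(m => m.DeclaringType == interfaceType)
            .Where(m => !ignoreSpecialNames || !m.IsSpecialName)
            .Select(m => m.ToString())
            .ToList();

        this.actualPropertySignatures = interfaceType
            .GetProperties()
            .Where(p => p.DeclaringType == interfaceType)
            .Select(p => p.ToString())
            .ToList();
    }

    // Assert: tests elided here.
}
```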


There are four tests that need to be run for each interface:

  • Verify that we're testing an interface,
  • Verify that the actual method signatures match expectations,
  • Verify that the actual property signatures match expectations, and
  • Verify that the interface does or does not extend another interface.

Below is the code that implements these tests.
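Sketches of the four tests (using AssertionHelper's Expect syntax; field names match the constructor sketch and are illustrative):

```csharp
[Test]
public void TypeShouldBeAnInterface()
{
    Expect(typeof(T).IsInterface, Is.True);
}

[Test]
public void MethodSignaturesShouldMatchExpectations()
{
    Expect(this.actualMethodSignatures, Is.EquivalentTo(this.expectedMethodSignatures));
}

[Test]
public void PropertySignaturesShouldMatchExpectations()
{
    Expect(this.actualPropertySignatures, Is.EquivalentTo(this.expectedPropertySignatures));
}

[Test]
public void InterfaceInheritanceShouldMatchExpectations()
{
    if (this.expectedBaseInterface == null)
    {
        Expect(typeof(T).GetInterfaces(), Is.Empty);
    }
    else
    {
        Expect(typeof(T).GetInterfaces(), Does.Contain(this.expectedBaseInterface));
    }
}
```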


When interfaces are used by independent components or clients, they should be considered to be "published" and invariant.

Interfaces that are invariant should have unit tests that ensure that they do not change and break the published contract.

This article shows an easy-to-use and repeatable testing approach that ensures interface invariance. If you add it to your suite of tests and update it as new published interfaces are authored, you will reduce your risk of bugs and broken code.

21 December 2016

How to adjust base class tests to test derived classes using NUnit

One of the categories of hard-to-chase-down bugs I often see in C#.NET code is caused by violation of the Liskov Substitution Principle. In this post, I'll show a quick way to adapt NUnit unit tests for a base class to also test a derived class.

Liskov Substitution Principle

In a nutshell, LSP requires that derived types when substituted for their base types should behave in exactly the same way.

The Problem

In modern applications, it is fairly common to see inheritance implementations that break the LSP rule. If the violation isn't caught and fixed, it is a bug waiting to happen. Eventually, some program code will perform an operation using an object of a type derived from the base class, relying upon it to behave as the base class does. When it does, things will break.

The Rectangle/Square Example of LSP Violation

In geometry, a square is just a special type of rectangle with width and height equal.

Naïvely, a developer may decide to represent this in code as a Square class inheriting from a Rectangle class.

The Solution

The original unit tests for Rectangle at some point create a Rectangle instance.

The first step is to move this instantiation into a [SetUp] method in your NUnit [TestFixture]. The setup method runs before every individual test.

The next step is to refactor the test class to be generic; something similar to the following will work:

public class ViolatorTests<TShape> : AssertionHelper where TShape : IRectangle, new()

Next change the setup method to instantiate an instance of the generic type TShape.

With these changes in place, you simply need to change your TestFixture attribute to [TestFixture(typeof(Rectangle))]. This gets you back to your original Rectangle tests.

Finally, to make the base class tests run against the derived type, simply add another TestFixture attribute to the class – this will cause the NUnit framework to run the test suite against the new type specified.

Below is a complete example for the rectangle/square scenario. The test will fail when the derived type Square is used, indicating that you have a violation of LSP and need to rethink how the two classes should be related.
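A sketch of that example follows; the IRectangle, Rectangle, and Square shapes are illustrative:

```csharp
using NUnit.Framework;

public interface IRectangle
{
    int Width { get; set; }
    int Height { get; set; }
    int Area { get; }
}

public class Rectangle : IRectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return this.Width * this.Height; } }
}

// Naive subclass: keeps width and height equal, violating LSP.
public class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        get { return base.Height; }
        set { base.Height = value; base.Width = value; }
    }
}

[TestFixture(typeof(Rectangle))]
[TestFixture(typeof(Square))] // added fixture: runs the same tests against Square
public class ViolatorTests<TShape> : AssertionHelper where TShape : IRectangle, new()
{
    private TShape shape;

    [SetUp]
    public void SetUp()
    {
        this.shape = new TShape();
    }

    [Test]
    public void AreaShouldBeWidthTimesHeight()
    {
        this.shape.Width = 4;
        this.shape.Height = 3;
        Expect(this.shape.Area, Is.EqualTo(12)); // Square yields 9 and fails
    }
}
```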


By slightly refactoring your NUnit base class unit tests to be generic, you can make them usable for testing derived classes without writing duplicate tests (remember: DRY – Don't Repeat Yourself). This will catch LSP violations early in development and prevent hard to pin down bugs.

16 December 2016

The SEO Problem with site redesigns

It is very common for sites to experience SEO problems after being redesigned. Specifically, loss of traffic and search engine results page (SERP) demotion for terms.

Why this happens


Modern search engines determine whether or not to show a link to a page on your site based on hundreds of factors.

When you move content from one URL to another, after the search engine bot/spider discovers the new content, it will begin indexing that content (a good thing), but unless you properly tell it that the old content has moved to the new location, it will continue to attempt to crawl and index the old URL, and, until the new content is well established, it may show the old (now broken) URL in the SERP.

Showing the broken link to users is problematic because in addition to the crawl, search engines "learn" whether or not a given page on your site is "good" based on how users act after clicking on the link. If the user quickly returns to the search engine, the machine learning algorithms will begin lowering the value of that URL.

EVENTUALLY, the search engines will catch up (assuming that your new content is on par with your old content).

Until they do, however, you will take a traffic and conversion rate hit.

How to Mitigate the Impact of URL moves

First, monitor your crawl errors in the webmaster tools provided by the major search engines. When you see new "Not Found" errors, fix them ASAP.

Second, monitor and log traffic to your website's error pages. When you see errors, fix them ASAP.

Naïve approach

If you only had one page that returned a 404 (Not Found) error, the fix would be as simple as building a controller that returned a 301 (Moved Permanently) with the new URL. This is user-friendly: if a user visits the old URL, (s)he is immediately redirected to the new page. It is also search-engine-friendly: the spider/bot for the search engine understands the meaning of the HTTP status code 301 and will begin updating its index to use the new URL in lieu of the old URL.

Unfortunately, building a new controller every time a new error hits the logs is time-consuming and a waste of developer resources.

A better way

Rule-based 404 handling

A better way to handle the problem is to build a generic mechanism that handles three cases as follows:

  • old URL → new URL with a 301 status
  • old URL with no planned/intended replacement → a friendly error page that returns a 410 (Gone) status code (not a 404, which implies the absence may be temporary)
  • old URL of which you are unaware → a friendly error page that returns a 404 status code (this is "almost" the default in ASP.NET MVC with Custom Errors enabled – the framework actually returns a 302 and then a 404).

Our design goals are:

  • To not need to write code for newly-discovered broken links,
  • To maintain rules in a simple text file,
  • To have rule-order precedence, and
  • To have the server update the rules in use when the text file is saved.

Replace the default error handling mechanism

Disable CustomErrors

In the web.config file in the root of your site, disable custom errors: <customErrors mode="Off" />

Wire up a replacement error page

In the code file for the application, Global.asax.cs, insert code similar to the following:
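For example (a sketch; error-handling details beyond those described here are illustrative):

```csharp
protected void Application_Error(object sender, EventArgs e)
{
    Exception exception = this.Server.GetLastError();

    // Non-HttpException errors become HTTP 500 errors.
    var httpException = exception as HttpException
        ?? new HttpException(500, exception.Message, exception);

    this.HandleHttpException(httpException);
}
```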

Since we're completely replacing the custom error handling in ASP.NET, all unhandled non-HttpException errors are converted to HTTP Status Code 500 errors.

The code for our HandleHttpException method is shown below. It clears the error on the server, asks IIS (which is the web server that hosts most ASP.NET websites) to skip any custom error handling it has in place, and finally, executes a custom error page controller.

Since we're working in the HttpApplication directly, we have to build the RouteData ourselves and then execute the controller to get back into the MVC framework.
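Putting those pieces together, HandleHttpException might look like the following (the ErrorController name and route values are illustrative):

```csharp
private void HandleHttpException(HttpException httpException)
{
    var context = new HttpContextWrapper(this.Context);

    // Clear the error and bypass any IIS custom error handling.
    this.Server.ClearError();
    context.Response.TrySkipIisCustomErrors = true;
    context.Response.StatusCode = httpException.GetHttpCode();

    // Build the RouteData by hand to get back into the MVC framework.
    var routeData = new RouteData();
    routeData.Values["controller"] = "Error";
    routeData.Values["action"] = "Index";
    routeData.Values["httpException"] = httpException;

    // Execute the custom error page controller.
    IController controller = new ErrorController();
    controller.Execute(new RequestContext(context, routeData));
}
```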

Since in some cases our controller will return an error page, the code that follows will use a model with three public properties: ErrorMessage, StatusCode, and Url. These represent the HTTP error message, status code, and the page URL that generated the error.

HttpErrorModel class diagram
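The model itself is a simple POCO with the three properties described above:

```csharp
public class HttpErrorModel
{
    public string ErrorMessage { get; set; }
    public int StatusCode { get; set; }
    public string Url { get; set; }
}
```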

The code below is very simple. If the HTTP status code is 404 (Not Found), look at the OldUrl property of each rule to see whether one has a pattern that matches the URL. If a matching rule is found and it has a non-empty NewUrl, a permanent RedirectResult to the new URL is returned. If the matching rule has an empty NewUrl, a 410 is returned (i.e. a rule was created for content that will not be replaced). If no rule matches, a 404 is returned.
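A sketch of that logic (the Rule shape, CacheMappings property, and model binding details are illustrative):

```csharp
public ActionResult Index(HttpErrorModel model)
{
    if (model.StatusCode == 404)
    {
        // Find the first rule whose OldUrl pattern matches the URL.
        var rule = this.CacheMappings.FirstOrDefault(
            r => Regex.IsMatch(model.Url, r.OldUrl, RegexOptions.IgnoreCase));

        if (rule != null)
        {
            if (!string.IsNullOrWhiteSpace(rule.NewUrl))
            {
                return new RedirectResult(rule.NewUrl, permanent: true); // 301
            }

            this.Response.StatusCode = 410; // content intentionally gone
            return this.View("Error", model);
        }
    }

    this.Response.StatusCode = model.StatusCode; // 404 (or other) falls through
    return this.View("Error", model);
}
```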

To make this into a flexible system, we're going to store our rules in a text file as JSON and use Regular Expression patterns.

Below is an example of some sample rules.
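The patterns and URLs here are illustrative; an empty NewUrl marks content that is gone and will not be replaced:

```json
[
  { "OldUrl": "^/old-about(/.*)?$", "NewUrl": "/about" },
  { "OldUrl": "^/products/discontinued-widget$", "NewUrl": "" }
]
```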

Our controller class will store its rules in the default MemoryCache. This collection of rules will have a cache policy that causes a refresh when the JSON text file is changed. The initial caching of the rules will come from reading a JSON file and decoding it into a POCO.

Below is the CacheMappings code. The code to read the text from file and to convert the JSON to a POCO object is omitted for brevity.
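A sketch of the property (the Rule POCO, cache key, file path, and the ReadRulesFromFile helper are illustrative):

```csharp
private IEnumerable<Rule> CacheMappings
{
    get
    {
        var cache = MemoryCache.Default;
        var rules = cache["mappings"] as IEnumerable<Rule>;

        if (rules == null)
        {
            string path = HostingEnvironment.MapPath("~/App_Data/rules.json");

            // Refresh the cached rules whenever the JSON file changes.
            var policy = new CacheItemPolicy();
            policy.ChangeMonitors.Add(new HostFileChangeMonitor(new[] { path }));

            rules = ReadRulesFromFile(path); // read + JSON decode, omitted
            cache.Set("mappings", rules, policy);
        }

        return rules;
    }
}
```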

If you implement a 404-handling pattern similar to the one shown in this article in your site redesign, then when "Not Found" errors are logged by either the search engines or your internal logging, the fix is as simple as adding a rule to the JSON file.

08 December 2016

Forcing an ASP.NET MVC site to only serve HTTPS

Why force HTTPS?

Hyper Text Transfer Protocol Secure (HTTPS) is the secure version of HTTP, the protocol over which data is sent between your browser and the website that you are connected to. The 'S' stands for 'Secure': all communications between your browser and the website are encrypted.

How to force HTTPS

ASP.NET Action Filters

ASP.NET MVC allows you to apply a [RequireHttps] attribute on individual page controllers. It also allows the attribute to be applied globally by adding code to Application_Start in the Global.asax.

The problem with the built-in RequireHttpsAttribute

In a nutshell, the problem is that it returns 302, a temporary redirection HTTP Status Code (see List of HTTP Status Codes [Wikipedia]). This is an SEO problem. A return value of 301 means "Moved Permanently" and is a hint to the search engines to update their indexes.

If the built-in attribute is used, a result similar to the one below is obtained when the page is retrieved:

curl http://www.localexample.com:4433/ -iILk
HTTP/1.0 302 Found

The desired result is:

curl http://www.localexample.com:4433/ -iILk
HTTP/1.0 301 Moved Permanently

Writing your own version of the RequireHttpsAttribute

Writing your own attribute is fairly straightforward.

  1. Create a class that inherits from the RequireHttpsAttribute class
  2. Override the HandleNonHttpsRequest method.
  3. Add some code to handle running in your local development environment. (NB you will need to create a self-signed certificate for a dummy domain (we use www.localexample.com), install it on your machine and update your hosts file).
  4. Build the HTTPS address
  5. End the method by setting the Result property of the filterContext to a new RedirectResult that uses the HTTPS address and sets the permanent parameter to true.
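Following those steps yields something like the sketch below (the class name and the local-development host are illustrative; www.localexample.com is the dummy domain mentioned above):

```csharp
using System.Web.Mvc;

public class RequirePermanentHttpsAttribute : RequireHttpsAttribute
{
    protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;

        // Local development: use the dummy domain that carries the
        // self-signed certificate.
        string host = request.Url.IsLoopback ? "www.localexample.com" : request.Url.Host;

        // Build the HTTPS address.
        string httpsUrl = "https://" + host + request.RawUrl;

        // Redirect permanently (301) rather than temporarily (302).
        filterContext.Result = new RedirectResult(httpsUrl, permanent: true);
    }
}
```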

Although you can apply this attribute to your controller methods individually, applying it globally minimizes the effort and ensures that nothing is missed.

Wiring up your filter configuration

You will need to create/update the FilterConfig class.

The code shown below illustrates the necessary change, which is simply the addition of your custom attribute to the global filter collection.
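A sketch of the change (the custom attribute name is illustrative):

```csharp
using System.Web.Mvc;

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new RequirePermanentHttpsAttribute()); // the custom attribute
    }
}
```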

Code is then added to the Application_Start method in the Global class to run RegisterGlobalFilters.

The code below illustrates how to do this.
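For example:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}
```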

06 December 2016

How to test that an ASP.NET MVC Controller Only Accepts an HTTP GET

Test-Driven Development (TDD)

An important agile principle is that code only be written in response to a test. Comprehensive unit tests, combined with continuous integration (CI), allow development to move quickly with confidence that new code will not break old code.

Testing Orthogonal Concerns

The challenge with testing for HTTP Verb limitations is that, to follow the Single Responsibility Principle (SRP), the code limiting the verbs should be an orthogonal concern (i.e. that it should not be directly in the controller method).

Thankfully, ASP.NET MVC allows a developer to limit the HTTP verbs that a controller accepts via an Attribute.

How to verify that a method has an Attribute

Since ASP.NET MVC development uses managed languages, Reflection allows a test author to verify that an attribute has been applied.

Suppose one wants to test the following method for the presence of an HttpGet attribute:
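For example (the controller and action names are illustrative):

```csharp
public class HomeController : Controller
{
    [HttpGet] // the attribute under test
    public ActionResult Index()
    {
        return this.View();
    }
}
```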

The process for testing for an attribute is simple.

  1. Get a reference to the type.
  2. Get a reference to the method.
  3. Get a reference to the custom attribute.
  4. Test that the reference is not null (i.e. that the attribute was applied to the method).


Below is code showing how to test for the presence of an attribute on a method.
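A sketch following the four steps above (controller and action names are illustrative):

```csharp
[Test]
public void IndexShouldOnlyAcceptGet()
{
    Type controllerType = typeof(HomeController);          // 1. the type
    MethodInfo method = controllerType.GetMethod("Index"); // 2. the method

    var attribute = method.GetCustomAttribute<HttpGetAttribute>(); // 3. the attribute

    Assert.That(attribute, Is.Not.Null); // 4. the attribute was applied
}
```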

Below is a more complicated example which tests that the attribute is configured in a certain way.
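For instance, a test might check that an AcceptVerbs attribute is configured for a specific set of verbs (the Update action is illustrative):

```csharp
[Test]
public void UpdateShouldOnlyAcceptPostAndPut()
{
    MethodInfo method = typeof(HomeController).GetMethod("Update");
    var attribute = method.GetCustomAttribute<AcceptVerbsAttribute>();

    Assert.That(attribute, Is.Not.Null);
    Assert.That(attribute.Verbs, Is.EquivalentTo(new[] { "POST", "PUT" }));
}
```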

05 December 2016

How to Collect and Log Web Clickstream Data

Clickstream Data

What is clickstream data?

The path a user takes through a website is generally referred to as the user's clickstream.

Why collect clickstream data?

When aggregated using map-reduce techniques and then using unsupervised machine learning predictive analytics techniques like clustering and association analysis, clickstream data can tell you many things, including

  • how users use your site
  • if there are different behavioral groups within users
  • what marketing works or does not work

Clickstream data can also be used to build recommender systems, perform ROI analysis on marketing (via Shapley values), and improve lead scoring.

Client-side collection

Since a lot of the traffic to most websites is bot traffic, it is helpful to collect data via client-side script: many bots do not evaluate JavaScript, and many legitimate bots (e.g. search-engine spiders) identify themselves as such.

A simple method for collection is to add a <script> tag with an immediately-invoked function expression (IIFE) just before the closing </body> tag. An example of such a script is shown below.
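A sketch of such a script (the /pixel.gif endpoint and parameter names other than "dr" are illustrative):

```html
<script>
(function () {
  // Create an image element; assigning src causes the browser to
  // request the tracking pixel.
  var img = document.createElement("img");
  var params = [
    "dl=" + encodeURIComponent(window.location.href), // current page
    "dr=" + encodeURIComponent(document.referrer),    // referrer
    "z=" + Math.random().toString(36).slice(2)        // cache buster
  ];
  img.src = "/pixel.gif?" + params.join("&");
})();
</script>
```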

The code above simply creates an image tag in code and assigns a source to it. The assignment of a src value causes the browser to fetch the requested resource.

  • Cache busting
    Since many browsers cache static resources, a random value is added to the URL as a cache buster.
  • Referrer
    For the purposes of clickstream data, it is useful to include the URL of the page that caused the current page to be loaded; this is called the "referrer" and is added to the pixel image URL as a parameter named "dr" (the name "dr" is used to conform to the parameter names used in the Google Analytics Measurement Protocol).

Server-side ASP.NET MVC Listener

Pixel Endpoint

The code on the server side has three primary responsibilities:

  1. to set/update certain cookie values (explained below),
  2. to log the values, and
  3. to return something that the browser will accept (this response will carry the cookies)


In general, there are four anonymous values that need to be tracked (if your site allows users to log in, it may be useful to add a fifth cookie to allow you to aggregate cross-device behavior). The values to be tracked/collected are:

  • Session ID
    A temporary anonymous identifier that allows you to group together the page view records captured by the end point. The value will be different each time the user visits your site anew but will remain constant while the user is using your site.
  • Sequence #
    A temporary integer value that allows you to order the page view log records. The value will increment as the user navigates the site.
  • Client ID
    A semi-permanent anonymous identifier that allows you to group together multiple sessions from the same browser/client. NOTE: the value is specific to the browser/machine/user combination: a different logged in user on a machine will have a different client ID; if the same user uses multiple browsers (e.g. Chrome and Firefox), each browser will have a unique client ID.
  • Session count
    A semi-permanent anonymous identifier that allows you to analyze how user behavior changes over subsequent visits.

Getting the value of a cookie from the client request

The code for retrieving a cookie value from the client request is fairly straightforward. One needs to guard against the case where the cookie does not exist.
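For example (a sketch; the helper name is illustrative):

```csharp
// Returns the cookie's value, or null if the cookie does not exist.
private static string GetCookieValue(HttpRequestBase request, string name)
{
    HttpCookie cookie = request.Cookies[name];
    return cookie == null ? null : cookie.Value;
}
```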

Incrementing the sequence value

The "seq" cookie is simply a counter. It, along with the time of the request, which is logged, allows the analyst to study how users navigate a site. Along with the previous page, which is passed up as the "dr" parameter, the sequence value is useful for multi-tab browsing scenarios.

Incrementing the session count

A new session will not have a session ID. If this is the case, the session count needs to be incremented. The session count cookie should be semi-permanent.

Setting the client ID

The client ID should be semi-permanent and should only be set if there is not one already set.

Setting the session ID

The session ID is a temporary value. Its value will be cleared when the user closes the browser. Although it may be tempting to define a session timeout period, since any value chosen will be arbitrary, it is important that the data logged be agnostic and that any session timeout adjustments be done during analysis.
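The cookie logic described above can be sketched as follows ("seq" comes from the article; the other cookie names and the GetCookieValue helper, which returns null for a missing cookie, are illustrative):

```csharp
string sessionId = GetCookieValue(request, "sid");

// Increment the sequence counter.
int sequence;
int.TryParse(GetCookieValue(request, "seq") ?? "0", out sequence);
sequence++;

int sessionCount;
int.TryParse(GetCookieValue(request, "sc") ?? "0", out sessionCount);

// No session ID means a new session: create one and bump the count.
if (string.IsNullOrEmpty(sessionId))
{
    sessionId = Guid.NewGuid().ToString("N");
    sessionCount++;
}

// Only set the client ID when it is not already set.
string clientId = GetCookieValue(request, "cid") ?? Guid.NewGuid().ToString("N");
```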

Setting cookie values

To safeguard the cookie values and the user, it is important that the cookies be HTTP-only (preventing tampering by client script and browser plugins/addons) and that they be secure (i.e. HTTPS-only – your site should be HTTPS-only).

To make your pixel useful over all of your web assets, it is helpful to set the domain. Suppose you have domain names like the following:

  • www.example.com
  • blog.example.com
  • response.example.com
To have one pixel connect your users over all of these domains, simply set the domain of the cookie to "example.com".
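Writing a cookie with these attributes might look like the following sketch (the helper name and the two-year lifetime for "semi-permanent" cookies are illustrative):

```csharp
private static void SetCookie(HttpResponseBase response, string name,
                              string value, bool semiPermanent)
{
    var cookie = new HttpCookie(name, value)
    {
        HttpOnly = true,          // no client-script access
        Secure = true,            // HTTPS only
        Domain = "example.com"    // shared across subdomains
    };

    if (semiPermanent)
    {
        cookie.Expires = DateTime.UtcNow.AddYears(2);
    }
    // Otherwise no Expires: a session cookie cleared when the browser closes.

    response.Cookies.Set(cookie);
}
```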

Logging Values

Given the power of Map-Reduce technologies that are available in Hadoop, R, MongoDB, etc., it makes sense to store the initial data in JSON format.

Four additional pieces of information are added to the serialized data.

  • the time,
  • the user agent
    which is useful for distinguishing mobile from desktop sessions,
  • the page calling the pixel end point, and
  • the referring page that caused the page with the pixel code to be loaded

The code for serializing the values follows:
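A sketch using the built-in JavaScriptSerializer (field names and the method shape are illustrative):

```csharp
private static string SerializeClickstreamRecord(
    HttpRequestBase request, string clientId, string sessionId,
    int sequence, int sessionCount)
{
    var record = new Dictionary<string, object>
    {
        { "cid", clientId },
        { "sid", sessionId },
        { "seq", sequence },
        { "sc", sessionCount },
        { "time", DateTime.UtcNow.ToString("o") },   // the time
        { "ua", request.UserAgent },                 // the user agent
        { "dl", request.UrlReferrer == null          // page calling the pixel
                    ? null : request.UrlReferrer.ToString() },
        { "dr", request.QueryString["dr"] }          // referrer of that page
    };

    return new JavaScriptSerializer().Serialize(record);
}
```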

Full code with unit tests

The full source code for this article is available at https://github.com/stand-sure/Clickstream