Wednesday, January 23, 2013

Mocking an internal interface with Moq

The other day I was trying to write some unit tests for a public class with an internal constructor.  In order for the test project to get access to the internal constructor (which exists primarily for testing), I had to add the InternalsVisibleTo attribute to the AssemblyInfo.cs file of the class library under test so that it exposes its internals to the test class library, for example:

[assembly: InternalsVisibleTo("SomeApi.Test")]

This got me close.  The next issue was that the parameter of the internal constructor was an internal interface referencing a dependency of the class under test.  No big deal, I figured; I would just use Moq to mock out the dependency.  Instead, I got the following error when trying to run the test:

Test method SomeNamespace.SomeClass.SomeTestMethod threw exception:
Castle.DynamicProxy.Generators.GeneratorException: Type SomeNamespace.SomeInterface is not public. Can not create proxy for types that are not accessible.

Hmm, it seems Moq does not have access to the internals either.  Adding the following to the class library's AssemblyInfo.cs file solved the problem:

[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]

The next time I ran the test, Moq was able to access the internal interface and my test passed.
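
For reference, here is a minimal sketch of what the resulting test can look like.  The names SomeClass and ISomeDependency are placeholders standing in for the public class and its internal interface, not the project's actual types:

[TestMethod]
public void SomeTestMethod()
{
    // Moq can only proxy the internal interface because
    // DynamicProxyGenAssembly2 was granted access above.
    var dependency = new Mock<ISomeDependency>();

    // The internal constructor is visible to this test assembly
    // because of [assembly: InternalsVisibleTo("SomeApi.Test")].
    var sut = new SomeClass(dependency.Object);

    Assert.IsNotNull(sut);
}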

Sunday, November 4, 2012

Messing around with Generics

The other day I was looking through some code and thought of a way to refactor the code using generics.  I decided to create an example with the code in its current form and then see if I could make it work more generically. All the source can be found here (https://github.com/devdaves/genericconditions) on GitHub.

Here is the code example prior to using generics:

public ExampleResponse ValidateExampleWithoutGenerics(ExampleRequest request)
{
    ExampleResponse response = new ExampleResponse();

    Condition1(request, ref response);
    Condition2(request, ref response);
    Condition3(request, ref response);
    //could be many conditions...
    DoWork(request, ref response);

    return response;
}

The method accepts a request object and returns a response object.  There are several conditions that need to be validated prior to doing the work.  Each condition method and the DoWork method checks the status property on the response before doing anything.  This way, if Condition1 fails, the remaining methods are still executed, but they do nothing because each one checks the response status first.
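
To make that concrete, a typical condition method in this style might look like the sketch below.  The signature matches the calls above, but the validation rule and the Message property on Status are assumptions for illustration, not the project's actual code:

private void Condition1(ExampleRequest request, ref ExampleResponse response)
{
    // Bail out if an earlier condition already faulted the response.
    if (response.Status.StatusCode != 0)
    {
        return;
    }

    if (string.IsNullOrEmpty(request.Name)) // hypothetical validation rule
    {
        response.Status.StatusCode = 1;               // hypothetical fault code
        response.Status.Message = "Name is required"; // assumes Status has a Message property
    }
}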

What I am trying to solve is skipping the remaining conditions and the DoWork call once the response status is in a faulted state.  Now I know I could just add some conditionals to this method, but what's the fun in that?  So, time to refactor.

First, the response needs to implement an IResponse interface that guarantees every response has a Status property.  This will come in handy when we create the generic method a little later in the post.

public interface IResponse
{
    Status Status { get; set; }
}

Next, rewrite the Validate method using generics.  Note the list of actions.

public ExampleResponse ValidateExampleWithGenerics(ExampleRequest request)
{
    ExampleResponse response = new ExampleResponse();

    List<Action<ExampleResponse>> todo = new List<Action<ExampleResponse>>()
    {
        {(r) => {Condition4(request, ref r);} },
        {(r) => {Condition5(request, ref r);} },
        {(r) => {Condition6(request, ref r);} },
        {(r) => {DoWork2(request, ref r);} },
    };

    //DoToDo<ExampleResponse>(todo, ref response);
    todo.RunWithShortCircuit(ref response);

    return response;
}

Initially the list of actions was passed to a method in the same class (DoToDo, now commented out), which was later turned into an extension method.  Here is the extension method:

public static void RunWithShortCircuit<T>(this List<Action<T>> actions, ref T response)
    where T : IResponse
{
    foreach (var action in actions)
    {
        if (response.Status.StatusCode == 0)
        {
            action.Invoke(response);
        }
        else
        {
            // short circuits the rest of the actions from running
            break;
        }
    }
}

This extension method works on any list of actions whose type parameter implements the IResponse interface created earlier.  It loops through the actions, executes each one, and checks the status.  If the status is in a faulted state, it stops processing the rest of the list.  Since this is now generic, any method in the project that uses the request/response pattern should be able to use this extension method to execute its conditions and DoWork.

The cool thing here is that not only are we short circuiting the process, but each condition and DoWork method no longer needs to check the status, since that check is done in one place.  Keeping the status check in a single place makes the code much easier to maintain.
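
After the refactor, each condition method can drop its guard clause entirely, since RunWithShortCircuit does the check.  A minimal sketch, again with hypothetical validation logic:

private void Condition4(ExampleRequest request, ref ExampleResponse response)
{
    // No status check needed here; RunWithShortCircuit stops invoking
    // actions as soon as the response is in a faulted state.
    if (string.IsNullOrEmpty(request.Name)) // hypothetical validation rule
    {
        response.Status.StatusCode = 1;               // hypothetical fault code
        response.Status.Message = "Name is required"; // assumes Status has a Message property
    }
}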

I have to admit that I haven't spent a lot of time playing with generics like this, but from a maintenance point of view I can see it saving substantial time in a decent-sized project.  Plus, I really like the way it reads, although it does take some getting used to.

Wednesday, May 30, 2012

Moq: Mocking HttpContext in your MVC3 unit tests

Let's take the following action method on our Home controller.

public ActionResult TestAction()
{
    var idFromCookie = Request.Cookies["ID"].Value;
    var model = new TestActionViewModel() { Id = idFromCookie };
    return View("TestAction", model);
}

Unit testing this action method can be difficult because of its dependency on the Request object.  Luckily, the Request property here comes from the controller's HttpContext.  The HttpContext on the controller is an instance of HttpContextBase and can be mocked in unit tests.

Below is a class I currently use to mock out the HttpContext.  Note that it also mocks many other properties of the HttpContext; I plan on doing other blog posts using those different properties in the future.

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Moq;

public class MockContext
{
    public Mock<RequestContext> RoutingRequestContext { get; private set; }
    public Mock<HttpContextBase> Http { get; private set; }
    public Mock<HttpServerUtilityBase> Server { get; private set; }
    public Mock<HttpResponseBase> Response { get; private set; }
    public Mock<HttpRequestBase> Request { get; private set; }
    public Mock<HttpSessionStateBase> Session { get; private set; }
    public Mock<ActionExecutingContext> ActionExecuting { get; private set; }
    public HttpCookieCollection Cookies { get; private set; }

    public MockContext()
    {
        this.RoutingRequestContext = new Mock<RequestContext>(MockBehavior.Loose);
        this.ActionExecuting = new Mock<ActionExecutingContext>(MockBehavior.Loose);
        this.Http = new Mock<HttpContextBase>(MockBehavior.Loose);
        this.Server = new Mock<HttpServerUtilityBase>(MockBehavior.Loose);
        this.Response = new Mock<HttpResponseBase>(MockBehavior.Loose);
        this.Request = new Mock<HttpRequestBase>(MockBehavior.Loose);
        this.Session = new Mock<HttpSessionStateBase>(MockBehavior.Loose);
        this.Cookies = new HttpCookieCollection();

        this.RoutingRequestContext.SetupGet(c => c.HttpContext).Returns(this.Http.Object);
        this.ActionExecuting.SetupGet(c => c.HttpContext).Returns(this.Http.Object);
        this.Http.SetupGet(c => c.Request).Returns(this.Request.Object);
        this.Http.SetupGet(c => c.Response).Returns(this.Response.Object);
        this.Http.SetupGet(c => c.Server).Returns(this.Server.Object);
        this.Http.SetupGet(c => c.Session).Returns(this.Session.Object);
        this.Request.Setup(c => c.Cookies).Returns(Cookies);
    }

}

In the constructor I new up the mocks and wire up the dependencies between the objects you will need while working with the HttpContext.  Since the action above uses a cookie, note that there is a Cookies property mapped to the mocked request object.  So when Request.Cookies is called in the action method, it is actually looking at the Cookies collection defined in this class.

Here is a test that uses the MockContext object to test the action method above.

[TestMethod]
public void TestActionCookieValueReturnedInModel()
{
    //arrange
    var expectedValue = "TEST";
    MockContext mockContext = new MockContext();
    mockContext.Cookies.Add(new HttpCookie("ID", expectedValue));
    var homeController = new HomeController()
        {
            ControllerContext = new ControllerContext()
                {
                    HttpContext = mockContext.Http.Object
                }
        };

    //act
    var result = homeController.TestAction() as ViewResult;
    var model = result.ViewData.Model as TestActionViewModel;

    //assert
    Assert.AreEqual(expectedValue, model.Id);
}

Note that I new up an instance of MockContext and add the cookie to its cookie collection so the action method above will have access to it.  When creating the controller, I use an object initializer to set the controller context to a new ControllerContext that uses our MockContext's Http object.

Using this technique it would also be very easy to test what happens when the cookie is not in the cookies collection.
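
Here is a minimal sketch of that negative test.  As the action is currently written, Request.Cookies["ID"] returns null when the cookie is missing, so the assumption here is that the failure surfaces as a NullReferenceException:

[TestMethod]
[ExpectedException(typeof(NullReferenceException))]
public void TestActionMissingCookieThrows()
{
    //arrange - no cookie added this time
    MockContext mockContext = new MockContext();
    var homeController = new HomeController()
        {
            ControllerContext = new ControllerContext()
                {
                    HttpContext = mockContext.Http.Object
                }
        };

    //act - Request.Cookies["ID"] is null, so reading .Value throws
    homeController.TestAction();
}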

Monday, May 28, 2012

Moq: Adding Setups/Expectations works after providing the object to its dependent

First some background code.

Let's assume we have a contact repository interface like so:

public interface IContactRepository
{
    Contact Add(Contact contact);
    bool Exists(Contact contact);
}

Let's also assume we have a contact service that depends on the contact repository like so:

public class ContactService
{
    IContactRepository repository;

    public ContactService() : this(new ContactRepository())
    {
    }

    public ContactService(IContactRepository repository)
    {
        this.repository = repository;
    }

    public Contact Add(Contact contact)
    {
        if (!repository.Exists(contact))
        {
            return repository.Add(contact);    
        }
        return null;
    }
}

Now I used to write the tests like this:

[TestMethod]
public void AddContactContactDoesNotExistReturnsNewContact()
{
    var mockRepository = new Mock<IContactRepository>();
    mockRepository.Setup(x => x.Exists(It.IsAny<Contact>())).Returns(false);
    mockRepository.Setup(x => x.Add(It.IsAny<Contact>())).Returns(new Contact() { Id = 1 });
    var contactService = new ContactService(mockRepository.Object);

    var result = contactService.Add(new Contact() { Name = "Test" }) as Contact;

    Assert.IsNotNull(result, "Should have returned a contact object");
    Assert.AreEqual(1, result.Id);
}

Notice how I set up the mock repository expectations before passing the mock repository to the constructor of the contact service.  For some reason I always thought you had to define your expectations before handing the mock object to its dependents.  As it turns out, this is not required.  Look at the following code and notice how the setup of the expectations comes after the mock repository has been passed to the contact service.

[TestMethod]
public void AddContactContactDoesNotExistReturnsNewContact()
{
    var mockRepository = new Mock<IContactRepository>();
    var contactService = new ContactService(mockRepository.Object);
    mockRepository.Setup(x => x.Exists(It.IsAny<Contact>())).Returns(false);
    mockRepository.Setup(x => x.Add(It.IsAny<Contact>())).Returns(new Contact() { Id = 1 });
    
    var result = contactService.Add(new Contact() { Name = "Test" }) as Contact;

    Assert.IsNotNull(result, "Should have returned a contact object");
    Assert.AreEqual(1, result.Id);
}

Knowing that there is no required order between setting up expectations on your mock object and handing the mock object to its dependent can go a long way toward making your code more readable.

Thursday, August 18, 2011

Using the Flags attribute with an Enumeration

I was recently tasked with adding an attribute to an object.  This object persists in the database and I assumed that adding an attribute meant I would add a new property to the class, add a new column to the database and tie it all together.

Well, that wasn't the case.  The implementation already in place used an enumeration for the collection of possible attributes, and that enumeration allowed more than one value to be stored.  The enumeration was converted to an integer when it was stored in the database.  All I had to do to add the new attribute was add another value to the enumeration.

This was yet another epiphany about how something works, followed by the realization that I have already seen and used it in many places: using an enumeration of values as a bitmask.

Now the trick, if you want to call it that, is that each numeric value in the enumeration is double the preceding value (in other words, a power of two, aside from zero), and the enumeration has the Flags attribute.  Here is some example code:

[Flags]
public enum Colors : int
{
    None = 0,
    Red = 1,
    Blue = 2,
    Yellow = 4,
    Green = 8,
    White = 16
}

Since the values of the enumeration above are actually numeric, they can be represented by the following table:

Color    Number    32 Bit Definition
None     0         00000000000000000000000000000000
Red      1         00000000000000000000000000000001
Blue     2         00000000000000000000000000000010
Yellow   4         00000000000000000000000000000100
Green    8         00000000000000000000000000001000
White    16        00000000000000000000000000010000

Since the enumeration is of type int, it is stored as a 32-bit number, so each bit can represent one enumeration value and the enumeration can hold at most 32 flag items.  If I wanted more than 32 items, I could change the underlying type of the enumeration to long.  That would store the value as a 64-bit number, giving a capacity of 64 flag items.
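
If you did need more than 32 flags, the enumeration would simply declare long as its underlying type.  A minimal sketch with illustrative names (not from the original code):

[Flags]
public enum Permissions : long
{
    None    = 0,
    Read    = 1L << 0,   // 1
    Write   = 1L << 1,   // 2
    Delete  = 1L << 2,   // 4
    Publish = 1L << 33   // beyond the 32-bit range, only possible with a long-backed enum
}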

Here is some example code showing how to add and remove colors:

Colors myColors = Colors.None;

//lets add some colors
myColors = myColors | Colors.Red;
myColors = myColors | Colors.Green;
myColors = myColors | Colors.Blue;
myColors = myColors | Colors.White;

//lets remove a color
//note: ^ toggles the bit, so this removes White only because it is currently set;
//myColors = myColors & ~Colors.White; would clear it regardless of its current state
myColors = myColors ^ Colors.White;

The table below represents the value of the myColors variable at each addition and removal of a color.

Colors                                     Number          32 Bit Definition
None                                       0               00000000000000000000000000000000
Red (1)                                    1               00000000000000000000000000000001
Red (1), Green (8)                         1+8 = 9         00000000000000000000000000001001
Red (1), Green (8), Blue (2)               1+8+2 = 11      00000000000000000000000000001011
Red (1), Green (8), Blue (2), White (16)   1+8+2+16 = 27   00000000000000000000000000011011
Red (1), Green (8), Blue (2)               1+8+2 = 11      00000000000000000000000000001011

Notice the 32-bit representation of each value above: every bit that is ON is a 1 and every bit that is OFF is a 0.

So how would you figure out which enumeration items are ON without the computer?  All you need to know is the combined value and the values of the individual enumeration items.  Let's take the value 27, for example.  Repeatedly subtracting the largest item value that fits into the remaining number tells us which colors are ON:

27 – 16 (White) = 11
11 – 8 (Green) = 3
3 – 2 (Blue) = 1
1 – 1 (Red) = 0
Answer = White, Green, Blue and Red.
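
In code you would let the bitwise AND operator (or Enum.HasFlag, available since .NET 4) do this check for you.  A quick sketch using the Colors enumeration above:

Colors stored = (Colors)27;   // value loaded from the database

// Bitwise check: the White bit survives the AND only if it was set.
bool hasWhite = (stored & Colors.White) == Colors.White;

// Equivalent and a little more readable, at a small performance cost.
bool hasGreen = stored.HasFlag(Colors.Green);

Console.WriteLine(stored);    // prints "Red, Blue, Green, White" thanks to [Flags]
Console.WriteLine(hasWhite);  // True
Console.WriteLine(hasGreen);  // True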

The value of the myColors variable gets saved to the database as an integer.  I wondered how, or if, I could use SQL to figure out which items were ON or OFF.  As long as I know the value of the enumeration item I am looking for, I can use it to query the data.  The following SQL script shows how:

declare @colors as table (
    id int,
    name nvarchar(100) not null,
    colors int not null
)

insert @colors (id, name, colors)
values
(1, 'Red', 1),
(2, 'Red, Green', 9),
(3, 'Red, Green, Blue', 11),
(4, 'Red, Green, Blue, White', 27),
(5, 'Red, Green, Blue', 11)

--lets get all the rows that have white in them
--we know white is 16
select *
from @colors
where colors & 16 = 16

--lets get all the rows that have blue and green but not white
--we know blue = 2, green = 8 and white = 16
select *
from @colors
where colors & 2 = 2 and colors & 8 = 8 and colors & 16 = 0

My Thoughts

The main drawback of this technique is that you are limited in the number of items the enumeration can hold, depending on the underlying data type.  But if you know you will use fewer than the maximum number of items, this implementation works well.  Not having to add a new column to a database table every time you add an attribute sounds like a real win.

Monday, July 25, 2011

Finally found a cool use of a SQL Cross Join

Setup:
Let's say you have a list of contacts stored in a contacts table and a list of attributes stored in an attributes table.  Each contact can have one or more attributes, and each attribute can be used on one or more contacts.  To model this we need a many-to-many table, something that stores the id of the contact and the id of the attribute.  For example:

create table ContactsAttributes (
    ContactID int not null,
    AttributeID int not null
)

Now let's say we just added 2 new attributes (attribute ids 10 and 11) to the attributes table and we need to add them to 3 specific contacts (contact ids 1, 2, 3).  In this case it is easy enough to write out the insert statement like so:

insert ContactsAttributes (ContactID, AttributeID)
values
(1, 10),
(1, 11),
(2, 10),
(2, 11),
(3, 10),
(3, 11)

As you can see, this is pretty straightforward.  Now let's say we added 10 attributes and needed to apply all 10 of them to 20 specific contacts.  That would be 200 rows to write in the table constructor of the insert statement.  Using a cross join, as below, is considerably less code and way cooler.

DECLARE @contacts AS TABLE (id int)
DECLARE @attributes AS TABLE (id INT)

INSERT @contacts (id) VALUES
(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11)
,(12),(13),(14),(15),(16),(17),(18),(19),(20)

INSERT @attributes ( id ) VALUES
(10),(11),(12),(13),(14),(15)
,(16),(17),(18),(19),(20)

INSERT INTO ContactsAttributes
SELECT c.id, a.id
FROM @contacts c
CROSS JOIN @attributes a

The cross join returns every possible combination of the id column in the contacts table variable with the id column in the attributes table variable; this is what is known as a Cartesian product.  To preview the rows before inserting them, comment out or remove the "INSERT INTO ContactsAttributes" line.
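
As an aside, the same Cartesian product is easy to produce in C# with LINQ's SelectMany when the ids happen to be in memory.  This is just a sketch for comparison, not part of the SQL solution above:

using System.Linq;

var contactIds = Enumerable.Range(1, 20);     // contact ids 1 through 20
var attributeIds = Enumerable.Range(10, 11);  // attribute ids 10 through 20

// Cross join: every contact id paired with every attribute id (200 pairs).
var pairs = contactIds
    .SelectMany(c => attributeIds, (c, a) => new { ContactID = c, AttributeID = a })
    .ToList();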

Saturday, May 14, 2011

JavaScript: Get your console on…

Since I have been listening to both the “Official jQuery Podcast” and the “yayQuery podcast”, JavaScript and jQuery have been on my mind lately.  It doesn't hurt that some of the bugs I have fixed at work required mucking around in some jQuery.  Someone on one of those podcasts mentioned some things you can do with the console in both IE and Firefox (with Firebug).  Today I want to show some of the power of playing with the console.

I will be using IE 9 as my example browser today, but you can do these things in Firefox and Google Chrome as well.  First, open IE 9 and go to www.devdave.com.  Press F12 to open the IE developer tools (documentation).  You should see an explorer bar, usually across the bottom of the screen.

  1. In the HTML tab press the arrow button. (This will allow you to select an html element on the screen.)
  2. Move the mouse and click the div containing the random quotes displayed on the site.
  3. Note that the tree view representing the document object model is now highlighting the element you selected, and here you can see the class applied to it.  We are going to use that class, along with some jQuery, to make the div slide up without having to write any code in the page.

Now that we know the class assigned to the div, we can use it as a selector in jQuery.  Two tabs over from the HTML tab is a Console tab; clicking it opens a new part of the developer tools.

At the bottom of the screen you should see >>.  This is where you can type some JavaScript.  For the following to work, the page you are on needs to have already loaded jQuery.  Since this page has, let's go ahead and type

$(".tm-section")

into the console and press Enter.  The console displays a bunch of information about the object the selector matched.  If the selector you typed is correct, you should see that the length property returns a value of 1.  If we had mistyped the selector, the length would be 0.

This would be a great way to practice using different types of selectors.

Since we have the correct selector we can type this to make the selected object slide up.

$(".tm-section").slideUp("slow")

And then we can type this to make it slide down.

$(".tm-section").slideDown("slow")

 

So one line in which to write code might not be enough.  Let's say you wanted to write a function or two.  The console has a button to the far right of where you type commands: the chevron symbol next to the green play button.  Clicking it opens a text area where you can write multiple lines of code.  Write the following in the script area and click the Run Script button.

function toggle() {
    $(".tm-section").slideToggle("slow", function () {
        alert("done");
    });
}

toggle();

Pressing the Run Script button multiple times will toggle the slide and display an alert message when it's done.

As you can see, this technique can really help in debugging or interrogating JavaScript that is already running.  I have used it to debug asynchronous data paging issues and a Google map marker issue, or just to figure out the correct selector syntax for selecting one or more objects in the DOM.