In order to explore and fully appreciate Riak, you’re going to need to set up multiple nodes. Since not everyone has 3 extra boxes lying around, the Riak folks have made it easy to set up all of the nodes on a single box. I’ll walk you through the steps I took to set up a development Riak cluster on Ubuntu 12.04.

The first step is to get your dependencies installed:

sudo apt-get install build-essential libncurses5-dev openssl libssl-dev
sudo apt-get install erlang
sudo apt-get install git

Next, you’ll need to grab the riak source:

git clone git://github.com/basho/riak.git && cd riak

Before we build our cluster, there are a few things to change from the defaults. This will turn off authentication for the admin tool (I did say this was a development cluster ;)).

sed -i 's/{auth, userlist}/{auth, none}/g' ./rel/files/app.config

Next, let’s turn on Riak Control and Riak Search:

sed -i 's/{enabled, false}/{enabled, true}/g' ./rel/files/app.config

Now we’re ready to build a 3 node cluster from source:

make devrel DEVNODES=3 && cd dev

Next, start up all of the nodes and join them together:

find dev*/bin/riak -exec {} start \;
find dev[2-9]/bin/riak-admin -exec {} cluster join dev1@127.0.0.1 \;
dev1/bin/riak-admin cluster plan
dev1/bin/riak-admin cluster commit
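
If you want to double-check the membership before moving on, riak-admin can print it (the command is member-status on recent releases, member_status on some older ones), and all three nodes should show up:

dev1/bin/riak-admin member-status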

And now you have a running 3 node cluster listening on a bunch of weird ports. Let’s add haproxy to the mix to expose the cluster on the standard ports (8098 for HTTP and 8087 for protocol buffers) with round-robin distribution. First, install haproxy:

sudo apt-get install haproxy

Next, create a configuration file for haproxy (I’ll name mine dev.haproxy.conf) with the following contents:

#Mostly from OJ Reeves post: http://buffered.io/posts/webmachine-erlydtl-and-riak-part-2/
global
        maxconn 2048

defaults
        retries 3
        maxconn 1024
        timeout connect 3000

frontend riak_pb
        mode tcp
        bind *:8087
        default_backend riak_pb_cluster
        timeout client 1200000

backend riak_pb_cluster
        mode tcp
        balance roundrobin
        timeout server 1200000
        server riak1 127.0.0.1:10017 check
        server riak2 127.0.0.1:10027 check
        server riak3 127.0.0.1:10037 check

frontend riak_http
        bind *:8098
        mode http
        default_backend riak_http_cluster
        timeout client 1200000

backend riak_http_cluster
        mode http
        balance roundrobin
        timeout server 1200000
        option httpchk GET /ping
        server riak1 127.0.0.1:10018 check
        server riak2 127.0.0.1:10028 check
        server riak3 127.0.0.1:10038 check

Now run haproxy -f dev.haproxy.conf, and you have a cluster listening for connections. Fire up your web browser, point it at machine:8098/admin, and you should see the web interface, Riak Control, displaying your cluster status. That’s it! You’re ready to start learning about Riak.
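
You can also verify the HTTP side from the command line by hitting the same /ping endpoint the haproxy health check uses; Riak answers with OK:

curl http://localhost:8098/ping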


Source Diving

On February 18, 2013, in development, by Josh Bush

Lately I’ve been exploring the source code of some of the open source projects I use most. Reading source code is one of the best things you can do as a developer. It’s something I’ve always done. I can’t seem to ever finish a programming book, but I’ll spend hours digging around projects on GitHub. In my opinion, there is a lot more knowledge to be extracted from actual implementations that solve real-world problems.

Not everyone writes code like you do.
Being able to sift through someone else’s code and understand it while ignoring their style and naming conventions goes a long way towards eliminating the “my way is the best way” mentality a lot of us have. Reading through codebases on a regular basis will give you the ability to drop into a production crisis and fix something that your co-worker pushed right before he boarded his 2-week cruise.

There is no magic.
Using a framework doesn’t provide an excuse for you to ignore how something works. Let me say that again: you are responsible for knowing, at a high level, what the frameworks and tools you use are doing. No excuses. Many times documentation will fall short once you are beyond the “getting started” phase. When something doesn’t behave how you expected, just pull up the source and see for yourself.

This starts a never-ending series of posts where I find things in the code I read that confuse and amaze me. I’ll blog them here and share my ignorance with you.  

Come join me for the first of these endeavors: something neat I found in underscore.js.

 


Source Diving: _.each()

On February 18, 2013, in Uncategorized, by Josh Bush

The other day I was digging through several libraries’ implementations of “each”. I wanted to see how they determine whether to iterate over an array or an object, and how they use the native forEach array method when it’s available. The first implementation I looked at was in Underscore.js. It’s very concise and doesn’t take long to understand. There was one little nugget in there that caught my eye because I’d never seen it before. Line 79 from version 1.4.4 has an interesting way to check if the object has a length property:

if (obj.length === +obj.length)

Do you see that plus on the right side of the equality comparison? My eyes glazed over it at first too. I went to the ECMAScript Language Specification (ECMA-262, section 11.4.6) and it says:

Unary + Operator

The unary + operator converts its operand to Number type.

The production UnaryExpression : + UnaryExpression is evaluated as follows:

  1. Let expr be the result of evaluating UnaryExpression.
  2. Return ToNumber(GetValue(expr)).

This grabs the numeric version of the length property and compares it to the original value to make sure they are the same type and value. If that statement is true, then that means we have a length property that is numeric. I whipped up a quick jsfiddle to see how that statement behaves on other data types and all looks well.
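
If you don’t want to click through to the fiddle, here’s a rough sketch along the same lines (these particular test values are mine, not necessarily the fiddle’s):

function hasNumericLength(obj) {
    return obj.length === +obj.length;
}

hasNumericLength([1, 2, 3]);       // true: length is the number 3
hasNumericLength("abc");           // true: strings have a numeric length
hasNumericLength({ length: 5 });   // true: array-like object
hasNumericLength({ length: "5" }); // false: "5" === 5 fails strict equality
hasNumericLength({});              // false: undefined === NaN is false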

I was expecting to open up the source and see something like typeof obj.length === 'number', so this was a fun little distraction to learn about something I hadn’t encountered. It’s terse, though, and I’m not sure about its readability versus the typeof comparison. Underscore.js is pretty amazing, so I’m sure I’ll be posting more finds from its codebase.

 

F# Luhny Bin: Luhn Algorithm

On July 5, 2012, in development, samples, by Josh Bush

I was recently introduced to the Luhny Bin coding challenge at the Nashville Functional Programmers Group. I decided to tackle this challenge in F#. I write C# during the day, so it’s nice to already have some familiarity with the .NET Stack.

Today I’m going to show you my implementation of the Luhn algorithm. It’s a good place to start, since the core of this challenge is identifying strings of numbers as potential credit cards. Making the transition to functional programming is challenging. I’ve been object focused for years now, but I’m trying to get over it. ;)

My first crack at this ended up looking like F# as if I were writing C#. Not too pretty. It’s also a bit slow, taking a couple of seconds to run through a million credit card strings on my virtual machine.

open System //for Char.IsDigit

let double i x =
    match i % 2 with
    | 1 ->
        let dub = x * 2
        if dub > 9 then dub - 9 else dub
    | _ -> x

let luhn (x:string) =
    let n =
        x.ToCharArray()
        |> Array.filter Char.IsDigit
        |> Array.rev
        |> Array.map (fun c -> int c - int '0')
        |> Seq.mapi double
        |> Seq.sum
    n % 10 = 0

I worked through several iterations, slowly changing this function as I implemented the rest of the Luhny Bin challenge. Here’s where I landed by the end: a recursive function that uses pattern matching. It runs way faster, chugging through the same million credit card strings in half the time of my first iteration.

let luhn chars =
    let rec luhn even sum digits =
        match digits, even with
        | [], _ -> sum % 10 = 0
        | head :: tail, false when head > 4 -> luhn true (sum + head*2-9) tail
        | head :: tail, false -> luhn true (sum + head*2) tail
        | head :: tail, true -> luhn false (sum + head) tail         
    chars
        |> List.rev 
        |> List.map(fun (c:char) -> int c - int '0')
        |> luhn true 0

Line 1 defines the outer function that accepts a list of characters. This differs slightly from the first implementation where I took a string. I took this route because I had already decomposed the input down to a list by the time I needed to make the luhn call.

Line 2 is the beginning of the recursive function where I get to use pattern matching. Line 3 defines the tuple I’m going to pattern match against. Line 4 is the pattern where I need to stop: an empty list. The other patterns decompose the list into the head item and the rest of the list, “tail”. Then we check conditions on the head item combined with the “even” marker. Once we’ve matched a pattern, we call the function again, passing along the rest of the list and the new state.

Line 8 invokes the recursive function by reversing the list and converting the characters to integers.
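
As a quick sanity check (4111111111111111 is the classic test card number, which passes the Luhn check):

"4111111111111111" |> Seq.toList |> luhn //returns true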

This challenge ended up being a lot harder than I anticipated, but I made it through. It was pretty fun to implement, and it pushed me to think more functionally. By the time I finished, I was already thinking of new ways to implement this that might be faster. Over the next little bit I’ll share the rest of my implementation to finish the Luhny Bin challenge.


CouchDB Bucket Demo

On June 28, 2012, in development, by Josh Bush

This month I was given the opportunity to speak about CouchDB at Codestock in Knoxville, TN. This is a talk I’ve been able to give a few times, but this is the first time I’ve attempted to record it. I’ve pulled out a 10 minute clip where we walk through storing a fast food order in a relational database and then storing the same order in a document database. The video is rough because all I had was my pocket camcorder.

CouchDB Bucket Demo, Codestock 2012 from digitalbush on Vimeo.

Also, here are the slides for the whole talk.

The sample code for the note taking app and map/reduce are in this repository. The wikipedia demo can be found in this repository. I’m still trying to get my legs with this whole speaking thing, so your feedback is much appreciated. Codestock was a blast and I hope to go back next year!


Mass Assignment Vulnerability in ASP.NET MVC

On March 5, 2012, in development, by Josh Bush

By now you may have seen what happened to GitHub last night. In case you didn’t, let me bring you up to speed.

In a Ruby on Rails application, you can make a call to update your model directly from request parameters. Once you’ve loaded an ActiveRecord model into memory, you can poke its values by calling update_attributes and passing in the request parameters. This is bad because sometimes your model might have properties which you don’t want to be updated by just anyone. In a rails application, you can protect this by adding attr_accessible to your model and explicitly stating which properties can be updated via mass assignment.

I’m not going to pretend to be a Ruby dev and try to explain this with a Rails example. Github already linked to this fantastic post on the subject regarding Rails here. What I’m here to tell you is that this situation exists in ASP.NET MVC also. If you aren’t careful, you too could end up with a visit from Bender in the future.

So, let’s see this vulnerability in action on an ASP.NET MVC project.

First, let’s set up a model:

public class User {
    public int Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsAdmin { get; set; }
}

Then let’s scaffold out a controller to edit this user:

public class UserController : Controller {
    IUserRepository _userRepository;
    public UserController(IUserRepository userRepository) {
        _userRepository = userRepository;
    }

    public ActionResult Edit(int id) {
        var user = _userRepository.GetUserById(id);
        return View(user);
    }

    [HttpPost]
    public ActionResult Edit(int id, FormCollection collection) {
        try {
            var user = _userRepository.GetUserById(id);
            UpdateModel(user);
            _userRepository.SaveUser(user);
            return RedirectToAction("Index");
        } catch {
            return View();
        }
    }
}

Do you see that UpdateModel call in the POST to ‘/User/Edit’? Pay attention to that. It looks innocent enough, but we’ll see in a minute why it’s bad.

Next, we scaffold up a view and remove the checkbox that allows us to update the user’s Admin status. Once we’re done, it looks like this:

That works. We can ship it, right? Nope. Look what happens when we doctor up the URL by adding a query parameter (something like /User/Edit/42?IsAdmin=true):

I bet you can guess what’s about to happen now. Here, I’ll break execution right at the problematic line so you can watch the carnage:

Okay, you can see the current values to the right. We’ve loaded user #42 from the database and we’re about to update all of his values based on the incoming request. Step to the next line and we see this:

UH OH. That’s not good at all. User #42 is now an administrator. All it takes is an industrious user guessing the names of properties on your entities for you to get burned here.

So, what can we do to prevent it? One way would be to change the way we call UpdateModel. You can use the overload which allows you to pass in an array of properties you want to include. That looks like this:

UpdateModel(user,new[]{"UserName","FirstName","LastName"});

We’ve just created a whitelist of properties we will allow to be updated. That works, but it’s ugly and would become unmanageable for a large entity. Aesthetics aside, using this method isn’t secure by default; the developer has to actively do something to be safe. It should be the other way around: it should be hard to fail and easy to succeed. The Pit of Success is what we want.

So, what can we really do to prevent it? The approach I typically take is to model bind to an object with only the properties I’m willing to accept. After I’ve validated that the input is well formed, I use AutoMapper to apply that to my entities. There are other ways to achieve what we want too, but I don’t have time to enumerate all of the scenarios.
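
Here’s a minimal sketch of that approach. The EditUserViewModel type and the AutoMapper setup are illustrative, not lifted from a real project:

public class EditUserViewModel {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

[HttpPost]
public ActionResult Edit(int id, EditUserViewModel model) {
    if (!ModelState.IsValid) {
        return View(model);
    }
    var user = _userRepository.GetUserById(id);
    //Only properties that exist on the view model can reach the entity.
    //Assumes Mapper.CreateMap<EditUserViewModel, User>() ran at startup.
    Mapper.Map(model, user);
    _userRepository.SaveUser(user);
    return RedirectToAction("Index");
}

Since IsAdmin isn’t on the view model, no amount of URL doctoring can touch it.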

Wrapping up
The point of all of this is that you need to understand exactly what your framework is doing for you. Just because there is a gun available, it doesn’t mean you have to shoot it. Remember folks, frameworks don’t kill people; developers with frameworks kill people. Stay safe out there friends, it’s a crazy world.


Getting Started with Box2D Physics

On February 29, 2012, in samples, by Josh Bush

The past few days I’ve been messing around with the Box2D physics engine. For someone who spends his days buried in business applications, this has been a fun bit of learning. Box2D has been ported to a ton of languages and I found a nice port to javascript called box2dweb.

First, let’s look at a simple demo:


Click here for full jsFiddle

The first thing you’ll need to do is set up a world and a loop to update it. The basics look like this:

var world = new b2World(
   new b2Vec2(0, 10), //gravity vector (positive y points down)
   true               //allow bodies to sleep when they come to rest
);

setInterval(function(){
    world.Step(1 / 60, 10, 10); //timestep, velocity iterations, position iterations
    world.ClearForces();
},1000/60);

We just declared a world with some gravity. In the example above, we’re applying gravity downward, but you can have it push in any direction you’d like. Next we set up an interval to run 60 times per second. Inside of that, we tell the world to step 1/60th of a second while specifying the velocity and position iterations. Those iteration values can be tuned to meet your needs: lower yields better performance, higher yields better accuracy.

So, now you have a world with nothing in it. What fun is that? We’ll need to add some stuff and start crashing things into each other.

There are two types of objects you can create. Static objects, like the triangle above, are fixed in space; they are not affected by gravity or other objects. Dynamic objects are the fun ones that you get to move around. Our circles above are created and then nudged slightly to make them fall on either side of the triangle.

Triangle

var fixDef = new b2FixtureDef;
fixDef.shape = new b2PolygonShape;
fixDef.density = 1.0;
fixDef.friction = 0.5;
fixDef.restitution = .5;
         
fixDef.shape.SetAsArray([
    new b2Vec2(-1, 0),
    new b2Vec2(0, -1),
    new b2Vec2(1, 0)],3
);

var bodyDef = new b2BodyDef;
bodyDef.type = b2Body.b2_staticBody;    
bodyDef.position.Set(7, 7);
world.CreateBody(bodyDef).CreateFixture(fixDef);

Circle

//Same fixture density, friction and restitution from above.
fixDef.shape = new b2CircleShape(.5);
bodyDef.position.Set(7,0);
var body=world.CreateBody(bodyDef);
body.CreateFixture(fixDef);

I mentioned above that I’m nudging the circles. In order to push the shapes, we can use the ApplyImpulse method. It needs two parameters: a vector defining the force to be applied, and a point where it should be applied. Take a moment to go poke around in the fiddle and change the vector for the impulse. You can do some fun stuff like punch them straight up in the air. Go ahead, I’ll wait.
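
For example, a nudge like this (the numbers are just something to play with) pushes a body to the right from its center of mass:

body.ApplyImpulse(
    new b2Vec2(1, 0),     //impulse vector: push to the right
    body.GetWorldCenter() //apply at the center of mass so the body doesn't spin
);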

There is one last bit you’ll need to get your own samples going. All of the code we’ve done above describes the objects and their interactions. We still need a way to visualize it though. Luckily box2dweb has a debug drawing mode to render the objects on a canvas element. Here’s what you need to set it up:

var debugDraw = new b2DebugDraw();
debugDraw.SetSprite(document.getElementById("playground").getContext("2d"));
debugDraw.SetDrawScale(20.0);
debugDraw.SetFillAlpha(0.5);
debugDraw.SetLineThickness(1.0);
debugDraw.SetFlags(b2DebugDraw.e_shapeBit);
world.SetDebugDraw(debugDraw);

With that, all that is left is to call world.DrawDebugData() right after you step. Now we can see our demolition derby in action!

I think that covers the basics. There are a lot of fun things you can do with the sample. Try changing the restitution (bounciness), the force of gravity, the direction of gravity, which direction you “nudge” the falling circles… heck, just start changing stuff and watch. It’s way more fun than it should be.
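
For instance, here are two one-line tweaks (the values are arbitrary) that make things entertaining:

world.SetGravity(new b2Vec2(0, -10)); //flip gravity so everything falls up
fixDef.restitution = 0.9;             //make the shapes extra bouncy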


Knockout.js Observable Extensions

On December 27, 2011, in samples, by Josh Bush

This started out as a post about how to implement the new extender feature in Knockout.js 2.0. I wanted to see how well that would improve the experience of a money observable I created several months back. Once I had it implemented though, I was a bit disappointed. My extender doesn’t have any arguments, but the knockout observable extend call only accepts a hash in the form of {extenderName:extenderOptions}. I ended up with a call that looked like this: var cash=ko.observable(5.23).extend({money:null});

That didn’t leave a very good taste in my mouth. So, I pulled down knockout and set out to change the way the extenders were implemented. I’ve grown fond of how jQuery chaining worked, so why not bring that to Knockout’s observables? Luckily Ryan Niemeyer was there to save me from myself and pointed out that I could just extend ko.subscribable.fn to achieve the desired effect.

I’m happy with the outcome. Let’s explore the strategy a bit. Before I get in too deep, here’s the end result:


Click here for full jsFiddle

You may be asking yourself, “What’s so great about this?” This is basically the same as my previous sample with one exception: this implementation attaches directly to the subscribable type that KO provides. You might not have seen this type unless you’ve spent some time digging around the knockout.js source. It serves as a base for observables, observableArrays and computed observables (formerly dependentObservables).

Here’s the code that provides the money formatting:

(function(){
    var format = function(value) {
        var toks = value.toFixed(2).replace('-', '').split('.');
        var display = '$' + $.map(toks[0].split('').reverse(), function(elm, i) {
            return [(i % 3 === 0 && i > 0 ? ',' : ''), elm];
        }).reverse().join('') + '.' + toks[1];

        return value < 0 ? '(' + display + ')' : display;
    };

    ko.subscribable.fn.money = function() {
        var target = this;
    
        var writeTarget = function(value) {
            target(parseFloat(value.replace(/[^0-9.-]/g, '')));
        };
    
        var result = ko.computed({
            read: function() {
                return target();
            },
            write: writeTarget
        });

        result.formatted = ko.computed({
            read: function() {
                return format(target());
            },
            write: writeTarget
        });

        return result;
    };
})();

Breakdown
Line 11 is where we start. By extending the subscribable.fn object, we are adding a property to each and every subscribable object that KO creates for us. This gives us the ability to chain observables together, as long as we return an observable from our method (line 32).

On line 12 we see that 'this' references the observable we're extending. I like this because there are no special method signatures we need to implement. Here I'm just grabbing my own reference to this in a variable named target.

Line 18 is where this starts to get a little interesting. I'm creating a writable computed observable that will return the value from the base observable when read. When it gets written to, it will sanitize the input and then write that to the base observable. This is the observable we return for public consumption (line 32).

Line 25 is where the formatting comes into play. To the observable we're returning, we add another observable as a property named 'formatted'. This is what we'll bind to whenever we want to see a pretty version of our value. It's another read/write computed observable like the one above: when the property is read, it passes the base observable's value through a formatter; the write is the same as the base observable's.

Use It

var viewModel = {
    Cash: ko.observable(-1234.56).money(),
    Check: ko.observable(2000).money(),
    showJSON: function() {
        alert(ko.toJSON(viewModel));
    }
};

viewModel.Total = ko.computed(function() {
    return this.Cash() + this.Check();
}, viewModel).money();
ko.applyBindings(viewModel);

On lines 2, 3, and 11 you can see where I've used the observable extension I created above. The cool thing about this technique is that we don't care what kind of observable we're extending; it just works.

The showJSON function on line 4 is what gets fired when we click the "Show View Model JSON" button in the example above. Click it and you will see that our JSON serialization is clean. This is because the base observable we return is the unformatted version (no dollar signs, commas, or parentheses).
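
With the values above, that alert shows something like this (the Total may carry a little floating point noise):

{"Cash":-1234.56,"Check":2000,"Total":765.44}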

The Payoff

<div class='ui-widget-content'>
    <p>
        <label>How much in Cash?</label>
        <input data-bind="value:Cash.formatted,css:{negative:Cash()<0}" />
    </p>
    <p>
        <label>How much in Checks?</label>
        <input data-bind="value:Check.formatted,css:{negative:Check()<0}" />
    </p>    
    <p>
        <label>Total:</label>
        <span data-bind="text:Total.formatted,css:{negative:Total()<0}"></span>
    </p>   
    <p>
        <button data-bind="click:showJSON">Show View Model JSON</button>
    </p>
</div>

On lines 4 and 8 we've bound the inputs' values to the formatted version of the extended observables. Line 12 has the text of a span bound to the formatted version of the computed observable.

I've rehashed this example 3 times now, but I'm happiest with this implementation. Extending *.fn.* isn't documented anywhere I saw, but maybe it should be. ;) Maybe I should RTFM, it's clearly documented here. This chaining technique will be familiar to anyone who has used jQuery. What do you think about this technique?


Manage Your Dependencies with Rake and NuGet

On December 14, 2011, in samples, by Josh Bush

Last week I blogged about how to perform some basic build tasks in your .NET project with Rake and Albacore. There was one bit about managing dependencies I left off though because I thought it warranted its own post. For the projects I’ve been working on lately, we’ve managed to keep our source repository light and nimble by not checking in binaries for all of the dependencies.

NuGet 1.6 came out this week, and this functionality is baked in. You can check out the NuGet way in the documentation. The bummer is that you have to enable “Package Restore” for each project in your solution. You also end up with multiple packages.config files to maintain, one per project. Yes, you can manage it all through the GUI or the package manager console for your projects, but I want it all in one place. I also like not having to do anything on a per-project basis other than standard references.

After several iterations on what Derek Greer started, I’ve ended up with the solution below. Dependencies are declared in the same packages.config format that nuget uses, so you can take something you’ve already created and centralize it. We have one build step to refresh our dependencies and it looks like this:

require 'rexml/document'
TOOLS_PATH = File.expand_path("tools")
LIB_PATH = File.expand_path("lib")

FEEDS = [
	#Your internal repo can go here
	"http://go.microsoft.com/fwlink/?LinkID=206669"
]

task :dependencies do
	file = File.new("packages.config")
	doc = REXML::Document.new(file)
	doc.elements.each("packages/package") do |elm|
		package=elm.attributes["id"]
		version=elm.attributes["version"]

		packagePath="#{LIB_PATH}/#{package}"
		versionInfo="#{packagePath}/version.info"
		currentVersion=IO.read(versionInfo) if File.exists?(versionInfo)
		packageExists = File.directory?(packagePath)
		
		if(!(version or packageExists) or currentVersion!= version) then
			feedsArg = FEEDS.map{ |x| "-Source " + x }.join (' ')
			versionArg = "-Version #{version}" if version
			sh "\"#{TOOLS_PATH}/nuget/nuget.exe\" Install #{package} #{versionArg} -o \"#{LIB_PATH}\" #{feedsArg} -ExcludeVersion" do |ok,results|
				File.open(versionInfo,'w'){|f| f.write(version)} if ok
			end
		end
	end
end
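
For reference, the packages.config this task reads is the same format NuGet itself uses; a minimal example (the package entries here are placeholders) looks like this:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="NHibernate" version="3.2.0.4000" />
  <package id="Machine.Specifications" />
</packages>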

There’s a little bit of code there, but we’re getting some good benefits from this one task.

Control over where our dependencies go.
I’m not a big fan of the packages/ folder that nuget uses by default. You may be able to change this in the GUI somewhere, but I haven’t seen it yet. Yes, I’m aware that this is trivial, but I got used to storing my dependencies in lib/ and I’m okay with keeping that. :) Every team has their own conventions they like to follow and it’s nice to not have to change those just because you want to adopt a new tool.

No weird version number suffixes on our folders.
The default convention nuget uses is to store packages under a folder named {name}.{version}. That’s cool until you need to update your dependency to a new version. When you do, you (or your tooling) will have to update the reference paths in all of your *.csproj files to accommodate the new path. I would prefer to store it in a folder with just the name of the package. Keep in mind, this removes the ability to run multiple versions of the same library for different projects within a solution. That hasn’t come up on my projects yet, though.

No need to keep tabs on what dependencies our dependency has.
I’m hoping this issue will change one day. As it stands right now (NuGet 1.6), if I have a single entry in my packages.config like so: <package id="NHibernate" version="3.2.0.4000"/>, then calling $> nuget.exe install packages.config will not get NHibernate’s dependency ‘Iesi.Collections’. It turns out, though, that calling nuget like this: $> nuget.exe install NHibernate -Version 3.2.0.4000 will get that dependency for us, so that’s exactly how our rake script does it.

I feel like the ruby syntax reads fairly easily even if you aren’t familiar with the language. Still, I think it would be beneficial to add a little commentary.

Line 5 is where we define our source(s) for nuget packages. At work we’re using a file share to cache packages and then falling back to the default source when needed.

Lines 11 and 12 are where we load up the packages.config xml file using the XML parser that ships with a default Ruby install. From my reading, there are better gems to accomplish this faster, but this is a really tiny XML file we’re dealing with.

Line 13 selects each package node and iterates over it. The next two lines just pick out the id and version attributes into variables. On lines 19 and 20 we read in the version file if it exists and also check if the package directory exists. We use all of that on line 22 to see if we need to restore this package.

If we’re all systems go for NuGet launch, then line 23 turns the array of feeds from line 5 into ‘-Source’ arguments for nuget.exe. Line 24 creates a version argument for nuget.exe if we have one. Finally, line 25 shells out to nuget.exe and assembles all of the command line arguments it needs to do the job. When we get our package, we poke (line 26) a version.info file to track the version we’ve downloaded for future runs.

Wrapping Up
That’s it. I almost didn’t write this post since NuGet 1.6 supports this scenario out of the box. I still feel like it’s worthwhile to have this as part of our rakefile if for no other reason than to manage my packages from a single place. What do you think? Please let me know if you see anywhere I could improve the process.


If Rake is a gateway drug to Ruby, then Derick Bailey is your dealer. He’s created a project named Albacore which makes building your .NET projects stupid easy with Rake. Doing anything in angle brackets for msbuild was painful for me. I write code for a living, so it just makes sense to write code to build my stuff.

Lately I’ve been doing some work with our builds and TeamCity. A coworker pointed me to Rake and next I discovered Albacore. I just wanted to take a moment to show you how simple it is to set up a build that compiles your code, runs your tests and assembles the output.

require 'albacore'

PRODUCT_NAME = "Autofac.Settings"
BUILD_PATH = File.expand_path("build")
TOOLS_PATH = File.expand_path("tools")
LIB_PATH = File.expand_path("lib")

configuration = ENV['Configuration'] || "Debug"

task :default => :all

task :all => [:clean,:dependencies,:build,:specs,:copy]

task :clean do
	rmtree BUILD_PATH
end

task :dependencies do
	#future post. ;) 
end

msbuild :build=>[:dependencies] do |msb|
	msb.properties :configuration => configuration
	msb.targets :Clean, :Build
	msb.verbosity = "minimal"
	msb.solution = "#{PRODUCT_NAME}.sln"
end

mspec :specs => [:build] do |mspec|
	mspec.command = "lib/Machine.Specifications/tools/mspec-clr4.exe"
	mspec.assemblies Dir.glob('specs/**/*Specs.dll')
end

task :copy => [:specs] do
	Dir.glob("src/**/*.csproj") do |proj|
		name=File.basename(proj,".csproj")
		puts "Copying output for #{name}"
		src=File.dirname(proj)
		dest = "#{BUILD_PATH}/#{name}/"
		mkdir_p(dest)
		cp_r("#{src}/bin/#{configuration}/.",dest)
	end
end

:default
So, let’s start from the top. Line 10 defines a default task. This is what will get called when you just call rake without any arguments from the command line.

:clean
Line 14 defines a task which just nukes the build output directory. This makes sure we don’t accidentally leave artifacts around from a previous build.

:build
Line 22 is my first albacore task. This is the task where I’m compiling my code. Line 23 would be ‘Debug’ or ‘Release’ if you’re using the default build configurations. The line after is where I tell it to clean the build output and then Build. Point it at a solution file and you’re good to go. Easy enough.

:specs
Line 29 is another albacore task to run my Machine.Specifications based tests. Tell it where mspec lives and what assemblies contain your tests. Done.

:copy
Line 34 is a simple file copy task to assemble the build output from src/ and copy it to the build folder. It finds all of the project files, goes to bin/{config} to grab the output files, and moves them to a folder with the name of the project.
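
To kick the whole thing off, rake by itself runs the default task. Rake also treats VAR=value arguments as environment variables, so the Configuration variable read on line 8 can be switched right from the command line:

rake                        # full Debug pipeline: clean, build, specs, copy
rake Configuration=Release  # same pipeline against the Release configuration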

That’s about it. Thanks to Derek Greer for getting me started with Rake. I was able to look at his sample Rakefile and start hacking away. Within a few minutes I had my own rakefile running with albacore tasks. Ruby is pretty straightforward and fun. Playing with Ruby via Rake just makes me want to write more Ruby.

Since I’m a Ruby n00b, I’m sure my Ruby is less than perfect. If you have some suggestions for me to make my code suck less, please leave a comment.
