All posts by lloydholman

Sorting the Desired State Configuration (DSC) “Running scripts is disabled on this system” error

Ever come across the following error when executing a Desired State Configuration (DSC) configuration against a target machine?


The error is one often associated with misconfigured execution policies, so it’s pretty generic:

C:\Windows\system32\WindowsPowerShell\v1.0\Modules\WebAdministration\WebAdministrationAliases.ps1
cannot be loaded because running scripts is disabled on this system. For more information, see
about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
    + CategoryInfo          : SecurityError: (:) [], CimException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
    + PSComputerName        : computer_name

Indeed, when I checked the execution policy using Get-ExecutionPolicy, all seemed fine, and a bit of head scratching ensued.

PS C:\> Get-ExecutionPolicy
Unrestricted

The problem

Ignore for the moment that this is squealing about a specific PowerShell script, ‘WebAdministration\WebAdministrationAliases.ps1’; this error can occur for any DSC configuration that references a DSC resource that in turn executes an external PowerShell script. In this particular case I was using the xWebAdministration module from the PowerShell Desired State Configuration Resource Kit, which effectively wraps the Web Server (IIS) administration cmdlets in Windows PowerShell to provide DSC-style configuration of IIS. That’s out of scope for this post, but I thought it best to set the context.

This issue seems to manifest itself when applying a DSC configuration using Start-DscConfiguration, either to the local machine or to a remote target machine by specifying the -ComputerName parameter of Start-DscConfiguration, i.e. like so:

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

or

PS C:\> Start-DscConfiguration .\Stop_AppPool -ComputerName remote_server_name -Wait -Verbose

Note: I’m seeing this on Windows 7 and Windows Server 2008 R2 in a managed domain.
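For context, the Stop_AppPool configuration itself isn’t shown in this post. As a minimal sketch of the kind of configuration being applied (the application pool name here is hypothetical, and the xWebAdministration module is assumed to be installed):

Configuration Stop_AppPool
{
    Import-DscResource -ModuleName xWebAdministration

    Node 'localhost'
    {
        # Hypothetical application pool name, for illustration only
        xWebAppPool StopPool
        {
            Name   = 'MyAppPool'
            Ensure = 'Present'
            State  = 'Stopped'
        }
    }
}

# Compiling the configuration produces .\Stop_AppPool\localhost.mof,
# which Start-DscConfiguration then applies.
Stop_AppPool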

As I mentioned above, simply running Get-ExecutionPolicy on the local and remote machines gave me the expected outcome, i.e. Unrestricted (RemoteSigned will also suffice). That said, if we delve a little deeper into the scope of execution policies we can indeed get a little more detail.

PS C:\> Get-ExecutionPolicy -List | Format-List
Scope           : MachinePolicy
ExecutionPolicy : Undefined
Scope           : UserPolicy
ExecutionPolicy : Unrestricted
Scope           : Process
ExecutionPolicy : Undefined
Scope           : CurrentUser
ExecutionPolicy : Undefined
Scope           : LocalMachine
ExecutionPolicy : Undefined

Note: I’ve piped to Format-List just to make web formatting a little easier.

The scopes we’re interested in here are UserPolicy and LocalMachine, and this is where the problem lies: although my UserPolicy (as controlled by Active Directory Group Policy) is Unrestricted, the LocalMachine scoped policy is Undefined. This is important in the DSC world.

A quick reminder of how the DSC Local Configuration Manager (LCM) works

From the official TechNet article: “Local Configuration Manager is the Windows PowerShell Desired State Configuration (DSC) engine. It runs on all target nodes, and it is responsible for calling the configuration resources that are included in a DSC configuration script.”

Furthermore, the LCM runs under the security context of Local System; this is by design, to limit the scope of DSC to the local machine. I’m unable to find an official Microsoft statement on this, but it is widely known and discussed in more detail here: http://blogs.citrix.com/2014/09/18/what-is-desired-state-configuration/

In Push mode, DSC operation can be simplified and thought of as in figure 1 below, taken from the Windows PowerShell Blog – Push and Pull Configuration Modes.


Figure 1. DSC Push mode.

So, to summarise: the DSC LCM runs in the Local System security context, which suggests the LocalMachine scoped execution policy needs to be set to either RemoteSigned or Unrestricted. Let’s try that.

The solution

Assuming we’re working on a single machine, we can run the following

PS C:\> Get-ExecutionPolicy -List | Format-List
Scope           : MachinePolicy
ExecutionPolicy : Undefined
Scope           : UserPolicy
ExecutionPolicy : Unrestricted
Scope           : Process
ExecutionPolicy : Undefined
Scope           : CurrentUser
ExecutionPolicy : Undefined
Scope           : LocalMachine
ExecutionPolicy : Undefined

 

Running our DSC configuration as follows.

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

We get the error we’ve been discussing.

C:\Windows\system32\WindowsPowerShell\v1.0\Modules\WebAdministration\WebAdministrationAliases.ps1
cannot be loaded because running scripts is disabled on this system. For more information, see
about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
    + CategoryInfo          : SecurityError: (:) [], CimException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
    + PSComputerName        : computer_name

Now, let’s set that pesky LocalMachine execution policy and see what happens.

PS C:\> Set-ExecutionPolicy RemoteSigned -Scope LocalMachine -Force

Note: You may receive the following error when attempting this on a domain-joined machine; it explains itself well and we can safely ignore it.

Set-ExecutionPolicy : Windows PowerShell updated your execution policy successfully, but the setting is overridden by
a policy defined at a more specific scope.  Due to the override, your shell will retain its current effective
execution policy of Unrestricted. Type "Get-ExecutionPolicy -List" to view your execution policy settings. For more
information please see "Get-Help Set-ExecutionPolicy".
At line:1 char:1
+ Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : PermissionDenied: (:) [Set-ExecutionPolicy], SecurityException
    + FullyQualifiedErrorId : ExecutionPolicyOverride,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

Checking the execution policies again, we now get:

PS C:\> Get-ExecutionPolicy -List | Format-List
Scope           : MachinePolicy
ExecutionPolicy : Undefined
Scope           : UserPolicy
ExecutionPolicy : Unrestricted
Scope           : Process
ExecutionPolicy : Undefined
Scope           : CurrentUser
ExecutionPolicy : Undefined
Scope           : LocalMachine
ExecutionPolicy : RemoteSigned

Running our DSC configuration again, like so.

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

And success.

VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 2.698 seconds

A little bit more, validating the fix

We can indeed roll this back and prove that setting the LocalMachine execution policy back to Undefined causes the error again; however, there are a few hoops to jump through to validate that. Here goes.

Checking those execution policies again, we still have the correct value for LocalMachine.

PS C:\> Get-ExecutionPolicy -List | Format-List
Scope           : MachinePolicy
ExecutionPolicy : Undefined
Scope           : UserPolicy
ExecutionPolicy : Unrestricted
Scope           : Process
ExecutionPolicy : Undefined
Scope           : CurrentUser
ExecutionPolicy : Undefined
Scope           : LocalMachine
ExecutionPolicy : RemoteSigned

Running our DSC configuration again, like so.

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

We get success, as expected.

VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 2.698 seconds

Setting the LocalMachine scoped execution policy back to Undefined like so

PS C:\> Set-ExecutionPolicy Undefined -Scope LocalMachine -Force

Yields what we expect, all good so far.

PS C:\> Get-ExecutionPolicy -List | Format-List
Scope           : MachinePolicy
ExecutionPolicy : Undefined
Scope           : UserPolicy
ExecutionPolicy : Unrestricted
Scope           : Process
ExecutionPolicy : Undefined
Scope           : CurrentUser
ExecutionPolicy : Undefined
Scope           : LocalMachine
ExecutionPolicy : Undefined

Running our DSC configuration again.

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 2.698 seconds

Oh, we still get success, not expected!

OK, so I believe we have a DSC engine issue. See ‘My resources won’t update: How to reset the cache’ in this post for more details about restarting the process that hosts the DSC engine. For now, running the following gets me a process ID (PID):

PS C:\> Get-WmiObject msft_providers | Where-Object {$_.Provider -like 'dsccore'} | Select-Object HostProcessIdentifier, Provider

HostProcessIdentifier Provider
--------------------- --------
                 4324 dsccore

And then I can forcefully kill it.

PS C:\> Get-Process -Id 4324 | Stop-Process
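The two steps can also be combined into a single pipeline, so the PID doesn’t need to be hard-coded; a small sketch:

# Find the WMI provider host that is caching the DSC engine and kill it,
# forcing the LCM to be re-hosted (and re-read the execution policy) next run.
Get-WmiObject msft_providers |
    Where-Object { $_.Provider -like 'dsccore' } |
    ForEach-Object { Stop-Process -Id $_.HostProcessIdentifier -Force }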

Running our DSC configuration again.

PS C:\> Start-DscConfiguration .\Stop_AppPool -Wait -Verbose

We finally get the error we’ve been discussing back again, as expected.

C:\Windows\system32\WindowsPowerShell\v1.0\Modules\WebAdministration\WebAdministrationAliases.ps1
cannot be loaded because running scripts is disabled on this system. For more information, see
about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
    + CategoryInfo          : SecurityError: (:) [], CimException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
    + PSComputerName        : computer_name

Run the same build PROCESS everywhere

For the purpose of this post, ‘build’ refers to the commit phase of a common deployment pipeline, e.g. versioning, compilation (I often work in .NET environments), unit testing and packaging for deployment later on.

Most people that have worked with me have probably been subjected to the bee in my bonnet: I’m immediately sceptical of a build process if I *cannot* run it locally on a development machine. I believe there are a number of benefits to being able to do this.

  • Fail earlier. Catch more failed builds earlier on by changing fewer variables, i.e. don’t compile and build in a completely different way on your Continuous Integration (CI) server than your developers do on their development machines.
  • Own a debuggable build process. Building software is tricky, let me (or anyone) debug that locally, so we don’t have to constantly break the CI builds to see things fail.
  • Make the build process a first class citizen. Yes, let’s put the build process under version control and treat it as code; I care about changes and versions… A LOT. Not to mention I want an easy way to clone that build process into as many applications and branches as I want.
  • Improve quality. If we treat the build process as code, then we can also test it as part of its own pipeline.
  • Decouple yourself from the CI server vendor. I love TeamCity, have a long-running tempestuous relationship with Team Foundation Server (TFS), have had a few flings with Jenkins and chatted up various other CI and build servers/services, including but not limited to http://www.appveyor.com/, https://travis-ci.org/ and https://appharbor.com/. The one thing they all have in common (with the notable exception of https://appharbor.com/) is they give you an inch, but allow you to take a mile. It is incredibly easy to use their out-of-the-box and add-on tasks, runners and plug-ins, however this means it is almost impossible to move to another CI server technology or run a build in its entirety locally.

 

I don’t like re-inventing the wheel on every project, so I have a pattern for the above. It’s called OneBuild and I’ve now made it open source on GitHub and available as a NuGet package; here’s a summary:

OneBuild

Tech disclaimer: this is very Windows/.NET focused; we’re using PowerShell and a few selective open source libraries, along with a few necessities to allow us to build .NET projects, not least MSBuild. That said, the concept is transferable across technology stacks.

Encapsulated build logic

Build logic is encapsulated in PowerShell script modules, for example:

[Image: OneBuild’s PowerShell script modules]

Those modules are loosely covered with Pester unit tests, like so:

[Image: Pester unit tests covering the OneBuild modules]
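To give a flavour of the style, here is a minimal sketch of such a test; the module and function names are hypothetical rather than OneBuild’s actual ones.

# A minimal Pester (v3 era) test sketch; names are illustrative only.
Import-Module "$PSScriptRoot\New-CompiledSolution.psm1"

Describe "New-CompiledSolution" {
    Context "When no solution (.sln) file can be found" {
        It "Throws a terminating error" {
            { New-CompiledSolution -Path "TestDrive:\" } | Should Throw
        }
    }
}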

Task based

Orchestration of OneBuild’s build and the loading of modules is handled by an Invoke-Build script, giving OneBuild a lightweight, task-oriented build script. Invoke-Build has some thorough documentation about how it works and its concepts, for those interested in the detail.

OneBuild currently defines and executes the following top-level tasks for each commit build; for an up-to-date list see the OneBuild site. A sketch of a task script in this style follows the task list.

Invoke-Commit: The default OneBuild task. Invoke-Commit is the entry point to, and initiates, the complete commit build.

Invoke-HardcoreClean: Aggressively and recursively deletes all /obj and /bin folders from the build path, as well as the \BuildOutput folder.

Set-VersionNumber: Sets the consistent build number of the form [major].[minor].[buildCounter].[revision]. This task has a few dependent tasks that either read from or create the OneBuild VersionNumber.xml file.

Invoke-Compile: Cleans and rebuilds a Visual Studio solution file (identified by convention) to generate compiled .NET assemblies.

Invoke-UnitTests: Executes all unit tests (currently only supports NUnit) for compiled .NET assemblies matching a defined naming convention, outputting results to an XML file.

New-Packages: Generates new NuGet ([packageName].[version].nupkg) and optional symbols ([packageName].[version].symbols.nupkg) package(s) by parsing all .nuspec files found in the solution root folder.
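To illustrate the shape of such a script, here is a simplified Invoke-Build sketch; the task bodies are illustrative placeholders, not OneBuild’s real implementation.

# A simplified sketch of an Invoke-Build task script.
task Invoke-HardcoreClean {
    # Aggressively delete all bin and obj folders below the current path.
    Get-ChildItem -Path . -Include 'bin','obj' -Recurse -Directory |
        Remove-Item -Recurse -Force
}

task Invoke-Compile Invoke-HardcoreClean, {
    # Build the first .sln found, by convention, in Release mode.
    $solution = Get-ChildItem -Path . -Filter '*.sln' | Select-Object -First 1
    exec { msbuild $solution.FullName /t:Rebuild /p:Configuration=Release }
}

# The default entry point, chaining the rest of the commit build.
task Invoke-Commit Invoke-Compile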

Dogfooding


OneBuild bootstraps itself, i.e. it is responsible for versioning, packaging and publishing itself to nuget.org.

Convention over configuration

OneBuild is heavily convention based; a sketch of the expected repository layout follows the list:

  • Will attempt to build the first Visual Studio solution (.sln) file it finds (during task Invoke-Compile).
  • Intelligently versions locally in the same way it does on the CI server (during task Set-VersionNumber).
  • Will run unit test projects that match a basic naming convention (currently supports NUnit) and are compiled into a ‘\bin’ folder (during task Invoke-UnitTests).
  • Will attempt to create NuGet packages from any ‘.nuspec’ files found sitting next to the Visual Studio solution file (during task New-Packages).
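A minimal repository layout satisfying these conventions might look like the following (all names hypothetical):

MySolution\
    MySolution.sln        <- the first .sln found is built by Invoke-Compile
    MyPackage.nuspec      <- picked up by New-Packages
    src\MyProject\
    src\MyProject.Tests\  <- matched by the unit test naming convention
    OneBuild.bat          <- the commit build entry point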

Use

Assuming we install OneBuild using the Visual Studio Package Manager (there are other ways too), like so:

PM> Install-Package OneBuild

Convention means we can simply do this to run the commit build, no parameters required.

C:\>cd "Path to your solution"
C:\Path to your solution\>OneBuild.bat

OneBuild is in its early days; all feedback is greatly appreciated, as is contribution. After checking the documentation site, feel free to get involved through the GitHub issues page.

IIS, why you squashing my custom error responses?

I’ve been working on an open source project called Femah recently and came across a rather time-consuming error when we pushed a demo application up to Azure.

The work I was doing was around the Femah API, providing some interactive documentation using some fiddles from http://dotnetfiddle.net.

It works on my machine

Windows 8.1, IIS 8.5, .NET 4.5, Visual Studio 2012

Yes it does, I can happily cURL this, no problems.

c:\>curl -v -X GET http://localhost/femah.axd/api/featureswitchtypes/notvalid/

and get the following output. Notice the custom error message there: “Error: Service ‘featureswitchtypes’ does not support parameter querying.” Note: yes, I know that should be a JSON object, working on it!

[Image: cURL output showing the custom error message]

It doesn’t work on Azure.

We have a test application that references the Femah NuGet package, which you can see here. Running the same cURL command against the Azure site this time:

c:\>curl -v -X GET http://<our-azure-site>.azurewebsites.net/femah.axd/api/featureswitchtypes/notvalid/

Yields this; notice there is no custom error here:

[Image: cURL output from the Azure site, with no custom error]

Furthermore, the Content-Type of the response is text/html, as opposed to the application/json we saw in a successful response locally.

What is it?

A lot of googling, a little chasing my tail and learning how to enable Failed Request Tracing on Azure Websites later, and I found this:

[Image: Beyond Compare diff of the two IIS failed request logs]

That’s Beyond Compare giving us a nice diff between two IIS failed request logs (Azure on the left, localhost on the right); you can see the EventData buffer is replaced with a standard error message for the HTTP code (405) that we’re returning.

So, I finally found this on Stack Overflow, with more detail here from Mike Volodarsky, which I recommend you read; it explains in more detail how to restrict passthrough of existing errors to a single website.

So, adding the following within the system.webServer element of the web.config of our demo app:

<system.webServer>
  <httpErrors existingResponse="PassThrough" />
</system.webServer>

Resulted in us seeing our custom errors from the Azure hosted app.

[Image: custom error returned from the Azure hosted app]
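As an aside, on a server you control (unlike Azure Websites) the same setting can be applied from the console with appcmd instead of editing web.config; a sketch, assuming the standard inetsrv path and a site named "Default Web Site":

%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/httpErrors -existingResponse:PassThrough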

To conclude

This working locally threw me; maybe Azure has a different default to harden their IIS configurations, or maybe the load balancer (ARR?) is getting a bit too involved. Regardless, there’s a fix, and now I have somewhere to look if I come across it again in a year’s time.

DevOps engineers, the marriage counsellors of business

What is DevOps? An engineering competency? Yes.

This is a short series of three fairly concise articles exploring what is meant by the emerging and ever-popular term “DevOps”.

Previous articles in this series introduced the question and discussed DevOps as a practice.

In this third and final post we look at DevOps as an engineering competency.


The 5 enablers of ‘DevOps as a practice’

What is DevOps? A practice? Yes.

This is a short series of three fairly concise articles exploring what is meant by the emerging and ever-popular term “DevOps”. In this second post we look at DevOps as a practice.


A book review – The Phoenix Project

Title: The Phoenix Project

Authors: Gene Kim, Kevin Behr, George Spafford

Published: January 10th 2013

Print Length: 345 pages

Genre: Fiction

I was first made aware of The Phoenix Project by Jez Humble’s review of it on the Continuous Delivery blog. To quote the Amazon description, “The Phoenix Project is a novel about IT, DevOps and helping your business win”. Being a DevOps engineer myself, this immediately sparked my imagination, so I recently got hold of a copy on the Kindle and got reading. Note: if you’re both an Amazon Prime member and a Kindle owner you can borrow the Kindle version of The Phoenix Project at no extra cost; superb.

The book opens by immediately setting the scene at the fictitious US business Parts Unlimited, highlighting its recent struggles to maintain a competitive advantage and describing a significant board-level restructure with immediate effect, with the analysts highlighting their lack of confidence in Steve Masters, who departs as chairman but continues as CEO. From this point we’re introduced to our main character, Bill Palmer, the director of Midrange Technology Operations at Parts Unlimited. Bill’s day-to-day activities and relative stress levels will no doubt ring true with anyone that has worked in a monolithic, slow-to-react business where IT is simply seen as a burden and another cost centre.

We follow Bill throughout the book as he is rapidly promoted, against his will, to VP of IT Operations, experiencing how the success of the IT department, and of Parts Unlimited itself, soon becomes his responsibility. We are then introduced to the infamous Phoenix Project itself, an internal IT project that Parts Unlimited has been promising the board and shareholders for the last three years will be the turning point for the business, allowing it to restore profitability.

“The company has long promised that its “Phoenix” program will restore profitability and close the gap by tightly integrating its retailing and e-commerce channels”

A catalogue of IT failures unfolds from this point, described in detail and allowing us some time to understand what bad IT looks like and to get to know Bill, a likeable, pragmatic ex-serviceman.

The story really takes a turn when Bill is introduced to Erik by his CEO Steve Masters. Erik quickly becomes Bill’s mentor and begins to advise him, in his own sometimes patronising manner, on The Three Ways: The Principles Underpinning DevOps. The story continues and follows Bill on his journey to understand and implement The Three Ways: ‘Systems thinking’, ‘Amplifying feedback loops’, and ‘Creating a culture of continual experimentation and learning’. This is the main purpose of the book, and the authors do a very good job of likening IT scheduling to the process of satisfying customer demand in manufacturing.

“We’re doing what Manufacturing Production Control Departments do.  They’re the people that schedule and oversee all of production to ensure they can meet customer demand.  When they accept an order, they confirm there’s enough capacity and necessary inputs at each required work centre, expediting work when necessary.  They work with the sales manager and plant manager to build a production schedule so they can deliver on all their commitments.”

The above is something Erik believes without question, and it is repeated and instilled in us as readers throughout the book. Bill’s journey, although rather predictable at times, is enjoyable and well portrayed. There are a number of constant characters throughout the book, some likeable, others much less so (yes, you, Sarah), and of course the ever-present super-geek and single point of failure Brent, a techie who is identified early on as the unintentional cause of many a bottleneck.

Personally I found the mid part of the book the most interesting; this is where Bill is in full swing implementing The Three Ways, and there are some fantastic high-level case studies that it doesn’t take all too much imagination to apply to your own situation. In particular, the discussion around reducing the amount of WIP (Work In Progress) rang very true with my experiences both within manufacturing and IT; pairing this with reducing batch sizes (the size of changes) made for a very interesting read, and I challenge anyone to put some of this into practice and not experience an increase in IT delivery throughput. A quote I particularly enjoy is from a discussion of the then once-yearly deployments.

“You’ll never hit the target you’re aiming at if you can fire the cannon only once every nine months.  Stop thinking about Civil War era cannons. Think antiaircraft guns.”

A slight gripe I had, in agreement with Jez’s comment, is that there was an uncharacteristic amount of success in implementing changes, and timescales were incredibly short, but hey, it’s a novel.

In summary, The Phoenix Project is a great read; at times I couldn’t put it down, and I blitzed through it within a fortnight, rather surprising for me as a slowish reader. The focus of the whole story around The Three Ways is clever; although I firmly believe that DevOps and agile success within IT can’t all be summed up with hard and fast principles alone, it provides some excellent context to the real world, and I also enjoyed the analogies to manufacturing engineering. With well thought out characters and some good side plots that ensure it never feels like a lesson, I would highly recommend The Phoenix Project to any business professional, not just those working within IT.

Reset TeamCity Administrator password

Can’t remember your TeamCity administrator password but have access to the server? Try giving this a go….

Resetting the ‘administrator’ user password in TeamCity 6.5.4 on a Windows machine

  • Logon/Remote Desktop (RDP) on to the TeamCity server
  • Open an administrator console window (Start > Cmd, right-click ‘Run as administrator’)
  • Navigate to the path of your TeamCity install by doing the following

C:\>cd c:\TeamCity\webapps\ROOT\WEB-INF\lib

  • Stop the TeamCity Web Server service by running the following command

c:\TeamCity\webapps\ROOT\WEB-INF\lib>net stop "teamcity web server"

…and observe the following output

The TeamCity Web Server service is stopping.
The TeamCity Web Server service was stopped successfully.

  • Run the following command to change the administrator user password to: ;wn3b4(|6HkH95W (as we always want to use a strong password)

c:\TeamCity\webapps\ROOT\WEB-INF\lib>..\..\..\..\jre\bin\java -cp server.jar;common-api.jar;commons-codec-1.3.jar;util.jar;hsqldb.jar ChangePassword administrator ;wn3b4(|6HkH95W c:\TeamCity\.BuildServer

…and observe the following output

Using TeamCity configuration directory path: c:/TeamCity/.BuildServer
Password changed successfuly

  • Start the TeamCity Web Server service by running the following command

c:\TeamCity\webapps\ROOT\WEB-INF\lib>net start "teamcity web server"

…and observe the following output

The TeamCity Web Server service is starting.
The TeamCity Web Server service was started successfully.

Subversion (svn) recursively removing and ignoring folders

How to recursively and permanently ignore bin and obj folders in a Subversion (svn) repository

This is primarily focused at those of us on Windows, but should work in a *nix world too.

A quick recap on ignores in Subversion

There are a couple of methods to ignore files in an svn repo.

  1. Most people know of the global ignore list and its accessibility through the likes of TortoiseSVN, but this is client side, and committers without the correct list configured locally can still commit the dreaded /bin and /obj folders.
  2. The preferred way is to permanently set an svn property; the one we’re particularly interested in is svn:ignore. This will enforce ignoring of all marked folders or files for every single client commit.

Both of these methods result in the same action at the client, reducing that ridiculously long list of untracked files when you select ‘Add to repository’; the latter is by far the most effective way of doing this.

What do we need to ignore and delete?
There are generally a couple of scenarios we need to tackle when tidying up an svn repository to remove/ignore a large quantity of folders and files.

  1. Ignore any untracked bin or obj folders
  2. Remove any tracked bin or obj folders
  3. Ignore any tracked bin or obj folders

Scenarios 1 and 3 are solved easily with the second method above. I’ve found the second scenario to undoubtedly be a little manual; the most effective way I have achieved this is to use the TortoiseSVN ‘add to ignore list’ approach, which marks the folder for deletion from the repository and sets the svn:ignore property.

Install an svn client console

Unfortunately, for any recursive actions on a large number of folders/files TortoiseSVN is too granular and not your friend. By all means feel free to click through every folder you would like to permanently ignore by doing what I mentioned above, but personally I only use that for the specific scenario where files are already tracked.

I use the SlikSvn client as it’s kept up to date and comes with an MSI that takes care of the installation, including updating your PATH. Either way, you need to install a command line client to achieve the below.

Report on existing svn:ignore properties

Navigate to the top level of your svn repository, in my case this was trunk, and run the following command:

svn propget svn:ignore -R

The above will list out all existing svn:ignore properties; this could be huge, or contain very little or nothing.

To then recursively set the svn:ignore property on every directory from the current path down, ignoring bin folders:

svn propset svn:ignore bin . -R
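To ignore obj folders too, I find it easiest to put each pattern on its own line in a file and apply that recursively, which sidesteps any shell quoting awkwardness with multi-line property values. A quick sketch, from PowerShell; note that propset replaces any existing svn:ignore value, hence the report above.

# Write the ignore patterns, one per line, then apply them recursively.
Set-Content -Path svnignore.txt -Value "bin`r`nobj"
svn propset svn:ignore -R -F svnignore.txt .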

Continuous integration build traffic lights with TeamCity

Summary: put simply, we had a requirement to make a broken build of our software incredibly obvious to everyone. The purpose was to get everyone’s buy-in (developers, testers and management alike) to build quality; after all, everyone knows what a red traffic light means!

In my current contract we have recently switched from an incredibly monolithic Team Foundation Server custom build to using the frankly awesome TeamCity.

Due to the historical nature of this company, the true continuous integration ethic had never really been instilled, with lack of responsibility for broken builds an unfortunate by-product. The switch to TeamCity was mainly a technical decision: we had more in-house experience of CI using TeamCity, and to be honest it seemed like a far less daunting (and more enjoyable) task than getting it right with TFS. Our new TeamCity build has given us the following benefits:

  • Reduced build times by over 60%.
  • Focused us on only ever building once; every developer integration with a successful build is a potential release candidate, i.e. everything is built in Release mode.
  • Improved build notification options, the TeamCity WebUI, taskbar notifier and RSS feeds.

The final point is what I will focus on in this post: making a broken build of our software incredibly obvious to everyone, so that everyone (developers, testers and management alike) buys in to build quality.

So a home project ensued, which started with me winning the following eBay auction.

[Image: the winning eBay auction]

These lights are pretty simple, being 24V DC and supplied with two 24V/40W bulbs (unfortunately only one of which worked; I bought replacements online from BLT Direct). Opening up the traffic lights by removing the lenses, I was pleasantly surprised to find the wiring all in excellent condition.

[Image: inside the traffic light housing]

Next step was to put together some sort of parts list to get these lamps lighting up our TeamCity build status.

  1. Traffic lights – Got.
  2. Some spare bulbs from BLT Direct – Got.
  3. A power supply capable of running the bulbs continuously, so around a 2A rating, i.e. 40W/24V = 1.6A.
  4. An Arduino Uno prototyping microprocessor board.
  5. An Arduino Ethernet shield to allow the Arduino board to chat to TeamCity over the network.
  6. A suitable relay, Single Pole Double Throw (SPDT), capable of switching 2A at 24V with a 5V control signal (from the Arduino board).
  7. A plastic case to house the Arduino board and Ethernet shield off board of the traffic lights.
  8. A power supply capable of running the Arduino board, nominal 5V output.
  9. Minimum 2A rated wire, for internal lamp wiring.
  10. A handful of insulated crimp butt connectors.
  11. Some low-rated equipment wire in various colours (for the power, ground and control wires from the Arduino to the relay).
  12. Insulating tape (to keep things tidy).
  13. Heatshrink tubing (to keep the power, ground and control wires from the Arduino to the relay nice and tidy).
  14. Double sided sticky pads.

For 3, I found a 19V laptop AC-DC power supply that would do nicely (it had a 3.42A rating so could handle powering a traffic lamp continuously, i.e. 40W/19V = 2.1A).

For 8, I used a multi-voltage AC-DC power supply that outputs 6V at up to 300mA, perfect for the Arduino, which can handle a voltage input of around 3-10V.

[Image: the two power supplies]

The rest of the parts I bought from the links in the parts list, mostly from maplin.co.uk and proto-pic.co.uk.

The build Part #1, checking the lights

I began the build by simply wiring up each traffic light bulb directly to a low-voltage power supply to check the integrity of the supplied wiring, which passed with flying colours.

[Image: testing each bulb directly from a power supply]

The build Part #2, switching the relay

The next step was to prove how to control the relay with the Arduino. I started by simply wiring it up as follows (see here for an explanation of the Normally Open (NO) and Normally Closed (NC) states).

[Image: Arduino and relay wiring diagram]

I adapted an example that did nothing more than simply switch digital pin 8 of the Arduino on and off as below.

/* Blink without Delay

   Turns on and off a light emitting diode (LED) connected to a digital
   pin, without using the delay() function. This means that other code
   can run at the same time without being interrupted by the LED code.

   The circuit:
   * LED attached from pin 13 to ground.
   * Note: on most Arduinos, there is already an LED on the board
     that's attached to pin 13, so no hardware is needed for this example.

   created 2005
   by David A. Mellis
   modified 8 Feb 2010
   by Paul Stoffregen

   This example code is in the public domain.

   http://www.arduino.cc/en/Tutorial/BlinkWithoutDelay
*/

// constants won't change. Used here to
// set pin numbers:
const int ledPin = 8; // the number of the LED pin

// Variables will change:
int ledState = LOW;      // ledState used to set the LED
long previousMillis = 0; // will store last time LED was updated

// the following variable is a long because the time, measured in milliseconds,
// will quickly become a bigger number than can be stored in an int.
long interval = 1000; // interval at which to blink (milliseconds)

void setup() {
  // set the digital pin as output:
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  // here is where you'd put code that needs to be running all the time.

  // check to see if it's time to blink the LED; that is, if the
  // difference between the current time and last time you blinked
  // the LED is bigger than the interval at which you want to
  // blink the LED.
  unsigned long currentMillis = millis();

  if (currentMillis - previousMillis > interval) {
    // save the last time you blinked the LED
    previousMillis = currentMillis;

    // if the LED is off turn it on and vice-versa:
    if (ledState == LOW)
      ledState = HIGH;
    else
      ledState = LOW;

    // set the LED with the ledState of the variable:
    digitalWrite(ledPin, ledState);
  }
}

With the sketch loaded onto the Arduino board, the onboard LED on the relay flashed as the NO (Normally Open) state was made… not particularly exciting, but it proved that side of the circuit.

Video: http://www.youtube.com/watch?v=n54ThO9-3lY
Arduino blinking relay LED from NO (Normally Open) to NC (Normally Closed)

The build Part #3, switching the lights with the relay

The next step was to get the relay wired up as per the diagram and schematic below; the relay can also be clearly seen in the top right of the traffic light housing in the picture below.

[Images: traffic light wiring with the relay; wiring diagram and schematic]

You can see from the schematic that I chose to wire the Normally Closed (NC) terminal of the relay to the red light of the traffic lights. The thought behind this is that in an errored or unknown state I always wanted the traffic light to display red, switching to green only with a known good build state. So with the above all in place I could prove (using the same Arduino sketch as in part 2) that we could switch between NC (red light on) and NO (green light on).

[Image: relay in the NO state, green light on]

The build Part #4, talking to TeamCity

I went with a very simple approach to querying TeamCity for a build status, which I described in more detail in the TeamCity developer forum while working out the best approach. Basically, it involved developing a simple TeamCity plugin that returns just an HTTP header response reflecting the status of a selected TeamCity build project. The plugin does the following:

  1. Exposes a non-authenticated simpleBuildStatus page "http://<myTeamCityServer>:<port>/simpleBuildStatus.html?projectName=<myTeamCityProjectName>&guest=1" that returns an HTTP response header only (no HTML page content in the response).
  2. The simpleBuildStatus page returns an 'HTTP/1.1 200 OK' in the HTTP response header if all build configurations under the selected build project are currently successful.
  3. The simpleBuildStatus page returns an 'HTTP/1.1 409 CONFLICT' in the HTTP response header if any build configurations under the selected build project are currently unsuccessful.
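You can exercise the plugin by hand with cURL before involving any hardware; the server, port and project name below are hypothetical. A 200 response means every build configuration is green, a 409 means at least one is broken.

c:\>curl -v "http://teamcity:8111/simpleBuildStatus.html?projectName=MyProject&guest=1"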

For reference, I found the following resources useful when developing my first TeamCity plugin:

http://www.jetbrains.com/idea/download/index.html; I used IntelliJ IDEA 10.5 (Community Edition) as my Java IDE of choice.

http://confluence.jetbrains.net/display/TCD6/Bundled+Development+Package; the sample TeamCity development package that demonstrates how to develop various plugins.

http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u25-download-346242.html; needed as a prerequisite to building the above sample TeamCity development package.

http://confluence.jetbrains.net/display/TCD6/Plugins+Packaging; this helped me understand the options for packaging the TeamCity plugin, i.e. as a JAR that you drop into the TeamCity data directory/plugins folder.

http://confluence.jetbrains.net/display/TCD6/Installing+Additional+Plugins; and some more general basic information about installing TeamCity plugins.

The following two screenshots show the HTTP 200 OK and HTTP 409 CONFLICT status codes that the SimpleBuildStatus plugin returns for successful and failed builds respectively.

[Images: SimpleBuildStatus plugin returning HTTP 200 OK and HTTP 409 CONFLICT]

If a build name isn’t provided in the URL, the following HTTP 400 error is displayed:

[Image: HTTP 400 error when no project name is provided]

To get the Arduino Uno chatting to TeamCity I bought an Arduino Ethernet shield at the same time as the board. A “shield” in the Arduino world is simply an expansion board that sits on top of the Arduino board, providing additional functionality, in this case Ethernet capability. http://www.shieldlist.org is a great resource for working out shield and board compatibility for more complex projects.

It was now time to author an Arduino sketch that would poll the TeamCity SimpleBuildStatus plugin for a project build status, checking the returned HTTP status code and switching the relay to Normally Open (NO) if it’s a 200 and to Normally Closed (NC) if it’s a 409. Remember I have already wired the traffic lights up to the relay in part 3 above.

The resulting first cut at the sketch was as follows:

/* Traffic lights client

   This sketch connects to a software build, continuous integration (CI) server
   (e.g. TeamCity) using an Arduino Wiznet Ethernet shield and switches a SPDT relay
   depending on the state (HTTP response code) returned from a buildStatus page.

   HTTP/200 -> Build(s) successful, Green light on
   HTTP/409 or any other value -> Build(s) broken, Red light on

   Circuit:
   * Ethernet shield attached to pins 10, 11, 12, 13
   * VEZ Single Pole Double Throw (SPDT) relay attached to pins +5v, gnd, 8

   created 23 April 2011
   by Lloyd Holman

   https://github.com/lholman/TeamCitySimpleBuildStatus#readme

   This code is in the public domain.

   Based on WebClient and BlinkWithoutDelay examples (by David A. Mellis)
*/

#include <SPI.h>
#include <Ethernet.h>

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0x90, 0xA2, 0xDA, 0x00, 0x3D, 0xB8 };
byte subnet[] = { 255, 255, 255, 0 };
byte ip[] = { 192, 168, 4, 30 };
byte gateway[] = { 192, 168, 4, 1 };
byte server[] = { 10, 100, 101, 8 };

// Initialize the Ethernet client library
// with the IP address and port of the server
// that you want to connect to (port 80 is default for HTTP):
Client client(server, 80);

int pinState = LOW;     // used to set the digital pin driving the relay, HIGH or LOW
const int relayPin = 8; // the digital pin driving the relay

int GetServerStatus();

void setup() {
  // set the digital pin as output:
  pinMode(relayPin, OUTPUT);

  // start the Ethernet connection:
  Ethernet.begin(mac, ip, gateway);
  // start the serial library:
  Serial.begin(9600);
  // give the Ethernet shield a second to initialize:
  delay(1000);
}

void loop()
{
  int buildStatus = GetServerStatus();
  Serial.print("buildStatus: ");
  Serial.println(buildStatus);

  if (buildStatus == 200 && pinState == LOW)
    pinState = HIGH;
  else if (buildStatus != 200 && pinState == HIGH)
    pinState = LOW;

  digitalWrite(relayPin, pinState);

  delay(30000);
}

static int GetServerStatus()
{
  int httpStatus = 0; // indicate a connection error with 0
  int parseStatus = 0;

  Serial.println("connecting...");
  if (client.connect()) {
    Serial.println("connected");
    // Make the HTTP GET request:
    client.println("GET /simpleBuildStatus.html?projectName=DCP_R17&guest=1 HTTP/1.0");
    client.println();
  }
  else {
    // if you didn't get a connection to the server:
    Serial.println("connection failed");
  }

  // wait for the response from the CI server, parsing the status code
  // out of the first line of the HTTP response headers.
  while (1) {
    while (client.available()) {
      char c = client.read();
      Serial.print(c);

      switch (parseStatus) {
        case 0:
          if (c == ' ') parseStatus++; // skip "HTTP/1.1 "
          break;
        case 1:
          if (c >= '0' && c <= '9') {
            httpStatus *= 10;
            httpStatus += c - '0';
          } else {
            parseStatus++;
          }
      }
    }
    if (!client.connected()) {
      break;
    }
  }

  client.flush();
  client.stop();
  return httpStatus;
}

I have hosted the code for the TeamCity SimpleBuildStatus plugin (discussed earlier), along with the corresponding Arduino sketch (above) to control the Arduino digital pin and Ethernet shield, in a public GitHub repo here: https://github.com/lholman/TeamCitySimpleBuildStatus#readme

The build Part #5, packaging it all up

I wanted to make a neat job of it, so I modified the plastic case from the parts list; with a careful bit of measuring, drilling and cutting with a Stanley knife I ended up with a rather neat little case for the Arduino.

[Images: Arduino on antistatic foam; Arduino in its custom case]

I then simply stuck the packaged Arduino box to the side of the traffic lights with some double sided sticky pads. It was always my aim from the start to have the Arduino circuitry mounted outside of the lights, mainly to avoid the heat from the 40W bulbs frying it.

[Images: traffic lights with the packaged Arduino mounted; close-up; lights on]

The end product is a very noticeable build traffic light on the wall in our office:

[Image: the build traffic lights on the wall in our office]

Keep an eye on the GitHub repo as I plan to update it with some of the following ideas:

  1. Multi project support for TeamCity.
  2. An audible piezo buzzer when status changes.
  3. Other CI server support, by simple plugins.