An Automated Continuous Deployment Solution Part 2

Deployment and Configuration of the Test VM

In part 2, I will explain how to copy product installation files to the newly created test VM, execute remote commands, and carry out system configuration in preparation for running tests.

PowerShell Remoting

Before continuing with the test automation process, I need to cover PowerShell Remoting. PowerShell Remoting is a mechanism which allows a controller system to execute commands on a remote host as if they were being entered at the remote console. This is exactly what we need to enable the TFS build agent, which runs the CI scripts at the end of an installation build, to attach to a test VM and configure it for use.

Our test automation environment is complicated by the fact that the TFS build agents which will orchestrate VM creation and configuration are attached to a domain, but the test VMs are not in a domain, at least for now. This makes PowerShell Remoting more challenging, but entirely possible.

Test Client

Configuring the test VM template is a one-off step. All VMs cloned from it retain the same configuration, even after a new name and SID have been applied. Three commands are required:

  1. Enable PowerShell Remoting:
    Enable-PSRemoting -Force
  2. Set a registry value so that trusted hosts can be configured locally:
    New-ItemProperty -Name LocalAccountTokenFilterPolicy -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -PropertyType DWord -Value 1
  3. Set the calling server(s) as trusted host(s), with no prompt:
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value HostName -Force
Build Agent Server

Some of the server configuration can also be treated as a one-time step. The first two steps of the test client configuration must be carried out on each build agent system.

The third step, using Set-Item to add a test client to the agent’s trusted hosts collection, must be carried out every time a new test client VM is created. Even though client names are re-used over time, each new cloned VM instance has a new SID and is therefore treated as a never-before-seen system, even if it shares its name with previously known systems.

This means the final step must take place between the VM clone process described in part 1 of the article and the software install and configuration steps described next. Adding an item to the trusted hosts collection is a privileged operation, which poses a problem for a script that will run non-interactively under the TFS build account.

I have a solution, but it’s not ideal because it requires setting the OS User Account Control (UAC) to its lowest level, so that no user interaction is required to run a process as Administrator. This is not recommended practice, and although our build agent systems are well protected and not normally used interactively, I would like to find a better way of doing it.

To execute one command in a script with elevated privileges, a second PowerShell process is spawned with the Start-Process cmdlet, using the -Verb RunAs argument:

Start-Process powershell -Verb RunAs -ArgumentList "-Command `"& { Set-Item wsman:localhost\client\trustedhosts -Value $Name -Force }`""

Notice that the command we need to execute is contained in curly braces. Assuming UAC is set not to prompt, this will complete with no user interaction and allow execution to resume at the normal privilege level afterwards.
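One caveat: Set-Item replaces the existing TrustedHosts value. If the agent needs to trust several test VMs at the same time, the WSMan provider’s -Concatenate switch appends the new name instead of overwriting the list. A sketch, assuming the same elevated context as above:

```powershell
# Append $Name to the existing trusted hosts list rather than replacing it.
# (-Concatenate is supported by the WSMan provider's Set-Item.)
Start-Process powershell -Verb RunAs -ArgumentList `
    "-Command `"& { Set-Item WSMan:\localhost\Client\TrustedHosts -Value $Name -Concatenate -Force }`""
```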

File Copy with PowerShell

Copying installation and support packages to the new test system is likely to require different credentials from those running the TFS build process.

# The UNC root of the target system; used again by the copy operations below.
$targetRoot = "\\$Server\c$"

# Construct a credentials instance to connect to the target system.
$securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$Server\$UserName", $securePassword)

# Map a named drive on the target system.
# The drive doesn't have to be used to access the remote machine, but connecting it establishes
# authentication, so the copy operations to UNC paths in the next section succeed under the correct
# credentials.
$targetDrive = New-PSDrive -Name TargetFolder -PSProvider FileSystem -Root $targetRoot -Credential $credential -Scope Script

This script expects the user name and password to be provided as parameters, but PowerShell also provides a facility to store credentials securely in the local file system, without exposing them in clear text even to users with permission to read the credential file.
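As a sketch of that facility (the file path here is illustrative): Export-Clixml encrypts a PSCredential with the Windows Data Protection API, so the file can only be decrypted by the same user on the same machine — which suits a build account that always runs on the same agent.

```powershell
# One-off, interactive step run as the build account: capture and store the credential.
Get-Credential | Export-Clixml -Path C:\Build\secrets\testvm.cred

# In the automation script: rehydrate the credential without touching clear text.
$credential = Import-Clixml -Path C:\Build\secrets\testvm.cred
```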

Once credentials have been used to open a file share on the test VM, the files required by the product installation can be copied over. In our case, a check is made for files already present on the system before attempting to copy. For convenience, the test VM is referenced by the standard C$ share, and we assume that there is a C:\Windows\Temp folder. As test VMs are only created from specific template systems, this is easy to ensure.

$targetRoot = "\\$Server\c$"
$targetFolder = "$targetRoot\windows\temp"

# Copy the installation package to the remote system.
# This command will block until the copy is complete.
$source = Get-Item -Path "$InstallBase\$BuildNumber"
if ( -not (Test-Path -Path "$targetFolder\$BuildNumber") )
{
    Copy-Item $source -Destination $targetFolder -Container -Recurse
}

Remote Installation

Now that all installation and support files have been copied to the windows\temp folder on the test VM, they need to be executed to install the software under test. This is where PowerShell Remoting comes in.

First, a remote session is created. The URI is a standard form which will work on any PowerShell host once remoting is enabled. Note that the credential object is reused from the file copy operations earlier in the script. This is the identity that remote operations will run as.

# Create a remote session on the target system.
$sessionUri = "http://$Server`:5985/WSMAN"
$session = New-PSSession -ConnectionUri $sessionUri -Credential $credential

Once the session has been created, it can be passed as an argument to Invoke-Command. Everything inside the -ScriptBlock curly braces will execute on the test VM, not on the build agent.

I’ve removed the very long silent installation command line from the example. A number of parameters are passed from the containing script into the script block, where they are available to the commands executing on the test VM.

# Use the remote session to run the MBPM installation from the command line.
Invoke-Command -Session $session -ScriptBlock {

    param($BuildNumber, $Server, $UserName, $Password, $DatabaseName, $DatabaseUser, $DatabasePassword)

    Start-Process "C:\Windows\Temp\$BuildNumber\setup.exe" -ArgumentList "/s"

} -ArgumentList $BuildNumber, $Server, $UserName, $Password, $DatabaseName, $DatabaseUser, $DatabasePassword

Once the installation script block has executed, the remoting session must be disconnected. It’s important to be sure that the setup process has completed first, because the Start-Process call doesn’t block. If the remoting session is terminated too early, it will kill the setup process and leave the installation in an incomplete state. In this case, because we expect exactly one instance of setup.exe to be running, it’s possible to use PowerShell to keep checking the test VM for a process of this name, and to wait until it has terminated of its own accord before closing down the session.

# -ErrorAction SilentlyContinue suppresses the error Get-Process reports when no matching process exists.
$setupProcess = Get-Process -ComputerName $Server -Name setup -ErrorAction SilentlyContinue

while ( $setupProcess -ne $NULL )
{
    Start-Sleep -Seconds 10
    $setupProcess = Get-Process -ComputerName $Server -Name setup -ErrorAction SilentlyContinue
}
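Note that Get-Process -ComputerName relies on legacy remoting being reachable on the test VM. An alternative sketch, assuming the $session created earlier is still open, polls through the PowerShell Remoting session instead:

```powershell
# Poll for setup.exe over the existing remoting session until it exits.
while ( Invoke-Command -Session $session -ScriptBlock {
            Get-Process -Name setup -ErrorAction SilentlyContinue } )
{
    Start-Sleep -Seconds 10
}
```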
      

When setup is complete, shut down the remoting session:

Remove-PSSession -Session $session

Post-Installation Steps

Finally, we need to carry out anything required following the product installation. In your case, nothing further may be needed, but the software I’m testing needs a number of database scripts to be run against a repository on the test VM, and some services started up, before test execution.

You could do this by keeping open the same remoting session that you used for installation. Use another Invoke-Command script block to configure the system:

Invoke-Command -Session $session -ScriptBlock {
    param($Server, $DatabaseName, $DatabaseUser, $DatabasePassword, $ScriptRoot)

    # SQL Server snap-ins are required to execute database scripts - load them here if not already available.
    $snapin = Get-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
    if ( $snapin -eq $NULL )
    {
        Add-PSSnapin SqlServerCmdletSnapin100
    }
    $snapin = Get-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue
    if ( $snapin -eq $NULL )
    {
        Add-PSSnapin SqlServerProviderSnapin100
    }

    Invoke-SqlCmd -InputFile "$ScriptRoot\myDbScript.sql" -ServerInstance $Server -Database $DatabaseName -Username $DatabaseUser -Password $DatabasePassword

    # Start up the service.
    Start-Service -DisplayName "My Service To Test"

    # Do anything else you need on the remote system before tests are run.
} -ArgumentList $Server, $DatabaseName, $DatabaseUser, $DatabasePassword, $ScriptRoot

Part 3 will concentrate on either TFS build integration or test execution, depending on which bit I create next…

An Automated Continuous Deployment and Test Solution – Part 1

Introduction

I work in a team responsible for a mature multi-tier BPM product. Over time, we’ve built up a TFS-based source control and build infrastructure which delivers a complete product installation, with every component built from a specified branch, in a single click.

We also have a number of MSTest-based system test solutions to exercise end-to-end functionality between the client, application, and data tiers, and an automated test suite which verifies browser-based client functionality. These test suites require manual configuration and product installation onto a target system, and manual result collation.

Objectives

I’ve recently been considering how to join the dots in our existing systems to deliver a fully automated end-to-end solution. There will be support for scheduled and manually triggered builds, product deployment on VMs running in our VMWare vCenter ESX system, automatic configuration and execution of test suites, and reporting to stakeholders at completion.

[Image: solution overview]

Implementation

Windows PowerShell feels like a good fit as the glue that will hold the elements of the test solution together. The PowerCLI add-in allows interaction with VMware ESX, interaction with a remote operating system instance from the build agent is possible using PowerShell Remoting, and other add-ins permit interaction with SQL Server to execute database scripts.

VM Template

To start with, I created a single VM with an installation of Windows Server 2008 R2, SQL Server 2008 R2, IIS, and other software dependencies required by the software under test. This VM will never have our software installed on it, but will be a reference base from which to clone test systems. Eventually, a range of templates will be created to cover all supported operating systems and database platforms, but one step at a time!

The vSphere PowerCLI Environment

PowerCLI must be installed on each system that runs the automation script. The installation creates command prompts which automatically initialize the PowerCLI environment when opened. However, I need to run scripts from a TFS build, so I can’t use the PowerCLI prompts.

Fortunately, any PowerShell session can easily be configured to use PowerCLI by loading a snap-in and executing an initialization script.

# Add vSphere PowerCLI base cmdlets.
Add-PSSnapin VMware.VimAutomation.Core
# This script adds some helper functions and sets the appearance.
# You can pick and choose parts of this file for a fully custom appearance.
$var = ${Env:ProgramFiles(x86)} + "\VMware\Infrastructure\vSphere PowerCLI\Scripts\Initialize-PowerCLIEnvironment.ps1"
. $var

Once the session is configured, it’s time to connect to the server:

Connect-VIServer -Server $VSphereHost -WarningAction SilentlyContinue
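The matching tidy-up, typically at the very end of the script, closes the connection again. A one-line sketch:

```powershell
# Close the vCenter connection opened by Connect-VIServer.
Disconnect-VIServer -Server $VSphereHost -Confirm:$False
```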

Cloning a VM with PowerShell

When the test system needs a new VM instance, it will clone one from the template VM. vCenter supports an OS Customization Spec which acts as a set of configuration instructions to apply to the new VM. The most important ones are to issue a new SID, which allows multiple VMs cloned from the same template to run simultaneously without conflict, and to apply the VM name to the guest OS host name.

The VMs must be added to a specific resource pool in a specific cluster.

# Get location that the VM is to be added to.
$cluster = Get-Cluster -Name $ClusterName
$resourcePool = Get-ResourcePool -Name $ResourcePoolName -Location $cluster

Then the OS customization spec stored in the system is accessed by name.

#Get the stored customization spec.
$custSpec = Get-OSCustomizationSpec -Name MBPMCI

Now the clone operation is started using the New-VM cmdlet.

$newVmTask = New-VM -Name $Name -VM $TemplateVmName -VMHost $VmHost -ResourcePool $resourcePool -OSCustomizationSpec $custSpec -RunAsync
$newVmTaskId = $newVmTask.Id

Waiting for the clone operation to complete

Cloning a VM, then customizing the guest OS, which requires at least one restart, takes a few minutes. The New-VM cmdlet will typically return long before this process is complete, even when invoked in its default synchronous mode.

The next stage of configuration can’t continue until the clone is complete and the guest OS is fully configured and running under its new identity. The solution to this is suggested by the last line of the previous code snippet. New-VM is executed with the optional -RunAsync parameter. This returns a task object instance, from which the unique Id is stored in a variable.

Now, it’s simply a case of executing a loop, checking on the state of the task, until it completes.

while($newVmTask.State -ne "Success")
{
    if ($newVmTask.State -eq "Error")
    {
        Write-Host "VM Clone Failed"
        break
    }

    Start-Sleep -Seconds 10

    # Get-Task will return all current tasks, so filter down to the right one.
    $newVmTask = Get-Task | Where-Object { $_.Id -eq $newVmTaskId }
}
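PowerCLI also offers a Wait-Task cmdlet that blocks until an asynchronous task finishes, which can replace the polling loop above when per-iteration error handling isn’t needed. A sketch:

```powershell
# Block until the clone task completes; Wait-Task reports an error if the task fails.
Wait-Task -Task $newVmTask
```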

Now the VM clone is complete, and the guest OS can be started. The first thing that will take place during startup is OS customization, so we need another loop to wait for that. There’s an added complication: the guest OS will restart as part of this configuration process. The Get-VM cmdlet gets the named VM instance, and this can be used as a parameter to Get-VMGuest to get the guest OS instance.

As previously stated, the guest OS will be issued with a new SID and name during configuration. The host name change is the final part of the customization process, which gives us a condition to check which identifies when the OS is fully configured and ready for use.

# Get the VM that's just been created.
$newVm = Get-VM $Name

if ( $newVm -ne $NULL )
{
    Start-VM -VM $newVm -Confirm:$False
    $newVm = Get-VM $Name
    $vmGuest = Get-VMGuest -VM $newVM

    # Wait for VM initialization to complete.
    # The host name will continue to be that of the clone until the new SID has been applied
    # and the system rebooted.
    # Once both name and running state are reported, we can assume that the OS is ready for login.
    while($vmGuest.HostName -ne $Name -or $vmGuest.State -ne "Running" )
    {
        Start-Sleep -Seconds 10

        $vmGuest = Get-VMGuest -VM $newVM
    }
}

Part 2 – Deployment and Installation

An ASP.NET MVC4 Flickr Authentication Provider

ASP.NET MVC 4 includes great out-of-the-box authentication support for a number of social platforms: Facebook, Google, Yahoo, and others. In this post, I’ll explain what is required if you find yourself needing to integrate with a social network that isn’t already supported. As long as your chosen network provides an OAuth-compatible mechanism, it’s possible to leverage this with relatively little additional effort.

I am currently working on an ASP.NET MVC 4 project that integrates with Flickr. There is a third party library, Flickr.NET, which already provides an authentication solution but I wanted to integrate with the ASP.NET MVC architecture to keep my application design as standard as possible.

Registering the provider

An MVC 4 application, created with the Internet Application template, includes start up code to register authentication clients in App_Start\AuthConfig.cs.

The default AuthConfig.cs contains commented-out registrations for some of the major social network clients. My application is specific to Flickr, so I’ve removed all default content from the RegisterAuth method and replaced it with my own.

[Image: the RegisterAuth method in App_Start\AuthConfig.cs]

There are three things happening here. First, the extraData collection is a mechanism for the OAuthWebSecurity type to store client-specific data for later use – in this case an icon for display as a log-in button. Second, FlickrOAuthClient is my custom authentication client; like all clients registered with the OAuthWebSecurity type, it implements DotNetOpenAuth.AspNet.IAuthenticationClient, and it is passed two items from the web.config file – the consumer secret and consumer key provided by Flickr when I registered my app, whose use will be explained later in the post. Finally, OAuthWebSecurity.RegisterClient allows the authentication system to use my provider.

The Flickr Authentication Process

There are three steps a client must perform in order to authenticate with Flickr:

  1. Get a request token.
  2. Get the user’s authorization.
  3. Exchange the request token for an access token.

More details of this flow, and the parameters that are sent and received at each stage, are documented by Flickr (http://www.flickr.com/services/api/auth.oauth.html).

Steps 1 and 3 require a request from client to Flickr. All clients must verify their app identity by passing the consumer key provided by Flickr when the app was first registered. This is passed in the oauth_consumer_key parameter.

The second item provided by Flickr when the app was registered is the consumer secret. Clients never send this over the network, but must use it to sign requests.

For an excellent detailed description of the other authentication parameters, please take a look at http://www.wackylabs.net/2011/12/oauth-and-flickr-part-1/. Also, part 2 of that article saved me a lot of time in understanding how to create and sign OAuth requests.

ASP.NET MVC 4 AccountController

A default MVC controller is created as part of the project. This supports standalone authentication, with application-specific user data storage, and additionally any OAuth clients registered in AuthConfig.cs at startup. In my case, I have only registered the Flickr client and I don’t want to expose the default user name/password authentication mechanism to the user as my application’s functionality is dependent on a link to Flickr.

Step 1 is to navigate from the application home page to /Account/Login, which invokes the AccountController.Login method to display the authentication start page (Views\Account\Login.cshtml).

[Image: the Flickr-specific login page]

I edited the login UI to make it Flickr-specific. This is where the Flickr icon I added to my client’s extraData collection is used. The ASP.NET MVC 4 Internet Application template displays OAuth clients within the login page using Views\Account\_ExternalLoginsListPartial.cshtml. I edited this to show clickable icons rather than a list of client names.

[Image: the login page showing a clickable Flickr icon]

Clicking on the Flickr icon invokes /Account/ExternalLogin (the AccountController.ExternalLogin method). This is still part of the standard generated application code. ExternalLogin is passed the provider name that I registered my client with, which is passed to the ExternalLoginResult internal class, which in turn invokes OAuthWebSecurity.RequestAuthentication with the provider name to ensure the correct authentication client is called.

Request a Token from Flickr

At this point, the custom authentication client comes into play. The OAuthWebSecurity.RequestAuthentication method calls the IAuthenticationClient.RequestAuthentication implementation on the client identified by provider name.

FlickrOAuthClient.RequestAuthentication assembles the parameters required by the first step of Flickr authentication. When the parameters have been assembled into a URL, a signature is generated using the consumer secret provided by Flickr app registration. I will upload the code with a sample project in a later post, but won’t dive into the details here. One parameter worth mentioning is the callback URL. This must be a URL in the client site that Flickr redirects the user’s browser to once the Flickr authorization step is complete. This is the MVC application’s /Account/ExternalLoginCallback operation.

The next step is to make the token request by invoking a web request on Flickr’s request token endpoint (https://www.flickr.com/services/oauth/request_token) using the signed query string assembled from the OAuth parameters. If this is successful, a token is returned. Now FlickrOAuthClient redirects the current context to Flickr’s authorization page. The user will have to sign in to Flickr, if they are not already signed in, then confirm the access they are willing to grant the client app. At no point will the client app have access to the user’s Flickr credentials.

[Image: the Flickr authorization page]

I’ve hidden my app name on the screenshot to maintain an air of mystery!

Verifying Authorization

Assuming the user picked “OK, I’ll Authorize It” from the Flickr authorization page, Flickr will use the callback URL provided in the initial authentication request to pass control back to the client app.

AccountController.ExternalLoginCallback is another generated function as part of the ASP.NET MVC 4 Internet application template. The first thing it does is invoke OAuthWebSecurity.VerifyAuthentication, which uses the provider name to invoke my Flickr client’s implementation of IAuthenticationClient.VerifyAuthentication.

FlickrOAuthClient.VerifyAuthentication confirms that Flickr returned a token to the callback. This token must be used to sign the next and final Flickr OAuth request. The request must also include both token and verifier parameters returned by Flickr to the callback.

If this request succeeds, Flickr returns an OAuth token and secret which are required to create valid requests on the Flickr API. FlickrOAuthClient.VerifyAuthentication returns an AuthenticationResult instance to the controller’s ExternalLoginCallback.

Post Authentication

All Flickr authentication steps are now complete and the app can make use of the Flickr API to carry out operations on the logged in user’s behalf.

The ASP.NET MVC 4 OAuth mechanism needs to be updated to record that the user is logged in. This is done by calling OAuthWebSecurity.Login, passing the provider name and Flickr user ID. This links in to the standard ASP.NET identity and membership mechanisms, so the User.Identity property is now populated and User.Identity.IsAuthenticated will be set to true.

Finally, a local account will be created, or updated if one corresponding to the Flickr user ID is already present. The default generated application code allows the user to enter their own choice of name, but in this case I have cut out a step and automatically register their local application account with their Flickr user ID.

References

Using OAuth Providers with MVC 4

http://www.asp.net/mvc/tutorials/security/using-oauth-providers-with-mvc

Using OAuth with Flickr

http://www.flickr.com/services/api/auth.oauth.html

OAuth and Flickr

http://www.wackylabs.net/2011/12/oauth-and-flickr-part-1/

http://www.wackylabs.net/2011/12/oauth-and-flickr-part-2/

TFS Custom Check-in Policy for multiple Visual Studio versions

This article describes a solution to the requirement for creating and applying a single TFS custom check-in policy across multiple versions of Visual Studio. It isn’t intended to cover the details of writing custom check-in policies for TFS; there is plenty of information available elsewhere on that.

I work in an environment where we have both on-going development and support of historic releases of a product. This means most engineers have several versions of Visual Studio installed on their desktops, switching between them as required by the release they’re working on at any one time. It would be inconvenient to create a self-contained check-in policy solution targeted to each VS version. The purpose of this article is to show a way of structuring a solution that builds multiple targets from a single policy source base, and a simple installation project to package them and target the correct deployment for all supported VS versions found on any given system.

Because of a recent move to using JIRA for bug tracking, we had to disable the default check-in policy that enforces the association of at least one TFS work item with every source check-in. Instead, we wanted to make sure an association with one or more JIRA issues is created for each and every source change. Although not quite as neat as the fully-integrated TFS approach, this can be achieved by requiring each check-in comment to contain at least one JIRA issue ID in the form ABC-1234.
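The policy itself is implemented in C# (described below), but the heart of the check amounts to a regular-expression match on the comment. As an illustrative sketch only — the function name and exact pattern here are my own, not the actual implementation:

```powershell
# Hypothetical sketch of the comment check: a check-in comment passes only if
# it contains at least one JIRA issue ID of the form ABC-1234.
function Test-JiraComment {
    param([string]$Comment)
    return $Comment -match '\b[A-Z][A-Z0-9]*-\d+\b'
}

Test-JiraComment "Fix null ref in parser (ABC-1234)"   # True
Test-JiraComment "Fix null ref in parser"              # False
```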

Policy Solution and Projects

Solution Structure


File Structure


The policy logic itself is defined in a single source file, CheckJIRACommentLink.cs, which contains a type that inherits from Microsoft.TeamFoundation.VersionControl.Client.PolicyBase. This type requires a project reference to one of the Visual Studio TFS assemblies, Microsoft.TeamFoundation.VersionControl.Client.dll.

Each Visual Studio release has its own version of this assembly, and any check-in policy must reference the correct assembly for the Visual Studio version it targets.

There are three C# class library projects, one for each targeted version of Visual Studio, which are stored alongside each other in the same folder.

VS2008
  Project Name: JIRACommentPolicy2008.csproj
  .VersionControl.Client.dll assembly location: $(ProgramFiles)\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies
  .NET Framework Version: 3.5
  Output Path: bin\Release2008\
  Assembly Info Source: AssemblyInfo2008.cs

VS2010
  Project Name: JIRACommentPolicy2010.csproj
  .VersionControl.Client.dll assembly location: $(ProgramFiles)\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0
  .NET Framework Version: 4.0
  Output Path: bin\Release2010\
  Assembly Info Source: AssemblyInfo2010.cs

VS2012
  Project Name: JIRACommentPolicy2012.csproj
  .VersionControl.Client.dll assembly location: $(ProgramFiles)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0
  .NET Framework Version: 4.0
  Output Path: bin\Release2012\
  Assembly Info Source: AssemblyInfo2012.cs

Additional projects can be created if there is a need to support VS2005, or when VS2013 is required.

One other difference between the policy projects is that I created a separate AssemblyInfo.cs file for each, with an assembly title and product that specifies which VS version it is targeted for. This helps avoid, or at least identify, any mistakes in deployment or registration of policy assemblies as these properties can be viewed in Windows Explorer.

Other Projects in the solution

There is a unit test project to verify that the policy logic works as expected. This references just one of the policy projects, JIRACommentPolicy2012; as each project is built from the same source code, there’s no need to repeat the tests for each target.

Finally, there is a simple WiX installation project. This packages up the three policy assemblies and, for each version of Visual Studio, combines a check for that version’s presence with a file copy and a registry value to register the policy with VS.

WiX Setup Project

There is plenty of information about WiX available elsewhere, so I’m going to concentrate on the specifics of the check-in policy setup logic assuming a basic knowledge of setup. This was my first use of WiX and I was able to figure out how to use the functionality I needed without much trouble.

Does Visual Studio exist on the system?

Visual Studio Test


The setup attempts to load a registry value that only exists when Visual Studio (2012 in this case) has been installed. Similar properties are populated for the other VS versions.

Install the policy assembly, and create a registry value to register it with Visual Studio. The installation source has been simplified slightly to aid readability.

Deployment Rules


Deploying the Policies

Once the installation has been created, it must be run on every client system that will be used to check in changes to the TFS project that will be protected by the policy.

The policy also needs to be added to the TFS server. This is done by logging in to TFS via Visual Studio 2012 (or whichever is the most recent version of VS you are targeting) using a TFS Project Administrator account. Once connected to the TFS service, use the Team menu, Team Project Settings->Source Control… to open this dialogue:

Policy Configuration Dialogue


Click the “Add…” button to show a list of available policies and select your new one.

This step only needs to be done once. There is no need to repeat for the other versions of Visual Studio, or on other client installations. It is purely to configure the policy on the TFS server.

And that’s it. Anyone without the policy client installation who attempts to check in changes to the protected TFS project will see a warning message about the missing policy assembly. Anyone who has the policy installed will see a warning until the policy is satisfied, and will be unable to check in until it is.