Thursday, January 17, 2019

Azure DevOps D365 Build & Test Agent Using An Azure Container Instance – Part 1

What was I hoping to accomplish by doing this?


Working with Azure DevOps build and release pipelines can be a slow and tedious process, even more so when using the hosted agents that are provided. I find it hard to keep track of filenames and the folders they end up in when I’ve got to keep it all in my head, because it only exists for a short time and then is gone. It’s further complicated when any of these values need to be created dynamically. It feels a lot like trying to debug a plug-in using only the trace logs. Change code, deploy, execute, wait, review the log, and repeat. The difference is that the waiting part is measured in minutes rather than seconds.

The goal was to create a build server that could be used to build and test things developed for Dynamics 365 CE. So that means being able to build and test .NET based plug-ins / workflows, JavaScript / TypeScript, run EasyRepro / Selenium UI tests, and be able to deploy as needed. All that, plus be faster because I’m impatient.

Containers at a high level


Prior to a few weeks ago I hadn’t put much thought into the concept of containers since the majority of Dynamics 365 CE development is done right inside the application. When scenarios came up where a plug-in wouldn’t work, that functionality either ended up on an existing VM that ran integration jobs or turned into an Azure Function. Functions and VMs really are very different from one another if you think about it. An Azure Function is cheap, easy to set up, and can scale, but it lacks flexibility when it comes to getting at lower-level functionality that isn’t exposed or being able to install additional components. A virtual machine is usually expensive to run, requires constant maintenance, and is slow to start up, but it provides the ability to use a wider and more complex array of software.

In my eyes a container falls nicely in between. Using Azure to run the container you’ll end up paying for storage space for the images, which will certainly be more than a Function but probably not more than a VM. A Function and a VM both bill based on compute time. The big difference is that when a Function isn’t actually processing something it’s shut off and not adding to the bill. A VM on the other hand is accruing compute time as long as it’s turned on, whether it’s doing work or not. The pricing model for a container is closer to that of a VM, but the rates appear to be cheaper and costs are calculated per second as opposed to per hour. Turning things on and off to reduce costs is better suited to containers as they can often be up and running in a few seconds, while a VM could easily take a minute or more to fully start up and get itself into a state where applications can run.
To get an idea of the costs, here’s what this setup costs to run: roughly $114 / month if you leave it running 24/7. If you turn it off when not in use then you’ll see the Container Instance portion of that cost drop.
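
Stopping and starting the container group is scriptable too. As a rough sketch using the Azure CLI (the resource group and container names below are just placeholders for whatever yours are called):

  # Stop the container group when the agent isn't needed; compute charges stop while it's deallocated
  az container stop --resource-group BuildAgents-RG --name d365-build-agent

  # Start it again before queuing builds; it's typically ready in seconds
  az container start --resource-group BuildAgents-RG --name d365-build-agent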

Management is easier using a container versus a VM. With a VM there is the worry about patching and all the possible ways someone could hack in because of the various services running, open ports, etc. Windows based containers don’t run a full-blown copy of the OS but rather a scaled-down version (Nano Server or Windows Server Core) based on a specific build of the full OS. Fewer features, fewer chances for someone to exploit something. The other point is that these operating systems aren’t meant to be patched in the traditional sense of running Windows Update. When it’s time to update you’re basically installing all the components again from scratch on top of a new version of the OS image. Sounds painful, but it’s really not once you’ve got the scripting in place (up until that point it is very painful).
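
Once the Dockerfile and install scripts exist (part 2 goes into that), the update itself is only a couple of commands. A rough sketch, assuming the Dockerfile takes the base OS tag as a build argument and the image lives in a container registry (the names and tags here are made up):

  # Rebuild the agent image on top of a newer base OS image, then push the result
  docker build --build-arg BASE_TAG=1809 -t myregistry.azurecr.io/d365-buildagent:2019-01 .
  docker push myregistry.azurecr.io/d365-buildagent:2019-01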

For more on containers: https://www.docker.com/resources/what-container


Plug-in compile & unit test build time comparison


I’m going to skip over the container build for the moment (covered in part 2) and go right to the end to show what kind of difference using a container made. The test case I used was compiling a bare bones plug-in and running a single unit test. As you can see from the time breakdown I think I managed to achieve what I was looking for.

Hosted VS2017 Agent
Task Time
Queue time 1s
Prepare job <1s
Initialize agent <1s
Initialize job 7s
Checkout 14s
NuGet Restore 1m 7s
MSBuild - Build Solution 54s
Visual Studio Test Platform Installer 8s
VsTest – Execute Unit Tests 35s
Publish – Test Results 5s
Post-job: Checkout <1s
Report build status <1s
Total 3m 14s

Private Agent – Azure Container Instance
Task Time
Queue time 1s
Prepare job <1s
Initialize agent N/A
Initialize job <1s
Checkout 3s
Command Line Script - NuGet Restore 4s
MSBuild - Build Solution 8s
Visual Studio Test Platform Installer 2s
VsTest – Execute Unit Tests 14s
Publish – Test Results 4s
Post-job: Checkout <1s
Report build status <1s
Total 38s

So what are the differences?


Queue Time
Both were 1 second when only running 1 build at a time. Each agent can only run 1 job at a time by default without getting into parallel builds, multiple agents, etc. When you start lining up multiple builds back-to-back the queue times on the hosted agent are going to be considerably longer.

Initialize Agent
Not applicable for privately hosted agents.

NuGet Restore
These packages needed to be restored for the test I ran:
  • FakeItEasy
  • FakeXrmEasy.9 (@jordimontana)
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.Deployment
  • Microsoft.CrmSdk.Workflow
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly
  • Microsoft.IdentityModel.Clients.ActiveDirectory
  • MSTest.TestAdapter
  • MSTest.TestFramework
On the Microsoft hosted agent, NuGet.exe is already installed. Using the NuGet build task, before it attempts to download any packages it first needs to spend a couple of seconds registering NuGet.org as a package provider. Then it downloads and installs all the packages because nothing is cached. This was particularly long on the build I’m using for comparison at 1 minute 7 seconds, but even the faster runs were still taking in the neighborhood of 40 seconds.

On the container I pre-installed NuGet.exe so instead of using the NuGet build task I used a Command Line Script task and executed something like:

"C:\Program Files\NuGet\nuget.exe" restore $(Build.SourcesDirectory)\TestPlugins.sln -Verbosity Detailed –Noninteractive

After the first run of this build, all those packages were cached locally and available so it took only 4 seconds.
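
If you’re curious where those cached packages actually live on the agent, nuget.exe can list its local cache folders:

  # List NuGet's local cache locations (global-packages folder, http-cache, etc.)
  "C:\Program Files\NuGet\nuget.exe" locals all -list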

MSBuild - Build Solution
I couldn’t find anything referencing the specifications for the hosted servers. The Azure Container Instance had 2 vCPUs and 2 GB of memory. I suspect that’s more than gets assigned to the hosted agents and as a result the build time is considerably faster.
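
Those specs are just parameters on the container group when it gets created. A minimal sketch with the Azure CLI, assuming the agent image is already sitting in a container registry (all of the names below are placeholders, and registry credentials are omitted):

  # Create a Windows container group with 2 vCPUs and 2 GB of memory
  az container create --resource-group BuildAgents-RG --name d365-build-agent --image myregistry.azurecr.io/d365-buildagent:2019-01 --os-type Windows --cpu 2 --memory 2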

Visual Studio Test Platform Installer
This is an out of the box build task which installs VSTest.Console.exe needed to run .NET unit tests. In hindsight this step probably wasn’t needed on the hosted agent since it’s already installed by default.

I spent a fair amount of time trying to get this installed on the container image without success. Again in hindsight it would have been easier to install a full copy of Visual Studio 2017 (which would have included this) instead of trying to install the bare minimum number of components I thought I’d need for a capable D365 build & test server. The flip side is that the container image becomes larger, more costly, and more cumbersome to deal with. The bright side is that once it’s installed it’s available for future use without re-downloading and re-installing. The build task is smart like that and first checks if it’s there before blindly installing; that 2 seconds was just the check to see if it was already there. The bigger reason I wanted to get it installed was to simplify and reduce the number of steps a person would need to go through to create a build. It’s just one more thing for someone new coming in to forget and waste time on because the tests won’t run.
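
For what it’s worth, the installer task ultimately just downloads the Microsoft.TestPlatform package from NuGet, and the VsTest task can be pointed at a specific vstest.console.exe location. So one route that might work for baking it into the image (untested here, and the output folder is an assumption) is something like:

  # Pull the same Microsoft.TestPlatform package the installer task uses, during the image build
  "C:\Program Files\NuGet\nuget.exe" install Microsoft.TestPlatform -OutputDirectory C:\TestPlatform -NonInteractive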

VsTest – Execute Unit Tests
I again attribute the difference to the virtual hardware specs likely being better.

Part 2 will cover what went into creating the container.

Part 3 will cover getting up and running with Windows containers.