Friday, March 15, 2019

Run Your Automated D365 CE UI Tests From Azure DevOps

EasyRepro

If you hadn't heard about it already, EasyRepro is a UI testing framework for Dynamics 365 CE built on top of Selenium, one of the more popular UI testing frameworks available today. Those who might have tried using Selenium directly with D365 CE in the past only found pain and suffering, which for most people ultimately led to the conclusion that it wasn't worth investing huge amounts of time creating tests for little upfront return. In my opinion EasyRepro now makes creating UI tests feasible, as it abstracts away most of the complexities involved with Selenium development and boils down CE interactions to a single line of code in many cases. At the same time it's still flexible enough to extend or use the underlying core Selenium functionality. If you're at all interested, download the Git repository and give it a test drive.

Running Tests in Azure DevOps

Once you've cloned the EasyRepro project and run some of the examples on your local machine, you'll quickly realize that you can't tie your machine up all day running tests. So if you're planning on running dozens or hundreds of these tests you'll need to find an alternative place to run them from. Azure DevOps (ADO) can be that alternative because it offers solutions for 2 different scenarios when it comes to running automated tests: running a group of tests as part of a build & release pipeline, and letting non-developers run individual tests on demand.

Project Set Up

For example purposes I'll use a Visual Studio test project referencing the Dynamics365.UIAutomation.Api (EasyRepro) libraries from NuGet. I've additionally added a reference to Microsoft.Azure.KeyVault to handle credentials so we don't need to worry about them getting into source control or about replacing them in any of the ADO processes. For this example we'll just be using Google Chrome as the browser of choice for testing.

Build Set Up

Assuming tests are created and things are working locally, get the project checked into ADO so we can set up a build. It's going to work like a normal .NET project build.


Start off by creating a new build in ADO using an empty job. I've got a repository with the EasyRepro test project that I'm pulling the source from initially. This is executing on a private build agent (running inside a Windows container) so I'm just using the command line to do a NuGet restore, but you could also use the standard NuGet build task. Build the solution or project like normal using the release configuration. Then, probably the most important step, use the Publish Build Artifacts build task to publish the bin/release folder to Azure Pipelines/TFS. This is what makes the compiled assembly and other references available to the release we'll be setting up to run this.
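
For what it's worth, the restore and build steps just run the usual commands. Roughly, with a made-up solution name:

nuget restore D365.UITests.sln
msbuild D365.UITests.sln /p:Configuration=Release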

To keep things updated, under Triggers, check Enable continuous integration so that the project gets rebuilt and published each time an update is made.

Release Set Up

In order for ADO to be able to run tests on demand we need to create a release pipeline. Again, don't start with an existing template, just pick the empty job. Under Artifacts select Add, then choose Build and use your project and the build pipeline you just created as the source. You have a few different options for the default version to use, but I'd probably set it to use the latest each time.


In the first and only stage, open the tasks to begin setting up the test run. I start with a Visual Studio Test Platform Installer task. You might need this if you're running a private build agent; if you're using a Microsoft hosted agent you shouldn't need it because it's already installed, but having it there won't hurt anything. Then add a Visual Studio Test task and use version 2.*. Pay close attention to the configuration. Set "Select tests using" to "Test run". Make sure the box is checked that indicates the Test mix contains UI tests. Last, make sure the test platform version is using the version Installed by Tools Installer.




Depending on the build agent, you may or may not need to install Chrome (or other browsers for that matter). If things aren’t working, try installing it with the following PowerShell script:

# Download the Chrome installer to the temp folder
$Path = $env:TEMP;
$Installer = "chrome_installer.exe";
Invoke-WebRequest "http://dl.google.com/chrome/install/375.126/chrome_installer.exe" -OutFile $Path\$Installer;
# Run a silent, elevated install and wait for it to finish
Start-Process -FilePath $Path\$Installer -Args "/silent /install" -Verb RunAs -Wait;
# Clean up the installer
Remove-Item $Path\$Installer

Test Plan Set Up For Manual Test Execution

Hooking individual tests up to test cases is only required if you want the ability to run them on demand from the Test Plans area in ADO. It works better if you can link the release that was created to the test plan, but I'm pretty sure you'll need either VS Enterprise, Test Professional, MSDN, or Test Manager to do so. If you've already got a test plan, right-click on it and go to Test plan settings. Select the build that was just created and leave the build number at latest. Then choose the pipeline and stage from the release setup.


Assuming you’ve got a Test Case created that matches one of the EasyRepro tests, head back to the project in Visual Studio, open up Test Explorer, and find the test. Right-click on it and choose Associate to Test Case. It’s probably a good idea to make sure you’re connected to the project in ADO before you do this.



Type in the Test Case Id then Add Association. Save and close and when you open the test case in ADO, the automation status should now say ‘Automated’. Repeat for any other tests.

Under Test Plans you should be able to select individual tests or groups of tests and then, under the green Run button, pick Run for web application to execute. Having the release tied directly to the test plan saves the user from having to choose that same data each time, which avoids a few extra clicks and possible confusion.


A window will open confirming and validating the tests. If there aren't any errors you can select View test run and sit back and wait for the tests to complete. With any luck they will; otherwise you'll need to retrace your steps and figure out what's wrong.



If you aren't able to edit the test plan you can use Run with options, which will open a window where you can choose the items that got configured when linking at the test plan level. When selecting the build, the user will need to either know the build number or know enough to select Find and then pick the correct (likely the most recent) build and pipeline from the list.

Running In Builds & Releases Automatically

Really it's just a matter of combining all the steps from the Build and Release setups above, with the exception of the Publish Build Artifacts task.


And that’s it. Not much to it once you figure it out so hopefully this saves you the trouble.

Wednesday, February 27, 2019

Using Chocolatey to Distribute Developer Tools - Part 3

This is going to focus on setting up Azure DevOps to host the NuGet package feed and keeping the packages up to date. In case you missed the other posts in this series, part 1 provided an overview of Chocolatey and why you might want to use it as a D365 developer and part 2 focused on the package content and the scripts used to automate creation.

Feed setup


In your chosen Azure DevOps project head to Artifacts. If you don't see that as an option, make sure under Project Settings that Artifacts are turned on. From there it's just a matter of thinking up a name for the new feed. Once created, use the Connect to Feed option to retrieve the URL. We need this for the NuGet build task to push the package once it's created, as well as for connecting from the client. The v3 endpoint URL will be displayed here, but I'm using the v2 endpoint because it was the only way I could get it to work. It's straightforward to convert from the v3 to the v2 URL.

v3: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v3/index.json

v2: https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2

If you haven’t created a Personal Access Token (PAT) that has read/write access to Packages yet, do so now.
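
To sanity-check the feed from a client machine, you can register it with nuget.exe using that PAT. A sketch with placeholder values:

nuget sources add -Name "MyPrivateFeed" -Source "https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2" -Username "anything" -Password "{PAT}"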

Pipeline setup


I created a different build for each package in the project. The source will be the git repository hosting the project, so the first thing that happens is the project code gets downloaded to the build agent. We need those files both to build the package around whatever we download and to check the version we last built against the current public version.


Step 1: PowerShell Script: Build package


You can inline the code from #1 in a PowerShell build task. If it’s determined a new package is available, the script will download the latest version and build the Chocolatey package from it.

Step 2: NuGet: Push to feed


Originally I wanted to make everything a single PowerShell script and just use that, but I wasn't able to get the authentication working using the ApiKey parameter along with a Personal Access Token. Luckily, using the existing NuGet build task worked fine. Use the command from #2 and select custom as the type. Additionally, we don't want this step to run if a new package wasn't built; to prevent that, expand Control Options on the task and then Custom Conditions, and use this snippet to check the Pipeline Variable created in step #1 to determine whether the task should run:

eq(variables['ContinueUpdate'], 'true')
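
For reference, the inline PowerShell in step #1 sets that variable with a task logging command, something along these lines:

# Expose ContinueUpdate to later tasks in the pipeline
Write-Host "##vso[task.setvariable variable=ContinueUpdate]true"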


Step 3: PowerShell Script: Commit Updates


Inline the code from #3 in another PowerShell task to make sure your updated .nuspec file gets back into source control so it can be used the next time the process runs. You'll also want to make sure that your .gitignore file excludes .zip and .nupkg files so we don't accidentally store redundant copies of those. Don't forget to add the same Control Option change as step 2.

Step 4: Build Triggers


The last thing is to set up a schedule so it keeps itself updated. This can be done from the Triggers tab inside the build. I'm using the free hosted agent and it doesn't allow parallel builds, so I staggered the start times on the day of the week I have this running. You could very well run them all at the same time and they'd just queue up and run one at a time anyway.

Done!

Chocolatey GUI


Here’s what you end up with after all this work. Click install and you’ll have the latest version of the tool installed in a few seconds. Click uninstall and with any luck (if you cleaned up after yourself properly) everything will be gone.





To connect to Azure DevOps go to Settings and then Sources.
  • Id: Display name for the feed
  • Source: The v2 Azure DevOps feed url
  • Username: Anything
  • Password: The Personal Access Token created earlier (or a different one with Read access to Packages). Using the normal username / password combination wasn’t working here.
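
If you'd rather skip the GUI, the same source can be added with the Chocolatey command line; a sketch with placeholder values:

choco source add -n="MyPrivateFeed" -s="https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2" -u="anything" -p="{PAT}"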

A few other notes


Download counts, package size, and the package image don't get displayed. That seems to be an issue between Chocolatey GUI and Azure DevOps; setting up a feed from a different private NuGet source didn't have this problem.

Download all the code here: https://github.com/jlattimer/D365Chocolatey

Friday, February 22, 2019

Using Chocolatey to Distribute Developer Tools - Part 2

This is going to focus on the Chocolatey package set up and the code used to build updated packages.

Project structure


A single git repository which contains folders for each package / application / tool. 
Each package folder contains the following:
  • .nuspec file which provides the metadata about the package
  • PowerShell file containing the scripts that will go into the Azure DevOps Build Tasks to:
    1. Build the updated package
    2. Push to the NuGet feed
    3. Commit the updated files back to source control
  • Tools folder containing 2 PowerShell files
    1. chocolateyInstall.ps1 which handles the installation once it’s on the target machine
    2. chocolateyUninstall.ps1 which handles the uninstallation from the target machine

.nuspec file


You can choose to change the metadata to whatever you'd like, but I'm going to use what the original package contains. Later you'll see in the code used to update the packages that I'm just reading from the downloaded content and updating the matching elements.
You'll want to change the package <id> to something unique, at the very least a variant of the original. Assuming you're running the package update process in the same place as your feed (like Azure DevOps), you might run into an issue where it uses your private feed first to retrieve the package as opposed to the public source you actually intended. I'm guessing that's by design, but the issue I ran into popped up in the script comparing the version in the .nuspec in source control versus the package that is publicly available. Since we aren't specifying a version (because we won't really know what it will be once this process is automated), it pulls back the first copy it finds based on the registered sources. When the compare happens it sees the same version and aborts because it doesn't show there's an update needing to happen.

If you're going to have the process use the public package for metadata, you can get away with filling in any dummy data, and after the first pass it will get updated to the real values. If you want to look at what's there ahead of time, you can use some simple PowerShell to retrieve the package metadata.
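
A minimal sketch of that kind of lookup, using the Plug-in Registration Tool package as the example (the exact snippet is in the GitHub repo linked at the end):

# Pull the public package's metadata without downloading the package itself
Find-Package -Name "Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool" -Source "https://www.nuget.org/api/v2" |
    Select-Object Name, Version, Summary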


Be careful with version numbers: once you've pushed to the feed you cannot delete a package, you can only un-list it or publish a new version with a higher version number. If you use your own version numbering scheme it's probably not as big a deal, but more than likely you'll want to use the same version number as the public package.

Full .nuspec documentation: https://docs.microsoft.com/en-us/nuget/reference/nuspec

Package building scripts



Part 1 – building the package


This is used in a PowerShell build task and starts off with variables for the public package we’re using as a base and the corresponding file/folder locations in the project.

Next is the version comparison between the latest version of the public package and what is currently in the .nuspec file. The Plug-in Registration Tool is on NuGet, so we use NuGet.org as a source. If you're using a Chocolatey package as the source then you'll need to use Chocolatey.org instead. On Azure DevOps, NuGet is already registered as a source but Chocolatey is not, in which case you'll need to use Register-PackageSource and target Chocolatey.org (example). Once the comparison is made, if the version hasn't increased then the process stops; before doing so, a Pipeline Variable is set so we can use it to prevent any future build tasks from running. The other thing of note is the <files> section, which determines what gets included when the package command runs. In this case we want just the tools folder and the .zip file containing the actual content, and we'll be ignoring the readme file and this PowerShell script file.
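
Condensed down, the version check portion looks roughly like this. The paths and package id are examples, and the real script in the repo is the authoritative version:

# Version we last built, as recorded in the .nuspec in source control
[xml]$nuspec = Get-Content ".\PluginRegistrationTool\PluginRegistrationTool.nuspec"
$currentVersion = [Version]$nuspec.package.metadata.version

# Latest public version on NuGet.org
$public = Find-Package -Name "Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool" -Source "https://www.nuget.org/api/v2"

if ([Version]$public.Version -le $currentVersion) {
    # Nothing new, so flag the Pipeline Variable and let later tasks get skipped
    Write-Host "##vso[task.setvariable variable=ContinueUpdate]false"
    exit 0
}
Write-Host "##vso[task.setvariable variable=ContinueUpdate]true"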

Once it's been determined an update is needed, the public package is downloaded. As part of the process I'm giving it a common name that corresponds to a value in the install file. This is solely to have one less thing to change when reusing this code.
From the package used in the version comparison I'm updating the .nuspec in my package. This isn't a necessity, it's more for informational purposes. I'm also setting a Pipeline Variable here with the new version number so I can use it in a later build task.

At this point the only thing left is to run the Chocolatey command to create the package from the content of the current directory, based on what is defined in the .nuspec file. Chocolatey is installed by default on the Azure DevOps hosted build agents, but if you're running this from anyplace else you'll need to run the PowerShell command to install it first.
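
Both of those are one-liners. The install command below is the standard one from chocolatey.org/install (current as of this writing), and choco pack builds the package:

# Install Chocolatey if the machine doesn't already have it
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Create the .nupkg from the .nuspec in the current directory
choco pack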

Part 2 – publishing the package


This is used with a NuGet build task to upload the completed package to the feed that is going to host it. Specific to Azure DevOps, I'm using a Personal Access Token that has read/write access to Packages to authenticate, since the feed isn't public. Also note that I used the NuGet v2 endpoint as opposed to the newer v3 endpoint. The format looks like this:

https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2
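
For reference, the arguments in that custom command end up looking something like the line below. This is a sketch rather than a copy of what's in the repo; the package path is made up, and since the build task supplies the feed credentials the ApiKey value is essentially a placeholder:

push $(Build.Repository.LocalPath)\PluginRegistrationTool\*.nupkg -Source https://{InstanceName}.pkgs.visualstudio.com/_packaging/{FeedName}/nuget/v2 -ApiKey AzureDevOps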

You'll run into a 409 error here if you try to upload a package with a version number that already exists. I'll cover more about the feed setup in part 3 of this series.

Part 3 – Committing the changes


This is used in a separate PowerShell build task run after publishing. Since I've made modifications to the .nuspec file, they need to be committed back to source control so that the next time this runs the version comparison won't kick off the package process unnecessarily. In the commit comments I'm using the Pipeline Variable I set earlier with the new version number we're updating to. In the Azure DevOps build you also need to enable the option Allow scripts to access the OAuth token so that we can pass the System.AccessToken variable in the request header to authenticate.
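
A sketch of what that commit step can look like. The version variable name and branch are my placeholders, not necessarily what the repo uses:

# Identity for commits made by the build agent
git config user.email "build@example.com"
git config user.name "Build Agent"

# Stage and commit the updated .nuspec, referencing the new version in the message
git add *.nuspec
git commit -m "Updated package to $env:PACKAGEVERSION"

# Authenticate the push with the build's OAuth token (requires the 'Allow scripts to access the OAuth token' option)
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" push origin HEAD:master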

Installing & uninstalling


In this case these scripts are very simple. After a user chooses to install a package it's downloaded to the local machine, at which point chocolateyInstall.ps1 kicks off. The SDK tools aren't installable applications, so "installation" is just a matter of copying the files somewhere. I chose the user's Application Data folder and am creating a folder structure that will support installing tools from multiple sources. The Chocolatey unzip command will handle extraction and creation of any folders in the destination path that may be required; this is just unzipping my renamed package file downloaded from the original source. To complete things I use the Chocolatey create shortcut command to make a shortcut on the user's desktop pointing to the folder containing the executables.
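
For illustration, a trimmed-down chocolateyInstall.ps1 might look like the sketch below; the folder and file names are examples rather than exactly what's in the repo:

# Folder this script (and the packaged .zip) was extracted to
$toolsDir = Split-Path -Parent $MyInvocation.MyCommand.Definition

# "Install" by unzipping into a folder under the user's Application Data
$installDir = Join-Path $env:APPDATA "D365Tools\PluginRegistrationTool"
Get-ChocolateyUnzip -FileFullPath (Join-Path $toolsDir "PluginRegistrationTool.zip") -Destination $installDir

# Desktop shortcut pointing at the folder with the executables
Install-ChocolateyShortcut -ShortcutFilePath (Join-Path $env:USERPROFILE "Desktop\Plug-in Registration Tool.lnk") -TargetPath $installDir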

When things get installed, Chocolatey is keeping the original package around so it knows what is on the machine to determine if updates are available, do reinstalls, and to access the uninstall script. The location will be similar to: C:\ProgramData\chocolatey\lib\YourPackage

As you might have guessed, chocolateyUninstall.ps1 runs when the user chooses to uninstall the package. Since it's not an installed application, I'm just deleting the things I created during the install and deleting the package from the lib folder. There are a number of different Chocolatey commands you could use to uninstall, but at the time I worked on this I couldn't get the one I believe I was supposed to use to work, so I went down this path. For the full list of Chocolatey commands, check out their documentation.
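
And the matching chocolateyUninstall.ps1 is just cleanup of everything the install created, roughly:

# Remove the extracted files, the desktop shortcut, and the cached package
Remove-Item (Join-Path $env:APPDATA "D365Tools\PluginRegistrationTool") -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item (Join-Path $env:USERPROFILE "Desktop\Plug-in Registration Tool.lnk") -Force -ErrorAction SilentlyContinue
Remove-Item "C:\ProgramData\chocolatey\lib\PluginRegistrationTool" -Recurse -Force -ErrorAction SilentlyContinue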

That covers the package creation process. Part 3 will look at setting up Azure Artifacts and a Build Pipeline to keep things continuously updated, as well as how to use the new feed in the Chocolatey clients. In case you missed it, part 1 gave an overview of Chocolatey and why you might want to use it in the Dynamics 365 space.

I've got everything up on GitHub so you can refer to that in case I don't end up explaining something clearly enough:

https://github.com/jlattimer/D365Chocolatey

Thursday, February 21, 2019

Using Chocolatey to Distribute Developer Tools - Part 1

First off, what is Chocolatey? Chocolatey is a package manager for applications. Quite literally it's NuGet for Windows, as the package formats are the same. There are one or more repositories (again, it's just NuGet) that keep track of all the packages available, and you connect to one with a client to download and install an application. It's similar to NuGet in the sense that the community can contribute packages, but there appear to be fewer contributions from "official" sources like Microsoft, Google, Apple, etc. That doesn't mean you aren't able to download and install software from these companies through Chocolatey, though. Applications like Visual Studio, Chrome, and iTunes have all been downloaded, most likely from a publicly available link, and converted into a package by someone in the community. I should call out right away that this isn't meant to circumvent licensing or pirate anything. The original applications aren't being modified in any way; they just have some scripting added around them to automate installations. Since you're basically relying on "some guy in his basement" to create these packages, there is some amount of risk that you could download something harmful. Granted, the packages are virus scanned and community moderated, but things can still slip through. That's no different than downloading something off NuGet, GitHub, or even the XrmToolBox for that matter. This just happens to be one of many approaches you could take if you are in a position to assert more control over the things your users / developers are downloading and installing. Making commonly used software easily available for people to install themselves lessens the need for them to search around the internet and potentially download something bad. Not to mention, self-service software installation can take some of the burden off the support team and free them up to do other things.

What goes into a package?


At the core there is an XML file which contains the metadata about the package; things like title, version, description, licensing info, dependencies, etc. can all be found here. This is the source of the information everyone sees in the download feeds. Instead of DLLs like a typical NuGet package, you'll usually have an executable file of some sort, or the files needed to run the application if an actual installation isn't required. Around this is wrapped some PowerShell code which takes the downloaded content and performs the tasks required to get the application to a usable state. Ideally there should be no interaction from the user, as one of the big selling points of Chocolatey is for system administrators to use it to silently install and manage software across a large number of machines. Each package is versioned by the repository so you can always go back to a previous version if need be, and just like NuGet, packages are immutable. Once it's published that's it, no changing it without increasing the version number. That can be painful if you're developing packages, but for consumers it prevents the old bait and switch where people get tricked into downloading something bad that once worked perfectly fine. Chocolatey will also keep track of what is installed on your machine and make updates available as they are released. Additional PowerShell can be put in place to run during the uninstall process, so package creators can add any code required to assist with cleanup that might need to happen on top of the application's normal removal process.

But I’m a Dynamics person and not a system administrator


True, this is maybe geared a little more toward organizations than individuals, but that doesn't mean it couldn't be put to personal use. For developers, just think about the next time you need to rebuild your primary development machine. Hunting down all the installs and clicking through everything is about a day-long process, at least it is when I need to do it.

This is my example use case: creating a way to more easily install the Dynamics 365 CE SDK tools (Plug-in Registration Tool, Package Deployer, etc.). Back when there was a single download for the SDK it was easy because all the tools were right there. I can understand why managing that was probably difficult and going to an online-only SDK was the way to go, but it also introduced some new challenges in making these tools available. The new process for downloading them from NuGet using PowerShell is documented, but I'll still say it would have been easier just to point people to the manual download link, have them rename .nupkg to .zip, and be done with it. Imagine this situation: you're dealing with someone who isn't really a developer but has just enough knowledge of the platform to know how plug-ins work, and you're trying to describe to them over the phone how to edit a plug-in step. You'll end up asking them if they have the Plug-in Registration Tool installed, and of course they'll say "no", so then you start telling them they need to download it from NuGet, and they'll usually respond with something to the effect of, "WTH is nugget?". So then you just ask for remote access and do it yourself.

Getting started


Install Chocolatey
https://chocolatey.org/install
Now you’re probably thinking that this is all command line stuff that you don’t want to memorize just to make it “easier” to install the latest version of an application. Not to worry, there is a UI that can be installed to make things a little more friendly.

Install Chocolatey UI
https://chocolatey.org/packages/ChocolateyGUI
At this stage you should have Chocolatey up and running and can download packages the community has already made available.
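
From there, installing something from the community feed is a single command, for example:

# Install a package silently
choco install googlechrome -y

# Or update everything that's already installed
choco upgrade all -y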

Creating and hosting your own packages


You could go through the process of creating a package and then uploading it to the main Chocolatey site, but obviously if you're creating something solely for personal or private use, public hosting isn't an option. The other thing I'd throw out there is that it might be a violation of some terms of service or other legalese somewhere that forbids redistributing copyrighted material in this way, so if you don't want "the lawyers" giving you a call you'd better look for another alternative.

Remember way back in the first paragraph where I mentioned that Chocolatey and NuGet are basically the same thing? That means there are a number of ways to create your own private NuGet server. The software itself is open source, so if you want to stand up your own copy and manage the hosting, that's an option. There are also some paid services / products which may work for you: ProGet offers a commercial product that you run on your own server (which is a pretty good deal if you've got a lot of users and need AD integration), and MyGet is a cloud-based offering that has paid and free plans.

Instead of those I'm going to focus on using Azure DevOps to host the packages & feeds and to automate the package maintenance process. Azure DevOps is free for up to 5 users or included with certain MSDN subscriptions. Package management (Azure Artifacts) is a paid add-on which is available with the same free access and then charges per user, per month after that, which last time I checked starts to get pricey when you've got a lot of developers.

In part 2 I'll go into the project & package setup and the code used to keep the packages up to date. As I mentioned earlier, there will be examples showing how to convert the SDK Tools into packages as well as how you can replicate packages from other feeds (like Chocolatey.org) for use in your own curated feed. Part 3 will cover how to create Artifacts in Azure DevOps and how you can use a Build Pipeline to watch for new versions and automatically update the packages. That last part is probably the most important: creating an installable package only to use it once is a waste of time. The real benefits come from installing or updating multiple times on multiple machines.

Friday, February 1, 2019

Get Latest Solution Patch Name With PowerShell

Someone might find this useful if trying to move solutions around using Azure DevOps & PowerShell. I wanted to automate the export of the latest solution patch using the Microsoft.Xrm.Data.Powershell library as part of a build pipeline. It's pretty straightforward, but it requires the solution name; that's simple for the base solution, but when dealing with a patch it's not possible to predict what the name is going to be in order to build it into your script.

I came up with this. Given a solution uniquename, it sets build variables for the uniquename & version of the latest patch of a solution, or the base solution's uniquename & version if no patches exist.
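
The full script runs in a PowerShell pipeline task, but the gist of it looks like the sketch below. It assumes $conn is an existing connection created with Connect-CrmOnline, and the solution and variable names are placeholders:

# Base solution uniquename - supplied as a pipeline variable in the real script
$solutionName = "MyBaseSolution"

# Look up the base solution record
$base = (Get-CrmRecords -conn $conn -EntityLogicalName solution -FilterAttribute uniquename -FilterOperator eq -FilterValue $solutionName -Fields uniquename,version,solutionid).CrmRecords[0]

# Patches are solutions whose parentsolutionid points at the base solution
$patches = (Get-CrmRecords -conn $conn -EntityLogicalName solution -FilterAttribute parentsolutionid -FilterOperator eq -FilterValue $base.solutionid -Fields uniquename,version).CrmRecords

# Highest-versioned patch wins; fall back to the base solution if no patches exist
$latest = $patches | Sort-Object { [Version]$_.version } -Descending | Select-Object -First 1
if ($null -eq $latest) { $latest = $base }

# Surface the results as build variables for later tasks (like the export step)
Write-Host "##vso[task.setvariable variable=SolutionName]$($latest.uniquename)"
Write-Host "##vso[task.setvariable variable=SolutionVersion]$($latest.version)"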