Friday, August 2, 2019

CRM DateTime Workflow Utilities v2.4.0.0

Added a few new features in this update:

  • Round to Quarter Hour
  • Round to Half Hour
  • Round to Hour
Exactly what you'd expect - round times up or down.
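
If you're curious about the mechanics, the arithmetic behind this kind of rounding is simple. Here's a minimal C# sketch of rounding a DateTime to the nearest interval (my own illustration of the idea, not the actual code from the solution):


using System;

// Minimal sketch (illustration only): round a DateTime to the nearest interval
static DateTime RoundToNearest(DateTime value, TimeSpan interval)
{
    // Express the time as a number of interval-sized chunks, round to the
    // nearest whole chunk (midpoints round up), then convert back to ticks
    var intervals = Math.Round(value.Ticks / (double)interval.Ticks,
        MidpointRounding.AwayFromZero);
    return new DateTime((long)intervals * interval.Ticks, value.Kind);
}

// e.g. rounding to a 15 minute interval: 8:07 -> 8:00, 8:08 -> 8:15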


You can also install it using the XrmToolBox Solution Installer by @RajYRaman

Thursday, June 20, 2019

Cleaning Up Leftover WebDriver Processes

When developing tests with EasyRepro there are going to be plenty of times during the debugging process where you end up cancelling before the test runs to completion. When that happens the WebDriver for the browser you're testing on doesn't have the chance to shut down properly. The most obvious result is the browser being left open. Annoying, but easily fixed by just closing it. The hidden side effect is that a number of leftover WebDriver processes remain running on your machine.



They probably aren't going to grind your machine to a halt, but it would be better if those processes got cleaned up without having to remember to manually End task every single one of them.

Fortunately there's an easier (albeit not perfect) way to handle this. With a few lines of code you can check the running processes and stop any offenders. You'll notice that I added the code to the AssemblyInitialize method and not AssemblyCleanup. In a perfect world you'd do the clean up at the end, after all tests have run. Unfortunately that won't work here: in the event you cancel a test or abort in some way, AssemblyCleanup doesn't run. The next best thing is to run the clean up code before starting a new run of tests. Once this is in place you shouldn't have more than one leftover process at any given time.


using System;
using System.Diagnostics;

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext testContext)
{
    // WebDriver executables that aborted runs may leave behind:
    // chromedriver (Chrome), geckodriver (Firefox),
    // IEDriverServer (command line server for the IE driver)
    var driverNames = new[] { "chromedriver", "geckodriver", "IEDriverServer" };
    foreach (var name in driverNames)
    {
        foreach (var process in Process.GetProcessesByName(name))
        {
            try
            {
                process.Kill();
            }
            catch (Exception)
            {
                // The process may have exited between enumeration and Kill
            }
        }
    }
}

Monday, June 17, 2019

Reporting on EasyRepro Test Runs

There were several issues raised on the EasyRepro project that requested a report of the results after a test run completed. One specifically referenced using the Extent Reporting Framework as a means to accomplish this. It seemed like a reasonable ask so I thought I’d give it a try.

I'd expect a decent report to provide not only the results of a run showing if tests passed or failed but also detail about what specifically was being tested and include screen shots throughout the process. This will make it easier for a non-developer to interpret and possibly troubleshoot tests in the event they fail.

Getting started

First things first: add a reference to ExtentReports to your existing EasyRepro test project.
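
(If you're adding it via the Package Manager Console, the NuGet package id should simply be ExtentReports, i.e. Install-Package ExtentReports, but double-check the id against nuget.org.)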

To keep things cleaner I'd suggest using a base class for all your tests to cut down on the duplication. Except for a few helper methods, the majority of the code will reside in the MSTest initialize and cleanup methods.

AssemblyInitialize (runs once prior to any of the tests) primarily contains what is needed to set up the report instance. Be mindful of the report file name and output path. In addition to some nice-looking visualizations for test output, Extent also provides the ability to add environment-level information to the dashboard so the consumer has a little more context. If you're changing values throughout the test run then this isn't going to be the right place to record them. In my example I'm showing the browser being tested along with the test user and D365 CE instance.

The final line creates a grouping for the individual tests contained in the class. This makes sense assuming all the tests in the class are related to one another.


// Namespaces used throughout the base class
using System;
using System.Reflection;
using AventStack.ExtentReports;
using AventStack.ExtentReports.Reporter;
using AventStack.ExtentReports.Reporter.Configuration;
using Microsoft.Dynamics365.UIAutomation.Browser;
using Microsoft.VisualStudio.TestTools.UnitTesting;

protected static ExtentReports Extent;
protected static ExtentTest TestParent;
protected static ExtentTest Test;
protected static string AssemblyName;
public TestContext TestContext { get; set; }

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext context)
{
    AssemblyName = Assembly.GetExecutingAssembly().GetName().Name;

    // http://extentreports.com/docs/versions/4/net/
    var dir = context.TestDir + "\\";
    const string fileName = "report.html";
    var htmlReporter = new ExtentV3HtmlReporter(dir + fileName);
    htmlReporter.Config.DocumentTitle = $"Test Results: {DateTime.Now:MM/dd/yyyy h:mm tt}";
    htmlReporter.Config.ReportName = context.FullyQualifiedTestClassName;
    htmlReporter.Config.Theme = Theme.Dark;

    // Add any additional contextual information
    Extent = new ExtentReports();
    Extent.AddSystemInfo("Browser", Enum.GetName(typeof(BrowserType), TestSettings.Options.BrowserType));
    Extent.AddSystemInfo("Test User", 
        System.Configuration.ConfigurationManager.AppSettings["OnlineUsername"]);
    Extent.AddSystemInfo("D365 CE Instance",
        System.Configuration.ConfigurationManager.AppSettings["OnlineCrmUrl"]);    
    Extent.AttachReporter(htmlReporter);
    context.AddResultFile(fileName);

    // Create a container for the tests in the class
    TestParent = Extent.CreateTest(context.FullyQualifiedTestClassName);
}

In TestInitialize (runs prior to each test) the main thing happening is adding the individual test to the group created for the class. It's initialized using the unit test method name pulled from the test context and the unit test Description attribute (if one is present). The description isn't available in the test context, but given the information that is available it can be retrieved via reflection.


[TestInitialize]
public void TestInitialize()
{
    // Get unit test description attribute
    var type = Type.GetType($"{TestContext.FullyQualifiedTestClassName}, {AssemblyName}");
    var methodInfo = type?.GetMethod(TestContext.TestName);
    var customAttributes = methodInfo?.GetCustomAttributes(false);
    DescriptionAttribute desc = null;
    if (customAttributes != null)
    {
        foreach (var n in customAttributes)
        {
            desc = n as DescriptionAttribute;
            if (desc != null)
                break;
        }
    }

    // Create individual test under the parent container / class
    Test = TestParent.CreateNode(TestContext.TestName, desc?.Description);
}

The only purpose of the code in TestCleanup (runs after each test) is to set the Extent test result for the report. The goal was to differentiate between a test that passed, failed because of an exception, or failed because the criteria for passing were not met. There are a number of other statuses, but I'm not sure how often you'd run into any of them.


[TestCleanup]
public void TestCleanup()
{
    // Sets individual Extent test result so it reflects correctly in the report
    if (Test.Status == Status.Error)
        return;

    switch (TestContext.CurrentTestOutcome)
    {
        case UnitTestOutcome.Error:
            Test.Fail("Test Failed - System Error");
            break;
        case UnitTestOutcome.Passed:
            Test.Pass("Test Passed");
            break;
        case UnitTestOutcome.Failed:
            Test.Fail("Test Failed");
            break;
        case UnitTestOutcome.Inconclusive:
            Test.Fail("Test Failed - Inconclusive");
            break;
        case UnitTestOutcome.Timeout:
            Test.Fail("Test Failed - Timeout");
            break;
        case UnitTestOutcome.NotRunnable:
        case UnitTestOutcome.Aborted:
            Test.Skip("Test Failed - Aborted / Not Runnable");
            break;
        case UnitTestOutcome.InProgress:
        case UnitTestOutcome.Unknown:
        default:
            Test.Fail("Test Failed - Unknown");
            break;
    }
}

AssemblyCleanup (runs after all of the tests) ensures the data collected gets written to the output file.


[AssemblyCleanup]
public static void AssemblyCleanup()
{
    Extent.Flush();
} 

I'm using 2 helper methods to support the reporting. AddScreenShot takes a screen shot and tags it with some text, in most cases a description of what state the page is in. LogExceptionAndFail grabs the error message and stack trace, formats it, logs an error in the report, and rethrows so the test still fails due to the exception.


public void AddScreenShot(WebClient client, string title)
{
    var filename = Guid.NewGuid();
    var filePath = Path.Combine(TestContext.TestResultsDirectory, $"{filename}.png");
    // Wait for the page to be idle (UCI only)
    client.Browser.Driver.WaitForTransaction(5);
    client.Browser.TakeWindowScreenShot(filePath, ScreenshotImageFormat.Png);
    Test.Info(title, MediaEntityBuilder.CreateScreenCaptureFromPath(filePath).Build());
}

public void LogExceptionAndFail(Exception e)
{
    // Formats the exception details to look nice
    var message = e.Message + Environment.NewLine + e.StackTrace?.Trim();
    var markup = MarkupHelper.CreateCodeBlock(message);
    Test.Error(markup);
    // Rethrow without resetting the original stack trace
    // (requires using System.Runtime.ExceptionServices;)
    ExceptionDispatchInfo.Capture(e).Throw();
}

Capturing report data

A unit test class will need to inherit from the pre-defined base class in order to output to the report. If a description is placed on the test it will show on the report and give the person looking at the report a better idea of what the test is trying to accomplish.

Various levels of text-based messages (Info, Warning, Debug, etc.) can be written to the report output depending on the type of information you'd like to surface.

To keep track of the test as it progresses, I'm using one of the helper methods from the base class to take a screen shot and assign some text to it describing the operation that was just attempted. This should also help capture the state of the page in case the next step fails. This approach might not be conclusive in all cases, since many of the EasyRepro methods that interact with the page perform multiple operations to get to the end result. A failure in the middle wouldn't be reflected in the prior screen shot. This is where capturing a video of the entire test comes in handy.

I've also wrapped each test in a try/catch block so that any exceptions can be run through the other helper method in order to capture the details and fail the test so it reports an error rather than a standard failure in the results.


[TestClass]
public class CreateAccount : TestBase
{
    [TestMethod]
    [Description("Test should fail due to an error")]
    public void CreateAccount_Error()
    {
        // Example log entries
        Test.Info("Log an information message");
        Test.Warning("Log a warning message");

        try
        {
            var client = new WebClient(TestSettings.Options);
            using (var xrmApp = new XrmApp(client))
            {
                xrmApp.OnlineLogin.Login(_xrmUri, _username, _password);
                AddScreenShot(client, "After login");

                xrmApp.Navigation.OpenApp(UCIAppName.Sales);
                AddScreenShot(client, $"After OpenApp: {UCIAppName.Sales}");

                xrmApp.Navigation.OpenSubArea("Sales", "Accounts");
                AddScreenShot(client, "After OpenSubArea: Sales/Accounts");

                xrmApp.CommandBar.ClickCommand("New");
                AddScreenShot(client, "After ClickCommand: New");

                // Field name is incorrect which will cause an exception
                xrmApp.Entity.SetValue("name344543", TestSettings.GetRandomString(5, 15));
                AddScreenShot(client, "After SetValue: name");

                xrmApp.Entity.Save();
                AddScreenShot(client, "After Save");

                Assert.IsTrue(true);
            }
        }
        catch (Exception e)
        {
            LogExceptionAndFail(e);
        }
    }
}

Report output

The report presents a nice looking dashboard which sums up detail about all the tests performed. In this example Tests reflects the number of unit test classes I included in the run and Steps shows the number of individual tests. If any of the steps fail, it reports the overall test has failed.


You can drill into each individual test to see what was logged, start/end times, and the duration.
Clicking on a screen shot that was included will display a full-size version. I've also chosen to format the exception details in a code block, so they stand out from the regular text.


You can download the full sample from GitHub.

Tuesday, April 16, 2019

Capture Pictures & Video From EasyRepro Tests

It goes without saying that tests are going to fail from time to time. Luckily EasyRepro does a pretty good job of providing descriptive error messages to make troubleshooting issues easier. Inevitably the first person to deal with an issue is going to be the developer, and most likely they'll need to re-run the test on their local machine to watch exactly what happens. Debugging, or at least offloading some of the triage work, can be made easier by capturing screenshots and/or videos of tests as they are running.

Screenshots

Alone this isn't anything new, as EasyRepro already has TakeWindowScreenShot which does a screen capture. It expects a path & file name and an image format as parameters. I like to have the name of the test in my image name, and we can use the TestContext to get the name of the test currently being executed for the file name.

You can get the TestContext object when it’s passed to the ClassInitialize method.


private static TestContext _testContext;

[ClassInitialize]
public static void SetupTests(TestContext testContext)
{
    _testContext = testContext;
}

I created a helper method to use when needing a screenshot. It will create a folder for the images, create a unique filename based on the test name & date, and then call TakeWindowScreenShot to grab the browser screen and save to disk. It also adds the file to the TestContext which is important if running tests in Azure DevOps.


private static void ScreenShot(InteractiveBrowser xrmBrowser, TestContext testContext)
{
    const ScreenshotImageFormat format = ScreenshotImageFormat.Jpeg;

    const string imagePath = "screenshots";
    Directory.CreateDirectory(imagePath);

    var testName = $"{testContext.TestName}_{DateTime.Now:yyyyMMddTHHmmss}";
    var filename = $"{imagePath}\\{testName}.{format.ToString().ToLower()}";
    xrmBrowser.TakeWindowScreenShot(filename, format);

    testContext.AddResultFile(filename);
}
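
For context, here's roughly how the helper might be called from inside a test. This is a hedged sketch; it assumes the WebClient-based API used elsewhere on this blog, where client.Browser exposes the underlying InteractiveBrowser:


[TestMethod]
public void CreateAccount_WithScreenShot()
{
    // _xrmUri, _username, _password assumed to be defined as in the other samples
    var client = new WebClient(TestSettings.Options);
    using (var xrmApp = new XrmApp(client))
    {
        xrmApp.OnlineLogin.Login(_xrmUri, _username, _password);

        // Capture the state of the page right after login
        ScreenShot(client.Browser, _testContext);
    }
}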

Videos

As it turns out this is really easy to set up (assuming tests are running under Visual Studio 2017 v15.5+). You’ll need to add a .runsettings file to your solution. To do so, right-click on your solution and add a new item (XML file) and make sure it’s named something like settings.runsettings, the .runsettings extension being key. Make sure this file gets into source control.

The basic content to get recordings working:


<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
   <RunConfiguration>
     <ResultsDirectory>.\TestResults</ResultsDirectory>
   </RunConfiguration>
   <DataCollectionRunSettings>
     <DataCollectors>
       <DataCollector uri="datacollector://microsoft/VideoRecorder/1.0"
                      assemblyQualifiedName="Microsoft.VisualStudio.TestTools.DataCollection.VideoRecorder.VideoRecorderDataCollector, Microsoft.VisualStudio.TestTools.DataCollection.VideoRecorder, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                      friendlyName="Screen and Voice Recorder">
         <!--Video data collector was introduced in Visual Studio 2017 version 15.5 -->
       </DataCollector>
     </DataCollectors>
   </DataCollectionRunSettings>
</RunSettings>

In Visual Studio, under the Test menu -> Test Settings, select Select Test Settings File and choose the new .runsettings file.
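
If you're running tests from the command line instead, vstest.console.exe accepts the same file via its /Settings switch, for example: vstest.console.exe YourTests.dll /Settings:settings.runsettings (YourTests.dll being a stand-in for your actual test assembly).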

And it should be as simple as that. After a test executes, the test results folder should contain a sub-folder with a GUID for a name, and inside will be a .wmv file which is the recording of the test being run in the browser. The one downside is that it appears to be an all-or-nothing approach to capturing video; if this is enabled, all tests in the run have individual recordings created. I couldn't find a way to start/stop recording on the fly from code.

Azure DevOps

To make use of the images and videos when running tests from Azure DevOps you’ll need to make a couple adjustments.

In the Visual Studio Test task, under Settings file, choose the .runsettings file you created.
Also, if you aren't already using a Publish Build Artifacts step after the tests run, add one and publish the TestResults folder.

Now any images that were added to the TestContext or videos captured will be available in the test results.

Monday, April 15, 2019

Keep Your EasyRepro NuGet Package References Correct

More than once while working with EasyRepro projects I've found myself in a situation where tests that were once working inexplicably stopped. After combing through code and verifying credentials I eventually figured out that references to one or more of the required Selenium components had somehow been updated without my knowledge. An example of when this can be particularly frustrating is when the Chrome driver gets updated to the latest version, which works with the always-updated version of Chrome installed on my machine. Everything works fine when running tests locally. When deploying to Azure DevOps and running on an agent where an older version of Chrome is installed, everything fails because the latest driver doesn't support older browser versions.

To avoid this issue I created a PowerShell script which will reset the Selenium component versions referenced in the project to what EasyRepro supports. Luckily this mismatch between driver and browser doesn't seem to affect the opposite scenario of what I previously described, at least when Chrome is being used.

Older driver version + newer browser version = OK
New driver version + older browser version = NOT OK

Code


When this runs it will update the packages.config file in the project and make sure the versions match those listed at the beginning of the script. If there is a mismatch it will also update any references in the project file. If this makes an update while the project is open in Visual Studio, you'll be prompted about a conflicting modification (because of the background update); go ahead and select Overwrite and everything should be good.

There are 2 ways of using the script.

Use during development

1. Add a folder named Tools to your test project and add to it this script and a copy of NuGet.exe.
2. Open the .csproj for your project and add these lines:

<Target Name="FixEasyRepro" BeforeTargets="EnsureNuGetPackageBuildImports">
   <Exec Command="powershell.exe -NonInteractive -ExecutionPolicy Unrestricted Tools\FixEasyReproPackageReferences.ps1" />
</Target>


This will execute the script prior to the automatic NuGet package restore which happens before the actual project build.

Use in Azure DevOps

The package restore and build process works a little differently in Azure DevOps. The recommended approach is to use a NuGet build task to restore packages prior to executing the build task. The script will still run; however, the automatic package restore will not happen. If there was an update made, you'd likely see a build failure because NuGet had already restored an incorrect package version. In order to maintain the modification to the project file so it works locally, add a PowerShell task which executes the script before the NuGet task runs. This will correct any mismatches so that the correct versions are restored. When the script re-runs during the build step, everything will already be correct and the build should complete.