Monday, June 17, 2019

Reporting on EasyRepro Test Runs

There were several issues raised on the EasyRepro project that requested a report of the results after a test run completed. One specifically referenced using the Extent Reporting Framework as a means to accomplish this. It seemed like a reasonable ask so I thought I’d give it a try.

I'd expect a decent report to provide not only pass/fail results for the run but also detail about what specifically was being tested, along with screen shots taken throughout the process. This makes it easier for a non-developer to interpret the results and possibly troubleshoot tests in the event they fail.

Getting started

First things first, add a reference to the ExtentReports NuGet package to your existing EasyRepro test project.
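If your project uses PackageReference, that amounts to a line like the following in the .csproj (the version shown is an assumption; the post targets the v4 API, so any 4.x build should behave similarly):

```xml
<!-- Version is illustrative; use whatever 4.x release is current -->
<PackageReference Include="ExtentReports" Version="4.1.0" />
```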

To keep things cleaner, I'd suggest using a base class for all your tests to cut down on duplication. Aside from a few helper methods, the majority of the code will reside in the MSTest initialize and cleanup methods.
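The snippets that follow all live inside that shared base class. A minimal shell might look like this; the class and namespace names are illustrative, and the exact using directives can vary with your package versions, so treat this as a sketch rather than a definitive list:

```csharp
using System;
using System.IO;
using System.Reflection;
using AventStack.ExtentReports;                         // ExtentReports, ExtentTest, Status
using AventStack.ExtentReports.MarkupUtils;             // MarkupHelper
using AventStack.ExtentReports.Reporter;                // ExtentV3HtmlReporter
using AventStack.ExtentReports.Reporter.Configuration;  // Theme
using Microsoft.Dynamics365.UIAutomation.Api.UCI;       // WebClient, XrmApp
using Microsoft.Dynamics365.UIAutomation.Browser;       // BrowserType
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;                                  // ScreenshotImageFormat

[TestClass]
public abstract class TestBase
{
    // The static report fields, the AssemblyInitialize/TestInitialize/
    // TestCleanup/AssemblyCleanup methods, and the two helper methods
    // shown in this post all go here.
}
```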

AssemblyInitialize (runs once prior to any of the tests) primarily contains what is needed to set up the report instance. Be mindful of the report file name and output path. In addition to providing some nice looking visualizations for test output, Extent also provides the ability to add some additional environment level information to the dashboard so the consumer has a little more context. If you're changing values throughout the test run then this isn't going to be the right place to record them. In my example I'm showing the browser being tested along with the test user and D365 CE instance.

The final line creates a grouping for the individual tests contained in the class. This makes sense assuming all the tests in the class are related to one another.


protected static ExtentReports Extent;
protected static ExtentTest TestParent;
protected static ExtentTest Test;
protected static string AssemblyName;
public TestContext TestContext { get; set; }

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext context)
{
    AssemblyName = Assembly.GetExecutingAssembly().GetName().Name;

    // http://extentreports.com/docs/versions/4/net/
    var dir = context.TestDir + "\\";
    const string fileName = "report.html";
    var htmlReporter = new ExtentV3HtmlReporter(dir + fileName);
    htmlReporter.Config.DocumentTitle = $"Test Results: {DateTime.Now:MM/dd/yyyy h:mm tt}";
    htmlReporter.Config.ReportName = context.FullyQualifiedTestClassName;
    htmlReporter.Config.Theme = Theme.Dark;

    // Add any additional contextual information
    Extent = new ExtentReports();
    Extent.AddSystemInfo("Browser", Enum.GetName(typeof(BrowserType), TestSettings.Options.BrowserType));
    Extent.AddSystemInfo("Test User", 
        System.Configuration.ConfigurationManager.AppSettings["OnlineUsername"]);
    Extent.AddSystemInfo("D365 CE Instance",
        System.Configuration.ConfigurationManager.AppSettings["OnlineCrmUrl"]);    
    Extent.AttachReporter(htmlReporter);
    context.AddResultFile(fileName);

    // Create a container for the tests in the class
    TestParent = Extent.CreateTest(context.FullyQualifiedTestClassName);
}

In TestInitialize (runs prior to each test) the main thing happening is adding the individual test to the group created for the class. It's initialized using the unit test method name pulled from the test context and the unit test Description attribute (if one is present). The description isn't available in the test context, but given the information that is available it can be retrieved using reflection.


[TestInitialize]
public void TestInitialize()
{
    // Get unit test description attribute
    var type = Type.GetType($"{TestContext.FullyQualifiedTestClassName}, {AssemblyName}");
    var methodInfo = type?.GetMethod(TestContext.TestName);
    var customAttributes = methodInfo?.GetCustomAttributes(false);
    DescriptionAttribute desc = null;
    if (customAttributes != null)
    {
        foreach (var n in customAttributes)
        {
            desc = n as DescriptionAttribute;
            if (desc != null)
                break;
        }
    }

    // Create individual test under the parent container / class
    Test = TestParent.CreateNode(TestContext.TestName, desc?.Description);
}

The only purpose of the code in TestCleanup (runs after each test) is to set the Extent test result so it reflects correctly in the report. The goal was to differentiate between a test that passed, failed because of an exception, or failed because the criteria for passing weren't met. There are a number of other statuses, but I'm not sure how often you'd run into any of them.


[TestCleanup]
public void TestCleanup()
{
    // Sets individual Extent test result so it reflects correctly in the report
    if (Test.Status == Status.Error)
        return;

    switch (TestContext.CurrentTestOutcome)
    {
        case UnitTestOutcome.Error:
            Test.Fail("Test Failed - System Error");
            break;
        case UnitTestOutcome.Passed:
            Test.Pass("Test Passed");
            break;
        case UnitTestOutcome.Failed:
            Test.Fail("Test Failed");
            break;
        case UnitTestOutcome.Inconclusive:
            Test.Fail("Test Failed - Inconclusive");
            break;
        case UnitTestOutcome.Timeout:
            Test.Fail("Test Failed - Timeout");
            break;
        case UnitTestOutcome.NotRunnable:
        case UnitTestOutcome.Aborted:
            Test.Skip("Test Failed - Aborted / Not Runnable");
            break;
        case UnitTestOutcome.InProgress:
        case UnitTestOutcome.Unknown:
        default:
            Test.Fail("Test Failed - Unknown");
            break;
    }
}

AssemblyCleanup (runs after all of the tests) ensures the data collected gets written to the output file.


[AssemblyCleanup]
public static void AssemblyCleanup()
{
    Extent.Flush();
} 

I'm using two helper methods to support the reporting. AddScreenShot takes a screen shot and tags it with some text, in most cases a description of the current state of the page. LogExceptionAndFail grabs the error message and stack trace, formats them, logs an error in the report, and rethrows so the test still fails due to the exception.


public void AddScreenShot(WebClient client, string title)
{
    var filename = Guid.NewGuid();
    var filePath = Path.Combine(TestContext.TestResultsDirectory, $"{filename}.png");
    // Wait for the page to be idle (UCI only)
    client.Browser.Driver.WaitForTransaction(5);
    client.Browser.TakeWindowScreenShot(filePath, ScreenshotImageFormat.Png);
    Test.Info(title, MediaEntityBuilder.CreateScreenCaptureFromPath(filePath).Build());
}

public void LogExceptionAndFail(Exception e)
{
    // Formats the exception details to look nice
    var message = e.Message + Environment.NewLine + e.StackTrace.Trim();
    var markup = MarkupHelper.CreateCodeBlock(message);
    Test.Error(markup);
    // Rethrow so the test still fails; note that "throw e" resets the
    // stack trace (ExceptionDispatchInfo.Capture(e).Throw() would preserve it)
    throw e;
}

Capturing report data

A unit test class will need to inherit from the pre-defined base class in order to write to the report. If a Description attribute is placed on the test, it will show in the report and give the reader a better idea of what the test is trying to accomplish.

Various levels of text-based messages (Info, Warning, Debug, etc.) can be written to the report output depending on the type of information you'd like to surface.

To keep track of the test as it progresses I'm using one of the helper methods from the base class to take a screen shot and assign some text to it describing the operation that was just attempted. This should also help to capture the state of the page in case the next step fails. This approach might not be conclusive in all cases, since many of the EasyRepro methods that interact with the page perform multiple operations to get to the end result; a failure in the middle wouldn't be reflected in the prior screen shot. This is where capturing a video of the entire test comes in handy.

I've also wrapped each test in a try/catch block so that any exception runs through the other helper method, which captures the details and fails the test as an error rather than a standard failure in the results.


[TestClass]
public class CreateAccount : TestBase
{
    [TestMethod]
    [Description("Test should fail due to an error")]
    public void CreateAccount_Error()
    {
        // Example log entries
        Test.Info("Log an information message");
        Test.Warning("Log a warning message");

        try
        {
            var client = new WebClient(TestSettings.Options);
            using (var xrmApp = new XrmApp(client))
            {
                xrmApp.OnlineLogin.Login(_xrmUri, _username, _password);
                AddScreenShot(client, "After login");

                xrmApp.Navigation.OpenApp(UCIAppName.Sales);
                AddScreenShot(client, $"After OpenApp: {UCIAppName.Sales}");

                xrmApp.Navigation.OpenSubArea("Sales", "Accounts");
                AddScreenShot(client, "After OpenSubArea: Sales/Accounts");

                xrmApp.CommandBar.ClickCommand("New");
                AddScreenShot(client, "After ClickCommand: New");

                // Field name is incorrect which will cause an exception
                xrmApp.Entity.SetValue("name344543", TestSettings.GetRandomString(5, 15));
                AddScreenShot(client, "After SetValue: name");

                xrmApp.Entity.Save();
                AddScreenShot(client, "After Save");

                Assert.IsTrue(true);
            }
        }
        catch (Exception e)
        {
            LogExceptionAndFail(e);
        }
    }
}

Report output

The report presents a nice looking dashboard which sums up detail about all the tests performed. In this example, Tests reflects the number of unit test classes included in the run and Steps shows the number of individual tests. If any of the steps fail, the overall test is reported as failed.


You can drill into each individual test to see what was logged, the start/end times, and the duration.
Clicking on an included screen shot displays a full-size version. I've also chosen to format the exception details in a code block so they stand out from the regular text.


You can download the full sample from GitHub.