Monday, January 7, 2019

Log Azure DevOps Build Times to Application Insights

I’ve been working a lot with Azure DevOps recently to automate Dynamics 365 CE deployments. One of the first things I noticed was that the hosted build agents produce slower builds than performing the same operations on your local machine. It’s not surprising given that for each build the agent is spun up and torn down, so nothing can be reused from previous runs, whereas on your local machine NuGet packages are already downloaded, build tools are already installed, etc., so naturally it’s way faster. Building a simple plug-in and running a few unit tests was taking close to 2 minutes, which is tolerable considering rarely would anyone be staring at it waiting for it to finish. The concern was that once you started building real projects with lots of unit tests, builds could end up taking quite a long time, potentially backing up other processes. I figured it would be a good idea to keep an eye on how long they’re taking by showing the average build time on a dashboard, in hopes that it would draw attention if the times started creeping up without anyone noticing. I checked the Visual Studio Marketplace but didn’t find any existing tools to handle this, so I thought I’d put one together using Application Insights to store the data. There was already a free dashboard widget available which could surface that data, so it seemed like the easiest route to take.

I wanted to log the time at the end of the build (knowing the results will be off by a few seconds) as opposed to relying on some other process like a Web Hook / Azure Function combination. I decided to use a PowerShell task as the last step of the build to do the logging.

Have a look.

Here’s a quick rundown on what the code is doing.

The initial portion calculates the number of minutes between when the build started and when the script runs. I’m using one of the predefined variables, System.PipelineStartTime, to establish the start. As far as I can tell that isn’t documented anyplace, but if you do a dump of all the system variables it shows up. The number of minutes you end up with is relatively close to what is reported for the completed build in the UI, give or take 6-10 seconds. I figured that was an acceptable margin of error since you’d probably never look to optimize a build if you were only going to gain a few seconds.
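Expressed in JavaScript for illustration (the actual step is a PowerShell task), the duration math is roughly this; the start-time string format is an assumption, so adjust the parsing to whatever your agent emits:

```javascript
// Sketch of the duration calculation: minutes elapsed between the pipeline
// start time and "now", rounded so the metric stays readable on a dashboard.
function buildMinutes(pipelineStartTime, now) {
  const start = new Date(pipelineStartTime);
  const elapsedMs = now.getTime() - start.getTime();
  // Round to two decimal places.
  return Math.round((elapsedMs / 60000) * 100) / 100;
}
```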

You’ll need to provide the Instrumentation Key for your Application Insights instance, so there’s a basic check to make sure that was provided. Of course, store it in a Variable Group or as a Build Variable instead of hard coding it, to make changing it easy. There’s also the side benefit that nobody can troll my GitHub account for publishing example code with hard coded credentials to simplify things for people who are still learning. :)

I chose to use the Metric type in Application Insights to record the data since it’s numeric in nature. You can see the JSON body of the request that is going to be made; most of it is boilerplate. There are two places where you can make changes to the data being sent. Under ‘metrics’ the ‘name’ can be changed to anything you’d like. I used “Build Time” to describe the type of metric being recorded plus the build definition name so I knew which build it came from. You could make it anything, but be aware that this is used for aggregation so you won’t want any dynamic values here. The other place is under ‘properties’. Those can be any key/value pairs you want, so if you want to capture detail about the build agent, values from the predefined build variables, or something else entirely, you can put it here and you’ll be able to query on it in Application Insights.
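For illustration, here’s a sketch (in JavaScript rather than the PowerShell of the actual task) of what that Metric envelope might look like; the function name and placeholder values are mine, and the envelope shape should be verified against the current Application Insights track API schema:

```javascript
// Hypothetical builder for an Application Insights "track" request body
// carrying a single Metric item. Posted to the ingestion endpoint, e.g.
// https://dc.services.visualstudio.com/v2/track (verify for your instance).
function buildMetricPayload(instrumentationKey, metricName, minutes, properties) {
  return {
    name: "Microsoft.ApplicationInsights.Metric", // envelope name per the public schema
    time: new Date().toISOString(),
    iKey: instrumentationKey,
    data: {
      baseType: "MetricData",
      baseData: {
        // 'name' is used for aggregation - keep it static per build definition.
        metrics: [{ name: metricName, value: minutes, count: 1 }],
        // 'properties' can be any key/value detail you want to query on later.
        properties: properties
      }
    }
  };
}
```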

Finally, it checks whether the request was successful. It certainly doesn’t hurt to be verbose with the logging. Trying to debug failed builds when you have to wait 2+ minutes between runs is painful, so do yourself a favor and do whatever you can to speed that process up.

Another quick note: I’m using 2 different formats when referencing variables, and that is on purpose. The majority are in the format $(Build.DefinitionName), but where the pipeline start time is used it’s in the format $Env:System_PipelineStartTime. The reason is that I have this script in a Task Group (which basically combines multiple steps into 1 that can be reused), and by default it will scan through all the variables you’re using and turn them into parameters. In this case I want the value to be retrieved at run time instead. I could have jumped through a few hoops and exported, hand edited, and reimported the task group, but I figured out that just using $Env: instead causes the variable to be skipped when parameters are gathered.

In a future post I’ll walk through getting the information on an Azure DevOps dashboard.

Monday, November 12, 2018

Use Xrm.WebApi in a stand alone Web Resource

I ran into this issue working on the last update to CRM REST Builder when adding the Xrm.WebApi functionality. Note this won’t affect you if you’ve got a web resource embedded in a form.

Attempting to execute a simple retrieve, for example against "account"…
And I was greeted with this error:

Unexpected token u in JSON at position 0

What does that even mean? A bit of searching and I ran across this Stack Overflow question. So it seems Xrm.WebApi is looking for a variable which maps entity names to their corresponding entity set names. Easy fix since I had all the data, so I dynamically built a variable with all the mappings which looked a bit like this (you only need to add the entities you're working with):
window["ENTITY_SET_NAMES"] = '{"account":"accounts", "contact":"contacts"}';
Got further along, started implementing actions and functions, and ran into the same error once again, WTF. After some tedious debugging through the Microsoft code I found there is another dependency on a similar variable. This one maps entities to their primary id fields, so I added something like this with all those mappings:
window["ENTITY_PRIMARY_KEYS"] = '{"account":"accountid", "contact":"contactid"}';
After that my problems were solved.
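To summarize the workaround, here’s a sketch of both mappings together; the window guard is only so the snippet runs anywhere, and the entity lists are just examples (add the entities you actually use):

```javascript
// Sketch of the two mapping variables Xrm.WebApi appears to depend on when
// running outside a form context.
const g = typeof window !== "undefined" ? window : globalThis;

const setNames = { account: "accounts", contact: "contacts" };
const primaryKeys = { account: "accountid", contact: "contactid" };

// The Microsoft code parses these values as JSON, so store them as JSON strings.
g["ENTITY_SET_NAMES"] = JSON.stringify(setNames);
g["ENTITY_PRIMARY_KEYS"] = JSON.stringify(primaryKeys);
```

With those set, a call like Xrm.WebApi.retrieveRecord("account", id) can resolve "accounts" and "accountid" on its own.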

Monday, October 15, 2018

CRM REST Builder v2.6.0.0

So here it is… finally.
  • Added 9.x support
  • Added Xrm.WebApi
  • Better support for Action/Function parameters
    • Collection handling
    • Entity-related parameters allow choosing the entity type if not specified, and the id field is determined automatically
  • Fixed a bunch of bugs
    • Displaying XML in results
    • Issue with fields containing “_value”
    • Decimal parameter conversion
    • Function parameter passing via query string
  • Other usability enhancements
    • Default unpopulated ids to empty guid when generating code
Also no longer supporting the 2011/2013 versions. They are still available on GitHub, just no new updates.

This will probably be the last major update for this so I can focus on creating a Web API only version that runs in the XrmToolBox.

Download here for CRM 2015/2016 & D365 CE (7.0+)

Log any issues here:

Thursday, July 5, 2018

D365 Application Insights – Plug-ins & Custom Workflows

Not too many people were probably using Application Insights directly in plug-ins and custom workflow steps since it was introduced, because every example I ever saw was written using a set of assemblies to make interfacing with the service easy (much like we do with D365). Of course, a while back I tried to ILMerge those assemblies into a plug-in, but needless to say that didn’t work. I know of a few instances where individuals had the logging proxied through something like an Azure Function, which I’m sure worked perfectly well but also took on the extra dependency (and cost) of whatever was doing the relaying. Luckily we can avoid all that, because we’ve now established there is a REST API and we can call it from just about anywhere.

I put a wrapper around it using C# so that you can easily include it in plug-ins and workflow code. It’s ready to use right away, but of course it can be modified as you see fit.
I’ve built in these telemetry types:
  • Trace
  • Event
  • Metric
  • Dependency
  • Exception
For examples of setting up and using, see the GitHub site’s wiki page.
By default with each request it’s going to log the following D365/CRM specific data as custom dimensions:
  • ExecutionContext
    • InitiatingUserId
    • ImpersonatingUserId
    • CorrelationId
    • Message
    • Mode
    • Depth
    • InputParameters
    • OutputParameters
    • Stage (plug-in)
    • WorkflowCategory (workflow)
    • PrimaryEntityName
    • PrimaryEntityId
    • OrganizationName
  • Organization version
  • Source (Plug-in or Workflow)
This is in addition to the normal items that are logged, like the date/time of the event, originating location (datacenter city & state), etc. Again, it’s using the ids rather than friendly names to reduce overhead a bit. This could easily be updated to pull the friendly name, or else you can cross reference it back to D365/CRM later.

Logging traces, metrics, events, caught exceptions, and dependencies is all pretty straightforward. Examples for everything are on the wiki page.

The constructor of the logger needs a configuration passed to it as an object or a JSON string, so you’ll need a way to get that information into your code, such as the plug-in configurations, workflow inputs, etc. You can control the types of telemetry logged, the percentages of events to log, and whether context tracking is enabled.
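As an illustration only, a configuration of that sort might look like the following; the property names here are mine, not the library’s actual API, so check the wiki for the real shape:

```javascript
// Hypothetical configuration shape illustrating the kinds of switches
// described above: per-type enables, sampling percentages, context tracking.
const loggerConfig = {
  instrumentationKey: "00000000-0000-0000-0000-000000000000", // placeholder
  enableTrace: true,
  enableMetric: true,
  enableDependency: true,
  enableException: true,
  percentLoggedTrace: 25,        // sample only a quarter of "normal" traces
  disableContextTracking: false
};

// Serialized so it can be passed in via the plug-in configuration string
// or a workflow input.
const configJson = JSON.stringify(loggerConfig);
```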

Aside from the context values (if you so choose), nothing is logged automatically. You’ll need to incorporate the Application Insights logger into your own exception handling and performance tracking.

The other important point to bring up is the actual duration it takes to log something. On average from D365 Online it was taking ~700ms to get a response back that the item was accepted. That doesn’t sound like much, but if your plug-in is already busy this isn’t going to help. The first way to mitigate it is to only log a percentage of “normal”, non-exception events; this was one of the driving factors to build that functionality in right away. It operates at a broad level across the logging instance and telemetry type. You can always add your own filters to further reduce the number of normal results. For example, if you are logging the duration it takes to call an external web service and it normally takes 50ms to complete, wrap the logging statement in something that checks whether it exceeds 65ms before sending data. The other way is to create a custom action which runs asynchronously so you have more of a fire-and-forget approach. This runs in a fraction of the time it takes when you are waiting for a response. If you don’t feel like creating your own, I’ve done so already here, so you can just install the solution.
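The sampling and threshold ideas can be sketched like this (JavaScript for brevity, though the wrapper itself is C#; the function names are mine):

```javascript
// Percentage sampling: log roughly `percent`% of "normal" events.
function shouldSample(percent) {
  return Math.random() * 100 < percent;
}

// Caller-side threshold: only pay the ~700ms logging round trip when a
// dependency call was abnormally slow. Returns true if it logged.
function logDependencyIfSlow(durationMs, thresholdMs, logFn) {
  if (durationMs > thresholdMs) {
    logFn(durationMs);
    return true;
  }
  return false;
}
```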

All the code is on GitHub:

I’ve packaged the source files for distribution so that you can either use it as is or make your own modifications.

NuGet C# source package:

Thursday, June 28, 2018

D365 Application Insights – JavaScript

If you’ve implemented the sample integration for Application Insights you know that a small JavaScript file was provided to be included on any forms where you wanted logging. This little script downloads a big script, and then all sorts of useful data is supposed to be magically logged. The previously linked page shows the user name, a page/entity name of some sort, and a page view duration measured in seconds. I’m not quite sure where that data is coming from, because I ended up with data that looked more like this.

No way to identify which form the data came from, no way to identify which user (user_id doesn’t tie back to anything), and the page view durations are all in milliseconds.

The browser timing (page load times) metrics weren’t any better.

495 millisecond page loads? That never happened… not ever.

It’s not all bad. It does capture exceptions on the form (not that you can trace them back to their source), and it provides some interesting data on browser use, client location, etc.

It also doesn’t tell you that there is an existing SDK you can use to log your own telemetry. This is where my work comes in. I’ve packaged everything together, but ultimately you still end up using the same small initialization script (that defaults some options to off) which pulls down the larger script. I’ve then added a layer on top that adds a few features and fixes some of the previously mentioned problems.

I’ve built onto these telemetry types:
  • Trace
  • Event
  • Metric
    • Page load duration (custom metric implementation)
    • Method execution duration (custom metric implementation)
  • Dependency
  • Exception
  • PageView
For examples of setting up and using, see the GitHub site’s wiki page.
By default with each request it’s going to log the following D365/CRM specific data as custom dimensions:
  • User Id
  • Entity Name
  • Entity Id
  • Form type
  • Form name
  • Organization name
  • Organization version
  • Source (JavaScript)
  • + any other custom dimensions of your choosing
This is in addition to the normal items that are logged like the date/time of the event, browser info, etc. It’s using the ids rather than friendly names to reduce overhead a bit. This could easily be updated to pull the friendly name or else you can cross reference it back to D365/CRM later. Either way, you’ll still have more contextual information about the data you’ve logged.

When setting up on a form you’ll pass a JSON configuration string as a parameter to the setup function. It controls what types of items will be logged (for example, turn off tracing in production), the percentages of event types to log (so maybe you only log 25% of total page view times), & debug mode. Exceptions at the form level should still be logged (regardless of whether you catch them), and page view and page load times will be logged without any interaction so long as they aren’t disabled. All the other items rely on the developer to call the specific logging function when needed.

My implementation of a page view should be a bit more accurate as it logs the total time spent on the page just as the user is closing the form. And how does this happen? As I learned, all modern browsers have the Beacon API, which includes the sendBeacon method whose exact purpose is to do things like send telemetry just as the user is leaving the page. Of course Internet Explorer 11 doesn’t have this and it’s still a supported browser, in which case I had to resort to a bit of a hack by including a very short synchronous request in the page’s beforeunload event. Not really perceptible, but still ugly.
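A minimal sketch of that beacon-or-fallback logic; the URL and payload here are placeholders, and the environment guards are only so the snippet stands alone:

```javascript
// Send telemetry as the page closes: prefer navigator.sendBeacon, fall back
// to a short synchronous XHR for IE 11, otherwise give up.
function sendOnUnload(url, payloadJson) {
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    // Modern browsers: the browser queues the request even as the page unloads.
    return navigator.sendBeacon(url, payloadJson) ? "beacon" : "failed";
  }
  if (typeof XMLHttpRequest !== "undefined") {
    // IE 11 hack: synchronous request in beforeunload - ugly but functional.
    const xhr = new XMLHttpRequest();
    xhr.open("POST", url, false); // false = synchronous
    xhr.send(payloadJson);
    return "sync-xhr";
  }
  return "unsupported";
}

// Wired up roughly like:
// window.addEventListener("beforeunload", () => sendOnUnload(trackUrl, payload));
```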

Now my page view time is showing around 41 seconds, that sounds a little more reasonable. Also shown here is the current user id as user_AuthenticatedId and also the default D365/CRM specific custom dimensions.

Page load time isn’t really anything special (other than being more accurate) as it’s just using the browser’s performance APIs to read what the browser records, as opposed to trying to start a timer yourself, which would never really be close to accurate anyway. I’m logging them as a custom metric.
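A sketch of the idea, taking the timing entry as a parameter so the math is visible (and testable); in the browser the entry would come from something like performance.getEntriesByType("navigation")[0]:

```javascript
// Compute page load duration from a PerformanceNavigationTiming-style entry
// rather than a hand-rolled timer.
function pageLoadMs(navigationEntry) {
  // loadEventEnd marks the end of the load event; startTime is the
  // navigation start (0 for the navigation entry itself).
  return navigationEntry.loadEventEnd - navigationEntry.startTime;
}
```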


7.4 seconds sounds a little more accurate.

Logging traces, metrics, events, caught exceptions, dependencies, and method execution times is all pretty straightforward. Logging XMLHttpRequest dependency times can be done with 1 line of code. Pass the request and whatever name you want to the tracking function, and you won’t need to worry about wrapping a timer around anything or handling the end times in both successful and error events. Examples for everything are on the wiki page.
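The technique can be sketched like this; trackXhr and logFn are hypothetical names rather than the library’s actual API:

```javascript
// Attach dependency timing to an XHR in one place. 'loadend' fires after
// load, error, or abort, so one listener covers success and failure paths.
function trackXhr(xhr, name, logFn) {
  const started = Date.now();
  xhr.addEventListener("loadend", function () {
    logFn(name, Date.now() - started);
  });
}
```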

All the code is on GitHub:

I’ve packaged the source files for distribution so that you can either use it as is or make your own modifications. If you’re going to be doing the latter, read the notes here and/or look at the gulp file in the source code to see how the files are being put together.

NuGet JavaScript/TypeScript source package: