Exposing Bugs with Tests in Microsoft Dynamics NAV

This post is not an introduction to automated testing in Microsoft Dynamics NAV. If you don’t know what I’m talking about you should check out Luc van Vugt’s blog here first.

If you aren’t writing any automated tests for your code at the moment you should consider starting. You know that really, but it can be difficult to get going. You need time to learn and become familiar with the testing framework in the first place. Then it takes time to write the tests, and then time to run them. Time you don’t have; time you’d rather spend doing something else…etc.

I Don’t Have Time

On that point, it really doesn’t take long to write tests after you’ve done your first few. There are tons of ‘library’ functions that help you create test data and documents with minimal fuss. You want to create a customer, create a sales order with an item line and then post it? That’s three lines of code.
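For example, using codeunit ‘Library - Sales’ from the Microsoft test toolkit, those three lines might look something like this (a sketch only – treat the exact function names and signatures as assumptions, as they vary a little between NAV versions):

```
//a sketch using codeunit "Library - Sales" from the Microsoft test toolkit;
//the signatures shown are assumptions and differ between versions
LibrarySales.CreateCustomer(Customer);
LibrarySales.CreateSalesDocumentWithItem(
  SalesHeader,SalesLine,SalesHeader."Document Type"::Order,Customer."No.",'',1,'',0D);
LibrarySales.PostSalesDocument(SalesHeader,TRUE,TRUE); //ship and invoice
```

The library functions take care of mandatory fields, number series and posting setup so you can concentrate on the scenario you actually want to test.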

Start with Bugs

A good place to start might be with bugs. Half the challenge with writing tests is defining exactly what you’re testing i.e. “given some initial scenario when X happens the expected result is Y.” Breaking a big codebase into snippets like that can seem daunting. With a bug you’ve got all the information you need. “Under these circumstances, when X happens we are getting Z when we were expecting Y.”

1. Write the Test First

Rather than diving into the code to find and fix the cause of the problem, take a minute to write a test for it. Recreate the steps that are leading to the bug in your test. Run the test and confirm that it fails.

Why write the test first? If you are going to use the test to confirm that the bug has been fixed (and doesn’t reappear sometime in the future) you want to be sure that the test itself works. If you write the test after the fix and it passes is that because you’ve fixed the problem or because the test is incorrect and will always pass?

Capture the bug while you’ve got the steps and the code to reproduce it. That way, you know that you’re looking at the right thing and that you’re actually fixing the issue.

2. Fix the Bug

Now you can go ahead and put the bug fix in.

3. (Re)Run the Test

Once you think you’ve fixed the problem you can run the test and confirm that it now passes. A test that passes having previously failed ought to give you a lot more confidence than one that has always passed.

Even better, you’ve added a test to your library that you can re-run when you make future changes to your code. You’re much less likely to reintroduce a bug that you’ve seen before if you’ve got a test that is specifically looking out for it.

Using PowerShell Jobs for Parallel Execution

PowerShell is great and allows us to automate a bunch of tasks that would otherwise be repetitive and boring. Recently we’ve been investigating how to build a Microsoft Dynamics NAV environment (environment = SQL database, service tier and client) on demand on the local machine or separate SQL and NAV servers on the network.

Restoring the appropriate SQL backup, creating and configuring a service tier of the correct build and opening the correct client is all rather tedious and is ripe for some automation.

The details of how to build the environment aren’t really the subject of this post (maybe I’ll write about it one day); the subject is how to make that process fast.

The sub-tasks of

  1. Restoring the SQL backup
  2. Obtaining the required object files from source control

can be relatively* time consuming, depending on how many objects need to be downloaded. However, there is no requirement for them to be done in sequence. The whole process will be faster if we can do them at the same time.

We’re achieving this with a combination of sessions and jobs.

$Jobs = @()

#restore the SQL backup in a remote session on the SQL server, as a job
$RestoreSQLScriptBlock = { [some code to restore the SQL backup] }
$SQLSession = New-PSSession $SQLServer
$Jobs += Invoke-Command -Session $SQLSession -ScriptBlock $RestoreSQLScriptBlock -AsJob

#create the service tier in a remote session on the NAV server, as a job
$CreateServerInstanceScriptBlock = { [some code to create NAV service tier] }
$NAVSession = New-PSSession $NAVServer
$Jobs += Invoke-Command -Session $NAVSession -ScriptBlock $CreateServerInstanceScriptBlock -AsJob

Write-Host 'Waiting for tasks to complete...'
Receive-Job $Jobs -Wait

If the above isn’t clear enough, this is how it works:

  1. Create a $Jobs collection
  2. Create a script block variable to store the commands to execute on the target server(s)
  3. Create a remote PowerShell session on the target server(s)
  4. Use Invoke-Command to execute the script block in the remote session as a job and add the job details to the $Jobs collection
  5. Wait for all the jobs in $Jobs to complete before continuing with the rest of the script

Progress bars and output from the remote sessions are received by the parent session as they come in. The remote server(s) need to be configured for PowerShell remoting – see Enable-PSRemoting.


*tens of seconds. Longer than I’m prepared to wait – I’m impatient.

Add Current Windows User to a NAV Database

With our proliferation of development databases for our Dynamics Additions products, creating a Microsoft Dynamics NAV user for the current Windows user is something that I do relatively frequently.

Combining as it does my twin passions for NAV and PowerShell I thought it was worth a quick post.

Import-Module 'C:\Program Files\Microsoft Dynamics NAV\100\Service\Microsoft.Dynamics.Nav.Management.psm1'

New-NavServerUser [ServerInstance] -WindowsAccount (whoami)
New-NavServerUserPermissionSet [ServerInstance] -WindowsAccount (whoami) -PermissionSetId SUPER

This will create a new Windows authenticated user in the database for the current Windows user and assign the SUPER permission set to them.

Replace [ServerInstance] with the name of a NAV server connected to the relevant database. The path in the Import-Module line is the default location for the management module for NAV 2017. Replace with the correct path for your installation if it is different.

Function Overloading in Microsoft Dynamics NAV

Object oriented programming has the concept of function overloading, that is, an object can have multiple functions with the same name as long as their signatures (the combination of parameters and return value) are different. This allows an object to respond to a function call differently depending on the context.

But wait, what are you talking about? Microsoft Dynamics NAV doesn’t work like that. C/AL is not an object oriented language.

It’s true, but that doesn’t mean that some of the good reasons to use function overloading in other languages don’t apply to NAV development. Unscrambling the double negative in the previous sentence – the principle behind function overloading can still be helpful in NAV development.

Time for an example:

I’ve got an AddData function. We don’t need to worry too much about what it does for now other than it creates records in a table.


Later in the development I realise that I need a new field in the table that will be populated in some circumstances, but not most. The obvious solution is to add a new parameter to the existing function. But that will break my existing code, requiring a change to the function and any code that calls it.

Making changes to code that already works has an inherent risk that you’ll introduce bugs and so is something that should be minimised.

As already noted, I can’t have another function called AddData with a different signature, but I can do the next best thing.


Create a new function, AddDataWithControlType with the same parameters as AddData plus the new ControlType parameter I need. This has several benefits:

  • I avoid changing the AddData function and existing code that calls it – therefore I can’t introduce any bugs
  • My new function can reuse the AddData function that I know already works
  • I only call this new function when it is relevant. I don’t have to pass the ControlType parameter in scenarios where it won’t be used – it keeps the code cleaner and easier for someone else to read.
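In C/AL that might look something like this (a sketch – the parameters of AddData aren’t important for the example, so the ones shown are invented):

```
PROCEDURE AddDataWithControlType(Description : Text;ControlType : Integer);
BEGIN
  //reuse the existing, tested function to create the records...
  AddData(Description);
  //...then populate the new Control Type field where it is needed
END;
```

Existing callers of AddData carry on untouched; only the new scenarios call AddDataWithControlType.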

This is a fairly trivial example, but the concepts of minimising changes to existing code and reusing existing code are important and will help you to write more extensible and less buggy code.

An alternative approach to function overloading to consider is the Argument Table design pattern.

Working with XML in PowerShell

PowerShell variables are very accommodating creatures. You can assign any value to them and the variable will automatically store it in an appropriate data type. This is great 99% of the time – you can throw the result of any sort of cmdlet at them and leave PowerShell to sort itself out. Having assigned the value to the variable, PowerShell ISE’s intellisense will let you know the properties and methods you can access in the variable’s new state.

For the record, this is called ‘loose’ or ‘weak’ typing. You’ll find a similar approach in other languages, JavaScript or PHP, for example.

One time you might want to consider specifying the type of a variable in PowerShell is when working with XML. Take the following XML:
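(The sample was originally shown as a screenshot. A minimal document consistent with the /root/header/line path used below would look something like this – the contents of each line element are illustrative.)

```xml
<root>
  <header>
    <line>
      <!-- child nodes of the first line -->
    </line>
    <line>
      <!-- child nodes of the second line -->
    </line>
  </header>
</root>
```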


If I load the content of a file containing that xml into a variable with Get-Content, PowerShell will type the variable as a string and give me stringy properties and methods.

String Example

If I declare $OrderXml to be of type [xml], however, I get a different set of properties and methods. Far more useful for selecting nodes, iterating through NodeLists and other essential XML stuff.

Xml Example

The $Line variable even has properties for each of its child nodes, allowing you to access their values directly:

[xml]$OrderXml = Get-Content "C:\Users\James.Pearson\Desktop\test.xml"
foreach ($Line in $OrderXml.SelectNodes('/root/header/line')) {
    #each child node of $Line is available here as a property of the variable
}

Context of Events in Microsoft Dynamics NAV

Microsoft Dynamics NAV 2016 introduced events, and a big change it was too. If we want to execute some code when something happens in the standard application we can do so without changing any standard code – as long as the ‘something’ has an event associated with it.

One of the most useful examples of this is table events. When a record is inserted, modified, renamed or deleted we want to know about it and have the option to execute some code when it happens.

Two important points to consider:

  1. Your event subscription will be triggered when temporary records are inserted/modified/renamed/deleted. You may not be interested in these events so you should EXIT when this is the case. See Record.ISTEMPORARY.
  2. Your event subscription will be triggered with and without the trigger code having been executed i.e. INSERT(TRUE) and INSERT(FALSE) will both trigger your event subscription. The RunTrigger parameter of your subscription will tell you which it was.
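Both checks usually sit right at the top of the subscriber. A minimal sketch for the Sales Line table (the function is wired up via its EventSubscriber properties in C/SIDE, and its name is mine):

```
//subscribes to the OnAfterInsertEvent of table 37 "Sales Line"
LOCAL PROCEDURE OnAfterInsertSalesLine(VAR Rec : Record "Sales Line";RunTrigger : Boolean);
BEGIN
  IF Rec.ISTEMPORARY THEN //not interested in temporary records
    EXIT;
  IF NOT RunTrigger THEN //not interested in INSERT(FALSE)
    EXIT;
  //react to the inserted line here
END;
```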

All clear so far? Good, let’s continue.

I’ve got a scenario where I want to know when Sales Line records have been inserted or deleted by the user. When the user inserts/deletes a record on a page INSERT(TRUE)/DELETE(TRUE) will always be called. This means my event subscriptions can test whether RunTrigger is true, and if not, exit without doing anything. This is necessary because there are some places where Sales Lines will be inserted/deleted that I don’t want to know about e.g. lines being DELETE(FALSE)ed when a sales order is posted.

But wait, there are some cases of INSERT(FALSE) that I do want to know about. For instance the Copy Document Mgt. codeunit. That inserts sales lines with INSERT(FALSE) so will be ignored by my code but it’s a scenario I want to react to.

Q: How can I tell why my event subscription is being triggered?

A: Put a breakpoint on your code, debug a session that calls your code and look at the call stack window.

Q: Yes, but how do I get at that information from my code?

A: It’s ugly, but there is a way.

The GETLASTERRORCALLSTACK function returns some text that has the details of all the object types, ids, functions and code line numbers that were being executed at the time of the last error.

If I throw an error in my code I can then use GETLASTERRORCALLSTACK to analyse the code that was being executed at that point in time and therefore know how my event subscription ended up being called.

Obviously, I can’t just call ERROR directly and then read the call stack, as the code will be stopped by the error. Instead I can use a TryFunction to throw the error in a way that won’t prevent the rest of the code from executing.

IF NOT ThrowError THEN BEGIN
  //analyse GETLASTERRORCALLSTACK and decide what to do…
END;

LOCAL [TryFunction] ThrowError()
BEGIN
  ERROR(''); //a blank error, purely to populate the call stack
END;

I told you it was ugly 😉 but it might be a solution worth considering. A few other points to consider:

  • Handle try functions with care. Vjeko discusses the subject at some length here: http://vjeko.com/whats-new-in-nav-2016-splitting-atoms-with-tryfunction
  • None of this would be necessary if you were prepared to make modifications to some standard objects. If that is an option in your scenario then weigh the benefits of having a small/no footprint in standard NAV vs. the complexity/performance hit of this option.
  • Be mindful of how often your code might get executed.
    • Only resort to reading the call stack when absolutely necessary and then spend time making your code as efficient as possible.
    • It would be a shame to have beautiful context-aware event subscriptions that couldn’t be used because they made the system too slow.

Preview Microsoft Dynamics NAV Delta Files in Windows Explorer

A quick Windows Explorer tip for you.

If you do much work with NAV objects, exporting, comparing, merging, upgrading etc. then no doubt you work with PowerShell and .delta files a lot. You do use PowerShell, right? If not, you really should.

I’m a fan of the preview pane in Windows Explorer to sneak a peek at the contents of files without having to open them. It’s fine for .txt files, but not .delta files. Windows Explorer doesn’t know what .delta files are so doesn’t know how it should preview them.

That’s annoying. Fortunately you can tell Windows Explorer that .delta files are just text files. Once you’ve done that it will happily serve up a preview for you.

Add a string value named PerceivedType under the HKEY_CLASSES_ROOT\.DELTA key with a value of “text” (no quotes).
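You can do that in regedit, or from an elevated PowerShell prompt with something like this (a sketch; it assumes the HKEY_CLASSES_ROOT\.DELTA key already exists on your machine):

```powershell
#tell Windows Explorer that .delta files have a perceived type of text
New-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\.DELTA' -Name 'PerceivedType' -Value 'text' -PropertyType String -Force
```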



Working in “The Cloud”

Ah yes, “The Cloud”. It’s such a ubiquitous concept, mentioned in just about every IT context you can think of. However, it’s often pretty vague and can leave you wondering what on earth people are actually talking about and why you should listen.


Here’s a small but hopefully mildly interesting example of why cloud computing might be useful for you.

You may have noticed that there was a fairly substantial update to Windows 10 this week. I made the mistake of trying to install it during the day, meaning I couldn’t work on my laptop for around an hour and a half.

Fortunately, I spend most of my day working in Google Chrome or a remote desktop session to one of our servers in Microsoft Azure. The below covers most of what I need to do day to day:

  1. Email – Outlook on Office 365. I prefer the online version to the traditional desktop client these days.
  2. Writing requirements for my developers to work on – Visual Studio Online, accessed via the online portal.
  3. NAV development and testing – all done on a server in Azure which I connect to via an RDP session.
  4. Working on shared documents – SharePoint on Office 365, using the online versions of Office apps to edit them.
  5. Other document storage – I use a combination of OneDrive and Google Drive, both have the capability of editing your documents in the browser.
  6. Note taking – Evernote, which has a lovely online interface for reading and editing.
  7. Tecman’s internal Microsoft Dynamics NAV – hosted in Azure, accessed via the web client.
  8. Tecman’s internal Microsoft Dynamics CRM – also accessed through the browser.
  9. I use Skype and Skype for Business to keep in touch with colleagues – both via desktop clients but both have online alternatives (albeit with limited functionality).

While my laptop was out of action I was able to jump onto a different machine and carry on working – without installing any software or even logging on as myself. I can’t pretend that working entirely in the cloud is a flawless experience (yet), but it is possible – at least for the kinds of tasks that I need to do.

At the very least it is good to know that I can access all the systems and data that I need from any machine, anywhere with internet access.

Oooooo….the Clooouuud indeed.

How quickly could you be up and running on a different machine? What can you not do working from home that you can from the office?

Find more information at https://www.tecman.co.uk/Software/Applications/Office-365

Iterating Through Files in a Directory with FOREACH

Microsoft Dynamics NAV has a virtual “File” table which allows you to list files in a directory. You can apply a filter to this table e.g. File.SETFILTER(Path,'C:\*') and loop through the records to interact with the files that you are interested in.

For years this was the best way to get at the files in a folder, especially when you didn’t already know the file names to expect. However, I suspect I’m not the only one to have encountered quirks with this table e.g. having to set different filters to refresh it, or trouble reading files from a network folder.

Thankfully, the addition of .Net support has given us a more reliable way to do this and NAV 2016’s FOREACH keyword makes it easier still. For the uninitiated, FOREACH allows you to iterate through all of the elements of an array without having to know how long the array is before you start.

FOREACH variable of the array’s element type IN array variable DO BEGIN
  //something useful
END;

This is perfect for iterating through files in a folder, like so:

FOREACH Filename IN Directory.GetFiles(directory path) DO BEGIN
  //the Filename variable contains the path to the current file
END;

where Filename is a text variable and Directory is a DotNet variable of System.IO.Directory (which you’ll find in mscorlib).

Directory.GetFiles() returns the file names in the given directory as an array of strings. Helpfully, NAV maps .Net strings to C/AL text automatically, allowing you to FOREACH through the array with a text variable.
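As an aside, Directory.GetFiles also has an overload that accepts a search pattern, which saves testing each file name yourself (the folder path here is just for illustration):

```
//only iterate through the .txt files in the folder
FOREACH Filename IN Directory.GetFiles('C:\Temp','*.txt') DO BEGIN
  //the Filename variable contains the path to the current file
END;
```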


It’s been quite a while since I posted anything. I’ve been busy working on our Dynamics Additions for NAV. However, I spent a pleasant hour working with PowerShell yesterday afternoon on a solution I thought was worth sharing.

The Problem

We have several servers in Azure for hosting NAV service tiers. We want some way of checking the licence that has been saved into each database (that doesn’t involve someone connecting the development environment to each database in turn).

The Solution

PowerShell…of course.

The plan is pretty straightforward.

  1. Load the NAV management module
  2. Get a list of service tiers that are running
  3. For each service tier export the licence information and read certain values that we are interested in: account no., name, configuration
  4. Build a custom object to collect the results and present in a GridView

The Code

#load the NAV 2016 management module if it is installed, otherwise the NAV 2015 module
if ([IO.File]::Exists('C:\Program Files\Microsoft Dynamics NAV\90\Service\Microsoft.Dynamics.Nav.Management.dll')) {
    Import-Module 'C:\Program Files\Microsoft Dynamics NAV\90\Service\Microsoft.Dynamics.Nav.Management.dll'
}
else {
    Import-Module 'C:\Program Files\Microsoft Dynamics NAV\80\Service\Microsoft.Dynamics.Nav.Management.dll'
}

$NAVServers = Get-NAVServerInstance | Where-Object -Property State -EQ 'Running'
$LicenceConfigs = @()

foreach ($NAVServer in $NAVServers) {
    $ServerConfig = Get-NAVServerConfiguration $NAVServer.ServerInstance
    $DatabaseName = ($ServerConfig | Where-Object -Property Key -EQ DatabaseName).Value

    #multiple service tiers can share a database; only read each database's licence once
    if (($LicenceConfigs | Where-Object -Property Database -EQ $DatabaseName).Count -eq 0) {
        $LicenceConfig = New-Object System.Object
        $LicenceInfo = Export-NAVServerLicenseInformation $NAVServer.ServerInstance

        $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'Database' -Value $DatabaseName
        $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'ServerInstance' -Value $NAVServer.ServerInstance
        $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'AccountNo' -Value ([Regex]::Match($LicenceInfo,'VOICE Acc.*:.*').Value.Substring(26)).TrimEnd([Environment]::NewLine)
        $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'Name' -Value ([Regex]::Match($LicenceInfo,'Licensed to *:.*').Value.Substring(26)).TrimEnd([Environment]::NewLine)

        if ([Regex]::Match($LicenceInfo,'Configuration *:.*').Success) {
            $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'Configuration' -Value ([Regex]::Match($LicenceInfo,'Configuration *:.*').Value.Substring(26)).TrimEnd([Environment]::NewLine)
        }
        else {
            $LicenceConfig | Add-Member -MemberType NoteProperty -Name 'Configuration' -Value ''
        }

        $LicenceConfigs += $LicenceConfig
    }
}

$LicenceConfigs | Sort-Object -Property Database

How it Works

First, we need to load the NAV management module. If it exists in the 90 folder (NAV 2016) use that, if not use the 2015 version in the 80 folder. Clearly, there is room for improvement here. I’ve assumed that NAV is installed in the default path and I’ll also need to change the path when we install a newer version of NAV. It’s a little ugly, but for my purposes this is OK.

Having done that, find NAV service tiers which are running and foreach through them. Find the database that the service tier is connected to and only proceed if we don’t already have a result for that database. We have multiple service tiers connecting to the same database but they will all share the same licence details.

Use the Export-NAVServerLicenseInformation cmdlet to obtain the text of the licence information (the same thing you can find in the development environment with Tools, License Information).

Use Regex to match the particular lines of information that we are interested in and write the values into the properties of a custom object. Add that object to an array and finally sort that array by database name.

Executing Remotely

The above gets us an array of objects with the information that we are after for the local machine. All good and well, but we’ve got several servers that we need to retrieve the information from.

If you haven’t already, run Enable-PSRemoting on each of the servers you need to execute commands on remotely. This punches a hole in the firewall, sets up some security and performs some other wizardry.

First, package all of the above code into a script block. Like so,

$ScriptBlock = { all of the above }

The script will no longer be executed, but saved into the $ScriptBlock variable for later use. We can use it with Invoke-Command like this:

Invoke-Command $ScriptBlock -ComputerName remote_computer_name

The script will be executed on the remote server and the results returned to the parent PowerShell session from whence it was called. I want to gather the results from each of the servers and display them in a single output. I can do that with another array, something like the below:

$Results = @()
$Results += Invoke-Command $ScriptBlock -ComputerName first_server 
$Results += Invoke-Command $ScriptBlock -ComputerName last_server
$Results | Out-GridView -Title 'Dynamics NAV Licence Details'