Business Central Docker on Windows 10

At Advania we are switching more and more to using the Docker images for Dynamics NAV and Business Central development.

Since Windows 10 version 1809, and following the latest blog post from Arend-Jan Kauffmann, we are moving to the Docker EE engine instead of the Docker Desktop setup.

Using the latest Windows 10 version and the latest version of Docker means that we can now use “Process Isolation” images when running NAV and Business Central.

Without process isolation, running containers on Windows 10 requires Hyper-V support. Inside Hyper-V a Server Core instance runs as the platform for the processes executed by the container created from the image. If process isolation images are used, the Windows 10 operating system itself is the foundation and the Hyper-V Server Core is not needed. Just this little fact can save up to 4GB of memory usage by the container.

Freddy Kristiansen announced on his blog that his PowerShell module, NAVContainerHelper, now has support for selecting the proper Docker image based on the host capabilities.

We have had some issues with our Windows installations and I wanted to give you a heads-up on how these issues were resolved.

First things first: make sure that you are running Windows 10 version 1809 or newer. Execute

winver.exe

from the Run dialog (Windows+R) to get your version displayed.

Optionally, remove the Hyper-V support if you are not using any virtual machines on your host. If you have version 1903 or later I suggest enabling the Hyper-V feature.

Restart your computer as needed.

Start PowerShell ISE as Administrator.

Copy the "Option 1: Manual installation" script from Arend-Jan‘s blog into the script editor in PowerShell ISE and execute it by pressing F5.

If you have older Docker images downloaded you should remove them. Execute

docker rmi -f (docker images -q)

in your PowerShell ISE prompt.

Now to the problems we have encountered.

NAVContainerHelper added support for the process isolation images just a few releases ago. Some of our machines had older versions installed and that gave us problems. Execute

Get-Module NAVContainerHelper -ListAvailable

in the PowerShell ISE prompt to make sure you have version 0.5.0.5 or newer.

If you have any older versions installed, use File Explorer to delete the “navcontainerhelper” folder from

 C:\Program Files (x86)\WindowsPowerShell\Modules

and

C:\Program Files\WindowsPowerShell\Modules

Then execute

Install-Module NAVContainerHelper

in the PowerShell ISE prompt to install the latest version. Verify the installation.

We also had problems downloading the images, getting the error “read tcp 172.16.4.17:56878->204.79.197.219:443: wsarecv: An existing connection was forcibly closed by the remote host.“

My colleague at Advania, Sigurður Gunnlaugsson, figured out that multiple download threads caused the network errors.

In the PowerShell ISE prompt, execute

Stop-Service docker
dockerd --unregister-service

to remove the Docker service. Then re-register the Docker service using

dockerd --exec-opt isolation=process --max-concurrent-downloads 1 --register-service
Start-Service docker

in the PowerShell ISE prompt.

This limits Docker to a single download thread, and with that change our downloads were able to complete.

More details on Docker images for Dynamics NAV and Business Central can be found here:

Waldo’s Blog on Docker Image Tags

AdvaniaGIT and Docker

Tobias Fenster on Docker

Freddy´s Blog

JSON Interface – prerequisites

There are two objects we use in all JSON interfaces. We use the TempBlob table and our custom JSON Interface Codeunit.

Abstract

The JSON interface uses the same concept as a web service. The endpoint is defined by the Codeunit Name, and the caller always supplies request data (JSON) and expects response data (JSON) in return.

These interface calls are therefore internal to the Business Central (NAV) server and are very fast. All the data is handled in memory only.

We define these interfaces by Endpoints. Some Endpoints have Methods. We call these Endpoints with a JSON. The JSON structure is predefined and every interface respects the same structure.

We have a single Codeunit that knows how to handle this JSON structure. Passing JSON to an interface requires a data container.

Interface Data

TempBlob is table 99008535. The table is simple but it has a lot of useful procedures.

table 99008535 TempBlob
{
    Caption = 'TempBlob';

    fields
    {
        field(1;"Primary Key";Integer)
        {
            Caption = 'Primary Key';
            DataClassification = SystemMetadata;
        }
        field(2;Blob;BLOB)
        {
            Caption = 'Blob';
            DataClassification = SystemMetadata;
        }
    }

    keys
    {
        key(Key1;"Primary Key")
        {
        }
    }
}

Wikipedia says: A Binary Large OBject (BLOB) is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Database support for blobs is not universal.

We use this BLOB for our JSON data when we send a request to an interface and the interface response is also JSON in that same BLOB field.

For people that have been working with web requests we can say that TempBlob.Blob is used both for RequestStream and for ResponseStream.

TempBlob is only used as a form of Stream. We never use TempBlob to store data. We never do TempBlob.Get() or TempBlob.Insert(). And, even if the name indicates that this is a temporary record, we don’t define the TempBlob Record variable as temporary. There is no need for that since we never do any database call for this record.

Interface Helper Codeunit

We use a single Codeunit in all our solutions to prepare both request and response JSON and also to read from the request on the other end.

We have created a Codeunit that includes all the required procedures for the interface communication.

We have three functions to handle the basics;

  • procedure Initialize()
  • procedure InitializeFromTempBlob(TempBlob: Record TempBlob)
  • procedure GetAsTempBlob(var TempBlob: Record TempBlob)

A typical flow of executions is to start by initializing the JSON. Then we add data to that JSON. Before we execute the interface Codeunit we use GetAsTempBlob to write the JSON into TempBlob.Blob. Every Interface Codeunit expects a TempBlob record to be passed to the OnRun() trigger.
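For illustration, here is a minimal caller-side sketch of that flow. The helper Codeunit name "Json Interface Mgt." and the object number are assumptions for this example; the procedures are the ones listed above and the endpoint is the interface Codeunit shown below.

codeunit 50100 "Call Interface Example"
{
    trigger OnRun()
    var
        TempBlob: Record TempBlob;
        JsonInterfaceMgt: Codeunit "Json Interface Mgt."; // assumed name of the helper Codeunit
    begin
        // prepare the request JSON
        JsonInterfaceMgt.Initialize();
        JsonInterfaceMgt.AddVariable('CustomerNo', 'C10000');
        JsonInterfaceMgt.GetAsTempBlob(TempBlob);

        // execute the interface Codeunit; it receives TempBlob in its OnRun() trigger
        Codeunit.Run(Codeunit::"ADV SDS Interface Mgt", TempBlob);

        // the same TempBlob now carries the response JSON
        JsonInterfaceMgt.InitializeFromTempBlob(TempBlob);
    end;
}

The receiving side of such a call is the interface Codeunit itself: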

codeunit 10008650 "ADV SDS Interface Mgt"
{
    TableNo = TempBlob;

    trigger OnRun()
    var
        Method: Text;
     begin
        with JsonInterfaceMgt do begin
            InitializeFromTempBlob(Rec);
...

Inside the Interface Codeunit we initialize the JSON from the passed TempBlob record. At this stage we have access to all the data that was added to the JSON on the request side.

And, since the interface Codeunit will return TempBlob as well, we must make sure to put the response JSON in there before the execution ends.

with JsonInterfaceMgt do begin
    Initialize();
    AddVariable('Success', true);
    GetAsTempBlob(Rec);
end;

JSON structure

The JSON is an array that contains one or more objects. A JSON array is represented with square brackets.

[]

The first object in the JSON array is the variable storage. This is an example of a JSON that passes two variables to the interface Codeunit.

[
  {
    "TestVariable": "TestVariableValue",
    "TestVariable2": "TestVariableValue2"
  }
]

All variables are stored in the XML format, using FORMAT(<variable>,0,9) and evaluated back using EVALUATE(<variable>,<json text value>,9). The JSON can then have multiple record related objects after the variable storage.
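To illustrate the round trip, here is a small sketch; the values in the comments are the XML standard (format 9) renderings.

procedure XmlFormatRoundTrip()
var
    PostingDate: Date;
    ValueAsText: Text;
begin
    PostingDate := DMY2Date(1, 2, 2019);
    ValueAsText := Format(PostingDate, 0, 9);  // '2019-02-01' in XML format
    Evaluate(PostingDate, ValueAsText, 9);     // back to the original Date value
end;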

Adding data to the JSON

We have the following procedures for adding data to the JSON;

  • procedure AddRecordID(Variant: Variant)
  • procedure AddTempTable(TableName: Text; Variant: Variant)
  • procedure AddFilteredTable(TableName: Text; FieldNameFilter: Text; Variant: Variant)
  • procedure AddRecordFields(Variant: Variant)
  • procedure AddVariable(VariableName: Text; Value: Variant)
  • procedure AddEncryptedVariable(VariableName: Text; Value: Text)

I will write a more detailed blog about each of these methods and give examples of how we use them, but for now I will just do a short explanation of their usage.

If we need to pass a reference to a database table we pass the Record ID. Inside the interface Codeunit we can get the database record based on that Record ID. Each Record ID that we add to the JSON is stored with the Table Name and we use either of these two procedures to retrieve the record.

  • procedure GetRecord(var RecRef: RecordRef): Boolean
  • procedure GetRecordByTableName(TableName: Text; var RecRef: RecordRef): Boolean
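A short sketch of that round trip; the helper Codeunit name is assumed as before and the Customer table is only an example.

procedure RecordIdRoundTrip(Customer: Record Customer)
var
    JsonInterfaceMgt: Codeunit "Json Interface Mgt."; // assumed name of the helper Codeunit
    RecRef: RecordRef;
    FoundCustomer: Record Customer;
begin
    // caller side: pass a reference to the customer record
    JsonInterfaceMgt.AddRecordID(Customer);

    // interface side: read the record back through a RecordRef
    if JsonInterfaceMgt.GetRecordByTableName('Customer', RecRef) then
        RecRef.SetTable(FoundCustomer);
end;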

If we need to pass more than one record we can pass all records inside the current filter and retrieve the result with

  • procedure UpdateFilteredTable(TableName: Text; KeyFieldName: Text; var RecRef: RecordRef): Boolean

A fully populated temporary table with table view and table filters can be passed to the interface Codeunit by adding it to the JSON by name. When we use

  • procedure GetTempTable(TableName: Text; var RecRef: RecordRef): Boolean

in the interface Codeunit to retrieve the temporary table we will get the whole table, not just the filtered content.

We sometimes need to give interface Codeunits access to the record that we are creating. Similar to the OnBeforeInsert() system event. If we add the record fields to the JSON we can use

  • procedure GetRecordFields(var RecRef: RecordRef): Boolean

on the other end to retrieve the record and add or alter any field content before returning it back to the caller.

We have several procedures available to retrieve the variable values that we pass to the interface Codeunit.

  • procedure GetVariableValue(var Value: Variant; VariableName: Text): Boolean
  • procedure GetVariableTextValue(var TextValue: Text; VariableName: Text): Boolean
  • procedure GetVariableBooleanValue(var BooleanValue: Boolean; VariableName: Text): Boolean
  • procedure GetVariableDateValue(var DateValue: Date; VariableName: Text): Boolean
  • procedure GetVariableDateTimeValue(var DateTimeValue: DateTime; VariableName: Text): Boolean
  • procedure GetVariableDecimalValue(var DecimalValue: Decimal; VariableName: Text): Boolean
  • procedure GetVariableIntegerValue(var IntegerValue: Integer; VariableName: Text): Boolean
  • procedure GetVariableGUIDValue(var GuidValue: Guid; VariableName: Text): Boolean
  • procedure GetVariableBLOBValue(var TempBlob: Record TempBlob; VariableName: Text): Boolean
  • procedure GetVariableBLOBValueBase64String(var TempBlob: Record TempBlob; VariableName: Text): Boolean
  • procedure GetEncryptedVariableTextValue(var TextValue: Text; VariableName: Text): Boolean
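As a short, hedged sketch, reading request variables inside an interface Codeunit could look like this, assuming the helper has already been initialized from the incoming TempBlob as shown earlier.

local procedure ReadRequest(var JsonInterfaceMgt: Codeunit "Json Interface Mgt.")
var
    CustomerNo: Text;
    PostingDate: Date;
begin
    if not JsonInterfaceMgt.GetVariableTextValue(CustomerNo, 'CustomerNo') then
        Error('The request did not include a CustomerNo variable');
    if JsonInterfaceMgt.GetVariableDateValue(PostingDate, 'PostingDate') then; // optional variable
end;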

We use Base 64 methods in the JSON. By passing the BLOB to TempBlob.Blob we can use

TextValue := TempBlob.ToBase64String();

and then

TempBlob.FromBase64String(TextValue);

on the other end to pass a binary content, like images or PDFs.

Finally, we have the possibility to add and encrypt values that we place in the JSON. On the other end we can then decrypt the data to be used. This we use extensively when we pass sensitive data to and from our Azure Function.

Calling an interface Codeunit

As promised I will write more detailed blogs with examples. This is the current list of procedures we use to call interfaces;

  • procedure ExecuteInterfaceCodeunitIfExists(CodeunitName: Text; var TempBlob: Record TempBlob; ErrorIfNotFound: Text)
  • procedure TryExecuteInterfaceCodeunitIfExists(CodeunitName: Text; var TempBlob: Record TempBlob; ErrorIfNotFound: Text): Boolean
  • procedure TryExecuteCodeunitIfExists(CodeunitName: Text; ErrorIfNotFound: Text) Success: Boolean
  • procedure ExecuteAzureFunction() Success: Boolean

The first two expect a JSON to be passed using TempBlob. The third one we use to check for a simple true/false. We have no request data but we read the ‘Success’ variable from the response JSON.

For some of our functionality we use an Azure Function. We have created our function to read the same JSON structure we use internally. We also expect our Azure Function to respond with the same JSON structure. By doing it that way, we can use the same functions to prepare the request and to read from the response as we do for our internal interfaces.

AdvaniaGIT: About the build steps

The goal of this post is to demo from start to finish the automated build and test of an AL solution for Microsoft Dynamics 365 Business Central.

About the build steps

All build steps are executed in the same way.  In the folder ‘C:\AdvaniaGIT\Scripts’ the script ‘Start-CustomAction.ps1’ is executed with parameters.

param
(
[Parameter(Mandatory=$False, ValueFromPipelineByPropertyname=$true)]
[String]$Repository = (Get-Location).Path,
[Parameter(Mandatory=$True, ValueFromPipelineByPropertyName=$true)]
[String]$ScriptName,
[Parameter(Mandatory=$False, ValueFromPipelineByPropertyName=$true)]
[String]$InAdminMode='$false',
[Parameter(Mandatory=$False, ValueFromPipelineByPropertyName=$true)]
[String]$Wait='$false',
[Parameter(Mandatory=$False, ValueFromPipelineByPropertyName=$true)]
[HashTable]$BuildSettings
)

The AdvaniaGIT custom action is executed in the same way from a build machine and from a development machine.

When we created the container in our last post from Visual Studio Code with the command (Ctrl+Shift+P) ‘Build NAV Environment’, Visual Studio Code executed

Start-AdvaniaGITAction -Repository c:\Users\navlightadmin\businesscentral -ScriptName "Build-NavEnvironment.ps1" -Wait $false

From the build task we execute ‘C:\AdvaniaGIT\Scripts\Start-CustomAction.ps1’ with these parameters

-ScriptName Build-NavEnvironment.ps1 -Repository $(System.DefaultWorkingDirectory) -BuildSettings @{BuildMode=$true}

We can see that these commands are almost the same.  The build script has one additional parameter to notify the scripts that we are in Build Mode.

Each AdvaniaGIT build or development machine has a ‘C:\AdvaniaGIT\Data\GITSettings.Json’ configuration file.

When the scripts are started this file is read and all the settings imported.  Then the repository setup file is imported.  The default repository setup file is ‘setup.json’ as stated in GIT settings.  If the same parameters are in both the machine settings and in the repository settings then the repository settings are used.

The same structure is used for the ‘BuildSettings’ parameter that can be passed to the custom action.  The build settings will overwrite the same parameter in both the machine settings and the repository settings.

The default settings are built around the folder structure that I like to use.  For example, we have our C/AL objects in the ‘Objects’ folder.  Microsoft has their objects in the ‘BaseApp’ folder.  Just by adding the ‘objectsPath’ parameter to the repository settings for the Microsoft repository I can use their structure without problems.
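For example, a repository setup.json that overrides the objects path might look like this (a minimal sketch showing only the parameter mentioned above):

{
    "objectsPath": "BaseApp"
}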

If I want to execute the exact same functionality in Visual Studio Code as I expect to get from my build script I can add the ‘BuildSettings’ parameter to the command.

Start-AdvaniaGITAction -Repository c:\Users\navlightadmin\businesscentral -ScriptName "Build-NavEnvironment.ps1" -Wait $false -BuildSettings @{BuildMode=$true}

The folder structure

The structure is defined in settings files.  By default I have the ‘AL’ folder for the main project and the ‘ALTests’ folder for the test project.  An example can be seen in the G/L Source Names repository.

In C/AL we are using deltas and using the build server to merge our solutions into a single solution.  Therefore we have a single repository for a single NAV version and put our solutions and customizations into branches.

In AL this is no longer needed.  We can have a dedicated repository for each solution if we like to, since the scripts will not be doing any merge between branches.

AdvaniaGIT: Setup and configure the build machine

The goal of this post is to demo from start to finish the automated build and test of an AL solution for Microsoft Dynamics 365 Business Central.

Setup and configure the build machine

We will create our build machine from a standard Windows 2016 template in Azure.

Docker containers and container images will take a lot of disk space.  The data is stored in %ProgramData%\docker.

It is obvious that we will not be able to store all of this on the system SSD drive.  To solve this I create a 1TB HDD disk in Azure.

After starting the Azure VM and opening the Server Manager to look at the File and Storage Services we can see the new empty disk that needs configuration.

Right click the new drive to create a new volume.

And assign the drive letter

Next go to Add roles and features to add the Containers feature.  More information can be found here.  We also need to add ‘.NET Framework 3.5 Features’.

I also like to make sure that all Microsoft updates have been installed.

Now I start PowerShell ISE as Administrator.

As Windows Servers are usually configured in a way that prohibits downloads I like to continue the installation task in PowerShell.

To enable all the scripts to be executed we need to change the execution policy for PowerShell scripts.  Executing

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

will take care of that. 

Confirm with Yes to all.

To make sure that all the following download functions will execute successfully we need to change the TLS configuration with another PowerShell command.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Let’s download Visual Studio Code!  Use the following PowerShell command

Invoke-WebRequest -Uri https://go.microsoft.com/fwlink/?Linkid=852157 -OutFile "${env:USERPROFILE}\Desktop\VSCodeInstallation.exe"

to download the installation file to your desktop.  Start the installation.  During installation I like to select all available additional tasks.

We also need to download GIT.  The following PowerShell command

Invoke-WebRequest -Uri https://github.com/git-for-windows/git/releases/download/v2.18.0.windows.1/Git-2.18.0-64-bit.exe -OutFile "${env:USERPROFILE}\Desktop\GITInstallation.exe"

will download the latest version at the time of this blog post.  The only thing I change from default during GIT setup is the default editor.  I like to use Visual Studio Code.

Go ahead and start Visual Studio Code as Administrator.

Add the AdvaniaGIT extension to Visual Studio Code

Install AdvaniaGIT PowerShell Scripts!  We access the commands in Visual Studio Code by pressing Ctrl+Shift+P.  From there we type to search for the command ‘Advania: Go!’ and when it is selected we press Enter.

You will get a small notification dialog asking you to switch to the AdvaniaGIT terminal window.

Accept the default path for the installation but select No to the two optional installation options.

We need a development license to work with NAV and Business Central.  This license you copy into the ‘C:\AdvaniaGIT\License’ folder.  In the ‘GITSettings.json’ file that Visual Studio Code opened during AdvaniaGIT installation we need to point to this license file.

The DockerSettings.json file is also opened during installation and if you have access to the insider builds you need to update that file.

{
    "RepositoryPath":  "bcinsider.azurecr.io",
    "RepositoryUserName":  "User Name from Collaborate",
    "RepositoryPassword":  "Password from Collaborate",
    "ClientFolders":  []
}

If not, make sure to have all settings blank:

{
  "RepositoryPath":  "",
  "RepositoryUserName":  "",
  "RepositoryPassword":  "",
  "ClientFolders":  []
}

Save both these configuration files and restart Visual Studio Code.  This restart is required to make sure Visual Studio Code recognizes the AdvaniaGIT PowerShell modules.

Let’s open our first GIT repository.  We start by opening the NAV 2018 repository.  Repositories must have the setup.json file in the root folder to support the AdvaniaGIT functionality.

I need some installation files from the NAV 2018 DVD and I will start by cloning my GitHub NAV 2018 repository.  From GitHub I copy the Url to the repository.  In Visual Studio Code I open the commands with Ctrl+Shift+P and execute the command ‘Git: Clone’.

I selected the default folder for the local copy and accepted to open the repository folder.  Again with Ctrl+Shift+P I start the NAV Installation.

The download will start.  The country version we are downloading does not matter at this point.  Every country has the same installation files that we require.

This will download NAV and start the installation.  I will just cancel the installation and manually install just what I need.

  • Microsoft SQL Server\sqlncli64
  • Microsoft SQL Server Management Objects\SQLSysClrTypes
  • Microsoft Visual C++ 2013\vcredist_x64
  • Microsoft Visual C++ 2013\vcredist_x86
  • Microsoft Visual C++ 2017\vcredist_x64
  • Microsoft Visual Studio 2010 Tools For Office Redist\vstor_redist

To enable Windows authentication for the build containers we need to save the Windows credentials.  I am running as user “navlightadmin”.  I securely save the password for this user by starting a command (Ctrl+Shift+P) and selecting to save container credentials.

For all the docker container support I like to use the NAV Container Helper from Microsoft.  With another command (Ctrl+Shift+P) I install the container helper to the server.

To complete the docker installation I execute

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name DockerMsftProvider -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

in Visual Studio Code Terminal.

We need to point docker to our data storage drive.  Kamil Sacek already pointed this out to us.

I use Visual Studio Code to update the docker configuration.  As pointed out here the default docker configuration file can be found at ‘C:\ProgramData\Docker\config\daemon.json’. If this file does not already exist, it can be created.  I update the ‘data-root’ configuration.
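A minimal daemon.json sketch, assuming the data disk was assigned the drive letter F:

{
    "data-root": "F:\\docker"
}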

Now let’s restart the server by typing

Restart-Computer -Force

or manually.

After restart, open Visual Studio Code as Administrator.

Now to verify the installation let’s clone my Business Central repository.  Start command (Ctrl+Shift+P) ‘Git: Clone’ and paste in the Url to the repository.

This repository has a setup.json that points to the Business Central Sandbox.

Make sure to have the Integrated Terminal visible and let’s verify the installation by executing a command (Ctrl+Shift+P) ‘Advania: Build NAV Environment’ to build the development environment.

The image download should start…

You should now be able to use the command (Ctrl+Shift+P) ‘Advania: Start Client’,  ‘Advania: Start Web Client’, ‘Advania: Start FinSql’ and ‘Advania: Start Debugger’ to verify all the required NAV/BC functionality.

If you are happy with the results you should be able to install the build agent as shown by Soren Klemmensen here.

 

My Soap Service Proxy Codeunit

Up to now we at Advania have been using the method described here on my blog to connect to most of the Soap web services that we needed to integrate with.

The problem with this method is that we have to manage a lot of DLLs.  This has caused some issues and problems.

Another thing is that we are moving to AL.  And in AL we can’t just throw in a custom DLL to do all the work.

In C/AL we can do this with standard dotnet objects:

        DOMDoc := DOMDoc.XmlDocument;
        DOMProcessingInstruction := DOMDoc.CreateProcessingInstruction('xml','version="1.0" encoding="utf-8"');
        DOMDoc.AppendChild(DOMProcessingInstruction);
        DOMElement := DOMDoc.CreateElement('soap:Envelope','http://schemas.xmlsoap.org/soap/envelope/');
        DOMElement.SetAttribute('xmlns:soap','http://schemas.xmlsoap.org/soap/envelope/');
        DOMElement.SetAttribute('xmlns:xsi','http://www.w3.org/2001/XMLSchema-instance');
        DOMElement.SetAttribute('xmlns:xsd','http://www.w3.org/2001/XMLSchema');
        DOMElement.SetAttribute('xmlns:ws','http://ws.msggw.siminn');

        DOMElement2 := DOMDoc.CreateElement('soap:Header','http://schemas.xmlsoap.org/soap/envelope/');
        DOMElement.AppendChild(DOMElement2);

        DOMElement2 := DOMDoc.CreateElement('soap:Body','http://schemas.xmlsoap.org/soap/envelope/');
        DOMElement3 := DOMDoc.CreateElement('ws:sendSMS','http://ws.msggw.siminn');
        DOMElement4 := DOMDoc.CreateElement('ws:username','http://ws.msggw.siminn');
        DOMElement4.InnerText := SMSSetup."Service User Name";
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:password','http://ws.msggw.siminn');
        DOMElement4.InnerText := SMSSetup."Service Password";
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:source','http://ws.msggw.siminn');
        DOMElement4.InnerText := SMSSetup.Sender;
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:destination','http://ws.msggw.siminn');
        DOMElement4.InnerText := SendTo;
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:text','http://ws.msggw.siminn');
        DOMElement4.InnerText := SendText;
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:encoding','http://ws.msggw.siminn');
        DOMElement4.InnerText := '0';
        DOMElement3.AppendChild(DOMElement4);
        DOMElement4 := DOMDoc.CreateElement('ws:flash','http://ws.msggw.siminn');
        DOMElement4.InnerText := '0';
        DOMElement3.AppendChild(DOMElement4);
        DOMElement2.AppendChild(DOMElement3);
        DOMElement.AppendChild(DOMElement2);
        DOMDoc.AppendChild(DOMElement);

        HttpWebRequest := HttpWebRequest.Create(SMSSetup."SOAP URL");
        HttpWebRequest.Timeout := 30000;
        HttpWebRequest.UseDefaultCredentials(TRUE);
        HttpWebRequest.Method := 'POST';
        HttpWebRequest.ContentType := 'text/xml; charset=utf-8';
        HttpWebRequest.Accept := 'text/xml';
        HttpWebRequest.Headers.Add('soapAction','urn:sendSMS');
        MemoryStream := HttpWebRequest.GetRequestStream;
        DOMDoc.Save(MemoryStream);
        MemoryStream.Flush;
        MemoryStream.Close;

        NAVWebRequest := NAVWebRequest.NAVWebRequest;
        IF NOT NAVWebRequest.doRequest(HttpWebRequest,HttpWebException,HttpWebResponse) THEN
          ERROR(Text003,HttpWebException.Status.ToString,HttpWebException.Message);

        MemoryStream := HttpWebResponse.GetResponseStream;
        DOMResponseDoc := DOMResponseDoc.XmlDocument;
        DOMResponseDoc.Load(MemoryStream);
        MemoryStream.Flush;
        MemoryStream.Close;

        ReceivedNameSpaceMgt := ReceivedNameSpaceMgt.XmlNamespaceManager(DOMResponseDoc.NameTable);
        ReceivedNameSpaceMgt.AddNamespace('ns','http://ws.msggw.siminn');
        DOMNode := DOMResponseDoc.SelectSingleNode('//ns:return',ReceivedNameSpaceMgt);

        Response := DOMNode.InnerText;
        Success :=  Response = 'SUCCESS';
        IF ShowResult AND Success THEN
          MESSAGE(Text001)
        ELSE IF ShowResult AND NOT Success THEN
          ERROR(Text005,Response);

The AL code to do the same with the built-in AL objects is not much shorter.

With a custom proxy DLL the code would be

Proxy := Proxy.SMSWS;
Proxy.Url := SMSSetup."SOAP URL";
Response := Proxy.sendSMS(Username,Password,SenderText,SendTo,SendText,'0',FALSE,FALSE,'0');
Success :=  Response = 'SUCCESS';
IF ShowResult AND Success THEN
  MESSAGE(Text001)
ELSE IF ShowResult AND NOT Success THEN
  ERROR(Text005,Response);

With this example we can easily see why we have chosen to create a proxy DLL for most of the Soap services.

I wanted to find a way to make things easier in AL and I remembered having dealt with C/AL objects from Vjeko some time ago.  I took another look and that code helped me to get started.

The result is a Soap Proxy Client Mgt. Codeunit in C/AL that I have sent to Microsoft’s cal-open-library project asking to have this code put into the standard C/AL library.

Using this Codeunit the code will be like this.

  WITH SoapProxyClientMgt DO BEGIN
    CreateSoapProxy(SMSSetup."SOAP URL");
    InitParameters(9);
    SetParameterValue(Username,1);
    SetParameterValue(Password,2);
    SetParameterValue(SenderText,3);
    SetParameterValue(SendTo,4);
    SetParameterValue(SendText,5);
    SetParameterValue('0',6);
    SetParameterValue(FALSE,7);
    SetParameterValue(FALSE,8);
    SetParameterValue('0',9);
    InvokeMethod('SMSWS','sendSMS',TempBlob);
    XmlBuffer.LoadFromText(TempBlob.ReadAsTextWithCRLFLineSeparator);
    IF XmlBuffer.FindNodesByXPath(XmlBuffer,'/string') THEN
      Response := XmlBuffer.Value;
    Success :=  Response = 'SUCCESS';
    IF ShowResult AND Success THEN
      MESSAGE(Text001)
    ELSE IF ShowResult AND NOT Success THEN
      ERROR(Text005,Response);
  END;

What about AL?

For now this C/AL Codeunit is not in the standard CRONUS database.  I need to import the C/AL code and make sure that AL will be able to use that Codeunit.  You can see how to do this in my last blog post.

This C/AL Code will directly convert to AL and is ready to use.

          with SoapProxyClientMgt do begin
            CreateSoapProxy(SMSSetup."SOAP URL");
            InitParameters(9);
            SetParameterValue(Username,1);
            SetParameterValue(Password,2);
            SetParameterValue(SenderText,3);
            SetParameterValue(SendTo,4);
            SetParameterValue(SendText,5);
            SetParameterValue('0',6);
            SetParameterValue(false,7);
            SetParameterValue(false,8);
            SetParameterValue('0',9);
            InvokeMethod('SMSWS','sendSMS',TempBlob);
            XmlBuffer.LoadFromText(TempBlob.ReadAsTextWithCRLFLineSeparator);
            if XmlBuffer.FindNodesByXPath(XmlBuffer,'/string') then                        
              Response := XmlBuffer.Value;        
            Success :=  Response = 'SUCCESS';
            if ShowResult and Success then
              MESSAGE(Text001)
            else if ShowResult and not Success then
              ERROR(Text005,Response);
          end;

More examples on how to use this Proxy Codeunit will follow.  Stay tuned…

C/AL and AL Side-by-Side Development with AdvaniaGIT

Microsoft supports Side-by-Side development for C/AL and AL.  To start using the Side-by-Side development make sure you have the latest version of AdvaniaGIT add-in for Visual Studio Code and update the PowerShell scripts by using the “Advania: Go!” command.

When the Business Central environment is built use the “Advania: Build C/AL Symbol References for AL” to enable the Side-by-Side development for this environment.  This function will reconfigure the service and execute the Generate Symbol References command for the environment.  From here on everything you change in C/AL on this environment will update the AL Symbol References.

So let’s try this out.

I converted my C/AL project to an AL project with the steps described in my previous post.  Then I selected to open Visual Studio Code in the AL folder.

In my new Visual Studio Code window I selected to build an environment – the Docker Container.

When AdvaniaGIT builds a container it will install the AL Extension for Visual Studio Code from that Container.  We need to read the output of the environment build.  In this example I am asked to restart Visual Studio Code before reinstalling AL Language.  Note that if you are not asked to restart Visual Studio Code you don’t need to do that.

After restart I can see that the AL Language extension for Visual Studio Code is missing.

To fix this I execute the “Advania: Build NAV Environment” command again.  This time, since the Container is already running, only the NAV license and the AL Extension will be updated.

Restart Visual Studio Code again and we are ready to go.

If we build new environment for our AL project we must update the environment settings in .vscode\launch.json.  This we can do with a built in AdvaniaGIT command.

We can verify the environment by executing “Advania: Check NAV Environment”.  Everything should be up and running at this time.

Since we will be using Side-by-Side development for C/AL and AL in this environment we need to enable that by executing “Advania: Build C/AL Symbol References for AL”.

This will take a few minutes to execute.

Don’t worry about the warning.  AdvaniaGIT takes care of restarting the service.  Let’s download AL Symbols and see what happens.

We can see that AL now recognizes the standard symbols but my custom one, “IS Soap Proxy Client Mgt.”, is not recognized.  I will tell you more about this Codeunit in my next blog post.

I start FinSql to import the Codeunit “IS Soap Proxy Client Mgt.”

Import the FOB file

Close FinSql and execute the “AL: Download Symbols” again.  We can now see that AL recognizes my C/AL Codeunit.

Now I am good to go.

Using the Translation Service for G/L Source Names

Until now I have had my G/L Source Names extension in English only.

Now, for the upcoming release of Microsoft Dynamics 365 Business Central, I need to supply more languages.  What does a man do when he does not speak the language?

I gave a shout out yesterday on Twitter asking for help with translation.  Tobias Fenster reminded me that we have a service to help us with that.  I had already tried to work with this service and now it was time to test the service on my G/L Source Names extension.

In my previous posts I had created the Xliff translation files from my old ML properties.  I manually translated to my native language; is-IS.

I already got a Danish translation file sent from a colleague.

Before we start; I needed to do a minor update to the AdvaniaGIT tools.  Make sure you run “Advania: Go!” to update the PowerShell Script Package.  Then restart Visual Studio Code.

Off to the Microsoft Lifecycle Services to utilize the translation service.

Now, let’s prepare the Xliff files in Visual Studio Code.  From the last build I have the default GL Source Names.g.xlf file.  I executed the action to create Xliff files.

This action will prompt for a selection of language.  The selection is from the languages included in the NAV DVD.

After selection the system will prompt for a translation file that is exported from FinSql.  This I already showed in a YouTube Video.  If you don’t have a file from FinSql you can just cancel this part.  If you already have an Xliff file for that language then it will be imported into memory as translation data and then removed.

This method is therefore useful if you want to reuse the Xliff file data after an extension update.  All new files will be based on the g.xlf file.

I basically did this action for all 25 languages.  I already had the is-IS and da-DK files, so they were updated.  Since the source language is en-US all my en-XX files were automatically translated.  All the other languages have the translation state set to “needs-translation”.

<trans-unit id="Table 102911037 - Field 1343707150 - Property 2879900210" size-unit="char" translate="yes" xml:space="preserve">
  <source>Source Name</source>
  <target state="needs-translation"></target><note from="Developer" annotates="general" priority="2" />
  <note from="Xliff Generator" annotates="general" priority="3">Table:O4N GL SN - Field:Source Name</note>
</trans-unit>

All these files I need to upload to the Translation Service.  From the Lifecycle Services menu select the Translation Service.  This will open the Translation Service Dashboard.

Press + to add a translation request.

I now need to zip and upload the nl-NL file from my Translations folder.

After upload I Submit the translation request

The request will appear on the dashboard with the status; Processing.  Now I need to wait for the status to change to Completed.  Or, create requests for all the other languages and upload files to submit.

When translation has completed I can download the result.

And I have a translation in state “needs-review-translation”.

<trans-unit id="Table 102911037 - Field 1343707150 - Property 2879900210" translate="yes" xml:space="preserve">
  <source>Source Name</source>
  <target state="needs-review-translation" state-qualifier="mt-suggestion">Bronnaam</target>
  <note from="Xliff Generator" annotates="general" priority="3">Table:O4N GL SN - Field:Source Name</note>
</trans-unit>

Now I just need to complete all languages and push changes to GitHub.

Please, if you can, download your language file and look at the results.

Why do we need Interface Codeunits

And what is an interface Codeunit?

A Codeunit that you can execute with CODEUNIT.RUN to perform a given task is, from my point of view, an interface Codeunit.

An interface Codeunit has a parameter that we put in the TableNo property of the Codeunit.

This parameter is always a table object.

We have multiple examples of this already in the application.  Codeunits 12 and 80 are some.  There the parameter is a mixed set of data and settings.  Some of the table fields are business data being pushed into the business logic.  Other fields are settings used to control the business logic.

Table 36, Sales Header, is used as the parameter for Codeunit 80.  Fields like No., Bill-to Customer No., Posting Date and so on are business data.  Fields like Ship, Invoice, Print Posted Documents are settings used to control the business logic but have no meaning as business data.

Every table is then a potential parameter for an interface Codeunit.  Our extension can easily create a table that we use as a parameter table.  A record does not need to be inserted into the table to be passed to the Codeunit.

Let’s look at another scenario.  We know that there is an Interface Codeunit with the name “My Interface Codeunit” but it belongs to an Extension that may or may not be installed in the database.

Here we use the virtual table “CodeUnit Metadata” to look for the Interface Codeunit before execution.
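The original post showed this lookup as a screenshot.  A hedged sketch of the idea could look like this; the procedure name and the use of TempBlob as the parameter table are examples only.

procedure ExecuteIfExists(CodeunitName: Text; var TempBlob: Record TempBlob): Boolean
var
    CodeunitMetadata: Record "CodeUnit Metadata";
begin
    // look up the interface Codeunit by name in the virtual table
    CodeunitMetadata.SetRange(Name, CodeunitName);
    if not CodeunitMetadata.FindFirst() then
        exit(false);

    // found it, so execute the Codeunit with the parameter table
    if Codeunit.Run(CodeunitMetadata.ID, TempBlob) then
        exit(true);
    exit(false);
end;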

This is all simple and straightforward.  Things that we have been doing for a number of years.

Using the TempBlob table as a parameter also gives us flexibility to define a more complex interface for the Codeunit.  The TempBlob table can store complex data in JSON or XML format and pass that to the Codeunit.

Let’s take an example.  We have an extension that extends the discount calculation for Customers and Items.  We would like to ask this extension for the discount a given customer will get for a given item.  A question like that can be represented in a JSON file.

{
    "CustomerNo": "C10000",
    "ItemNo": "1000"
}

And the question can be coded like this.
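The code was shown as an image in the original post.  As a hedged sketch, the question could be built with AL's native JSON types and handed to the interface Codeunit like this; the Codeunit name "Discount Interface" is an example.

procedure AskDiscountQuestion(CustomerNo: Code[20]; ItemNo: Code[20])
var
    TempBlob: Record TempBlob;
    OutStr: OutStream;
    InStr: InStream;
    Question: JsonObject;
    Answer: JsonObject;
begin
    // build the question JSON and put it into the TempBlob parameter
    Question.Add('CustomerNo', CustomerNo);
    Question.Add('ItemNo', ItemNo);
    TempBlob.Blob.CreateOutStream(OutStr);
    Question.WriteTo(OutStr);

    // ask the extension and read the answer from the same TempBlob
    Codeunit.Run(Codeunit::"Discount Interface", TempBlob);
    TempBlob.Blob.CreateInStream(InStr);
    if Answer.ReadFrom(InStr) then;
end;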

The Interface Codeunit could be something like
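Again, the original post had the object as an image.  A hedged sketch of such an interface Codeunit, with a placeholder for the actual discount calculation:

codeunit 50101 "Discount Interface" // example object number and name
{
    TableNo = TempBlob;

    trigger OnRun()
    var
        InStr: InStream;
        OutStr: OutStream;
        Question: JsonObject;
        Answer: JsonObject;
        Token: JsonToken;
        CustomerNo: Code[20];
        ItemNo: Code[20];
    begin
        // read the question from the passed TempBlob
        Rec.Blob.CreateInStream(InStr);
        if not Question.ReadFrom(InStr) then
            Error('Invalid request JSON');
        if Question.Get('CustomerNo', Token) then
            CustomerNo := CopyStr(Token.AsValue().AsCode(), 1, MaxStrLen(CustomerNo));
        if Question.Get('ItemNo', Token) then
            ItemNo := CopyStr(Token.AsValue().AsCode(), 1, MaxStrLen(ItemNo));

        // write the answer back into the same TempBlob
        Answer.Add('DiscountPct', CalculateDiscount(CustomerNo, ItemNo));
        Clear(Rec.Blob);
        Rec.Blob.CreateOutStream(OutStr);
        Answer.WriteTo(OutStr);
    end;

    local procedure CalculateDiscount(CustomerNo: Code[20]; ItemNo: Code[20]): Decimal
    begin
        // placeholder for the extension's actual discount calculation
        exit(0);
    end;
}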

With a Page that contains a single Text variable (Json) we can turn this into a web service.

That we can use from C# with a code like

var navOdataUrl = new System.Uri("https://nav2018dev.westeurope.cloudapp.azure.com:7048/NAV/OData/Company('CRONUS%20International%20Ltd.')/AlexaRequest?$format=json");
var credentials = new NetworkCredential("navUser", "+lppLBb7OQJxlOfZ7CpboRCDcbmAEoCCJpg7cmAEReQ=");
var handler = new HttpClientHandler { Credentials = credentials };

using (var client = new HttpClient(handler))
{
    var Json = new { CustomerNo = "C10000", ItemNo = "1000" };
    JObject JsonRequest = JObject.Parse(Json.ToString());
    JObject requestJson = new JObject();                
    JProperty jProperty = new JProperty("Json", JsonRequest.ToString());
    requestJson.Add(jProperty);
    var requestData = new StringContent(requestJson.ToString(), Encoding.UTF8, "application/json");
    var response = await client.PostAsync(navOdataUrl,requestData);
    dynamic result = await response.Content.ReadAsStringAsync();

    JObject responseJson = JObject.Parse(Convert.ToString(result));
    if (responseJson.TryGetValue("Json", out JToken responseJToken))
    {
        jProperty = responseJson.Property("Json");
        JObject JsonResponse = JObject.Parse(Convert.ToString(jProperty.Value));
        Console.WriteLine(JsonResponse.ToString());
    }
}

This is just scratching the surface of what we can do.  Copying a record to and from JSON is also easy to do with a few helper functions.

And even if I am showing all this in C/AL there should be no problem in using the new AL in Visual Studio Code to get the same results.
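As an illustration in AL (not the original C/AL code from the post’s screenshots), a record could be copied to JSON with a RecordRef like this:

procedure RecordToJson(RecRef: RecordRef) Json: JsonObject
var
    FldRef: FieldRef;
    i: Integer;
begin
    for i := 1 to RecRef.FieldCount() do begin
        FldRef := RecRef.FieldIndex(i);
        // store every normal, non-BLOB field by name, in XML format so it can be evaluated back
        if (FldRef.Class = FieldClass::Normal) and (FldRef.Type <> FieldType::Blob) then
            Json.Add(FldRef.Name, Format(FldRef.Value, 0, 9));
    end;
end;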

Upgrading my G/L Source Names Extension to AL – step 4

We are on a path to upgrade G/L Source Names from version 1 to version 2.  This requires conversion from C/AL to AL, data upgrade and number of changes to the AL code.

A complete check list of what you need to have in your AL extension is published by Microsoft here.

Our task for today is to translate the AL project into our native language.

To make this all as easy as I could I added new functionality to the AdvaniaGIT module and VS Code extension.  Make sure to update to the latest release.

To translate an AL project we need to follow the steps described by Microsoft here.

To demonstrate this process I created a video.

 

Upgrading my G/L Source Names Extension to AL – step 3

When upgrading an extension from C/AL to AL (version 1 to version 2) we need to think about the data upgrade process.

In C/AL we needed to add two functions to an extension Codeunit to handle the installation and upgrade.  This I did with Codeunit 70009200.  One function to be executed once for each install.

PROCEDURE OnNavAppUpgradePerDatabase@1();
VAR
  AccessControl@70009200 : Record 2000000053;
BEGIN
  WITH AccessControl DO BEGIN
    SETFILTER("Role ID",'%1|%2','SUPER','SECURITY');
    IF FINDSET THEN REPEAT
      AddUserAccess("User Security ID",PermissionSetToUserGLSourceNames);
      AddUserAccess("User Security ID",PermissionSetToUpdateGLSourceNames);
      AddUserAccess("User Security ID",PermissionSetToSetupGLSourceNames);
    UNTIL NEXT = 0;
  END;
END;

And another function to be executed once for each company in the install database.

PROCEDURE OnNavAppUpgradePerCompany@2();
VAR
  GLSourceNameMgt@70009200 : Codeunit 70009201;
BEGIN
  NAVAPP.RESTOREARCHIVEDATA(DATABASE::"G/L Source Name Setup");
  NAVAPP.RESTOREARCHIVEDATA(DATABASE::"G/L Source Name User Setup");
  NAVAPP.DELETEARCHIVEDATA(DATABASE::"G/L Source Name");

  NAVAPP.DELETEARCHIVEDATA(DATABASE::"G/L Source Name Help Resource");
  NAVAPP.DELETEARCHIVEDATA(DATABASE::"G/L Source Name User Access");
  NAVAPP.DELETEARCHIVEDATA(DATABASE::"G/L Source Name Group Access");

  GLSourceNameMgt.PopulateSourceTable;
  RemoveAssistedSetup;
END;

For each database I add my permission sets to the installation users and for each company I restore the setup data for my extension and populate the lookup table for G/L Source Name.

The methods for install and upgrade have changed in AL for extensions version 2.  Look at the AL documentation from Microsoft for details.

In version 2 I remove these two obsolete functions from my application management Codeunit and need to add two new Codeunits, one for install and another for upgrade.

codeunit 70009207 "O4N GL Source Name Install"
{
    Subtype = Install;
    trigger OnRun();
    begin
    end;

    var
    PermissionSetToSetupGLSourceNames : TextConst ENU='G/L-SOURCE NAMES, S';
    PermissionSetToUpdateGLSourceNames : TextConst ENU='G/L-SOURCE NAMES, E';
    PermissionSetToUserGLSourceNames : TextConst ENU='G/L-SOURCE NAMES';

    
    trigger OnInstallAppPerCompany();
    var
        GLSourceNameMgt : Codeunit "O4N GL SN Mgt";
    begin
        GLSourceNameMgt.PopulateSourceTable;
        RemoveAssistedSetup;
    end;

    trigger OnInstallAppPerDatabase();
    var
        AccessControl : Record "Access Control";
    begin
        with AccessControl do begin
            SETFILTER("Role ID",'%1|%2','SUPER','SECURITY');
            if FINDSET then repeat
                AddUserAccess("User Security ID",PermissionSetToUserGLSourceNames);
                AddUserAccess("User Security ID",PermissionSetToUpdateGLSourceNames);
                AddUserAccess("User Security ID",PermissionSetToSetupGLSourceNames);
            until NEXT = 0;
        end;
    end;

  local procedure RemoveAssistedSetup();
  var
    AssistedSetup : Record "Assisted Setup";
  begin
    with AssistedSetup do begin
      SETRANGE("Page ID",PAGE::"O4N GL SN Setup Wizard");
      if not ISEMPTY then
        DELETEALL;
    end;
  end;

  local procedure AddUserAccess(AssignToUser : Guid;PermissionSet : Code[20]);
  var
    AccessControl : Record "Access Control";
    AppMgt : Codeunit "O4N GL SN App Mgt.";
    AppGuid : Guid;
  begin
    EVALUATE(AppGuid,AppMgt.GetAppId);
    with AccessControl do begin
      INIT;
      "User Security ID" := AssignToUser;
      "App ID" := AppGuid;
      Scope := Scope::Tenant;
      "Role ID" := PermissionSet;
      if not FIND then
        INSERT(true);
    end;
  end;

}

In the code you can see that this Codeunit is of Subtype=Install.  This code will be executed when installing this extension in a database.

To confirm this I can see that I have the G/L Source Names Permission Sets in the Access Control table.

And my G/L Source Name table also has all required entries.

Uninstalling the extension will not remove this data.  Therefore you need to make sure that the install code is structured in a way that it will work even when reinstalling.  Look at the examples from Microsoft to get a better understanding.

Back to my C/AL extension.  When uninstalling that one the data is moved to archive tables.

Archive tables are handled with the NAVAPP.* commands.  The OnNavAppUpgradePerCompany function shown above handles these archive tables when reinstalling or upgrading.

Basically, since I am keeping the same table structure I can use the same set of commands for my upgrade Codeunit.

codeunit 70009208 "O4N GL SN Upgrade"
{
    Subtype=Upgrade;
    trigger OnRun()
    begin
        
    end;
    
    trigger OnCheckPreconditionsPerCompany()
    begin

    end;

    trigger OnCheckPreconditionsPerDatabase()
    begin

    end;
    
    trigger OnUpgradePerCompany()
    var
        GLSourceNameMgt : Codeunit "O4N GL SN Mgt";
        archivedVersion : Text;
    begin
        archivedVersion := NAVAPP.GetArchiveVersion();
        if archivedVersion = '1.0.0.1' then begin
            NAVAPP.RESTOREARCHIVEDATA(DATABASE::"O4N GL SN Setup");
            NAVAPP.RESTOREARCHIVEDATA(DATABASE::"O4N GL SN User Setup");
            NAVAPP.DELETEARCHIVEDATA(DATABASE::"O4N GL SN");

            NAVAPP.DELETEARCHIVEDATA(DATABASE::"O4N GL SN Help Resource");
            NAVAPP.DELETEARCHIVEDATA(DATABASE::"O4N GL SN User Access");
            NAVAPP.DELETEARCHIVEDATA(DATABASE::"O4N GL SN Group Access");

            GLSourceNameMgt.PopulateSourceTable;
        end;
    end;

    trigger OnUpgradePerDatabase()
    begin

    end;

    trigger OnValidateUpgradePerCompany()
    begin

    end;

    trigger OnValidateUpgradePerDatabase()
    begin

    end;
    
}

So, time to test how and if this works.

I have my AL folder open in Visual Studio Code and I use the AdvaniaGIT command Build NAV Environment to get the new Docker container up and running.

Then I use Update launch.json with current branch information to update my launch.json server settings.

I like to use the NAV Container Helper from Microsoft  to manually work with the container.  I use a command from the AdvaniaGIT module to import the NAV Container Module.

The module uses the container name for most of the functions.  The container name can be found by listing the running Docker containers or by asking for the name that matches the server used in launch.json.

I need my C/AL extension inside the container so I executed

Copy-FileToNavContainer -containerName jolly_bhabha -localPath C:\NAVManagementWorkFolder\Workspace\GIT\Kappi\NAV2017\Extension1\AppPackage.navx -containerPath c:\run

Then I open PowerShell inside the container

Enter-NavContainer -containerName jolly_bhabha

Import the NAV Administration Module

Welcome to the NAV Container PowerShell prompt

[50AA0018A87F]: PS C:\run> Import-Module 'C:\Program Files\Microsoft Dynamics NAV\110\Service\NavAdminTool.ps1'

Welcome to the Server Admin Tool Shell!

[50AA0018A87F]: PS C:\run>

and I am ready to play.  Install the C/AL extension

Publish-NAVApp -ServerInstance NAV -IdePath 'C:\Program Files (x86)\Microsoft Dynamics NAV\110\RoleTailored Client\finsql.exe' -Path C:\run\AppPackage.navx -SkipVerification

Now I am faced with the fact that I have opened PowerShell inside the container in my AdvaniaGIT terminal.  That means that my AdvaniaGIT commands will execute inside the container, but not on the host.

The simplest way to solve this is to open another instance of Visual Studio Code.  From there I can start the Web Client and complete the install and configuration of my C/AL extension.

I complete the Assisted Setup and do a round trip to G/L Entries to make sure that I have enough data in my tables to verify that the data upgrade is working.

I can verify this by looking into the SQL tables for my extension.  I use PowerShell to uninstall and unpublish my C/AL extension.

Uninstall-NAVApp -ServerInstance NAV -Name "G/L Source Names"
Unpublish-NAVApp -ServerInstance NAV -Name "G/L Source Names"

I can verify that in my SQL database I now have four AppData archive tables.

Pressing F5 in Visual Studio Code will now publish and install the AL extension, even if I have the terminal open inside the container.

The extension is published but can’t be installed because I had previously installed an older version of my extension.  Back in my container PowerShell I will follow the steps as described by Microsoft.

[50AA0018A87F]: PS C:\run> Sync-NAVApp -ServerInstance NAV -Name "G/L Source Names" -Version 2.0.0.0
WARNING: Cannot synchronize the extension G/L Source Names because it is already synchronized.
[50AA0018A87F]: PS C:\run> Start-NAVAppDataUpgrade -ServerInstance NAV -Name "G/L Source Names" -Version 2.0.0.0
[50AA0018A87F]: PS C:\run> Install-NAVApp -ServerInstance NAV -Tenant Default -Name "G/L Source Names"
WARNING: Cannot install extension G/L Source Names by Objects4NAV 2.0.0.0 for the tenant default because it is already installed.
[50AA0018A87F]: PS C:\run>

My AL extension is published and I have verified in my SQL server that all the data from the C/AL extension has been moved to the AL extension tables and all the archive tables have been removed.

Back in Visual Studio Code I can now use F5 to publish and install the extension again if I need to update, debug and test my extension.

A couple more steps are left that I will do shortly.  Happy coding…