Add translations to your NAV/BC Server

Yesterday I got a question via LinkedIn: “I need to add a Spanish translation to my W1 instance. How do I do that?”

So, let me walk you through that process.

Here is my Business Central setup. It is the Icelandic Docker Container, so I have Icelandic and English. Switching between Icelandic and English works just fine.

Switching to Spanish gives me a mix of Spanish and English.

The Spanish translation for the platform is shipped with the DVD image and automatically installed. So are a lot of other languages.

Icelandic and English captions are built into the C/AL code. And even though all these languages are shipped with the platform, they are not shipped with the application.

There is a way to get these application translations from the appropriate release and add them to your application.

Let’s start in VS Code where I have cloned my Business Central repository from GitHub. I opened the workspace file and also opened “setup.json” from the root folder of my repository.

This configuration points to the W1 Business Central OnPrem Docker Image. Now, let’s point to the Spanish one.
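
For illustration, the relevant part of “setup.json” could look something like this. The property names (“branchId”, “projectName”, “dockerImage”) are the ones AdvaniaGIT reads; the values below are placeholders only:

    {
      "branchId": "00000000-0000-0000-0000-000000000001",
      "projectName": "BC-ES",
      "dockerImage": "mcr.microsoft.com/businesscentral/onprem:es"
    }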

And let’s build a container.


Switching the Terminal part to AdvaniaGIT, I see that I am now pulling the Spanish Docker image down to my laptop.

This may take a few minutes…

After the container is ready I start FinSql.exe.

Just by opening the first table and the properties for the first field I can verify that I have the Spanish captions installed.

So, let’s export these Spanish captions by selecting all objects except the new trigger codeunits (Business Central only) and selecting to export translation…

Save the export to a TXT file.

Opening this file in Visual Studio Code, we can see that the code page does not match the required UTF-8 format. Here we can also see that we have English in lines with A1033 and Spanish in lines with A1034.

We need to process this file with PowerShell. Executing that script can also take some time…

This script reads the file using the “Oem” code page, which is the code page FinSql uses for import and export. We read through the file, and every line that is identified as Spanish is then added to the output variable. We end by writing that output variable to the same file using the “utf8” code page.
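
A minimal sketch of such a conversion script, assuming the export was saved as “C:\Temp\SpanishCaptions.txt” and that the Spanish lines can be identified by the language id A1034 mentioned above:

    $TranslationFile = 'C:\Temp\SpanishCaptions.txt'   # path to the FinSql translation export

    # FinSql imports and exports translations using the Oem code page
    $Output = New-Object System.Collections.Generic.List[string]
    foreach ($Line in Get-Content -Path $TranslationFile -Encoding Oem) {
        # Keep only the Spanish captions (language id 1034)
        if ($Line -match 'A1034') {
            $Output.Add($Line)
        }
    }

    # Write the result back to the same file using the utf8 code page
    Set-Content -Path $TranslationFile -Value $Output -Encoding UTF8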

Visual Studio Code should refresh the file automatically.

We need to create a “Translations” folder in the server folder. The default server uses the root Translations folder.

If you have named instances, then the “Translations” folder needs to be inside the instance folder.

Since I am running this in a container I may need to create this folder in the container.

Then, copy the updated file to the “Translations” folder.

And make sure it has been put into the correct path.

We need to restart the service instance.
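
When everything runs in a container, these steps can be scripted from the host. A minimal sketch using NAVContainerHelper, assuming a container named “bcserver”, a server instance called “NAV” and the service folder shown below (all three are assumptions, adjust them to your setup):

    $containerName   = 'bcserver'
    $translationFile = 'C:\Temp\SpanishCaptions.txt'
    $serviceFolder   = 'C:\Program Files\Microsoft Dynamics NAV\130\Service'   # assumed service path inside the container

    # Create the Translations folder next to the service tier inside the container
    Invoke-ScriptInNavContainer -containerName $containerName -scriptblock {
        param($serviceFolder)
        New-Item -Path (Join-Path $serviceFolder 'Translations') -ItemType Directory -Force | Out-Null
    } -argumentList $serviceFolder

    # Copy the UTF-8 translation file into that folder
    Copy-FileToNavContainer -containerName $containerName -localPath $translationFile `
        -containerPath (Join-Path $serviceFolder 'Translations\SpanishCaptions.txt')

    # Restart the service instance so it picks up the new application language
    Invoke-ScriptInNavContainer -containerName $containerName -scriptblock {
        Restart-NAVServerInstance -ServerInstance NAV
    }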

Then in my Web Client I can verify that the Spanish application language is now available.

That is it!

Here is the PowerShell script

In this post I used both AdvaniaGIT and NAVContainerHelper tools. Good luck.

Business Central Docker on Windows 10

In Advania we are switching more and more to using the Docker images for Dynamics NAV and Business Central development.

Since Windows 10 version 1809, and following the latest blog post from Arend-Jan Kauffmann, we are moving to using the Docker EE engine instead of the Docker Desktop setup.

Using the latest Windows 10 version and the latest version of Docker means that we can now use “Process Isolation” images when running NAV and Business Central.

Running images without process isolation on Windows 10 requires Hyper-V support. Inside Hyper-V a Server Core instance runs as the platform for the processes executed by the container created from the image. With process isolation images the Windows 10 operating system itself is used as the foundation and the Hyper-V Server Core is not needed. Just this little fact can save up to 4GB of memory usage per container.

Freddy Kristiansen announced in this blog that his PowerShell Module, NAVContainerHelper, had support for selecting the proper Docker Image based on the host capabilities.

We have had some issues with our Windows installations and I wanted to give you a heads-up on how these issues were resolved.

First things first, make sure that you are running Windows 10 version 1809 or newer. Execute “winver” from Windows-R to display the Windows version dialog.

Make sure to remove the Hyper-V support if you are not using any virtual machines on your host.

Restart your computer as needed.

Start PowerShell ISE as Administrator.

Copy the “Option 1: Manual installation” script from Arend-Jan’s blog into the script editor in PowerShell ISE and execute it by pressing F5.

If you have older Docker images downloaded, you should remove them. This can be done from your PowerShell ISE prompt.
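
One way to do that is to remove all locally cached images (a sketch; make sure nothing else on the machine still needs them):

    # List the local images, then force-remove them all
    docker images
    docker images -q | ForEach-Object { docker rmi $_ -f }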

Now to the problems we have encountered.

NAVContainerHelper added support for the process isolation images just a few releases ago. Some of our machines had older versions installed and that gave us problems. Check the module version in the PowerShell ISE prompt to make sure you have version 0.5.0.5 or newer.
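
For example:

    # Check which NavContainerHelper versions are available on this machine
    Get-Module -Name navcontainerhelper -ListAvailable | Select-Object Name, Version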

If you have any other versions installed, use File Explorer to delete the “navcontainerhelper” folder from both PowerShell module locations: the all-users module folder and the current-user module folder.
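
If you prefer PowerShell over File Explorer, the same cleanup can be done like this; the two paths are the default all-users and current-user module locations:

    # Remove any locally installed copies of the module from both default module locations
    Remove-Item -Recurse -Force 'C:\Program Files\WindowsPowerShell\Modules\navcontainerhelper' -ErrorAction SilentlyContinue
    Remove-Item -Recurse -Force "$HOME\Documents\WindowsPowerShell\Modules\navcontainerhelper" -ErrorAction SilentlyContinue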

Then execute the install command in the PowerShell ISE prompt to install the latest version and verify the installation.
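
A minimal sketch, assuming the module is installed from the PowerShell Gallery:

    # Install the latest NavContainerHelper and verify the installed version
    Install-Module -Name navcontainerhelper -Force
    Get-Module -Name navcontainerhelper -ListAvailable | Select-Object Name, Version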

We also had problems downloading the images, getting the error “read tcp 172.16.4.17:56878->204.79.197.219:443: wsarecv: An existing connection was forcibly closed by the remote host.”

My colleague at Advania, Sigurður Gunnlaugsson, figured out that multiple download threads caused the network errors.

In the PowerShell ISE prompt, execute a command to remove the docker service and then re-register the docker service with a limit of one concurrent download.
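
A sketch of what that can look like, assuming the Docker EE engine is registered as a Windows service with dockerd (run from an elevated prompt):

    # Re-register the Docker service with a single concurrent download
    Stop-Service docker
    dockerd --unregister-service
    dockerd --register-service --max-concurrent-downloads 1
    Start-Service docker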

This should result in only one download thread and this way our download was able to complete.

More details on Docker images for Dynamics NAV and Business Central can be found here:

Waldo’s Blog on Docker Image Tags

AdvaniaGIT and Docker

Tobias Fenster on Docker

Freddy’s Blog

Use references to break compile dependencies

I was looking into a customer App yesterday. That app had a dependency defined in app.json.

I wanted to look at the real requirements for this dependency. I found 1 (one) place in my code where this dependent App was used.

In Iceland we add a field to the Customer table (Cust."ADV Registration No."). Every business entity in Iceland has a registration number. A company only has one registration number but can have multiple VAT numbers. We already have that registration number field in the Company Information record, but we also add it to the Customer, Vendor and Contact records. The Employee social security number equals the registration number for an individual.

To be able to remove the compile dependency, and therefore the installation dependency, I did the following:

  • Removed the dependency App from app.json
  • Added a variable to the report
  • Changed the data set configuration to use this variable
  • Located the code that fetches the Customer record and added the reference way to get the required data (see the sketch below)
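
A sketch of that reference lookup, written in AL. The field name "ADV Registration No." comes from the text above, while the FindFieldByName helper is reconstructed here and may differ from the original:

    local procedure GetCustomerRegistrationNo(Customer: Record Customer) RegistrationNo: Text
    var
        CustomerRecRef: RecordRef;
        RegistrationNoFieldRef: FieldRef;
    begin
        CustomerRecRef.GetTable(Customer);
        // Look the field up by name instead of referencing it directly,
        // so no compile dependency on the other App is needed
        if FindFieldByName(CustomerRecRef, RegistrationNoFieldRef, 'ADV Registration No.') then
            RegistrationNo := Format(RegistrationNoFieldRef.Value);
    end;

    local procedure FindFieldByName(RecRef: RecordRef; var FldRef: FieldRef; FieldName: Text): Boolean
    var
        FieldIndex: Integer;
    begin
        for FieldIndex := 1 to RecRef.FieldCount do begin
            FldRef := RecRef.FieldIndex(FieldIndex);
            if FldRef.Name = FieldName then
                exit(true);
        end;
        exit(false);
    end;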

There are both positive and negative repercussions of these changes.

The positive is that we can now install and uninstall both apps without worrying about the compile dependency.

The negative is that breaking changes to the dependent App do not break the installation of this customer App.

So, what happens if the dependent App is not installed? The FindFieldByName will return false and the variable will be blank text.

Since we have adopted the policy that Microsoft uses (no breaking table changes), this field should just be there.

If the data is required, and its absence would break the functionality, we can change the code to something like this.
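
A sketch of that variant, reusing the FindFieldByName helper from above; the error text is illustrative:

    local procedure GetRequiredRegistrationNo(Customer: Record Customer) RegistrationNo: Text
    var
        CustomerRecRef: RecordRef;
        RegistrationNoFieldRef: FieldRef;
        MissingFieldErr: Label 'The field ADV Registration No. is not available. Install the registration app first.';
    begin
        CustomerRecRef.GetTable(Customer);
        // Fail fast if the dependent App (and therefore the field) is missing
        if not FindFieldByName(CustomerRecRef, RegistrationNoFieldRef, 'ADV Registration No.') then
            Error(MissingFieldErr);
        RegistrationNo := Format(RegistrationNoFieldRef.Value);
    end;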

Logging your App Activity

It is good practice to have some audit log of what users do in the application. Some versions ago Microsoft introduced the Change Log to log data changes. How about logging an action execution?

One of the built-in solutions in Business Central can be used to solve this. We now have the Activity Log (Table 710).

To use the Activity Log we need to have a record to attach the activity log entries to. All our Apps have a Setup table that usually only has one record. I like to attach my Activity Log to that record.

To show the Activity Log from that record you can add this action to that record’s page.
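
A sketch of such a page action, assuming the ShowEntries procedure of table 710; the action name is illustrative:

    action(ActivityLog)
    {
        ApplicationArea = All;
        Caption = 'Activity Log';
        Image = Log;

        trigger OnAction()
        var
            ActivityLog: Record "Activity Log";
        begin
            // Show the activity log entries that are attached to the setup record
            ActivityLog.ShowEntries(Rec);
        end;
    }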

The logging part can be something like this.
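
A minimal sketch, assuming the LogActivity and SetDetailedInfoFromText procedures of table 710; the setup table name and the texts are illustrative:

    local procedure LogAction(Context: Text[30]; Description: Text; ActivityMessage: Text)
    var
        MyAppSetup: Record "My App Setup";
        ActivityLog: Record "Activity Log";
    begin
        MyAppSetup.Get();
        // Attach the log entry to the setup record
        ActivityLog.LogActivity(MyAppSetup, ActivityLog.Status::Success, Context, Description, ActivityMessage);
        // Optionally attach detailed information, either from a text value or from an InStream
        ActivityLog.SetDetailedInfoFromText('{"details":"more information"}');
    end;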

We also have the possibility to log details, both from a text value and from an InStream.

In Business Central we have information about the execution context. I pass that execution context into the LogActivity. This gives me information on the session that is executing the code.

Using this logic we can log all execution during install, upgrade and normal user cases. If we need information on the variables we can log them into the detailed information using either JSON or XML.

Event subscription and performance

When we design and write our code we need to think about performance.

We have been used to thinking about database performance, using FindFirst(), FindSet(), IsEmpty() where appropriate.

We also need to think about performance when we create our subscriber Codeunits.

Let’s consider this Codeunit.
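
A sketch of what such a Codeunit could look like; the object id and name are illustrative, and the subscribed event is one of the posting events in Codeunit "Sales-Post":

    codeunit 50100 "Sales Post Handler"
    {
        [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
        local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header")
        begin
            // custom code executed for every posted sales document
        end;
    }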

Every time any user posts a sales document this subscriber will be executed.

Executing this subscriber will need to load an instance of this Codeunit into the server memory. After execution the Codeunit instance is trashed.

The resources needed to initiate an instance of this Codeunit and trash it again, for every sales document being posted, are a waste.

If we change the Codeunit and make it a “Single Instance”.
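
The same sketch with the SingleInstance property set:

    codeunit 50100 "Sales Post Handler"
    {
        SingleInstance = true;

        [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
        local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header")
        begin
            // custom code executed for every posted sales document
        end;
    }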

What happens now is that the Codeunit only has one instance for each session. When the first sales document is posted, an instance of the Codeunit is created and kept in memory on the server as long as the session is alive.

This will save the resources needed to initialize an instance and tear it down again.

Making sure that our subscriber Codeunits are set to single instance is even more important for subscribers to system events that are frequently executed.

Note that a single instance Codeunit used for subscription should not have any global variables, since the global variables are also kept in memory throughout the session lifetime.

Make sure that whatever is executed inside a single instance subscriber Codeunit is executed in a local procedure. The variables inside a local procedure are cleared between every execution, also in a single instance Codeunit.

If your custom code executes every time that the subscriber is executed then I am fine with having that code in a local procedure inside the single instance Codeunit.

Still, I would suggest putting the code in another Codeunit, and keeping the subscriber Codeunit as small as possible.

This is even more important if the custom code only executes on a given condition.

An example of a Codeunit that you call from the subscriber Codeunit could be like this.
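
A sketch of such a worker Codeunit; the name and the procedure are illustrative:

    codeunit 50101 "Sales Post Business Logic"
    {
        procedure HandlePostedSalesDocument(var SalesHeader: Record "Sales Header")
        begin
            // the actual work is done here, outside the subscriber Codeunit
        end;
    }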

And I change my subscriber Codeunit to only execute this code on a given condition.
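
Sketched like this; the condition itself is only an example:

    codeunit 50100 "Sales Post Handler"
    {
        SingleInstance = true;

        [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
        local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header")
        var
            SalesPostBusinessLogic: Codeunit "Sales Post Business Logic";
        begin
            // Only hand the work over to the other Codeunit on a given condition
            if SalesHeader."Document Type" = SalesHeader."Document Type"::Order then
                SalesPostBusinessLogic.HandlePostedSalesDocument(SalesHeader);
        end;
    }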

This pattern makes sure that the execution is as fast as possible and no unneeded variables populate the server memory.

User Group Focus 2019


Dynamics 365 Business Central & NAV  
March 13-14, 2019

On March 11th and 12th I will be teaching a VS Code and Modern NAV Development workshop. This course will be held from 8:00am-5:00pm each day.

The goal of the workshop is to learn about the new development tool for Business Central (Dynamics NAV), VSCode, GIT source control management and to experience what AL programming is about 
• What makes AL different from C/AL 
• How do you build and deploy a new BC feature 
• How can I convert my current code into AL 
• How to get ready for publishing your IP to AppSource 
• How to use GIT for your code

On the Developer track I will host three sessions.

Wednesday, March 13, 2019, 10:15 AM – 11:45 AM, Room: Founders III

DEV75: How to Prepare Your Code for AL (BCUG/NAVUG)

Ready to completely re-think your all-in-one C/AL application? How about we try this: figure out how to split the code into “bricks” by functionality and/or processes, then turn that pile of bricks back into a usable solution. Can you migrate your customer data from the all-in-one C/AL database to the new continuous delivery cycle, replacing C/AL bricks with AL bricks? Let’s find out!

Wednesday, March 13, 2019, 4:00 PM – 5:30 PM, Room: Founders II

DEV78: How I Got my Big Database Upgraded to Business Central

Your database upgrade takes longer than your available downtime window – bit of a problem, right? How about when executing all the upgrade processes on your database will take close to 10 days? Yeah, that’s a big problem. Of course you cannot stop a business for 10 days, but how do you shrink that to fit the 30-hour window over the weekend? You’ll hear the real life story and learn about the tools and methods you can use to streamline your upgrades.

Thursday, March 14, 2019, 8:00 AM – 9:30 AM, Room: Founders III

DEV79: Breaking the Compilation Dependencies

Going to the extension model requires a simple structure to allow multiple extensions to talk to each other without having to put all of them into a compile dependency or into the same extension. Applying the standard API pattern inside the Business Central Service tier will give us the possibility to do all required functionality in a fast and easy way. This session is about explaining this pattern and giving some examples on how we have been using this pattern.

JSON Interface – examples

We have several ways of using the JSON interfaces. I will give a few examples with the required C/AL code. I will be using the interfaces from Advania’s Online Banking solution for the examples.

Advania’s Online Banking solution is split into several different modules. The main module has the general framework. Then we have communication modules and functionality modules.

On/Off Question

A communication module should not work if the general framework does not exist or is not enabled for the current company. Hence, I need to ask the On/Off question.

This is triggered by calling the solution enabled Codeunit.

The interface function will search for the Codeunit, check for execution permissions and call the Codeunit with an empty request BLOB.

The “Enabled” Codeunit must respond with a “Success” variable of true or false.

The “Enabled” Codeunit will test for Setup table read permission and check whether the “Enabled” flag has been set in the default record.
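
A sketch of what such an “Enabled” interface Codeunit can look like, written here in AL syntax. The object names, the setup table and the helper Codeunit name are illustrative; the Json procedures are the ones described in the prerequisites post:

    codeunit 50110 "ADV Module Enabled Interface"
    {
        TableNo = TempBlob;

        trigger OnRun()
        var
            ModuleSetup: Record "ADV Module Setup";
            JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
            Success: Boolean;
        begin
            // Respond with Success = true only if the setup can be read and the module is enabled
            if ModuleSetup.ReadPermission() then
                if ModuleSetup.Get() then
                    Success := ModuleSetup.Enabled;

            JsonInterfaceMgt.Initialize();
            JsonInterfaceMgt.AddVariable('Success', Success);
            JsonInterfaceMgt.GetAsTempBlob(Rec);
        end;
    }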

This is how we can make sure that a module is installed and enabled before we start using it or any of the dependent modules.

Table Access Interface

The main module has a standard response table. We map some of the communication responses to this table via Data Exchange Definition. From other modules we like to be able to read the response from the response table.

The response table uses a GUID value for a primary key and has an integer field for the “Data Exchange Entry No.”. From the sub module we ask if a response exists for the current “Data Exchange Entry No.” by calling the interface.

The Interface Codeunit for the response table will filter on the “Data Exchange Entry No.” and return the RecordID for that record if found.

If the response is found we can ask for the value of any field from that record through the interface.
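
Sketched in AL, the calling side could look something like this; the interface Codeunit name and the variable name are illustrative, the helper procedures are the ones listed in the prerequisites post:

    local procedure ResponseExists(DataExchEntryNo: Integer; var ResponseRecRef: RecordRef): Boolean
    var
        TempBlob: Record TempBlob;
        JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
    begin
        // Ask the main module whether a response exists for this Data Exchange entry
        JsonInterfaceMgt.Initialize();
        JsonInterfaceMgt.AddVariable('DataExchEntryNo', DataExchEntryNo);
        JsonInterfaceMgt.GetAsTempBlob(TempBlob);
        JsonInterfaceMgt.ExecuteInterfaceCodeunitIfExists('ADV Response Interface', TempBlob, '');

        // The interface puts the Record ID of the response record into the response JSON
        JsonInterfaceMgt.InitializeFromTempBlob(TempBlob);
        exit(JsonInterfaceMgt.GetRecord(ResponseRecRef));
    end;

Once the RecordRef is returned, any field value can be read from it, for example with a FieldRef.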

Processing Interface

Some processes can be executed both automatically and manually. For manual execution we like to display a request page on a Report. On that request page we can ask for variables and settings, and verify them before executing the process.

For automatic processing we have default settings and logic to find the correct variables before starting the process. And since one module should be able to start a process in the other, we use the JSON interface pattern for the processing Codeunit.

We also like to include the “Method” variable to add flexibility to the interface, even if there is only one method in the current implementation.

Reading through the code above we can see that we are also using the JSON interface to pass settings to the Data Exchange Framework. We put the JSON configuration into the “Table Filters” BLOB field in the Data Exchange, where we can use it later in the data processing.

From the Report we start the process using the JSON interface.

The ExecuteInterfaceCodeunitIfExists will also verify that the Interface Codeunit exists and also verify the permissions before executing.
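
As a sketch, starting the process from the Report could look like this; the interface Codeunit name, the method name and the variables are illustrative:

    local procedure StartProcess(BankAccountNo: Code[20])
    var
        TempBlob: Record TempBlob;
        JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
        ModuleMissingErr: Label 'The statement processing module is not installed.';
    begin
        JsonInterfaceMgt.Initialize();
        JsonInterfaceMgt.AddVariable('Method', 'ImportStatement');
        JsonInterfaceMgt.AddVariable('BankAccountNo', BankAccountNo);
        JsonInterfaceMgt.GetAsTempBlob(TempBlob);
        // Verifies that the interface Codeunit exists and that we have permission to run it
        JsonInterfaceMgt.ExecuteInterfaceCodeunitIfExists('ADV Statement Processing Interface', TempBlob, ModuleMissingErr);
    end;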

Extensible Interface

For some tasks it might be simpler to have a single endpoint (Interface Codeunit) for multiple functionalities. This can be achieved by combining Events and Interfaces.

We start by reading the required parameters from the JSON and then we raise an event for anyone to respond to the request.

We can also pass the JSON Interface Codeunit itself, as it contains the full request JSON and will also carry the full JSON for the response.

One of the subscribers could look like this.
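
Sketched in AL; the event, the method name and the handled variable are illustrative:

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"ADV Extensible Interface", 'OnHandleMethod', '', false, false)]
    local procedure OnHandleMethod(Method: Text; var JsonInterfaceMgt: Codeunit "Json Interface Mgt."; var Handled: Boolean)
    var
        CustomerNo: Text;
    begin
        if Method <> 'GetCreditLimit' then
            exit;

        // The full request JSON is available through the passed interface Codeunit
        if not JsonInterfaceMgt.GetVariableTextValue(CustomerNo, 'CustomerNo') then
            exit;

        // ... do the work and put the answer back into the response JSON ...
        JsonInterfaceMgt.AddVariable('CreditLimit', 0);
        Handled := true;
    end;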

Registration Interface

This pattern is similar to the discovery pattern, where an Event is raised to register possible modules into a temporary table. An example of that is the “OnRegisterServiceConnection” event in Table 1400, Service Connection.

Since we can’t have an Event Subscriber in one module listening to an Event Publisher in another without having compile dependencies, we have come up with a different solution.

We register functionality from the functionality module, and the list of modules is stored in a database table. The table uses a GUID and the Language ID for a primary key, and the view is then filtered by the Language ID to only show one entry for each module.

This pattern gives me a list of possible modules for that given functionality. I can open the Setup Page for that module and I can execute the Interface Codeunit for that module as well. Both the Setup Page ID and the Interface Codeunit ID are object names.

The registration interface uses the Method variable to select the functionality. It can either register a new module or it can execute the method in the modules.

In the “ExecuteMethodInApps” function I use filters to make sure that each Interface Codeunit is only executed once.

The registration is executed from the Setup & Configuration in the other module.

Extend functionality using the Registered Modules

As we have been taught, we should open our functionality to other modules. This is done by adding Integration Events to our code.

When the Subscriber that needs to respond to this Publisher is in another module, we need to extend the functionality using JSON interfaces.

First, we create a Codeunit within the Publisher module with Subscribers. The parameters in the Subscribers are converted to JSON and passed to the possible subscriber modules using the “ExecuteMethodInApps” function above.

The module that is extending this functionality will be able to answer these requests and supply the required response.
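
A sketch of such a bridging subscriber inside the publisher module. The event, the method name and the parameters are illustrative, and ExecuteMethodInApps is assumed here to take the TempBlob with the request:

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"ADV Notification Mgt.", 'OnSendNotification', '', false, false)]
    local procedure OnSendNotification(RecipientNo: Code[20]; MessageText: Text)
    var
        TempBlob: Record TempBlob;
        JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
    begin
        // Convert the event parameters to JSON and fan the request out to the registered modules
        JsonInterfaceMgt.Initialize();
        JsonInterfaceMgt.AddVariable('Method', 'SendNotification');
        JsonInterfaceMgt.AddVariable('RecipientNo', RecipientNo);
        JsonInterfaceMgt.AddVariable('MessageText', MessageText);
        JsonInterfaceMgt.GetAsTempBlob(TempBlob);
        ExecuteMethodInApps(TempBlob);
    end;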

Azure Function

The last example we will show is the Azure Function. Some functionality requires execution in an Azure Function.

By making sure that our Azure Function understands the same JSON format used in our JSON Interface Codeunit we can easily prepare the request and read the response using the same methods.

We have the Azure Function execution in that same JSON Codeunit. Hence, we can easily prepare the request and call the function in a similar way as for the other interfaces.

The request JSON is posted to the Azure Function and the result read with a single function.
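
Sketched with the helper procedures listed in the prerequisites post; the method and variable names are illustrative:

    local procedure GetExchangeRate(CurrencyCode: Code[10]) ExchangeRate: Decimal
    var
        JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
    begin
        // Prepare the request JSON just like for an internal interface
        JsonInterfaceMgt.Initialize();
        JsonInterfaceMgt.AddVariable('Method', 'GetExchangeRate');
        JsonInterfaceMgt.AddVariable('CurrencyCode', CurrencyCode);

        // Post the request to the Azure Function and read the response with the same helper
        if JsonInterfaceMgt.ExecuteAzureFunction() then
            if JsonInterfaceMgt.GetVariableDecimalValue(ExchangeRate, 'ExchangeRate') then;
    end;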

We use the “OnBeforeExecuteAzureFunction” event with a manual binding for our Unit Tests.

In the Azure Function we read the request with standard JSON functions

Then based on the Method we call each functionality with the request and write the response to the response JSON.

Conclusion

Having standard ways of talking between modules and solutions has opened up a lot of flexibility. We like to keep our solutions as small as possible.

We could mix “Methods” and “Versions” if we at a later time need to be able to extend some of the interfaces. We need to honor the contract we have made for the interfaces. We must not make breaking changes to the interfaces, but we sure can extend them without any problems.

By attaching the JSON Interface Codeunit to the post I hope that you will use this pattern in your solutions. Use the code freely. It is supplied as-is and without any responsibility, obligations or requirements.

JSON Interface – prerequisites

There are two objects we use in all JSON interfaces. We use the TempBlob table and our custom JSON Interface Codeunit.

Abstract

A JSON interface uses the same concept as a web service. The endpoint is defined by the Codeunit name, and the caller always supplies a form of request data (JSON) and expects response data (JSON).

These interface calls are therefore internal to the Business Central (NAV) server and are very fast. All the data is handled in memory only.

We define these interfaces by Endpoints. Some Endpoints have Methods. We call these Endpoints with a JSON. The JSON structure is predefined and every interface respects the same structure.

We have a single Codeunit that knows how to handle this JSON structure. Passing JSON to an interface requires a data container.

Interface Data

TempBlob is table 99008535. The table is simple but it has a lot of useful procedures.

Wikipedia says: A Binary Large OBject (BLOB) is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Database support for blobs is not universal.

We use this BLOB for our JSON data when we send a request to an interface and the interface response is also JSON in that same BLOB field.

For people that have been working with web requests we can say that TempBlob.Blob is used both for RequestStream and for ResponseStream.

TempBlob is only used as a form of Stream. We never use TempBlob to store data. We never do TempBlob.Get() or TempBlob.Insert(). And, even if the name indicates that this is a temporary record, we don’t define the TempBlob Record variable as temporary. There is no need for that since we never do any database call for this record.

Interface Helper Codeunit

We use a single Codeunit in all our solutions to prepare both request and response JSON and also to read from the request on the other end.

We have created a Codeunit that includes all the required procedures for the interface communication.

We have three functions to handle the basics;

  • procedure Initialize()
  • procedure InitializeFromTempBlob(TempBlob: Record TempBlob)
  • procedure GetAsTempBlob(var TempBlob: Record TempBlob)

A typical flow of execution is to start by initializing the JSON. Then we add data to that JSON. Before we execute the interface Codeunit we use GetAsTempBlob to write the JSON into TempBlob.Blob. Every Interface Codeunit expects a TempBlob record to be passed to its OnRun() trigger.

Inside the Interface Codeunit we initialize the JSON from the passed TempBlob record. At this stage we have access to all the data that was added to the JSON on the request side.

And, since the interface Codeunit will return TempBlob as well, we must make sure to put the response JSON in there before the execution ends.
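
A sketch of both sides of that flow, written in AL; the object names, the method and the variables are illustrative, the helper procedures are the ones described in this post:

    codeunit 50120 "ADV Status Interface"
    {
        // The interface side: OnRun receives the TempBlob, reads the request and writes the response
        TableNo = TempBlob;

        trigger OnRun()
        var
            JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
            Method: Text;
        begin
            JsonInterfaceMgt.InitializeFromTempBlob(Rec);
            if JsonInterfaceMgt.GetVariableTextValue(Method, 'Method') then;

            JsonInterfaceMgt.Initialize();
            JsonInterfaceMgt.AddVariable('Status', 'OK');
            JsonInterfaceMgt.GetAsTempBlob(Rec);
        end;
    }

    codeunit 50121 "ADV Status Caller"
    {
        // The calling side: build the request, execute the interface, read the response
        procedure GetStatus() StatusText: Text
        var
            TempBlob: Record TempBlob;
            JsonInterfaceMgt: Codeunit "Json Interface Mgt.";
        begin
            JsonInterfaceMgt.Initialize();
            JsonInterfaceMgt.AddVariable('Method', 'GetStatus');
            JsonInterfaceMgt.GetAsTempBlob(TempBlob);

            JsonInterfaceMgt.ExecuteInterfaceCodeunitIfExists('ADV Status Interface', TempBlob, '');

            JsonInterfaceMgt.InitializeFromTempBlob(TempBlob);
            if JsonInterfaceMgt.GetVariableTextValue(StatusText, 'Status') then;
        end;
    }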

JSON structure

The JSON is an array that contains one or more objects. A JSON array is represented with square brackets.

The first object in the JSON array is the variable storage. This is an example of a JSON that passes two variables to the interface Codeunit.

All variables are stored in the XML format, using FORMAT(<variable>,0,9) and evaluated back using EVALUATE(<variable>,<json text value>,9). The JSON can then have multiple record related objects after the variable storage.

Adding data to the JSON

We have the following procedures for adding data to the JSON;

  • procedure AddRecordID(Variant: Variant)
  • procedure AddTempTable(TableName: Text; Variant: Variant)
  • procedure AddFilteredTable(TableName: Text; FieldNameFilter: Text; Variant: Variant)
  • procedure AddRecordFields(Variant: Variant)
  • procedure AddVariable(VariableName: Text; Value: Variant)
  • procedure AddEncryptedVariable(VariableName: Text; Value: Text)

I will write a more detailed blog about each of these methods and give examples of how we use them, but for now I will just do a short explanation of their usage.

If we need to pass a reference to a database table we pass the Record ID. Inside the interface Codeunit we can get the database record based on that Record ID. Each Record ID that we add to the JSON is stored with the Table Name, and we use either of these two procedures to retrieve the record.

  • procedure GetRecord(var RecRef: RecordRef): Boolean
  • procedure GetRecordByTableName(TableName: Text; var RecRef: RecordRef): Boolean

If we need to pass more than one record we can pass all records inside the current filter and retrieve the result with

  • procedure UpdateFilteredTable(TableName: Text; KeyFieldName: Text; var RecRef: RecordRef): Boolean

A fully populated temporary table with table view and table filters can be passed to the interface Codeunit by adding it to the JSON by name. When we use

  • procedure GetTempTable(TableName: Text; var RecRef: RecordRef): Boolean

in the interface Codeunit to retrieve the temporary table we will get the whole table, not just the filtered content.

We sometimes need to give interface Codeunits access to the record that we are creating. Similar to the OnBeforeInsert() system event. If we add the record fields to the JSON we can use

  • procedure GetRecordFields(var RecRef: RecordRef): Boolean

on the other end to retrieve the record and add or alter any field content before returning it back to the caller.

We have several procedures available to retrieve the variable values that we pass to the interface Codeunit.

  • procedure GetVariableValue(var Value: Variant; VariableName: Text): Boolean
  • procedure GetVariableTextValue(var TextValue: Text; VariableName: Text): Boolean
  • procedure GetVariableBooleanValue(var BooleanValue: Boolean; VariableName: Text): Boolean
  • procedure GetVariableDateValue(var DateValue: Date; VariableName: Text): Boolean
  • procedure GetVariableDateTimeValue(var DateTimeValue: DateTime; VariableName: Text): Boolean
  • procedure GetVariableDecimalValue(var DecimalValue: Decimal; VariableName: Text): Boolean
  • procedure GetVariableIntegerValue(var IntegerValue: Integer; VariableName: Text): Boolean
  • procedure GetVariableGUIDValue(var GuidValue: Guid; VariableName: Text): Boolean
  • procedure GetVariableBLOBValue(var TempBlob: Record TempBlob; VariableName: Text): Boolean
  • procedure GetVariableBLOBValueBase64String(var TempBlob: Record TempBlob; VariableName: Text): Boolean
  • procedure GetEncryptedVariableTextValue(var TextValue: Text; VariableName: Text): Boolean

We use Base64 methods in the JSON. By passing the BLOB to TempBlob.Blob we can add it as a variable on one end, and then read it with one of the BLOB procedures listed above on the other end, to pass binary content like images or PDFs.

Finally, we have the possibility to add and encrypt values that we place in the JSON. On the other end we can then decrypt the data to be used. This we use extensively when we pass sensitive data to and from our Azure Function.

Calling an interface Codeunit

As promised I will write more detailed blogs with examples. This is the current list of procedures we use to call interfaces;

  • procedure ExecuteInterfaceCodeunitIfExists(CodeunitName: Text; var TempBlob: Record TempBlob; ErrorIfNotFound: Text)
  • procedure TryExecuteInterfaceCodeunitIfExists(CodeunitName: Text; var TempBlob: Record TempBlob; ErrorIfNotFound: Text): Boolean
  • procedure TryExecuteCodeunitIfExists(CodeunitName: Text; ErrorIfNotFound: Text) Success: Boolean
  • procedure ExecuteAzureFunction() Success: Boolean

The first two expect a JSON to be passed using TempBlob. The third one we use to check for a simple true/false. We have no request data but we read the ‘Success’ variable from the response JSON.
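
A sketch of how the name lookup behind ExecuteInterfaceCodeunitIfExists can be implemented; a permission check before running the Codeunit is omitted here:

    procedure ExecuteInterfaceCodeunitIfExists(CodeunitName: Text; var TempBlob: Record TempBlob; ErrorIfNotFound: Text)
    var
        AllObjWithCaption: Record AllObjWithCaption;
    begin
        // Look the interface Codeunit up by name, like a domain name lookup
        AllObjWithCaption.SetRange("Object Type", AllObjWithCaption."Object Type"::Codeunit);
        AllObjWithCaption.SetRange("Object Name", CodeunitName);
        if not AllObjWithCaption.FindFirst() then begin
            if ErrorIfNotFound <> '' then
                Error(ErrorIfNotFound);
            exit;
        end;

        // TempBlob carries the request JSON in and the response JSON out
        Codeunit.Run(AllObjWithCaption."Object ID", TempBlob);
    end;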

For some of our functionality we use an Azure Function. We have created our function to read the same JSON structure we use internally. We also expect our Azure Function to respond with the same JSON structure. By doing it that way, we can use the same functions to prepare the request and to read from the response as we do for our internal interfaces.

Compile Dependencies

As we move from C/AL and all-in-one database we need to redesign how we make one application or a functionality available to the next.

Microsoft recommends that we build multiple AL extensions, not putting all our code into a single extension.

My goal is to be able to develop, test and execute an extension regardless of whether other extensions are installed or not. Some functionality may depend on other extensions. Still, I don’t want to add compile dependencies between these extensions.

For just over a year we have been designing and implementing methods to solve this task. In the coming blog entries I will be sharing these methods one by one. I hope these JSON interface methods will help you solve your tasks as well.

At the last Directions EMEA I had a talk about the path Advania is on from a C/AL all-in-one database to multiple AL extensions. Below is the content of that talk. Feel free to skip that reading and wait for the upcoming blogs about the technical solutions.

From the start we have been calling this project Lego®. I will spend some time on detailing the path we have been on for almost three years. We have not yet started converting to AL but it seems that the tooling that Microsoft delivers with the fall release of Business Central will be the one we can use to start the conversion from C/AL to AL. We have done some tests and they are promising.

This new way of doing things has enabled new methods of doing our business and opened up a new dialog with our customers.

A brick should be able to stand alone. The only dependencies allowed are on the standard Microsoft code and on Advania’s IS localization that we call IS365. There are ways to make the solution or functionality available in the base application without having to modify any of the standard objects. A discovery pattern is where the base application makes a call for Assisted Setup, Business Setup or Service Connections. By responding to these calls, these events, the setup for the customized functionality is made accessible to the user.

The interface pattern that we created is the key here.  Using this interface pattern we are able to break compile dependencies without breaking the connection between individual solutions and functionalities.  I will spend some time on detailing how we have implemented this interface pattern in our solutions.

When Microsoft talks about a slice in their development they are talking about a similar thing as we do when we talk about a brick: a piece of code that they check in and that needs to work independently. The main difference is in dependencies and connectivity. A slice can be dependent on another slice, where a brick as we define it can’t have compile dependencies on any other brick.

Object-wise, a brick can vary greatly in size and complexity. Some bricks only have a handful of objects. Other bricks can have hundreds of objects. The largest bricks are our ISV solutions. Still we have broken most of them down to smaller bricks. Even if Iceland is small in population, we have most of the same complexity as other countries when it comes to delivering quality software. We need to supply our customers with solutions to handle all the common business processes. Icelandic partners have been very independent, and our extensive experience with Business Central and previous versions means that we already have all the solutions for all these business processes. These solutions we need to handle as a product and we keep all coding in line with our new design principles.

We also have a number of bricks that have only one functionality. As an example, one brick can download the Post Code list for Iceland for update in the application.  Another brick can interface with the online banking web services.  The third one can use the Hardware Hub to scan documents or read a serial connected bar code reader.

These small bricks also have a way to register themselves, most of them in Service Connections. Some of them register themselves as added functionality in other bricks.

One example is our Notification App. That app is used to send notifications to users. These notifications can be sent both via email and via the built-in user notifications on the role center. We have another app that has only one functionality: it enables sending an SMS via one of the telecommunication service providers in Iceland. By installing and configuring the SMS app, the notification app extends its functionality and enables notifications via SMS.

The SMS app has an interface that enables all other apps to verify its availability and use it to send SMS messages.

About three years ago, when we came home from NAV TechDays, we brought Kamil Sacek, an MVP from the Czech Republic, with us. Kamil had already done some source control management solutions for his company using TFS. At that time everyone wanted to move to GIT and Microsoft was moving to GIT as well. We spent a week up in Iceland applying Kamil’s knowledge and his scripts to an in-house GIT source control management system. We had our preferences and our design principles, and we boiled everything together and started our road to source control management. At that time AdvaniaGIT was born. AdvaniaGIT is the toolbox that we use in our day to day work.

GIT has repositories and branches within repositories.  GIT has a lot more but that is where we started.  We started our NAV 2016 work on GIT and decided not to apply this way of work to older releases.  Most of our NAV 2016 work was done this way.  Some of course not – but that was not a technical problem, it takes time to update the employee mentality and knowledge. 

The first branch in a repository is the master branch.  In there we store Microsoft’s base app, the W1 code base.  Looking at the GIT history for the master branch, we can see every CU of every release since NAV 2016.

Since we are one of the favorite countries for NAV, and now Business Central, we also have a Microsoft Icelandic release. That release, in every CU since NAV 2016, is also available in our IS branch. The branching structure for C/AL is very different from the branching structure for AL. For C/AL we have one repository for each NAV and Business Central version. For AL we have one repository for each app and one for each customer.

As most of you probably realize, there is a gap between the localized version delivered by Microsoft and the one a partner likes to offer to the local market. The same thing applies in Iceland. This gap we are closing with multiple bricks. There is, however, a set of functionality and data that we must add to the base application to be used in multiple bricks. Every Icelander has a social security number, and every company has a registration number following the same structure. This field we add to customers, vendors, contacts and all types of bank accounts.

We call this a base branch.  A branch that we base every brick on and compare every brick to.

Every month we receive a new CU. We always develop on the latest CU. When a new update is available we update our NAV and Business Central installations and update the object export both with the W1 release in the master branch and with the Icelandic release in the IS branch.

We then use the standard GIT Merge functionality to merge the updates from the IS branch to the IS365 branch.  Merging all the updates from Microsoft into our own base localization branch.  By updating the IS365 base branch with a new update every build that is based on that branch will use the updated code automatically.

Every time a developer opens a branch to work on the first thing that must be done is to use the GIT Merge from the IS365 branch.  By doing this we make sure that we are developing on the latest update and comparing our brick to the current update that is contained in the IS365 branch.  When development is done all objects are exported back to GIT and the brick can be compared to the base IS365 branch and the deltas can be extracted and stored.

This is us in releases prior to NAV 2016.  Building a solution for a customer was a lot of work.  Manually bringing all the code together into a solution and then if we wanted to update with a new CU that also was a lot of work.

Every solution and every functionality installed was very closely linked and we could easily call a function in one solution from another solution.  Life was good and the developer was happy.  Had everything available and had everything under control.

The problem here is that this was not a good business.  Applying an update to customers was a lot of work.  It was very hard to use the same code and the same solution for multiple customers.  Our shelf products needed to be manually added to the solution.  And we had no automated testing to ensure the quality of the code and solution.

Not to mention upgrades.  No one really knew what was being used and what not.  No one was able to keep track of all the changes done.  We did have some comments in the code.  Some helpful, some not.  Big upgrades are very expensive both for the customer and also for the partner.

Microsoft also saw this as a problem and started to work on Events and Extensions. After three years of that work, Extensions are brilliant. It is very important to see where Microsoft is going, not where they are. When we see where Microsoft is going we need to align ourselves with that future. We could not wait for three years for Extensions to be ready and then start working. The endpoint was clear: Microsoft base application with Advania Extensions was the way to go. The road to that solution was not as clear. That is when our LEGO® method was born.

It was clear that in order to move to Extensions we first needed to break everything apart. Take all the bricks apart and put them in a pile on the desk. We started this in NAV 2016. But in NAV 2016 we played with LEGO® Duplo, the big bricks. We put a lot of the functionality in our core app. Still we were able to use the method, and we were able to build customer solutions and apply new CUs without too much work. We already got a lot of repeatability there. This took a few months, but about six months after the NAV 2016 release we were able to start updating our customers to NAV 2016 solutions built with the brick methods. These customers we were able to update every month with new CUs. Still we did not update all of them. Remember that I said that was not a technical problem. Our staff has the same problem as our customers. We tend to fall back to the way we are used to doing things.

We were still working on NAV 2016 when NAV 2017 was released, and we did not move our bricks to NAV 2017. We just spent a little more time educating our staff and preparing for the small bricks. When NAV 2018 was released we started at full force. Each brick was a potential Extension and what used to be in one brick is now in multiple bricks. We spent some time on the code we had for NAV 2016 to figure out where to break things apart and what to keep together. Everything we coded in C/AL in NAV 2018 has to follow the same rules that are set for Extension development. We removed DotNet variables one by one and made sure not to change anything in the standard application.

As mentioned earlier, we have some data requirements that are commonly used in our bricks. We also have some verification methods, both for registration numbers and bank account numbers. There are some shared events and some shared functionality.

We did need to use some DotNet variables. We also needed to add some events to the base application. When we needed to do this we did it in the base IS365 brick. And everything we did to break the Extension model we sent to Microsoft. Through Microsoft’s AL Issues on GitHub we were able to request new events. Through Microsoft’s cal-open-library on GitHub we were able to submit requests for DotNet wrappers. So we broke the Extension rules in NAV 2018 only in the base app, knowing that in Business Central Microsoft would deliver everything we needed to get our changes out of the base application.

More complex DotNet usage we moved to Azure Functions.  A simple REST web request from our brick can trigger complex tasks using Azure Function.  This has worked beautifully for us.

We created a few Codeunits with helper functionality, Codeunits to handle some common tasks. As an example, we have one Codeunit that every brick can use both to call interfaces and to create the response from an interface. We also like to use the built-in Error Messages but needed some added functionality. It made sense to have that functionality in the base app and make it available to all the bricks.

We call Azure Functions via the base app and we have ways to handle files from Azure Blob, Azure File System and Dropbox via the base app as well. 

We use interfaces to communicate between different bricks.  We can think about an interface like we think about a REST web service, or an Azure Function.  We supply a request JSON string to the interface and the interface response is also a JSON string.

For this we use the JSON Handler Codeunit in the base app, the IS365.

A REST web service has an http endpoint. Our endpoints are identified by the Codeunit name. As required by AppSource, we are using the prefix ADV for all objects, fields and functions in our bricks. Therefore we can be sure that our Codeunit name is unique and we know who we are talking to. We can think about this name in a similar way to a domain name. We use a domain name lookup to find the correct IP address for the domain, and then the request is sent to that IP address. We follow the same rule with our Codeunit interfaces. We look the Codeunit up by name, and if it exists and we have the required permissions to use it, we get the object Id. That object Id we can then use in a simple CODEUNIT.RUN method.

Our JSON Handler Codeunit has methods to execute a Codeunit with or without the JSON string parameter.  The On/Off methods are called without any parameters but all the others are using the TempBlob record where both the JSON request string and response string are stored in the Blob field.

One of our rules is that every brick must have a setup table, a setup page and an interface that can respond to an enabled query.

If we look at the first code line where we ask if the brick is installed, we can see that this is similar to doing a lookup for a domain name. 

The second line, where we ask if the brick is enabled, is simply doing an IP ping.

No matter how complex or simple the brick solution or functionality is, we always require a setup table with the enabled flag. We can make sure that all logic and all required setup data is available when the user tries to enable the solution. We can even ping another brick if that one is required before allowing a user to enable the solution.

The C/AL code here is pretty simple.  The Codeunit will throw an error if the brick is not configured and enabled. 

The JSON interface is the most used scenario. Using the JSON Helper Codeunit we can add variables and records, both temporary and non-temporary, to the JSON string. For records that are temporary, every record in the table is stored in the JSON string and the table filters and table key are also included. For database tables we can send both a record Id and field values.

We prepare our request by adding to the json string.  The same thing is done inside the interface Codeunit when the response json string is created.  It starts with the Initialize line.

We read from the request JSON string with the InitializeFromTempBlob procedure. After that we have easy access to all the information we store in the JSON string. We applied the same JSON structure to our Azure Functions. The functions read the request from the JSON string created by this Helper Codeunit, and the response from the Azure Function is read by this same JSON Helper Codeunit.

A JSON interface Codeunit will do some work and respond with a result.

There are some cases where we need to trigger an Event in another brick. We can also enable this with a JSON event interface. See the code. In this case the Integration Event is a local procedure inside the same Codeunit. This Integration Event could just as well be a public procedure in another C/AL or AL object.

We need to supply the required parameters for the Integration Event to execute properly, using the same JSON Helper Codeunit as before. In this case we are also passing the TempBlob to the Event, but that is not a requirement in our pattern. An Event Interface is not required to have a response JSON.

We also use JSON registration interfaces to register a functionality in another brick. When we have a configuration where the user can select a functionality based on a number of lines, we should be able to add to that selection from another brick with a Registration Interface. In this example we have a new type of bank connection that is available when a brick is installed. It needs to be available in another brick that has the bank connection framework. We add this new method for bank connection using the registration interface and include information about the setup page and the assisted setup page. This information is also in a text format, the page names. We use the same name lookup instead of focusing on the object ids.

Same thing happens here, we have a list of bank interfaces that can come from multiple bricks, each handling that specific type of bank connection.

Then finally, let’s talk about the Data Exchange Framework. Whenever we need to transfer data in or out of the application we use the Data Exchange Framework. When we receive an Xml response inside the JSON response from an Azure Function, we pass that Xml to the Data Exchange Framework to work with. When we need to create an Xml file to send inside a JSON request to the Azure Function, we also create that Xml file with the Data Exchange Framework.

Another example is in our Payroll solution. The Payroll solution can import data to be used in the salary calculation. That import starts a Data Exchange import process. By doing it that way we can map any type of file to the import structure in our Payroll solution. We already had this done in NAV 2016 for external time registration solutions. On the other hand, we also have a time registration solution in NAV, and we used to have an action inside the Payroll that would start a report in the time registration module. In that time registration report we had both the time registration tables and the payroll tables, and we just moved the required data between these two tables. This we can no longer do, as these two solutions are in two separate bricks.

The solution here was to agree on an Xml format that we can use to pass data from the time registration into the standard payroll import methods. We can easily specify a Codeunit from the time registration brick in the data exchange definition that is used by the payroll import function. That time registration Codeunit then creates the Xml data that we can import.

Now to the configuration.  In every branch we have one configuration file.  This file is named setup.json and it is a simple JSON file that is easy to edit in VS Code.  Our AdvaniaGIT tools require this configuration in order to pick the correct versions and build the correct environment, both for the developer and on the build server.  Remember that we are still talking about C/AL.  We need to have information about the base branch for each brick.  Every branch must have a unique id.  Similar to that every AL app must have a unique Id.

When the developer starts working the local development environment is built and configured in line with this config.  One config line can be added, and AdvaniaGIT will start a Docker Container instead of using the locally installed NAV or Business Central.  The environment that is built is linked to the branch with the branch id and every action in AdvaniaGIT uses that link to work with the correct development environment.  We store all objects as text files in our branches.  We need this to be able to compare our changed objects to a set of standard objects.

When we start the development work on a new solution, a new functionality or a new customer we need to create a new branch.  After creating the new branch we code in C/AL and export the code changes to GIT.  The code changes are then converted to delta files and used in builds.

Every brick branch also has a setup.json.  This setup.json points to the IS365 as the base branch.  The brick branch has a different object id offset and a different branch id.

A brick branch will contain delta files that will describe the changes that brick makes to the IS365 branch.  These changes must comply to the Extension rules to be able to merge without errors along with all the other bricks.

We do a daily build that we call Test Customer where we build the IS365 branch with all the bricks merged into a single solution to make sure that we can always merge all the bricks without errors.

Each customer has its own branch as well. The customer branch is also based on the IS365 branch. In this example we are building a solution that we call PUB2018. This is the solution we use in Advania’s public cloud. Notice that here we specify a delta branch list. This is a list of bricks that are to be merged into this customer solution. The build server will then take care of that. If there are any customizations for this customer, they are done in the same way as we did in every brick and stored as delta files in the customer branch. Same goes for the customer unit tests; they are also stored on the customer branch.

For the customers that we are implementing NAV 2018 for we do the base build in C/AL.  However, all the customization done for the customer is done in AL.  The AL code is automatically built on the build server and deployed to our cloud services during the update period each night.  A change that we commit will be built instantly by the build server and if successful the new App file will be copied to the cloud servers and applied during the next upgrade window.

Over to the builds.  This is an essential part of the development process.  Nothing is finished until it has gone through build and test.  The build server is configured to start Docker Containers and uses the same AdvaniaGIT functions that the developer uses to build, merge and test the environments.  We store database backups on an ftp server.  These backups are brought in when the build process starts and updated after a successful build.

We have configured build processes that result in FOB files, in AL symbol package files, in AL apps, in database backups and database bacpacs.

Remember the delta branch list that I talked about.  That delta branch list is for the build server to select what to merge during the build.

After a successful build we have the option to deploy the result.  This we have also automated.  The build server can update environments by importing a new fob or by updating the AL app installed.  We can deploy the artifacts both internally in Advania’s cloud and also directly to the customers on premise solution.

We also like to automate the data upgrade when we are planning to update a customer or a solution from one version to another. In that case we have a multi tenant setup on the upgrade machine. We bring in the latest backup of the production database. We restore that database and remove the old application from the database. Running on that machine is a multi tenant NAV instance with the required upgrade code and the required configuration to run long SQL transactions. We next mount the data database to this application and execute the sync and upgrade tasks. If successful, we dismount the data and mount it to another multi tenant NAV instance. This NAV instance has the acceptance code and we update that one by automatically importing a fob file from the latest customer build.

By automating this we can execute multiple data upgrades for the customer to verify the process before everything is ready to put the new release into production.

Finally we turn to the hot topic.  We have not started this task yet, but we will soon after we arrive back home.  Business Central now is fully capable of supporting all our needs in AL.  We can, if we like, use DotNet variables for local deployments.  We can select to mix C/AL bricks and AL bricks.  The beauty is that we can now start converting our IP brick by brick.  As we applied the extensions rules both with code change and naming in C/AL we will be able to convert the C/AL code to AL and with minimal changes we should be able to create the AL app.  An AL app does not extend the standard tables like C/AL did.  It creates companion tables based on the table that is to be extended and the App GUID.  We therefore must move the C/AL data out of the original place and into temporary upgrade tables.  By using record references in our AL install Codeunit we can bring the data into our installed app if there are upgrade tables from the C/AL brick.  Since we are using the same object and field numbers we can not use the obsolete field option that keeps the C/AL data in place, inaccessible for the user but still accessible by the AL install Codeunit.

We expect that removing the C/AL brick and replacing with AL will be a simple process when these guidelines are followed.

Here is a short overview of our tools. We use the Atlassian products: Jira for issues, Bitbucket as the GIT server and Bamboo as the build server. This is not the only way. We can use both GitHub and Microsoft DevOps with the same tools that we have.

We use AdvaniaGIT. AdvaniaGIT is a set of PowerShell modules and a VS Code extension. It is available on GitHub and please, if you can, use it.

AdvaniaGIT: Configure Build and Test using Visual Studio Online

The goal of this post is to demo from start to finish the automated build and test of an AL solution for Microsoft Dynamics 365 Business Central.

Configure build steps

The build steps using AdvaniaGIT are similar to the build steps Soren describes here.

The first two steps and the last one are created automatically. We create PowerShell steps to be executed in between.

To begin with, I like to make sure that the previous build environment is removed. If everything is configured and working correctly, the environment is removed at the end of every build.

Since I am working on daily builds I like to leave the environment active after the build so that I can access the latest environment any time to look at code and configuration changes.

When putting together this build I used some of the information found on Tobias Fenster’s blog and also on Kamil Sacek’s blog.

Remove existing container

Every PowerShell  task executes the basic command with different parameters.  The script path is

The Arguments are defined for each task.  I always select which script to execute and always add the ‘BuildMode=$true’ settings.

Remember that the ‘setup.json’ configuration and the machine configuration are read as every action is started.

The ‘setup.json’ parameter ‘projectName’ is used as the docker container name. This first action will look for a container matching the ‘projectName’ and remove it.

Build and Update Docker Container

This task will start downloading the docker image that is defined in the ‘dockerImage’ property in ‘setup.json’.

To make sure that docker allocates enough memory for the build use the ‘dockerMemoryLimit’ parameter and allow at least 4G of memory.

The ‘dockerTestToolkit’ parameter is required if the AL Test application uses the standard test toolkit libraries.  The G/L Source Names test application uses some of the standard libraries and therefore, using this parameter, I trigger the toolkit import into my docker container.

Initialize Test Company

The daily builds are not shipped with the CRONUS company.  To be able to execute the test using the NAV Client I need to have a company marked for evaluation.

This action will remove the existing company, create a new evaluation company and run the Company Initialize Codeunit.

If you are not running tests then this step is not needed.

Download AL Add-in

The build machine will create a folder in ‘C:\AdvaniaGIT\Workspace’ using the ‘branchId’ from ‘setup.json’.  That said, we can only have one build process running for each GIT branch.  I have not found the need to allow multiple concurrent builds for a GIT branch.  

The AL extension for VS Code is downloaded and extracted to the ‘vsix’ folder.

Download AL Symbols

The symbol files (app files) are required to build the AL application.  These two files are downloaded from the docker container into our work space ‘Symbols’ folder.

Build AL Solution

The AL app will be compiled and created in the work space ‘out’ folder.  I add the ‘Build.BuildID’ from Visual Studio to my app version in every build.  Remember that the AL solution is in the AL folder on my GIT branch as stated in the machine configuration.  Overwrite that configuration in ‘setup.json’ or with ‘BuildSettings’ parameter if needed.

Copy AL Solution to Symbols

The ‘APP’ file is copied from the work space ‘out’ folder to the work space ‘Symbols’ folder. This is required since I am building the test app on top of the solution app. If you are not running tests, then you can skip this and the next five tasks.

Build AL Test Solution

This task is almost identical to the AL solution build task.  Here the machine parameters for the AL solution and the Test solution are combined into a single folder ‘ALTests’ to locate the folder containing the AL test application source code.

The test application ‘APP’ file is copied to the work space ‘out’ folder.

Install AL Extension

All the ‘APP’ files from the work space ‘out’ folder are published and installed in the docker container.

Execute AL Test Codeunits

This task requires the ‘idRange’ in the ‘app.json’ to be specified.

Every Codeunit in this number range with ‘Subtype=Test’ will be added to the test pipeline.

The server instance is restarted before the tests are executed.  I have asked Microsoft to do one code change to be able to skip this step.

The tests are then executed using the NAV Client that is copied to the build machine and started from there.

Save Test Results

Test results are downloaded from the docker container database and formatted into the VSTest Xml format.  The ‘TestResultPath’ is a sub folder of the repository path.

Publish Test Results

This task is a built-in Visual Studio task.  The test result files must match the ‘TestResultPath’ in the previous step.

The ‘$(System.DefaultWorkingDirectory)’ is the repository path.

Copy AL Solution to Artifact folder

The ‘Artifact’ folder is created in the repository folder and every ‘APP’ in the work space ‘out’ folder is copied to this folder.

Sign AL Solution in Artifact folder

The code signing certificate is needed and must be available on the build machine.  The ‘signtool.exe’ to be used, the certificate path and the certificate password are specified in the machine setup ‘GITSettings.json’.

Publish Artifact: App

This is also a Visual Studio task that will upload all files in the ‘Artifact’ sub folder in the GIT repository to the build artifacts.

Remove Container

In my setup I disable this and the next task.  This is done so that I can work with the results and the development environment after the build.  This task is also configured to be executed even if something fails in the build pipeline.

Remove Branch Work Folder

The folder created in the work space by adding the branch ID will take up space on the build machine.  This task will remove that folder and everything it contains.

Delete Logs

When AdvaniaGIT executes custom actions a log folder is created.  The log folders are located in ‘C:\AdvaniaGIT\Log’ folder.

When the NAV Client is copied to the host it is stored in a log folder.

Every sub folder that is older than seven days will be removed from the log folder in this task.

Conclusion

I configured my build machine as described in my previous post.

I use my free subscription to Visual Studio Online to store my source code in GIT.

I installed the build agent on my build server and connected to my Visual Studio Online subscription.

I added the docker repository information and login credentials to the ‘DockerSettings.json’ to be able to build the daily builds.

AdvaniaGIT is accessible on GitHub.

Good luck!