In last week’s post I discussed the Dynamics NAV test automation suite. Running Microsoft’s standard automated tests ensures that your modifications and add-on solutions do not interfere with vanilla NAV. However, all non-standard/custom business logic needs to be tested as well.

Traditionally, testing takes place in a test environment using sample customer data. During manual testing, application consultants and key users follow prebuilt test scripts to test data validation and new functionality. The goal of automation is to reduce the number of test cases that must be run manually, not to eliminate manual testing altogether.

Creating an automated test is more time consuming and expensive than running the same test once manually. What to automate, when to automate, and whether automation is needed at all are decisions the testing or development team must make. A good candidate for test automation is a test case that is executed repeatedly, is business critical or high-risk, is difficult to perform manually, or is time consuming. Automating unstable features or features that are undergoing change should be avoided. The choice between automated and manual testing comes down to a basic cost-benefit analysis: if building, verifying and documenting an automated test takes five times as long as a single manual run, the investment pays for itself after five runs, and every run after that is essentially free.

Test automation should be seen as a long-term investment in the quality of your systems.

The good news is that we can build custom automated tests on top of the Microsoft test toolkit. There are over 60 standard libraries containing numerous generic and application-specific functions that can be reused when building custom tests.
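
For example, instead of hand-coding master data setup, a custom test function can call the creation functions in the standard libraries. The following is a minimal sketch, assuming the standard "Library - Sales", "Library - Inventory" and "Library - Random" codeunits are declared as global variables (LibrarySales, LibraryInventory, LibraryRandom); C/AL variable numbering is omitted for readability:

    [Test]
    PROCEDURE SetupUsingStandardLibraries();
    VAR
      Customer : Record Customer;
      Item : Record Item;
      SalesHeader : Record "Sales Header";
      SalesLine : Record "Sales Line";
    BEGIN
      // Each call creates valid test data that would otherwise
      // take many lines of manual setup code.
      LibrarySales.CreateCustomer(Customer);
      LibraryInventory.CreateItem(Item);
      LibrarySales.CreateSalesHeader(
        SalesHeader,SalesHeader."Document Type"::Order,Customer."No.");
      LibrarySales.CreateSalesLine(
        SalesLine,SalesHeader,SalesLine.Type::Item,Item."No.",
        LibraryRandom.RandInt(10));  // random quantity, no hardcoded values
    END;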

The following best practices should be followed when designing and developing custom automated tests:

  • Tests should follow the same Given-When-Then style as the standard Microsoft tests: Given describes the setup, When the action under test, and Then the expected outcome. This approach was developed by Dan North and Chris Matts as part of Behaviour-Driven Development (BDD); a sketch follows after this list.
  • Tests should not depend on the data in the test database. Data should be created on the fly using random generator functions, and hardcoded values in tests should be avoided.
  • Tests should leave the system in the same state as when the test started. You can use the TransactionModel property in test functions and the TestIsolation property in test runner codeunits to control the transactional behaviour (see the test runner sketch after this list). A common scenario is that each test function starts a separate write transaction and all changes are rolled back after each test codeunit.
  • Test code should verify that the code works under both successful and failing conditions (positive and negative tests). To test failing conditions, you can use the ASSERTERROR keyword (see the negative-test sketch after this list).
  • Test code should be kept separate from the code being tested. That way, you can release the tested code to a production environment without releasing the test code.
  • Tests should not require user intervention. Special handler functions should be created to deal with all UI interactions, such as confirmation dialogs and message boxes (see the handler sketch after this list).
  • Code Coverage should be monitored to track the extent to which the application code is covered by tests.
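
To make the Given-When-Then structure and on-the-fly data creation concrete, here is a minimal sketch that exercises a piece of standard validation logic (the name-to-search-name copy on the Customer table). It assumes the standard "Library - Sales", "Library - Utility" and "Assert" codeunits are declared as global variables:

    [Test]
    PROCEDURE SearchNameFollowsCustomerName();
    VAR
      Customer : Record Customer;
      NewName : Text[50];
    BEGIN
      // [GIVEN] A customer created on the fly through the standard library.
      LibrarySales.CreateCustomer(Customer);

      // [WHEN] The name is changed to a random, non-hardcoded value.
      NewName := LibraryUtility.GenerateRandomCode(
        Customer.FIELDNO(Name),DATABASE::Customer);
      Customer.VALIDATE(Name,NewName);
      Customer.MODIFY(TRUE);

      // [THEN] Standard validation has copied the name to the search name.
      Assert.AreEqual(
        UPPERCASE(NewName),Customer."Search Name",'Search name was not updated.');
    END;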
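
Transactional behaviour is controlled in two places: the TestIsolation property on the test runner and the TransactionModel property on individual test functions (for example, AutoRollback). A skeleton of a hypothetical test runner codeunit, with the object number 50000 as a placeholder, could look like this; TestIsolation=Codeunit rolls back all database changes after each test codeunit:

    OBJECT Codeunit 50000 Custom Test Runner
    {
      PROPERTIES
      {
        Subtype=TestRunner;
        TestIsolation=Codeunit;
      }
      CODE
      {
        BEGIN
        END.
      }
    }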
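
A negative test wraps the failing call in ASSERTERROR, which makes the test fail unless an error is actually raised. A minimal sketch, assuming the standard behaviour that a customer blocked for invoicing cannot be used on a new sales invoice (LibrarySales and Assert declared as above):

    [Test]
    PROCEDURE BlockedCustomerCannotBeInvoiced();
    VAR
      Customer : Record Customer;
      SalesHeader : Record "Sales Header";
    BEGIN
      // [GIVEN] A customer that is blocked for invoicing.
      LibrarySales.CreateCustomer(Customer);
      Customer.VALIDATE(Blocked,Customer.Blocked::Invoice);
      Customer.MODIFY(TRUE);

      // [WHEN] A sales invoice is created for the blocked customer.
      // ASSERTERROR fails the test if NO error is raised.
      ASSERTERROR LibrarySales.CreateSalesHeader(
        SalesHeader,SalesHeader."Document Type"::Invoice,Customer."No.");

      // [THEN] The error mentions that the customer is blocked.
      Assert.ExpectedError('blocked');
    END;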
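
UI interactions are intercepted by handler functions bound to the test through the HandlerFunctions attribute. The sketch below assumes that changing the bill-to customer on a sales document raises a confirmation dialog (it does in recent NAV versions); the handler answers it so the test can run unattended:

    [Test]
    [HandlerFunctions(ConfirmYesHandler)]
    PROCEDURE ChangingBillToCustomerIsConfirmed();
    VAR
      Customer : Record Customer;
      BillToCustomer : Record Customer;
      SalesHeader : Record "Sales Header";
    BEGIN
      // [GIVEN] A sales invoice for one customer and a second customer.
      LibrarySales.CreateCustomer(Customer);
      LibrarySales.CreateCustomer(BillToCustomer);
      LibrarySales.CreateSalesHeader(
        SalesHeader,SalesHeader."Document Type"::Invoice,Customer."No.");

      // [WHEN] The bill-to customer is changed; NAV asks for confirmation,
      // which is answered by ConfirmYesHandler instead of a user.
      SalesHeader.VALIDATE("Bill-to Customer No.",BillToCustomer."No.");

      // [THEN] The change has gone through.
      SalesHeader.TESTFIELD("Bill-to Customer No.",BillToCustomer."No.");
    END;

    [ConfirmHandler]
    PROCEDURE ConfirmYesHandler(Question : Text[1024];VAR Reply : Boolean);
    BEGIN
      Reply := TRUE;  // answer Yes to any confirmation dialog
    END;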

There are third-party GUI capture/replay tools available that track user interactions with the product and build a script from them. It would be great if we had such a tool integrated within standard NAV, as this would enable users to record new test scripts without any development effort.

Testing is an iterative process. Automated tests should be executed every time an enhancement is made to the application (regression testing). During typical development it is unacceptable to have to wait hours for the results of a full test run. That is why the Dynamics NAV Test Tool gives us the option to run only the automated tests that cover modified or selected objects (based on test coverage maps). Running automated tests frequently ensures that new development doesn’t break existing functionality. With a combination of Microsoft’s default testing suite and tests developed specifically for your organisation, new implementations and enhancements can go live with minimal bugs.