Discover The Art of Modelling

The Art of Modelling series from Curiosity Software boosts foundational thinking around model-based testing, test case design and test automation, told through a series of 90-second to 2-minute clips.

Book a Demo

 

Overview

At Curiosity Software, we set about compiling, beyond the technology itself, some best practices for modelling out test cases.

To boost foundational thinking, we demonstrate key techniques and introduce the baseline elements of the tool, through which the three amigos (developers, test engineers, and Product Owners or BAs) can scope ideas, gaining oversight of change and avoiding the gaps and ambiguities that disadvantage end-users.

Where models can get repetitive, we suggest ways to synchronize effort amongst teams to achieve success, with an approach that is iterative and progressive.

One clip encourages you to consider the mindset of modelling out business logic or requirements, with Curiosity’s subject-matter experts giving you their take; in this example, the iterations of modelling out a UI, from an existing one, to a concept, to an evolving one.

Drawing to a close, we consider form, cause, and change with Ben Riley, our CIO, and George Blundell, Technical Analyst, with a focus on leveraging Test Scenarios and Test Cases.

We round out on collaborative effort: developers, test engineers, and Product Owners or BAs working in sync, easily reflecting constant changes in logic and requirements.


A Baseline Grammar

Whether you are a developer, business analyst or tester, you are no doubt also an end-user, constantly building mental models day-to-day as part of your work, rest and play. So, how many mental models have you already made this morning, or even this week?

Stepping back, models have been the basis for software design for as long as testing has existed, and just as mental models adapt on the fly, best-practice modelling blends individual thought into collaborative work. As a deeper dive into the baseline grammar for best-practice modelling, we present a basic flow in the context of an ATM transaction.

From that, best practice when modelling out a system’s behaviours leads teams to be more tactical in their approach. To reiterate: tactically create models that use the baseline grammar of nodes, task blocks and conditions, considering both valid and invalid end nodes, and use best-practice modelling to sharpen and share diverse thinking with teams to model out and derive effective test cases.
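To make that grammar concrete, here is a minimal sketch, in Python, of one way such a flow can be thought of: a directed graph of blocks in which every start-to-end path becomes a test case. The ATM node names and structure here are invented for illustration; in Test Modeller itself, the model is drawn visually on the canvas.

```python
# A hypothetical ATM flow expressed with the baseline grammar: nodes,
# a condition block that branches, and both valid and invalid end nodes.
edges = {
    "Start": ["Insert Card"],
    "Insert Card": ["Enter PIN"],
    "Enter PIN": ["PIN Valid", "PIN Invalid"],   # condition block
    "PIN Valid": ["Withdraw Cash"],
    "Withdraw Cash": ["End: Success"],           # valid end node
    "PIN Invalid": ["End: Card Retained"],       # invalid end node
}

def enumerate_paths(node, path=()):
    """Depth-first walk yielding every start-to-end path (a test case)."""
    path = path + (node,)
    targets = edges.get(node, [])
    if not targets:  # no outgoing edges: we've reached an end node
        yield path
        return
    for nxt in targets:
        yield from enumerate_paths(nxt, path)

for i, test_case in enumerate(enumerate_paths("Start"), start=1):
    print(f"Test case {i}: " + " -> ".join(test_case))
```

Even in this tiny flow, the invalid end node guarantees a negative test case alongside the happy path, which is exactly the habit the baseline grammar encourages.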


Enable Exploratory Modelling

No matter the system under test, modelling is broadly concerned with enquiring, applying and informing, and is closely aligned with evolving your teams’ collective intelligence. This enquiring, applying and informing helps focus that intelligence and synchronizes effort and motivation around test design and generation.

To benefit from team intelligence around test design and generation, be sure first to make early investigations, which will ultimately transform good individual contributions into aligned collaborative effort. Whether those early investigations of a system concern test design and the generation of data combinations, user interface journeys, or exposing complexity, the outcome enables exploratory modelling that is iterative.

Stepping back, let’s define exploratory modelling relative to the process of enquiring, applying and informing. Around enquiring orbits the idea of imperfect sources of knowledge, from which a perspective can be decided on as a jumping-off point. The main question being: what are your main concerns?

Then comes modelling out that enquiry, from which you can adjust test cases according to coverage to reveal complexity, ambiguity or gaps. This helps flush out anomalies and challenges the sources of knowledge, producing better run results and test execution.

So best practice includes the need to enquire, model, inform and apply to derive effective test design and generation that suits the diversity of thinking within your team. 

 

Modelling an Existing UI

This video introduces a simple customer login screen, where the email and password fields and other elements of the application get picked up, from the URL through to eventually clicking the sign-in button; in the model, you see how the different journeys get tracked.

We also take a closer look at using all three major block types of the Test Modeller tool in conjunction with each other to map out the different combinations of behaviour on the UI. The result is a complete model demonstrating some of the more basic best practices for modelling an existing UI, covering not only combinations of data but also the combinations of user behaviour that can exist on top of a webpage.
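As an illustration only, the sketch below shows the kind of data combinations such a model tracks for a login screen: every pairing of valid and invalid field states becomes a journey with an expected outcome. The field states and outcomes are assumptions for this example, not values taken from the model in the video.

```python
# Hypothetical data combinations for a login screen: each pairing of
# field states is one journey the model would track.
from itertools import product

email_states = ["valid email", "invalid email"]
password_states = ["valid password", "invalid password"]

for email, password in product(email_states, password_states):
    # Any invalid field pushes the journey to a negative outcome.
    negative = "invalid" in email or "invalid" in password
    outcome = "error shown" if negative else "sign-in succeeds"
    print(f"Enter {email}, enter {password}, click Sign In -> {outcome}")
```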

 

Modelling a UI Concept

Modelling upfront is a key part of test- or behaviour-driven development, from which user stories are generated. Developers can then action those user stories to write the application, or at least produce initial test scenarios to test it against as soon as the application is ready and available to be tested.

You'll see that much of the same logic exists as in segment 1 [Modelling an Existing UI], with automation [Waypoints] used as simple placeholders, now waiting for the application to be built and for automation that can then be overlaid on top of the model.

Despite the application not actually existing, at the start we lay out three blocks to explain the high-level goal or use case of this page. This segment [2 of 3] considers some key differences involved in modelling a UI when the application doesn't already exist, rather than an existing one.

 

Modelling an Evolving UI

As a UI evolves according to requirements, a model gives different stakeholders a focus for collaboration within the Test Modeller canvas. And so a model will easily evolve from day one through to possibly day 20: from explaining some of the most basic functionality or journeys in your app, through to a model with rules applied to its different nodes, the outcome being embedded data ready for automation.

The example considers the credentials seen in segment 2 [Modelling a UI Concept], in which different combinations of credentials are mapped out simply as either valid or invalid. However, as this model evolves to reflect the UI’s requirements, its scope increases to include not only the email decision tree but also the password one.

Additionally, rule-based generation [Rules] is used on task blocks, essentially saying that any instance whose logic marks the data as invalid will be forced down the invalid credentials route to inform test cases. The invalid password gets pushed through into a negative scenario, where we end up with the necessary error state.
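The sketch below is a hypothetical rendering of that rule, assuming simple boolean flags for the email and password states; it isn’t Test Modeller’s own API, just an illustration of invalid data being forced down the invalid credentials route.

```python
# A hypothetical rule on a task block: any invalid flag forces the
# path down the invalid-credentials route into a negative scenario.
def route_for(data: dict) -> str:
    if not (data["email_valid"] and data["password_valid"]):
        return "Invalid Credentials -> Error State"
    return "Valid Credentials -> Signed In"

combinations = [
    {"email_valid": True, "password_valid": True},
    {"email_valid": True, "password_valid": False},
    {"email_valid": False, "password_valid": True},
    {"email_valid": False, "password_valid": False},
]

for combo in combinations:
    print(combo, "=>", route_for(combo))
```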

 

Scope and Articulate Flow

Decision gates laid down on the Test Modeller canvas serve to generate tests based on coverage types. In this example, test generation results in four possible flows, which terminate in either being able to make a perfect cup of tea or not.

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) as he navigates the need to throw down a rough process flow to help articulate a series of data states as decision groups, ahead of starting to model visually. This approach favours seeing the bigger picture, in which test steps can be agreed, modified and refined in the model amongst all stakeholders, including testers, business users, SMEs, developers, SDETs, manual testers and, quite likely, end-users.


Be Mindful of Decision Gates

Use visual representations to garner a common understanding of what the process should look like. In this case, everything is put down so that it is visual, which means doubling up the Logic Gates for wanting sugar and wanting milk, purely to keep it all in one visual representation. Once you've finished your preliminary model, you can then start thinking about creating some tests or user stories.

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) introducing techniques that you can use to test every scenario along the various paths of a model. This features ahead of the subsequent clip, which shows the use of Rules as a way of limiting the number of possible scenarios, but equally how a mixed approach using Rules with Logic Gates can inform best-practice modelling when executing test steps.

 

Overlay Logic to Reduce Repetition

This video shows you a different way to solve the same problem: using constraints, or Boolean logic, overlaid onto the model. We've added a new endpoint called 'no tea'; you'll see that it has a little hexagon with some rules defined on it. Associated with each block, in the Assignment we've picked up HaveTea Yes, HaveTea No, and so on for each of the data states. You then need to go in and define a rule, and we've created a very simple one here: if any of these is not true, we cannot make a cup of tea.

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) as he builds on the concept of a purely visual model with the use of constraints (logic Rules) expressed as Boolean states. Where the previous clip introduced a fully visual model, its Logic Gates had to be repeated, which can be tricky to maintain. Boolean states are a more embedded solution: though less overtly visual (you have to look at the Boolean logic itself), if you're only interested in the user stories or test cases then that's the way to go. Finally, you may find that a mixed approach works best, balancing visual elements that keep the model understandable with complexity overlaid as Boolean logic states.
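As a minimal sketch, assuming three invented Boolean data states, the rule from the clip can be written as a single constraint that decides the endpoint for every combination; Test Modeller expresses this on the model itself, so the code is purely illustrative.

```python
# The 'no tea' constraint as Boolean logic: if any data state is not
# true, the scenario terminates at the 'no tea' endpoint.
from itertools import product

def endpoint(have_tea: bool, have_milk: bool, have_sugar: bool) -> str:
    if not (have_tea and have_milk and have_sugar):
        return "no tea"
    return "perfect cup of tea"

# One constraint covers every combination, with no doubled-up gates.
for states in product([True, False], repeat=3):
    labels = dict(zip(["HaveTea", "HaveMilk", "HaveSugar"], states))
    print(labels, "=>", endpoint(*states))
```

Compared with repeating visual Logic Gates, the single rule stays in one place as the model grows, which is the maintenance benefit described above.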

 

Align Collaborative Effort

It’s time to talk a little about collaborative effort and how it can impact your test case design and the coverage you’re able to achieve within your tests. We want to build high-quality tests while confirming that we’ve got good coverage levels. To do this, we need to align our collaborative efforts.

Determining the critical points of a model is really important: it allows the team to work together in unison whilst minimizing the impact, and the risk, of any particular change. With many teams working together on a single system, you can then see how dependencies and risks can be understood. Using Coverage Profiles, for example, can really help us understand the impact of a wider change, especially as part of a bigger narrative in which multiple teams prioritize their tests and their work in different orders.

Use Test Modeller’s Coverage system to derive an assured set of Test Cases, whether from a team’s perspective or a group’s. As long as we’ve got priority and flexibility, it allows us to pinpoint where we want our testing to focus. So let’s get into some real examples that show this in action, making sure that we maintain high quality, good criteria, and a defined Test Objective.

 

Specify Your Criteria

This clip takes a quick look at how we can use Coverage Profiles on quite a simple model while making sure that we're getting highly efficient tests. Coverage and Coverage Profiles inside Test Modeller are a really important way of making sure that we test all the Scenarios we want to, and that those combinations are correct and sensible.

Follow along with how we’ve modelled out a straightforward inventory model here that's got a few Scenarios in it. You can see that we've got a series of products that are part of our inventory. Depending on the stock level of those products, we then declare the amount of discount that is potentially available; if we've got a high amount of that stock, our discount will be less.

The Coverage Profile being used varies based on the criteria that need testing, for example low discount and high discount. What this Coverage essentially gives us is a specific set of tests, potentially greatly reducing the number of combinations while making sure that our low-discount criteria are enforced; you'll see the model highlighting those details for us.

Equally, if we switch that round to a high-discount profile (it's already been generated), we can see that we are still testing all the various Coverage combinations for each of those products, but we are changing the amount of discount via the Tags that we associate with our Coverage Profile.
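As a hedged illustration of that idea, the sketch below tags each generated combination with a discount level and lets a coverage profile select only the combinations carrying its tag; the product names, stock levels and tags are all invented for the example.

```python
# Hypothetical tag-driven Coverage Profiles: profiles select only the
# combinations carrying their tag, trimming the total test count.
from itertools import product

products = ["widget", "gadget"]
stock_levels = ["high", "medium", "low"]

def discount_tag(stock: str) -> str:
    # High stock means less discount, as described in the clip.
    return "low-discount" if stock == "high" else "high-discount"

all_tests = [(p, s, discount_tag(s))
             for p, s in product(products, stock_levels)]

def coverage_profile(tests, tag):
    """Keep only the combinations matching the profile's tag."""
    return [(p, s) for p, s, t in tests if t == tag]

print("Low-discount profile: ", coverage_profile(all_tests, "low-discount"))
print("High-discount profile:", coverage_profile(all_tests, "high-discount"))
```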

 

Define The Test Objective

This clip shows you how, in an end-to-end test flow, we can use Coverage to target the kind of tests we want for a very specific Use Case. Here is a master model, which holds the end-to-end test flow, and here is the specific module being worked on.

Shown is that, for this online store, we've added a new store-branded credit card, and that card gets an additional 2% for any purchase that is made. So first, some of these nodes are Tagged as either store-branded or non-store-branded: that doesn't change anything in this model. But going back to the End-to-End model and looking at the Coverage Profiles, we can define a specific set of Coverage, so that anything that is Store-Branded-Card has High Coverage defined and anything that isn't is excluded for now.

On generating the Coverage and seeing it reflected in the Test Cases, we can see (on opening the Subflow) that only Store-Branded-Cards are included; then, opening the previous module, which Ben (previous clip) was working on, we see that the flow goes through all the different discounts that are possible. So we're combining Full Coverage of the types of discounts with the very specific Coverage of the Store-Branded-Card.

 

Focus Your Test Artifacts

In this video, we'll be talking about Test Artifacts: focusing them, verifying business logic, and driving better standards into our development life cycle. George Blundell (Technical Analyst) and I (Ben Riley, CIO) are going to talk about some of the different aspects that we need to consider here.

We're going to start by talking about living documentation and the different decisions that affect different team members depending on the models that we build. If we look at this from the team level, we've got business analysts, testers and developers who all need to work in harmony. They need to understand the requirement: within a model, we might be mapping out a particular User Journey, an End-to-End process, or a different Scenario, based on whatever our team is developing. As part of that, we may have Subflows or subprocesses, which teams also need to understand, though not necessarily to the same level of depth as where they came from.

In this case, we'd refer to those Test Scenarios as Subflows, which means that we can have very complex logic tied into a much simpler, understandable flow. It gives us an End-to-End view of what needs to happen, and when it comes to testing, it makes sure that we don't miss out on those details; we can also show them if we need to, walking through them with different team members. At the end of the day, we can pull out all the tests and Test Cases that we need at the click of a button, so that we are confident in the quality we're instilling across our model.

As part of that, we then have Test Artifacts at a team level for different users: different Test Scenarios that cover different paths, personas, and the different ideas they might need in order to test properly.


Verify Your Business Logic

You cannot model everything in your target system, nor every single combination of user behaviour. Therefore, you must be selective about what you choose to model. Focus on the areas of the system and the user behaviours that are most impactful to the business, using a more risk-based approach.

Here we have an example of an e-commerce website, and to illustrate an approach, we can use the inventory check screen as an example. This can be considered crucial functionality, as you cannot sell an item if there is no stock available.

Within this Subflow, we see an example of a modelled decision table. In order for an item to show as available, there must be sufficient stock, and the user level must be high enough for the given stock level. What we've done is enforce these conditions using a set of Rules.

So, for example, if we check our inventory-available endpoint, the requirements are for the stock availability to be high, or, if the stock availability is low, for the user level to be greater than or equal to three. This model produces a decision table, as shown.
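A minimal sketch of that rule follows, assuming invented stock labels and user levels; the decision table it prints mirrors the logic described above rather than Test Modeller's own output.

```python
# Hypothetical decision-table rule: available when stock is high, or
# when stock is low but the user level is at least three.
from itertools import product

def inventory_available(stock: str, user_level: int) -> bool:
    return stock == "high" or (stock == "low" and user_level >= 3)

# Enumerate the rows of the resulting decision table.
print(f"{'stock':<6} {'user_level':<10} available")
for stock, level in product(["high", "low"], [1, 3]):
    print(f"{stock:<6} {level:<10} {inventory_available(stock, level)}")
```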

Because we care about this part of the system, we spend some time modelling out all the possible user behaviours. On the other hand, if there's a part of the system that we care less about – for example, if we go back to our main model, we could pick up another one of these Subflows – if it's not that important, then we don't really need to model it in too much detail.

In addition, using Tags is a good way to define exactly how rigorously you want to test parts of your application. What we've done here is overlay an inventory-check Tag on our Subflow, and in our Coverage Profile we define that we want to test the inventory check Subflow exhaustively. That allows you to take a much more risk-based approach to testing and verify exactly the business logic you need in your models.


Drive an Adaptable SDLC

Models that capture all the required business logic become multipurpose assets that help to drive better standards throughout the software development life cycle (SDLC). They become the central source of truth for sprint teams and a visual knowledge base for business logic, application behaviour, data rules, and the like.

As we've discussed previously in this series, Test Artifacts are generated automatically from the model. These artifacts include not only Test Cases for testers, but also User Stories for developers and Test Data for SDETs. Because of this automated approach, the key to a successful software development life cycle is to maintain that central source of truth.

Once a change occurs, map that change onto the model and regenerate the associated assets. This enables teams to react significantly faster to changes than they previously could, when the maintenance of test assets was often a more manual process.

To support that process, Test Modeller has a wide range of Importers and Integrations with many tools used in software development. These range from model imports and exports of Visio and BPMN diagrams, to two-way integrations with Test Case management tools like qTest and Jira, to code repository connections with technologies like Git and Bitbucket.

All of these integrations allow Test Modeller to synchronize with your team's preferred tools and way of working to provide a platform to improve software standards and drive an adaptable software development lifecycle.

 

Speak with an expert

Discover how Curiosity's Test Modeller can help automate your testing today!

Book a Demo