Discover The Art of Modelling

The Art of Modelling series from Curiosity Software boosts foundational thinking around the art of model-based testing, test case design and test automation, told through a series of 90 sec–2 min clips.



A Baseline Grammar

Whether you are a developer, business analyst or tester, you are no doubt also an end-user: you constantly build mental models day-to-day as part of your work, rest and play. So, how many mental models have you already made this morning, or even this week?

Stepping back, models have been the basis for software design for as long as testing has existed, and just as mental models adapt on the fly, best-practice modelling blends individual thought into collaborative work. As a deeper dive into the baseline grammar of best-practice modelling, we present a basic flow in the context of an ATM transaction.

From that, best practice when modelling a system's behaviours leads teams to be more tactical in their approach. To reiterate: tactically create models that use the baseline grammar of nodes, task blocks and conditions, consider both valid and invalid end nodes, and use best-practice modelling to sharpen and share diverse thinking, so that teams can model out and derive effective test cases.
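As a minimal sketch (not Test Modeller itself, and using an illustrative ATM flow), the baseline grammar can be pictured as a directed graph of nodes, task blocks and conditions, with both a valid and an invalid end node; test cases are then derived by enumerating every path from start to an end node:

```python
# Hypothetical model of an ATM transaction using the baseline grammar:
# nodes, task blocks, a condition, and valid/invalid end nodes.
MODEL = {
    "Start": ["Insert Card"],                           # start node
    "Insert Card": ["Enter PIN"],                       # task block
    "Enter PIN": ["PIN Valid?"],                        # task block
    "PIN Valid?": ["Withdraw Cash", "Card Retained"],   # condition
    "Withdraw Cash": ["End: Valid"],
    "Card Retained": ["End: Invalid"],
    "End: Valid": [],                                   # valid end node
    "End: Invalid": [],                                 # invalid end node
}

def derive_test_cases(model, node="Start", path=None):
    """Depth-first walk: each complete start-to-end path is one test case."""
    path = (path or []) + [node]
    if not model[node]:          # reached an end node
        return [path]
    cases = []
    for nxt in model[node]:
        cases.extend(derive_test_cases(model, nxt, path))
    return cases

for case in derive_test_cases(MODEL):
    print(" -> ".join(case))
```

Considering both end nodes means the derived cases cover the negative journey (card retained) alongside the happy path, rather than only the valid withdrawal.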


Enable Exploratory Modelling

No matter the system under test, modelling is broadly concerned with enquiring, applying and informing, and is closely aligned with evolving your team's intelligences. This enquiring, applying and informing helps focus team intelligences and synchronizes effort and motivation around test design and generation.

To benefit from team intelligences around test design and generation, first make early investigations, which will ultimately transform good individual contributions into aligned collaborative effort. Whether those early investigations of a system concern test design and the generation of data combinations, user interface journeys, or exposing complexity, the outcome is to enable exploratory modelling: an iterative process.

Stepping back, let's define exploratory modelling relative to the process of enquiring, applying and informing. Around enquiring orbits the idea of imperfect sources of knowledge, from which a perspective can be chosen as a jumping-off point. The main question being: what are your main concerns?

Then comes modelling out that enquiry, from which you can adjust test cases according to the coverage to reveal complexity, ambiguity or gaps. This helps flush out anomalies and challenges the sources of knowledge, producing better run results and test execution.

So best practice includes the need to enquire, model, inform and apply to derive effective test design and generation that suits the diversity of thinking within your team. 


Modelling an Existing UI

This video introduces a simple customer login screen in which the email and password fields, along with other elements of the application, get picked up: from the URL through to eventually clicking the sign-in button. In the model, you see how the different journeys are tracked.

We also take a closer look at using all three major block types of the Test Modeller tool in conjunction with each other to map out the different combinations of behaviour on the UI. This complete model demonstrates some of the more basic best practices for modelling an existing UI, covering combinations of data as well as combinations of user behaviour that can exist on top of a webpage.


Modelling a UI Concept

Modelling upfront is a key part of test- or behaviour-driven development, from which user stories are generated. Developers can then action the user stories to write the application, or at least produce initial test scenarios to test against as soon as the application is ready and available.

You'll see that much of the same logic exists as in segment 1 [Modelling an Existing UI], with automation placeholders [Waypoints] standing in while the application is built; the automation can then be overlaid on top of the model.

Despite the application not yet existing, at the start we lay out three blocks to explain the high-level goal, or high-level use case, of this page. This segment [2 of 3] considers some key differences involved in modelling a UI when the application doesn't already exist, rather than an existing one.


Modelling an Evolving UI

As a UI evolves according to requirements, a model gives different stakeholders a focus for collaboration within the Test Modeller canvas. A model will therefore easily evolve from day one through to possibly day 20: from coming in and explaining some of the most basic functionality or journeys of your app, through to a model with rules applied to its different nodes, where the outcome is to embed data ready for automation.

The example considers the credentials seen in segment 2 [Modelling a UI Concept], in which different combinations of credentials are mapped out simply as either invalid or valid. However, as this model evolves to reflect the UI's requirements, its scope increases to include not only the email but also the password decision trees.

Additionally, rule-based generation [Rules] is used on task blocks, essentially saying that any instance given invalid data by the logic will be forced down the invalid-credentials route to inform test cases. The invalid password gets pushed through into a negative scenario, where we end up with the necessary error state.


Scope and Articulate Flow

Decision gates laid down on the Test Modeller canvas serve to generate tests based on coverage types. In this example, test generation results in four possible flows, which terminate in either being able to make a perfect cup of tea or not.
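As a rough sketch (hypothetical, not the Test Modeller engine), the way decision gates translate into generated flows can be pictured with exhaustive coverage: each combination of gate outcomes becomes one flow, so two binary gates yield four flows, only one of which ends in a perfect cup of tea:

```python
from itertools import product

# Illustrative decision gates; names are assumptions, not the actual model.
DECISION_GATES = {
    "Have teabag?": [True, False],
    "Have boiling water?": [True, False],
}

def generate_flows(gates):
    """Exhaustive coverage: one flow per combination of gate outcomes."""
    flows = []
    for outcomes in product(*gates.values()):
        steps = dict(zip(gates.keys(), outcomes))
        # The flow ends in a perfect cup of tea only if every gate passes.
        steps["result"] = "perfect cup of tea" if all(outcomes) else "no tea"
        flows.append(steps)
    return flows

for flow in generate_flows(DECISION_GATES):
    print(flow)
```

Other coverage types would prune this set (e.g. pairwise or path-based coverage), but the principle is the same: the gates define the space of flows, and the coverage type decides how much of that space becomes tests.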

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) as he navigates the need to throw down a rough process flow to help articulate a series of data states as decision groups ahead of starting to visually model. This approach favours seeing the bigger picture, in which test steps can be agreed, modified and refined in the model amongst all stakeholders, including testers, business users, SMEs, developers, SDETs, manual testers and likely end-users.



Be Mindful of Decision Gates

Use visual representations to garner a common understanding of what the process should look like. In this case, we're putting everything on the canvas so that it is visual, which means we've had to double up the Logic Gates for wanting sugar and wanting milk. The reason we're doing this is because we need to put it all into a visual representation. Once you've finished your preliminary model, you can then start thinking about creating some tests or user stories.

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) introducing techniques that you can use to test every scenario along the various paths of a model. This features ahead of the subsequent clip, which shows the use of Rules as a way of limiting the number of possible scenarios, but equally how a mixed approach using Rules with Logic Gates can inform best practice modelling when executing test steps.


Overlay Logic to Reduce Repetition

This video shows you a different way to solve the same problem, where we're going to use constraints, or Boolean logic, to overlay onto the model. What we've done is add in a new endpoint called no tea. What you'll see is that we have a little hexagon here, where we have some rules defined. Associated with each block in the Assignment, we've picked up HaveTea Yes and HaveTea No, like so; and we've done that for each of the data states. You then need to go in and define a rule, and we've created a very simple one here: if any of these is not true, we cannot make a cup of tea.

In this video, you’ll meet Huw Price (Managing Director, Curiosity Software) as he builds on the concept of a purely visual model by using constraints (logic Rules), expressed as Boolean states. Where the previous clip introduced a fully visual model, its Logic Gates had to be repeated, which may be tricky to maintain. Boolean states are a more embedded solution: though less overtly visualized (you have to look at the Boolean logic itself), if you're only interested in the user stories or test cases then that's the way to go. Finally, you may find a mixed approach works best, balancing the visual elements that keep the model understandable while overlaying complexity through Boolean logic states.
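The constraint idea from the clip can be sketched in a few lines (names like HaveTeabag are illustrative, not the actual Test Modeller assignments): each task block carries a Boolean state, and a single rule routes every generated scenario to the right endpoint instead of repeating Logic Gates visually:

```python
from itertools import product

# Each data state can be assigned True ("Yes") or False ("No").
DATA_STATES = ["HaveTeabag", "HaveWater", "HaveCup"]

def rule_can_make_tea(assignment):
    """The rule from the clip: if any state is not true, we cannot make tea."""
    return all(assignment.values())

def generate_scenarios(states, rule):
    """Enumerate every Yes/No assignment and apply the single overlay rule."""
    scenarios = []
    for values in product([True, False], repeat=len(states)):
        assignment = dict(zip(states, values))
        endpoint = "tea" if rule(assignment) else "no tea"
        scenarios.append((assignment, endpoint))
    return scenarios

for assignment, endpoint in generate_scenarios(DATA_STATES, rule_can_make_tea):
    print(assignment, "->", endpoint)
```

Note the maintenance benefit: adding a fourth data state changes only the list, while the one rule keeps routing every combination correctly, which is exactly why the embedded Boolean approach scales better than duplicated visual gates.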


Speak with an expert

Discover how Curiosity's Test Modeller can help automate your testing today!

let's talk