Model-Based Test Automation for the Mainframe
Generate end-to-end tests and complete test data rapidly, executing them in a single click across multiple mainframe data sources. Test Modeller makes mainframe test design simple, using a transparency layer to interact with all major mainframe systems. Define comprehensive test cases and dynamic test data in easy-to-use flowcharts, executing optimised tests using a range of open source, homegrown, and commercial frameworks.
Take the complexity out of Mainframe testing
Organisations rely on mainframes for their core business processes, but testing these business-critical systems can be massively complex.
Rigorously testing end-to-end processes across multiple mainframe data sources requires a vast number of test cases, but the complex journeys that data might take through joined-up systems are hard to identify manually. This is especially true when faced with poorly documented legacy systems and technical debt, while the sheer number of tests required makes manual test design impractical. As a result, much business-critical system logic goes untested and remains exposed to potentially costly defects.
Test data is also required for every possible test case, and consistent data must be found or made for testing across joined-up mainframe systems. This data is highly complex, and bottlenecks mount as QA teams wait for data or input it manually, sometimes via green screens. Further delays arise as automated tests fail due to inconsistent test data. Automated test execution must additionally connect to each mainframe system, with automated test scripts created for each test case, which is another slow, complex, and error-prone process.
Every scenario covered, every data source tested
Test Modeller makes it possible to test end-to-end processes rigorously in-sprint, even when faced with numerous mainframe systems and legacy components.
A wide range of connectors, scanners, and recorders build visual flowchart models rapidly, paying off technical debt and modelling every distinct journey that data can take through mainframe systems. These “paths” are equivalent to test cases and can be identified automatically, using mathematical algorithms to generate the smallest set of test cases needed for maximum coverage.
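The idea of identifying “paths” through a flowchart and reducing them to a minimal covering set can be sketched in a few lines. This is an illustrative example only, not Test Modeller’s actual algorithm: the flowchart below is a hypothetical mainframe journey modelled as a directed graph, and a simple greedy heuristic picks the fewest paths that still cover every edge.

```python
# Hypothetical flowchart of an end-to-end mainframe journey, as a directed
# graph: each edge is one step data can take between systems.
edges = {
    "start":       ["validate"],
    "validate":    ["write_db2", "reject"],
    "write_db2":   ["update_vsam"],
    "update_vsam": ["end"],
    "reject":      ["end"],
    "end":         [],
}

def all_paths(graph, node="start", path=None):
    """Enumerate every distinct journey from start to a terminal node."""
    path = (path or []) + [node]
    if not graph[node]:
        yield path
        return
    for nxt in graph[node]:
        yield from all_paths(graph, nxt, path)

def path_edges(p):
    return {(p[i], p[i + 1]) for i in range(len(p) - 1)}

def minimal_edge_cover(paths):
    """Greedily choose the fewest paths covering every edge (all-edges coverage)."""
    uncovered = set().union(*(path_edges(p) for p in paths))
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(uncovered & path_edges(p)))
        chosen.append(best)
        uncovered -= path_edges(best)
    return chosen

paths = list(all_paths(edges))      # every distinct journey = a test case
tests = minimal_edge_cover(paths)   # smallest covering set to execute
```

For this small graph there are two distinct journeys (the happy path and the rejection path), and both are needed to cover all edges; on larger models the greedy reduction typically discards many redundant paths.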
The data and automated test logic needed to execute the test cases can likewise be defined simply at the model level, using visual builders and fill-in-the-blank forms. Test Modeller then compiles the end-to-end test scripts automatically, using a “transparency layer” to execute the tests across all major mainframe data sources. Run results are formulated automatically, for automated mainframe testing that is rapid, reliable, and can keep up with fast sprint cycles.
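To make the idea of compiling model-level definitions into executable scripts concrete, here is a minimal sketch under stated assumptions. The step dictionaries stand in for a fill-in-the-blank form, and the DB2-style SQL output is purely illustrative; the actual compilation performed by Test Modeller is not documented here.

```python
# Hypothetical model-level step definitions, as a fill-in-the-blank form
# might capture them: each block names an action, a target, and data.
steps = [
    {"action": "add",    "table": "EMPLOYEE", "values": {"ID": "1001", "NAME": "A. Smith"}},
    {"action": "read",   "table": "EMPLOYEE", "where":  {"ID": "1001"}},
    {"action": "delete", "table": "EMPLOYEE", "where":  {"ID": "1001"}},
]

def compile_step(step):
    """Compile one model block into an executable DB2-style SQL statement."""
    if step["action"] == "add":
        cols = ", ".join(step["values"])
        vals = ", ".join(f"'{v}'" for v in step["values"].values())
        return f"INSERT INTO {step['table']} ({cols}) VALUES ({vals})"
    cond = " AND ".join(f"{k} = '{v}'" for k, v in step["where"].items())
    if step["action"] == "read":
        return f"SELECT * FROM {step['table']} WHERE {cond}"
    if step["action"] == "delete":
        return f"DELETE FROM {step['table']} WHERE {cond}"
    raise ValueError(f"unknown action: {step['action']}")

script = [compile_step(s) for s in steps]  # the compiled end-to-end script
```

The same principle extends beyond SQL: each block type maps to a template for the target system (VSAM, IMS, MQ, and so on), so one model can drive execution across multiple data sources.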
End-to-end test automation, whatever the data sources
Watch this example of rigorously testing an HR process involving DB2 and VSAM to learn how end-to-end mainframe tests can be designed and maintained rapidly from simple flowchart models. You will see how:
Easy-to-use flowcharts design complete test cases visually, mapping the end-to-end journeys that data can take through joined-up mainframe data sources.
A transparency layer created by Ostia’s Portus enables the tests to interact with all major mainframe data sources, including DB2, VSAM, QSAM/ISAM, IMS, CICS and MQ.
Calls to mainframe data sources are defined quickly at the model level, using a simple form to specify how test steps (blocks) should add, update, read, and delete data.
Over 500 synthetic test data functions define any data needed for complete test coverage. The dynamic functions are defined using a visual data builder and resolve “just in time” as tests are created, ensuring complete and up-to-date test data for every automated test.
A visual, “fill-in-the-blanks” automation builder defines automated test logic needed to execute the end-to-end tests. Out-of-the-box action packs are provided, while objects and actions can be synchronised from a range of open source, homegrown, and commercial frameworks.
Automated coverage algorithms generate the smallest set of tests required to test each distinct end-to-end journey through connected mainframe systems. The rigorous tests and data come equipped with the calls needed to interact with multiple mainframe systems, and cover every distinct negative and positive scenario in a number of tests that can feasibly be executed.
A high-speed, automated workflow compares the resultant data to expected results defined at the model level, rapidly and reliably formulating test run results.
Test maintenance takes minutes: updating the test logic in flowcharts re-generates complete test cases, automated tests, and data.
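The “just in time” synthetic data functions described above can be pictured as a registry of generators resolved at the moment a test is created, so values are never stale. The sketch below is a hypothetical illustration of that pattern; the function names (`employee_id`, `future_date`) and the `{{...}}` placeholder syntax are assumptions for the example, not Test Modeller’s actual notation.

```python
import random
import string
from datetime import date, timedelta

# Hypothetical registry of synthetic data functions. Each resolves
# "just in time", when a test is generated, so data is never stale.
GENERATORS = {}

def data_function(name):
    """Decorator registering a generator under a placeholder name."""
    def register(fn):
        GENERATORS[name] = fn
        return fn
    return register

@data_function("employee_id")
def employee_id():
    return "".join(random.choices(string.digits, k=6))

@data_function("future_date")
def future_date(days_ahead=30):
    return (date.today() + timedelta(days=days_ahead)).isoformat()

def resolve(template):
    """Replace {{fn}} placeholders with freshly generated values."""
    out = dict(template)
    for key, val in out.items():
        if isinstance(val, str) and val.startswith("{{") and val.endswith("}}"):
            out[key] = str(GENERATORS[val[2:-2]]())
    return out

# Resolved only when the test is generated, never hard-coded in the model:
row = resolve({"ID": "{{employee_id}}", "START": "{{future_date}}", "DEPT": "HR"})
```

Because the model stores only the placeholder, re-generating tests after a model change also re-generates the data, which is what keeps maintained tests and their data consistent.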