Types of CAST tools
There are numerous types of computer-aided software testing (CAST) tools. Here, I'm going to describe some of them:
1. Requirements testing tools provide automated support for the verification and validation of requirements models, such as consistency checking and animation.
2. Static analysis tools provide information about the quality of the software by examining the code, rather than by running test cases through the code. Static analysis tools usually give objective measurements of various characteristics of the software, such as cyclomatic complexity and other quality metrics.
3. Test design tools generate test cases from a specification, which must normally be held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.
4. Test data preparation tools enable data to be selected from existing databases, or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
5. Character-based test running tools provide test capture and replay facilities for dumb-terminal-based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
6. GUI test running tools provide test capture and replay facilities for WIMP-interface-based applications. The tools simulate mouse movement, button clicks and keyboard inputs, and can recognize GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
7. Test harnesses and drivers are used to execute software under test, which may not have a user interface, or to run groups of existing automated test scripts, which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software to cope with nuclear meltdowns).
8. Performance test tools have two main facilities: load generation and test transaction measurement. Load is generated either by driving the application through its user interface or by test drivers that simulate the load the application places on its architecture. Records of the number of transactions executed are logged. When the application is driven through its user interface, response-time measurements are taken for selected transactions and logged as well. Performance testing tools normally provide reports based on the test logs, and graphs of load against response time.
9. Dynamic analysis tools provide run-time information on the state of the executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer-arithmetic mistakes and other errors that are difficult to find 'statically'.
10. Debugging tools are mainly used by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement, and to set and examine program variables.
11. Comparison tools are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
12. Test management tools may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications and results. Some tools support the project management aspects of testing, for example scheduling of tests, logging of results and management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and retesting of incidents. Most test management tools provide extensive reporting and analysis facilities.
13. Coverage measurement (or analysis) tools provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. The instrumentation code dynamically captures coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analyzed and coverage statistics are generated. Most tools provide statistics on the most common coverage measures, such as statement or branch coverage.
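To make item 2 concrete, here is a minimal sketch of the kind of measurement a static analysis tool computes, using Python's `ast` module. The counting rule (one plus the number of decision points) is a simplification; real tools recognize many more constructs and language-specific cases:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    # Node types treated as decision points in this simplified rule.
    decisions = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.ExceptHandler, ast.And, ast.Or)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))   # prints 3: two branches plus one
```

Note that the code is analyzed without ever being run, which is the defining property of static analysis.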
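Item 4's select/edit workflow can be sketched in a few lines. The CSV "database" here is hypothetical, and real preparation tools handle many more formats, but the select-then-anonymise pattern is the same:

```python
import csv
import io

# Hypothetical database export; a real tool would read many file formats.
source = "id,name,balance\n1,alice,100\n2,bob,-5\n3,carol,0\n"

rows = list(csv.DictReader(io.StringIO(source)))

# Select: keep only accounts with a non-negative balance.
selected = [row for row in rows if int(row["balance"]) >= 0]

# Edit/manipulate: anonymise names before use in tests.
for i, row in enumerate(selected, start=1):
    row["name"] = f"user{i}"

print(selected)   # two rows, names replaced with user1, user2
```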
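A custom-written harness of the kind item 7 mentions can be very small. This sketch drives a unit with no user interface (`add` is just a stand-in) through a table of cases and reports pass/fail counts:

```python
def add(a, b):
    """The unit under test; in practice it might have no user interface."""
    return a + b

def run_harness(fn, cases):
    """Drive fn through a table of (args, expected) pairs; collect results."""
    report = []
    for args, expected in cases:
        actual = fn(*args)
        report.append({"args": args, "expected": expected,
                       "actual": actual, "passed": actual == expected})
    return report

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
report = run_harness(add, cases)
passed = sum(1 for r in report if r["passed"])
print(f"{passed}/{len(report)} cases passed")   # prints "3/3 cases passed"
```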
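Item 8's two facilities, load generation and transaction measurement, can be illustrated with a thread pool acting as concurrent "virtual users". The workload here is a trivial computation standing in for a real application transaction:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> float:
    """One simulated application transaction; returns its response time."""
    start = time.perf_counter()
    sum(range(10_000))                      # stand-in for real work
    return time.perf_counter() - start

# Load generation: four concurrent "virtual users" run 20 transactions.
with ThreadPoolExecutor(max_workers=4) as pool:
    timings = list(pool.map(lambda _: transaction(), range(20)))

# Transaction measurement: log counts and response-time statistics.
print(f"transactions executed: {len(timings)}")
print(f"mean response time: {statistics.mean(timings):.6f}s")
```

A real performance tool would also plot load against these response times, as the item describes.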
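For item 9, Python's standard `tracemalloc` module shows the flavour of run-time memory monitoring: it records allocations while the program executes, which is how such tools spot leaks. The `allocate` function is a simulated workload, not a real leak:

```python
import tracemalloc

def allocate():
    # Simulated allocations; a real leak would be memory never released.
    return [bytearray(10_000) for _ in range(50)]

tracemalloc.start()
data = allocate()
snapshot = tracemalloc.take_snapshot()   # capture state while data is alive
tracemalloc.stop()

total = sum(stat.size for stat in snapshot.statistics("lineno"))
print(f"traced allocations: {total} bytes")
```

Grouping statistics by line number is what lets a dynamic analysis tool point at the code responsible for the memory.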
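The masking capability in item 11 is easy to sketch: compare records field by field, but skip any field the tester has declared irrelevant (timestamps are the classic case). The record shapes here are hypothetical:

```python
def compare(actual, expected, masked=()):
    """Field-level comparison that can 'ignore' masked fields."""
    diffs = []
    for key, want in expected.items():
        if key in masked:
            continue                      # masked field: skipped entirely
        got = actual.get(key)
        if got != want:
            diffs.append((key, want, got))
    return diffs

expected = {"user": "alice", "total": 42, "timestamp": "2017-01-01"}
actual   = {"user": "alice", "total": 42, "timestamp": "2018-06-30"}

print(compare(actual, expected))                          # timestamp differs
print(compare(actual, expected, masked=("timestamp",)))   # prints "[]"
```

Standalone comparators apply the same idea to whole files, database rows, or screen regions.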
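Finally, item 13's instrument-then-log approach can be mimicked with Python's `sys.settrace` hook, which plays the role of the instrumentation code: it records every executed line of the function under test, and the statistics are computed afterwards. The three-statement body count is hard-coded for this toy example:

```python
import sys

executed = set()

def trace_lines(frame, event, arg):
    # "Instrumentation": record each executed line of the function under test.
    if event == "line" and frame.f_code.co_name == "under_test":
        executed.add(frame.f_lineno)
    return trace_lines

def under_test(n):
    if n > 0:
        return "positive"
    return "non-positive"

sys.settrace(trace_lines)
under_test(5)            # exercises only the n > 0 branch
sys.settrace(None)

body_lines = 3           # statements in under_test: if, return, return
print(f"statement coverage: {len(executed)}/{body_lines}")
```

Running a second case such as `under_test(-1)` would cover the remaining statement, which is exactly how coverage tools guide the addition of tests.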
puneet- Posts : 21