Testing throughout the software life cycle (ISTQB)

I. Software development models

a) V-model (sequential development model)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are: component (unit) testing; integration testing; system testing; acceptance testing.

II. Test levels

a) Component testing

Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that is separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behaviour (e.g. memory leaks), as well as structural testing (e.g. decision coverage). One approach to component testing is to prepare and automate test cases before coding.

This is called a test-first approach or test-driven development. Component integration testing tests the interactions between software components and is done after component testing; system integration testing tests the interactions between different systems and may be done after system testing.

Testing of specific non-functional characteristics (e.g. performance) may also be included. In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing. System testing should investigate both functional and non-functional requirements of the system.

Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). It is performed when the software, or its environment, is changed. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing.
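
The idea can be illustrated with a minimal automated regression suite. This is a sketch only; the `discount` function and its expected values are invented for illustration and do not come from the syllabus.

```python
# Minimal regression-test sketch: the same suite is re-run after each
# modification to detect defects introduced or uncovered by the change.
# The function and the test values are illustrative.

def discount(price, is_member):
    """Already-tested behaviour: members get 10% off."""
    return price * 0.9 if is_member else price

# Regression suite: the captured, expected behaviour of the tested program.
REGRESSION_CASES = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((0.0, True), 0.0),
]

def run_regression(func):
    """Re-run every stored case; report any deviation from expected results."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = func(*args)
        if abs(actual - expected) > 1e-9:
            failures.append((args, expected, actual))
    return failures

# After any change to discount() or its environment, the whole suite runs again:
assert run_regression(discount) == []
```

The key property is that the suite is fixed and repeatable, so a change that silently alters existing behaviour is caught immediately.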

Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades. Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment as well as of the changed software.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required. Refer: software testing.


Validation with the RAD (Rapid Application Development) process is an early and major activity. Extreme Programming (XP) is currently one of the most well-known agile development life cycle models.

See [Agile] for ideas behind this approach. The methodology claims to be more human-friendly than traditional development methods.

XP has several defining characteristics. With XP there are numerous iterations, each requiring testing. XP developers write every test case they can think of and automate them. Every time a change is made in the code it is component-tested and then integrated with the existing code, which is then fully integration-tested using the full set of test cases. This gives continuous integration, by which we mean that changes are incorporated continuously into the software build.
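
That cycle can be sketched with the standard `unittest` runner. Everything here is hypothetical; a real project would delegate this loop to a CI server rather than a hand-rolled function.

```python
# Sketch of the XP cycle described above: on every code change, run the
# component tests first, then the full integration suite.
# The suites and the on_change() hook are invented for illustration.
import unittest

class ComponentTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

class IntegrationTests(unittest.TestCase):
    def test_components_together(self):
        self.assertEqual(sum([1, 1]), 2)

def run_suite(test_case_class):
    """Run one TestCase class and report whether it passed."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case_class)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

def on_change():
    """Called after every change: component tests, then integration tests."""
    if not run_suite(ComponentTests):
        return "component tests failed"
    if not run_suite(IntegrationTests):
        return "integration tests failed"
    return "build ok"

assert on_change() == "build ok"
```

Because the full set of automated test cases runs on every change, a defect introduced by the latest change is attributed to that change rather than discovered weeks later.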

XP is not about doing extreme activities during the development process, it is about doing known, value-adding activities in an extreme manner. In summary, whichever life cycle model is being used, there are several characteristics of good testing: for every development activity there is a corresponding testing activity; each test level has test objectives specific to that level; the analysis and design of tests for a given test level should begin during the corresponding development activity; and testers should be involved in reviewing documents as soon as drafts are available.

The V-model for testing was introduced in Section 2. This section looks in more detail at the various test levels. The key characteristics for each test level are discussed and defined to be able to more clearly separate the various test levels. A thorough understanding and definition of the various test levels will identify missing areas and prevent overlap and repetition. Sometimes we may wish to introduce deliberate overlap to address specific risks.

Understanding whether we want overlaps and removing the gaps will make the test levels more complementary, thus leading to more effective and efficient testing. Component testing, also known as unit, module or program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that is separately testable. Component testing may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system.

Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested (see Figure 2). Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g. memory leaks), as well as structural testing (e.g. decision coverage).
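
The stub/driver distinction can be sketched like this. The component names (`price_component`, `tax_service_stub`) are invented for illustration.

```python
# A stub is CALLED BY the component under test: it stands in for a
# dependency that is missing or not yet built.
# A driver CALLS the component under test: it is the simple test harness.
# All names are invented for illustration.

def tax_service_stub(amount):
    """Stub: replaces the real tax service with simple canned behaviour."""
    return amount * 0.2

def price_component(net, tax_service):
    """The component under test; it calls its (stubbed) dependency."""
    return net + tax_service(net)

def driver():
    """Driver: test code that calls the component under test and checks it."""
    result = price_component(100.0, tax_service_stub)
    assert result == 120.0
    return result

driver()
```

The stub keeps the simulated interface deliberately simple, so that a failure in the test points at the component itself and not at the stand-in.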

Test cases are derived from work products such as the software design or the data model. Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and in practice usually involves the programmer who wrote the code.

Sometimes, depending on the applicable level of risk, component testing is carried out by a different programmer, thereby introducing independence. Defects are typically fixed as soon as they are found, without formally recording the incidents found. One approach in component testing, used in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.

This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass. Integration testing tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware, or interfaces between systems. Note that integration testing should be differentiated from other integration activities.

Integration testing is often carried out by the integrator, but preferably by a specific integration tester or test team. There may be more than one level of integration testing and it may be carried out on test objects of varying size: for example, component integration testing after component testing, and system integration testing after system testing. The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which may lead to an increased risk. This leads to varying approaches to integration testing. One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole.

Big-bang testing has the advantage that everything is finished before integration testing starts. There is no need to simulate as yet unfinished parts.

The major disadvantage is that in general it is time-consuming and difficult to trace the cause of failures with this late integration. So big-bang integration may seem like a good idea when planning the project, being optimistic and expecting to find no problems. If one thinks integration testing will find defects, it is good practice to consider whether time might be saved by breaking down the integration test process. Another extreme is that all programs are integrated one by one, and a test is carried out after each step (incremental testing).

Between these two extremes, there is a range of variants. The incremental approach has the advantage that the defects are found early in a smaller assembly, when it is relatively easy to detect the cause. A disadvantage is that it can be time-consuming since stubs and drivers have to be developed and used in the test. Within incremental integration testing a range of possibilities exists, partly depending on the system architecture, such as top-down, bottom-up and functional incremental integration.

The preferred integration sequence and the number of integration steps required depend on the location in the architecture of the high-risk interfaces. The best choice is to start integration with those interfaces that are expected to cause most problems. Doing so prevents major defects at the end of the integration test stage.

Ideally testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, they can be developed in the order required for most efficient testing.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating component A with component B they are interested in testing the communication between the components, not the functionality of either one.
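
One way to express "test the communication, not the functionality" is to check only that A hands the right data to B, for example with a mock object. The component names here are invented.

```python
# Integration-focused test: verify that component A calls component B with
# the expected data, without re-testing either component's own logic.
# component_a and the billing interface are invented for illustration.
from unittest.mock import Mock

def component_a(items, billing):
    """Component A: computes an order total and hands it to billing (B)."""
    total = sum(qty * price for qty, price in items)
    billing.charge(total)
    return total

# The test replaces component B with a mock and checks only the interaction:
billing_mock = Mock()
component_a([(2, 3.0), (1, 4.0)], billing_mock)
billing_mock.charge.assert_called_once_with(10.0)
```

The functionality of the real billing component is covered by its own component tests; here only the interface between A and B is under test.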

Both functional and structural approaches may be used. Testing of specific non-functional characteristics (e.g. performance) may also be included in integration testing. System testing is most often the final test on behalf of development to verify that the system to be delivered meets the specification, and its purpose may be to find as many defects as possible.

Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager.

In some organizations system testing is carried out by a third-party team or by business analysts. Again the required level of independence is based on the applicable risk level, and this will have a high influence on the way system testing is organized. System testing should investigate both functional and non-functional requirements of the system.

Typical non-functional tests include performance and reliability. Testers may also need to deal with incomplete or undocumented requirements.

System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based (white-box) techniques may also be used to assess the thoroughness of testing elements such as menu dialog structure or web page navigation (see Chapter 4 for more on the various types of technique).
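
For instance, a decision table for two business-rule conditions can be written down and executed directly as test cases. The loan-approval rule below is invented purely for illustration.

```python
# Decision-table sketch: each entry combines the conditions and states the
# expected action; every combination becomes a test case.
# The loan-approval business rule is invented for illustration.

# (has_collateral, good_credit) -> expected decision
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "approve with review",
    (False, True):  "approve with review",
    (False, False): "reject",
}

def loan_decision(has_collateral, good_credit):
    """The system behaviour under test."""
    if has_collateral and good_credit:
        return "approve"
    if has_collateral or good_credit:
        return "approve with review"
    return "reject"

# One test case per decision-table column: every combination is exercised.
for conditions, expected in DECISION_TABLE.items():
    assert loan_decision(*conditions) == expected
```

The table doubles as documentation of the business rule and as a complete set of functional test cases for it.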

System testing requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and the test data (see Chapter 5 for more on configuration management). A system test is executed by the development organization in a properly controlled environment. The test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found by testing.

When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing. Acceptance testing is most often the responsibility of the user or customer, although other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system, part of the system or specific non-functional characteristics (e.g. usability) of the system. Acceptance testing is most often focused on a validation type of testing, whereby we are trying to determine whether the system is fit for purpose.

Finding defects should not be the main focus in acceptance testing. For example, a large-scale system integration test may come after the acceptance of a system. Within the acceptance test for a business-supporting system, two main test types can be distinguished; as a result of their special character, they are usually prepared and executed separately. The user acceptance test focuses mainly on the functionality, thereby validating the fitness-for-use of the system by the business user, while the operational acceptance test (also called production acceptance test) validates whether the system meets the requirements for operation.

The user acceptance test is performed by the users and application managers. In terms of planning, the user acceptance test usually links tightly to the system test and will, in many cases, be organized partly overlapping in time.

If the system to be tested consists of a number of more or less independent subsystems, the acceptance test for a subsystem that complies with the exit criteria of the system test can start while another subsystem may still be in the system test phase.

In most organizations, system administration will perform the operational acceptance test shortly before the system is released. Other types of acceptance testing that exist are contract acceptance testing and compliance acceptance testing. Acceptance should be formally defined when the contract is agreed.

Compliance acceptance testing or regulation acceptance testing is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.

If the system has been developed for the mass market, e.g. commercial off-the-shelf software (COTS), then testing it for individual users or customers is not practical or even possible. Feedback is needed from potential or existing users in their market before the software product is put out for sale commercially. Very often this type of system undergoes two stages of acceptance test. The first is called alpha testing, which takes place at the developing organization's site. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.

Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization, where the defects are repaired. Test types are introduced as a means of clearly defining the objective of a certain test level for a programme or project. We need to think about different types of testing because testing the functionality of the component or system may not be sufficient at each level to meet the overall test objectives.

Focusing the testing on a specific test objective and, therefore, selecting the appropriate type of test makes it easier to make and communicate decisions against test objectives. A test type is focused on a particular test objective, which could be the testing of a function to be performed by the component or system; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the component or system; or related to changes, i.e. confirming that defects have been fixed (confirmation testing) and looking for unintended side-effects (regression testing).

Depending on its objectives, testing will be organized differently. For example, component testing aimed at performance would be quite different from component testing aimed at achieving decision coverage.

The functions that a system, subsystem or component are to perform are typically described in a requirements specification, a functional specification, or in use cases. Functional tests are based on these functions, described in documents or understood by the testers, and may be performed at all test levels (e.g. tests for components may be based on a component specification).

Functional testing considers the specified behavior and is often also referred to as black-box testing. This is not entirely true, since black-box testing also includes non-functional testing (see Section 2). Function or functionality testing can, based upon the ISO standard, be done focusing on suitability, interoperability, security, accuracy and compliance. Security testing, for example, investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders. Testing functionality can be done from two perspectives: requirements-based or business-process-based.

Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests. A good way to start is to use the table of contents of the requirements specification as an initial test inventory or list of items to test or not to test. We should also prioritize the requirements based on risk criteria, if this is not already done in the specification, and use this to prioritize the tests. This will ensure that the most important and most critical tests are included in the testing effort.
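
One lightweight way to apply that risk-based prioritization to a test inventory is sketched below; the requirement IDs and risk scores are invented for illustration.

```python
# Sketch: derive a test inventory from the requirements specification's
# table of contents and order it by a risk score, highest risk first.
# IDs, titles and scores are invented for illustration.

requirements = [
    {"id": "REQ-1", "title": "User login", "risk": 9},
    {"id": "REQ-2", "title": "Report export", "risk": 3},
    {"id": "REQ-3", "title": "Payment processing", "risk": 10},
]

# The most important and most critical items are tested first.
test_order = sorted(requirements, key=lambda r: r["risk"], reverse=True)
assert [r["id"] for r in test_order] == ["REQ-3", "REQ-1", "REQ-2"]
```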

Business-process-based testing uses knowledge of the business processes. Business processes describe the scenarios involved in the day-to-day business use of the system. For example, a personnel and payroll system may have a business process along the lines of: someone joins the company, he or she is paid on a regular basis, and he or she finally leaves the company.
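
The payroll example can be sketched as a scenario-level test that walks through the whole business process end to end. The `PayrollSystem` class and its API are invented for illustration.

```python
# Business-process-based test sketch for the personnel/payroll example:
# join the company -> get paid regularly -> leave the company.
# The PayrollSystem class is invented for illustration.

class PayrollSystem:
    def __init__(self):
        self.employees = {}

    def join(self, name, salary):
        self.employees[name] = {"salary": salary, "paid": 0}

    def run_payroll(self):
        for emp in self.employees.values():
            emp["paid"] += emp["salary"]

    def leave(self, name):
        del self.employees[name]

# The test follows the business process end to end, not a single function.
system = PayrollSystem()
system.join("Alice", 3000)
system.run_payroll()
system.run_payroll()  # paid on a regular basis
assert system.employees["Alice"]["paid"] == 6000
system.leave("Alice")
assert "Alice" not in system.employees
```

Unlike a requirements-based test of one function, this exercises the sequence of steps a real user of the business process would go through.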

Use cases originate from object-oriented development, but are nowadays popular in many develop-ment life cycles. They also take the business processes as a starting point, although they start from tasks to be performed by users. Use cases are a very useful basis for test cases from a business perspective.

The techniques used for functional testing are often specification-based, but experience-based techniques can also be used (see Chapter 4 for more on test techniques). Test conditions and test cases are derived from the functionality of the component or system.

As part of test designing, a model may be developed, such as a process model, state transition model or a plain-language specification. A second target for testing is the testing of the quality characteristics, or non-functional attributes of the system or component or integration group. Here we are interested in how well or how fast something is done. We are testing something that we need to measure on a scale of measurement, for example time to respond.
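
Measuring "how fast" on a scale, e.g. time to respond, can be sketched as follows; the operation and the response-time threshold are invented for illustration.

```python
# Non-functional test sketch: measure elapsed time for an operation and
# compare it against a (hypothetical) response-time requirement.
import time

def operation():
    """Stand-in for the operation whose response time is being measured."""
    return sum(range(100_000))

start = time.perf_counter()
operation()
elapsed = time.perf_counter() - start

MAX_RESPONSE_SECONDS = 2.0  # invented requirement, for illustration only
assert elapsed < MAX_RESPONSE_SECONDS, f"too slow: {elapsed:.3f}s"
```

The point is that the test verdict comes from a measurement on a scale against a stated requirement, not from a functional pass/fail of the output.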

Non-functional testing, like functional testing, is performed at all test levels. Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. Many have tried to capture software quality in a collection of characteristics and related sub-characteristics. In these models some elementary characteristics keep on reappearing, although their place in the hierarchy can differ.

This set reflects a major step towards consensus in the IT industry and thereby addresses the general notion of software quality. The ISO standard defines six quality characteristics and the subdivision of each quality characteristic into a number of sub-characteristics. This standard is getting more and more recognition in the industry, enabling development, testing and their stakeholders to use a common terminology for quality characteristics and thereby for non-functional testing.

The third target of testing is the structure of the system or component. If we are talking about the structure of a system, we may call it the system architecture. Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items.

It can occur at any test level, although it is true to say that it tends to be applied mostly at component and integration testing, and generally is less likely at higher test levels, except for business-process testing.
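
Coverage of structural elements can be sketched by recording which decision outcomes a test suite actually executes. The instrumentation here is hand-rolled for illustration; in practice a coverage tool would collect this automatically.

```python
# Structural-coverage sketch: hand-instrument a function's two decision
# outcomes and compute what fraction of them the tests exercised.
# The classify function is invented for illustration.

covered = set()

def classify(n):
    if n >= 0:
        covered.add("n >= 0: true")
        return "non-negative"
    else:
        covered.add("n >= 0: false")
        return "negative"

ALL_BRANCHES = {"n >= 0: true", "n >= 0: false"}

# A suite with only one test reaches 50% decision coverage...
classify(5)
assert len(covered) / len(ALL_BRANCHES) == 0.5

# ...adding a test for the other outcome brings it to 100%.
classify(-5)
assert len(covered) / len(ALL_BRANCHES) == 1.0
```

This is exactly the sense in which structural testing measures the thoroughness of a test suite: the coverage ratio says which structural elements the existing tests have and have not reached.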


