
Software Evaluation and Testing KPA Proposal

Published: 2008-02-03 15:55 | Author: Dave Wang | Source: sweforum.net


  1. Senior management reviews and approves the overall evaluation and testing objectives for the software system.

  2. Senior management reviews and approves that the system has met those criteria prior to installation.

  ______________________________________________________

  | |

  Author's note: One of the biggest enemies of quality is unreasonable schedules. If the team is going to be measured solely on meeting dates, then the test plan will be bypassed. Management must measure functionality, resources, schedules, and quality in determining a project's success, not just dates.

  |_____________________________________________________|

  4.3 Ability To Perform

  ________________________________________________________________________

  Ability 1 Adequate resources and funding are provided for planning and executing the evaluation and testing tasks.

  1. Sufficient numbers of skilled individuals are available for performing the evaluation and testing activities, including:

  - overall evaluation/test planning,

  - evaluation/test coordination,

  - evaluation/test case design,

  - evaluation/test case implementation,

  - evaluation/test execution,

  - evaluation/test results verification,

  - evaluation/test coverage analysis, and

  - defect logging and tracking.

  2. Tools to support the testing effort are made available (see the sketch after this list), including:

  - test case design tools,

  - test data generators,

  - test drivers, and

  - test coverage monitors.

  3. A test environment configuration is made available, including:

  - hardware and software, dedicated to the testers, which mirrors the intended production configuration.
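
  As a concrete illustration of one tool from the list in item 2 above, here is a minimal sketch of a test data generator in Python. The names (Customer, make_customers) and the record layout are hypothetical illustrations, not part of the KPA text; the key design point is the fixed seed, which makes the generated data reproducible across test runs.

```python
# Minimal sketch of a test data generator (hypothetical record layout).
import random
import string
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: int
    name: str
    credit_limit: int  # whole dollars

def make_customers(n: int, seed: int = 42) -> list[Customer]:
    """Generate n pseudo-random customer records for test input."""
    rng = random.Random(seed)  # fixed seed => reproducible test data
    customers = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_uppercase, k=8))
        customers.append(Customer(i, name, rng.randrange(0, 50_000, 100)))
    return customers

if __name__ == "__main__":
    for c in make_customers(3):
        print(c)
```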

  Ability 2 Members of the software testing staff receive required training to perform their technical assignments.

  ______________________________________________________

  | |

  Examples of training for evaluation and test include:

  - evaluation and test planning;

  - criteria for evaluation/test readiness and completion;

  - use of the evaluation/testing methods and tools; and

  - performing peer reviews.

  |_____________________________________________________|

  Ability 3 Members of the software engineering staff whose deliverables will be evaluated and tested receive training on how to produce testable deliverables and orientation on the overall evaluation and testing disciplines to be applied to the project.

  ______________________________________________________

  | |

  Refer to Ability 5 for an example of a testable deliverable.

  |_____________________________________________________|

  Ability 4 The project manager and all of the software managers receive orientation in the technical aspects of the evaluation/testing criteria and disciplines to be applied to the project.

  ______________________________________________________

  | |

  Examples of orientation include:

  - the evaluation/testing methods and tools to be used;

  - the entry and exit criteria for the various levels of evaluation/testing; and

  - the defect resolution process.

  |_____________________________________________________|

  Ability 5 The software engineers produce testable deliverables.

  ______________________________________________________

  | |

  An example of a testable deliverable would be a requirements specification that had the following characteristics:

  - the functional rules are written at a deterministic level of detail (i.e., given a set of inputs and an initial system state you should be able to follow the rules in the specification and determine the outputs and the final system state);

  - the specification is non-redundant;

  - the specification is unambiguous; and

  - the various requirements follow a consistent standard (e.g., standards for user interface definitions are followed which define function keys, intra-screen navigation, inter-screen navigation).

  |_____________________________________________________|
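
  To make the "deterministic level of detail" characteristic concrete, here is a minimal sketch in Python: a functional rule written so that the inputs and initial state fully determine the outputs and final state can be expressed, and checked, as a pure function. The withdrawal rule and its names are hypothetical, not taken from the proposal.

```python
# Hypothetical deterministic functional rule: inputs plus initial state
# fully determine the outputs and the final state, so exact expected
# results can be stated for every test case.
def apply_withdrawal(balance: int, amount: int) -> tuple[int, str]:
    """Rule: a withdrawal succeeds iff amount <= balance.
    Returns (final_balance, output_message)."""
    if amount <= balance:
        return balance - amount, "DISPENSED"
    return balance, "INSUFFICIENT FUNDS"

# Deterministic rules are directly testable:
assert apply_withdrawal(100, 40) == (60, "DISPENSED")
assert apply_withdrawal(100, 200) == (100, "INSUFFICIENT FUNDS")
```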

  4.4 Activities Performed

  ________________________________________________________________________

  Activity 1 The overall evaluation and testing effort is planned and the plans are documented.

  These plans:

  1. Identify the risks and exposures if defects propagate through the various project phases and into production. This information is used to determine how much evaluation and testing needs to be done.

  ______________________________________________________

  | |

  Examples of risks to be evaluated are:

  - the potential scrap and rework and resulting cost and schedule overruns which might be caused by defects in the requirements specifications;

  - the potential cost per unit of time for system down time in production;

  - the potential cost to customers and end users of inaccurate processing; and

  - the potential risk to human lives in safety critical applications.

  Note: The premise here is that testing is essentially an insurance policy. The overall evaluation and test strategy and its associated costs should be proportional to the potential bottom line risks which defects could cause.

  |_____________________________________________________|
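
  The insurance-policy premise in the note can be made quantitative. A minimal sketch with made-up probabilities and costs: expected exposure is the probability that a defect class escapes times the cost if it does, and the test budget is sized against the total.

```python
# Hypothetical numbers illustrating the "insurance policy" premise:
# exposure = probability a defect escapes * cost if it does.
risks = {
    "requirements scrap/rework": (0.30, 400_000),    # (probability, cost $)
    "production downtime":       (0.10, 1_200_000),
    "inaccurate processing":     (0.05, 2_000_000),
}

total_exposure = sum(p * cost for p, cost in risks.values())
print(f"Total expected loss without testing: ${total_exposure:,.0f}")
# A test budget well below this figure is a rational "premium".
```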

  2. Identify the software project deliverables to be evaluated/tested.

  ______________________________________________________

  | |

  Examples of software project deliverables to be evaluated/tested are:

  - requirements specifications;

  - design specifications;

  - code;

  - user manuals and built in help facilities;

  - training manuals, courseware, and training support systems;

  - data conversion procedures and data conversion support systems;

  - hardware/software installation procedures and support systems;

  - production cutover procedures and support systems (e.g., code that creates a temporary bridge between an existing system and its replacement, allowing some sites to run on the old and some on the new until full cutover is complete).

  - production problem management procedures and support systems (e.g., the production help desk).

  - product distribution procedures and support systems (i.e., the mechanisms for distributing updates and new releases, especially to widely distributed end users).

  - publications procedures and support systems (e.g., the mechanisms for physically publishing all of the copies of the manuals needed to support the system in production).

  |_____________________________________________________|

  3. For each deliverable to be evaluated/tested, determine the characteristics to be tested.

  ______________________________________________________

  | |

  Examples of characteristics to be evaluated/tested are:

  - functional integrity;

  - performance;

  - usability;

  - reliability, availability, serviceability;

  - portability (i.e., can this one code line be easily ported from one platform to another);

  - maintainability (i.e., can fixes and minor incremental improvements be easily made); and

  - extendibility (i.e., can major additions be made to the system without causing a major rewrite).

  |_____________________________________________________|

  4. Determine the qualitative and quantitative success criteria for each deliverable and each characteristic evaluated and tested for the deliverable.

  ______________________________________________________

  | |

  An example of the functional test criteria for code could be:

  - the code is tested to verify that 100% of all functional variations derived from the requirements, fully sensitized for the observability of defects, have been run successfully; and

  - 100% of the code's statements and branch vectors have been executed.

  |_____________________________________________________|
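
  A minimal sketch of how such quantitative success criteria might be checked mechanically; the counts would come from a coverage monitor and a functional-variation tracker, and the numbers below are hypothetical.

```python
# Hypothetical exit-criteria check for the functional test of code.
def exit_criteria_met(stmt_cov: float, branch_cov: float,
                      variations_passed: int, variations_total: int) -> bool:
    """All criteria above must be at 100% to declare success."""
    return (stmt_cov >= 1.0 and branch_cov >= 1.0
            and variations_passed == variations_total)

print(exit_criteria_met(1.0, 0.97, 412, 412))  # False: branches only at 97%
```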

  5. Determine the methods and tools required to evaluate/test each deliverable for each of its desired characteristics.

  ______________________________________________________

  | |

  An example of evaluating a requirements specification might involve:

  - performing an ambiguity review;

  - walking use-case scenarios through the requirements to validate completeness;

  - building screen prototypes to validate the completeness;

  - creating cause-effect graphs from the functional requirements to validate that the precedence rules are clear;

  - doing a peer review with domain experts to validate completeness and accuracy;

  - doing a logical consistency check of the rules via a CASE tool; and

  - reviewing the test cases designed from the functional requirements with developers and end user / customers to validate the completeness and accuracy of the specifications from which they were derived.

  |_____________________________________________________|

  ______________________________________________________

  | |

  Examples of testing tools include:

  - test case design tools,

  - test data generators,

  - capture/playback tools,

  - test drivers,

  - test coverage monitors,

  - test results compare utilities,

  - memory leak detection tools,

  - debuggers, and

  - defect tracking tools.

  |_____________________________________________________|
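
  As an illustration of one of the simpler tools above, here is a minimal sketch of a test results compare utility in Python; the file layout is an assumption, and a real tool would add masking of volatile fields such as timestamps.

```python
# Minimal sketch of a test-results compare utility (hypothetical files).
import difflib
from pathlib import Path

def compare_results(expected: Path, actual: Path) -> bool:
    """Return True if the files match; print a unified diff otherwise."""
    exp = expected.read_text().splitlines(keepends=True)
    act = actual.read_text().splitlines(keepends=True)
    diff = list(difflib.unified_diff(exp, act,
                                     fromfile=str(expected),
                                     tofile=str(actual)))
    if diff:
        print("".join(diff))
        return False
    return True
```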

  6. Determine the stages (sometimes called levels) of testing and refine the quantitative and qualitative test criteria into entry and exit criteria for each phase of testing.

  ______________________________________________________

  | |

  Examples of stages of code based testing include:

  - unit testing with primary emphasis on white box structural testing, usually done by the coder;

  - component testing with primary emphasis on black box functional testing and inter-unit interface testing, with some initial performance testing and initial usability testing;

  - system testing with primary emphasis on inter-component interface testing, full thread functional testing, full performance testing, full usability testing, and full reliability/recoverability testing;

  - inter-system integration testing with primary emphasis on inter-application interface testing and inter-application performance testing; and

  - acceptance testing (a.k.a. beta testing) with emphasis on final validation of functional robustness, usability, and configuration testing.

  |_____________________________________________________|

  ______________________________________________________

  | |

  An example of refining the success criteria by test stage is:

  - the entry criteria into unit testing is a peer review of the code;

  - the exit criteria from unit test is correct execution of 100% of the code statements and branch vectors;

  - the entry criteria into component test is 100% execution of the “go right” statements and branches; and

  - the exit criteria from component test is 100% execution of all functional variations derived from the requirements specification.

  Note that the entry criteria into component test is less stringent than the exit criteria from unit test. This allows these activities to overlap in a controlled manner.

  |_____________________________________________________|
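
  A minimal sketch of the staged criteria above expressed as data, so the gates can be printed or checked by a build script. The stage names and thresholds mirror the example; the data layout itself is a hypothetical illustration.

```python
# Stage gates from the example above, as checkable data.
STAGES = [
    {"name": "unit test",
     "entry": "peer review of the code complete",
     "exit":  "100% of statements and branch vectors executed"},
    {"name": "component test",
     # Entry is deliberately weaker than the previous exit, which is
     # what allows the stages to overlap in a controlled manner.
     "entry": "100% of the 'go right' statements and branches executed",
     "exit":  "100% of functional variations from the requirements run"},
]

for s in STAGES:
    print(f"{s['name']}: ENTER when {s['entry']}; EXIT when {s['exit']}")
```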

  7. For each deliverable, decompose it into units for evaluation and test and determine the optimal sequence for evaluating/testing the units.

  ______________________________________________________

  | |

  For example, the unit testing of the code might be done in a sequence which minimizes the need for building scaffolding code to emulate interfaces to code not yet tested.

  |_____________________________________________________|
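
  A minimal sketch of the scaffolding in question, using Python's unittest.mock: when the unit under test depends on a unit that has not yet been built or tested, a stub supplies canned answers. The invoice and tax-service names are hypothetical.

```python
# Stubbing an untested dependency so a unit can be tested in isolation.
from unittest.mock import Mock

def compute_invoice(order_total: float, tax_service) -> float:
    """Unit under test: depends on a tax service not yet tested."""
    return order_total + tax_service.tax_for(order_total)

# Scaffolding: a stub stands in for the untested tax service.
stub_tax = Mock()
stub_tax.tax_for.return_value = 8.0

assert compute_invoice(100.0, stub_tax) == 108.0
# Sequencing units so dependencies are tested first lets the real
# component replace this stub, which is the point made above.
```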

  8. Define the methods and procedures for defect reporting and tracking to be used by the project.

  Activity 2 Reconcile the evaluation/test plan with the overall development plan.

  1. Verify the evaluation and test resources and schedules against the project schedules and constraints.

  2. Reconcile the desired sequencing of units for evaluation and test against the availability of those units as defined in the development plan.

  3. Get concurrence on the defect reporting and tracking mechanism from the developers.

  Activity 3 Install the evaluation and testing infrastructure.

  1. Acquire and install the testing tools needed for this project.

  2. Acquire and install the test hardware and software configuration required to create and execute the tests.

  3. Train management and staff on the evaluation and testing methods and tools to be used.

  Activity 4 Perform the evaluation/testing for each deliverable, for each characteristic, at the designated test stages.

  1. Design the evaluation/test cases using the identified methods and tools.

  2. Physically implement the cases in their final “executable” form.

  3. Perform the evaluation / Execute the test cases.

  4. Verify the evaluation/test results against the expected results.

  5. Verify that the evaluation/tests fully covered their target objectives.

  6. Provide periodic reports as to the status of the evaluation/testing effort against the test plan.
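
  A minimal sketch of steps 1 through 4 above for a single test case, written pytest-style in Python; the discount rule and its boundary are hypothetical.

```python
# Hypothetical function under test: 10% off for orders of 10 or more.
def discount(price: float, qty: int) -> float:
    return price * qty * (0.9 if qty >= 10 else 1.0)

def test_discount_boundary():
    # Step 1 (design): the spec's boundary at qty == 10 drives the case.
    # Steps 2-4 (implement, execute, verify) are the assertions below.
    assert discount(10.0, 4) == 40.0    # below the boundary: no discount
    assert discount(10.0, 10) == 90.0   # at the boundary: 100.0 * 0.9
```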

  Activity 5 Defects detected are reported, tracked to closure, and analyzed for trends according to the project's defined software process.

  ______________________________________________________

  | |

  Examples of the kinds of data to be collected include:

  - defect description,

  - defect category,

  - severity of defect,

  - units causing/containing the defect,

  - units affected by the defect,

  - activity where the defect was introduced (i.e., root cause),

  - evaluation/test that identified the defect,

  - description of the scenario being run that identified the defects, and

  - expected results and actual results that identified the defect.

  |_____________________________________________________|
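
  A minimal sketch of a defect record carrying the data items listed above, as a Python dataclass; the field names are a hypothetical rendering of the list, not a mandated schema.

```python
# Defect record mirroring the data items listed above (hypothetical names).
from dataclasses import dataclass

@dataclass
class DefectRecord:
    description: str
    category: str
    severity: int            # e.g., 1 = critical ... 4 = cosmetic
    causing_units: list[str]
    affected_units: list[str]
    introduced_in: str       # activity where introduced (root cause)
    detected_by: str         # evaluation/test that identified it
    scenario: str            # scenario being run when it was found
    expected: str
    actual: str
    status: str = "open"     # tracked to closure
```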

  Activity 6 Perform regression testing as needed.

  1. Create regression test procedures and test libraries for use in revalidating changes to deliverables.

  2. Execute the regression test procedures and test libraries anytime modifications are made to already tested deliverables.
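
  A minimal sketch of such a regression test library in Python: stored inputs and expected ("golden") outputs are replayed after every modification. The JSON case layout and paths are assumptions for illustration.

```python
# Replay stored regression cases against the system under test.
import json
from pathlib import Path

def run_regression(cases_dir: Path, system_under_test) -> int:
    """Re-run every stored case; return the number of failures."""
    failures = 0
    for case_file in sorted(cases_dir.glob("*.json")):
        case = json.loads(case_file.read_text())
        actual = system_under_test(case["input"])
        if actual != case["expected"]:
            print(f"REGRESSION in {case_file.name}: "
                  f"expected {case['expected']!r}, got {actual!r}")
            failures += 1
    return failures
```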

  Activity 7 Revise the evaluation and test plan as needed.

  1. Review the effectiveness and efficiency of the evaluations and testing to date and the defects reported to refine the evaluation and test plan as needed.

  4.5 Measurement And Analysis

  ________________________________________________________________________

  Measurement 1 Measurements are made to determine the effectiveness of the evaluations and testing.

  ______________________________________________________

  | |

  An example of such a measurement is:

  - the defect removal rate by phase (i.e., the portion of defects removed in an evaluation/testing phase that were introduced in the corresponding development phase).

  |_____________________________________________________|
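
  A worked sketch of that measurement with hypothetical counts: the removal rate for a phase is the number of defects removed by that phase's evaluation/testing divided by the number introduced in the corresponding development phase.

```python
# Phase defect-removal rates (all counts hypothetical).
introduced = {"requirements": 120, "design": 200, "code": 380}
removed    = {"requirements":  96, "design": 150, "code": 342}

for phase in introduced:
    rate = removed[phase] / introduced[phase]
    print(f"{phase:>12}: {rate:.0%} of injected defects removed in phase")
```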

  Measurement 2 Measurements are made to determine the completeness of the software evaluations and testing.

  ______________________________________________________

  | |

  Examples of the measurements include:

  - using a functional coverage analyzer to determine what percentage of the requirements have been validated;

  - using code coverage monitors to determine what percentage of the software statements and branches were executed by the test cases.

  |_____________________________________________________|
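
  For the first measurement, a minimal homemade tally is enough to illustrate the idea; the requirement IDs below are hypothetical, and a real functional coverage analyzer would derive them from the traceability records.

```python
# Percentage of requirements validated by at least one executed test.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
validated_by_tests = {"REQ-1", "REQ-3"}

pct = len(validated_by_tests & requirements) / len(requirements)
print(f"{pct:.0%} of requirements validated")  # 50% of requirements validated
```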

  Measurement 3 Measurements are made to determine the quality of the software products.

  ______________________________________________________

  | |

  Examples of the measurements include:

  - an analysis of the mean time to failure and the mean time to fix by severity of defect;

  - an analysis of the distribution of defects by unit;

  - an analysis of the number and severity of the unresolved defects; and

  - an analysis of the closure rate for defects versus the rate new ones are being reported.

  |_____________________________________________________|
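
  The last measurement, closure rate versus arrival rate, is worth a worked sketch because the trend matters more than any single week's numbers; all counts below are hypothetical.

```python
# Weekly defect arrival vs. closure (hypothetical counts). A backlog
# that keeps growing signals the product is not converging on quality.
reported = [30, 28, 35, 22, 18]   # new defects reported per week
closed   = [12, 20, 30, 31, 33]   # defects closed per week

backlog = 0
for week, (r, c) in enumerate(zip(reported, closed), start=1):
    backlog += r - c
    print(f"week {week}: +{r} reported, -{c} closed, backlog = {backlog}")
```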

  4.6 Verifying Implementation

  ________________________________________________________________________

  Verification 1 The activities for software testing are reviewed with senior management on a periodic basis.

  ______________________________________________________

  | |

  Refer to Verification 1 of the Software Project Tracking and Oversight key process area for practices covering the typical content of senior management oversight reviews.

  |_____________________________________________________|

  Verification 2 The activities for software testing are reviewed with the project manager on both a periodic and event-driven basis.

  ______________________________________________________

  | |

  Refer to Verification 2 of the Software Project Tracking and Oversight key process area for practices covering the typical content of project management oversight reviews.

  |_____________________________________________________|

  Verification 3 The software quality assurance group reviews and/or audits the activities and work products for software evaluation and testing and reports the results.

  ______________________________________________________

  | |

  Refer to the Software Quality Assurance key process area.

  |_____________________________________________________|

  At minimum, the reviews and/or audits verify that:

  1. All parties are involved in the definition of the software evaluation and test approach and are committed to implementing it.

  2. The test criteria and test methods are appropriate in light of the defect impact risk assessment.

  3. The software project deliverables are testable as defined by the project's standards.

  4. The entry and exit criteria for each stage of evaluation and test are being adhered to.

  5. The evaluation/testing of all of the software project deliverables is performed according to documented plans and procedures.

  6. Evaluations and tests are satisfactorily completed and recorded.

  7. Problems and defects detected are documented, tracked, and addressed.

  8. The test cases are traceable to the software products they test.

  5. RECONCILING WITH THE EXISTING CMM KPAs

  The CMM has been in use for a number of years now in a growing number of organizations. This makes modifying it problematic. If it changes too drastically, what does that do to all of the organizations which have achieved certain certification levels based on the prior version? How do modifications to the CMM affect process improvement efforts already underway? In this section we will deal with two topics. The first is leveling the Software Evaluation and Test KPA into the overall CMM. The second is some repackaging suggestions to ease adding the additional KPA without crossing the pain threshold of having too many KPAs.

  5.1 Leveling The Evaluation And Testing KPA Within The CMM

  Currently, testing is part of the Software Product Engineering KPA which is at Level 3. However, many of the Level 2 KPAs are dependent on having a disciplined approach to evaluation and test in place. As stated in the justification section it is difficult to solve problems until those problems are well understood. Evaluation and test helps provide this insight. It is, in fact, one of the key drivers of cultural change that positions an organization to aggressively address many of the other KPAs.

  The CMM recognizes the criticality of good requirements to the whole process. The Requirements Management KPA is appropriately KPA number 1. However, experience over the last two decades has shown it is difficult to get really good requirements without concurrently installing requirements based evaluation and testing. This provides the necessary tight feedback loop on the quality of the requirements as they are being written.

  The Software Project Tracking and Oversight KPA, another Level 2 item, also requires the Evaluation and Testing KPA. Tracking involves determining what tasks are actually completed versus what was planned to be completed. However, without verifying that the tasks have met their completion criteria you really do not know that the tasks are truly completed.

  The Software Subcontract Management KPA, a Level 2 KPA, also requires the Software Evaluation and Testing KPA to unambiguously define the success criteria contractually and to verify that those criteria have been met. All of the legal disputes in which I have testified as an expert witness were the result of not having formal evaluation and test defined and executed.

  Given the above, the recommendation is made that the Software Evaluation and Test KPA be made a Level 2 KPA.

  5.2 Repackaging Suggestions For The Existing KPAs

  The most obvious re-packaging is splitting the Software Product Engineering KPA into two KPAs: Software Evaluation and Test and Software Product Engineering with a reduced scope. The name of the latter should probably stay the same unless the new scope causes confusion.

  The Peer Reviews KPA should be subsumed into the Evaluation and Testing KPA. As discussed, peer reviews are just one means of performing an evaluation. Separating out a single evaluation technique and making it a full KPA is a bit disproportionate. However, as an admitted testing bigot, I would not argue very hard against keeping it. It adds emphasis to the overall importance of evaluation and test.

  Some have suggested that the Software Evaluation and Test KPA itself could be split into an Evaluation KPA and a Testing KPA. My own feeling is the process loses some continuity if that is done. However, it is not something I would argue too vehemently about.

  In order to keep the number of KPAs down, I would suggest that the Software Project Planning and Software Project Tracking and Oversight KPAs be merged into one KPA. These are very tightly coupled activities. Xerox, for example, is treating them as essentially one item to install in their CMM activities. I cannot believe they are alone in this view. While this does not have anything directly to do with testing, it does help make room for a Software Testing KPA.

  [1] My degree is in mathematics; however, my minors were archaeology and anthropology. I have always found these far more useful than math in helping organizations install software engineering disciplines and tools.

  [2] At the extreme, we came within moments of a full thermonuclear exchange with the Soviet Union because of a software defect. The death toll would have been in the hundreds of millions.
