What Are the Typical Testing Mistakes? (19)


  An alternative approach to capture/replay is scripting tests. (Most GUI capture/replay tools also allow scripting.) Some member of the testing team writes a "test API" (application programmer interface) that lets other members of the team express their tests in less GUI-dependent terms. Whereas a captured test might look like this:


  text $main.accountField "12"

  click $main.OK

  menu $operations

  menu $withdraw

  click $withdrawDialog.all

  ...


  a script might look like this:


  select-account 12

  withdraw all

  ...


  The script commands are subroutines that perform the appropriate mouse clicks and key presses. If the API is well-designed, most GUI changes will require changes only to the implementation of functions like withdraw, not to all the tests that use them. Please note that well-designed test APIs are as hard to write as any other good API. That is, they're hard, and you shouldn't expect to get it right the first time.

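  To make the idea concrete, here is a rough sketch of what such a test API layer might look like, written in Python against a hypothetical gui driver object. The function and widget names are only illustrative (borrowed from the captured script above); they are not taken from any particular tool.

# Sketch of a test API: subroutines that hide the GUI-level clicks and keystrokes.
# `gui` is an assumed driver object exposing type_text/click/choose_menu.

def select_account(gui, account_id):
    """Enter an account number on the main window and confirm it."""
    gui.type_text("main.accountField", str(account_id))
    gui.click("main.OK")

def withdraw(gui, amount):
    """Open Operations > Withdraw and complete the dialog."""
    gui.choose_menu("operations", "withdraw")
    if amount == "all":
        gui.click("withdrawDialog.all")
    else:
        gui.type_text("withdrawDialog.amountField", str(amount))
        gui.click("withdrawDialog.OK")

# A scripted test then reads like the abstract version above:
#   select_account(gui, 12)
#   withdraw(gui, "all")

  If the Withdraw dialog later moves to a different menu or gains a confirmation step, only the body of withdraw changes; the scripted tests that call it stay the same.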

  In a variant of this approach, the tests are data-driven. The tester provides a table describing key values. Some tool reads the table and converts it to the appropriate mouse clicks. The table is even less vulnerable to GUI changes because the sequence of operations has been abstracted away. It's also likely to be more understandable, especially to domain experts who are not programmers. See [Pettichord96] for an example of data-driven automated testing.

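  As an illustration only (the table layout and column names below are assumed, not taken from [Pettichord96]), a small driver might read such a table and translate each row into calls on the test API sketched above:

# Sketch of a data-driven driver. It assumes the select_account/withdraw
# functions from the previous sketch; the table format is illustrative.
import csv
import io

TABLE_TEXT = "account,action,amount\n12,withdraw,all\n47,withdraw,250\n"

def run_table(gui, table_text):
    """Translate each table row into the appropriate GUI operations."""
    for row in csv.DictReader(io.StringIO(table_text)):
        select_account(gui, row["account"])
        if row["action"] == "withdraw":
            withdraw(gui, row["amount"])
        else:
            raise ValueError("unknown action: " + row["action"])

  Because each row says what to do rather than how to drive the widgets, a domain expert can add cases to the table without touching any GUI code.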

  Note that these more abstract tests (whether scripted or data-driven) do not necessarily test the user interface thoroughly. If the Withdraw dialog can be reached via several routes (toolbar, menu item, hotkey), you don't know whether each route has been tried. You need a separate (most likely manual) effort to ensure that all the GUI components are connected correctly.


  Whatever approach you take, don't fall into the trap of expecting regression tests to find a high proportion of new bugs. Regression tests discover that new or changed code breaks what used to work. While that happens more often than any of us would like, most bugs are in the product's new or intentionally changed behavior. Those bugs have to be caught by new tests.


  Code coverage


  GUI capture/replay testing is appealing because it's a quick fix for a difficult problem. Another class of tool has the same kind of attraction.


  The difficult problem is that it's so hard to know if you're doing a good job testing. You only really find out once the product has shipped. Understandably, this makes managers uncomfortable. Sometimes you find them embracing code coverage with the devotion that only simple numbers can inspire. Testers sometimes also become enamored of coverage, though their romance tends to be less fervent and ends sooner.

Source: http://www.uml.org.cn/Test/200709289.asp