Performance Testing Based on User Experience

For background on how this translation series came about, see ppent's blog:

Original title: User Experience, Not Metrics
Original author: Scott Barber
Original source: http://www.perftestplus.com/

Translated title: Performance Testing Based on User Experience
Translator: RickyZhu
Translation URL: http://www.rickyzhu.com/2007/10/16/user-experience-not-matrics-1/

Performance Testing Based on User Experience, originally titled User Experience, Not Metrics, comes from the hand of performance-testing master Scott Barber. Drawing on deep performance-testing experience, he takes user experience as the fundamental starting point, describes a user-experience-driven approach to performance testing, and illustrates it with a wealth of testing examples. This work is the distillation of a master's lifelong learning, a rare martial-arts manual, as it were, focused on the inner cultivation of methodology. Master this inner method and it becomes enormously powerful; apply it through any fighting style (that is, any testing tool) and you wield it effortlessly, felling opponents with a falling leaf. It is the treasured manual that every fighter in the jianghu (every performance tester) dreams of.

What I have translated here is Chapter 10. Let's begin.


Chapter 10: Creating a Degradation Curve

The previous four installments of this series have all, to varying degrees, touched on reporting performance test results. In this chapter we discuss a single, remarkably powerful performance chart, the degradation curve, and use it to wrap up our discussion of reporting.

This is Part 10 of the "User Experience, Not Metrics" series, which focuses on relating customer satisfaction to the performance that users of a web application server actually experience. This installment is aimed not only at Rational TestStudio users but also at managers with Microsoft Excel experience. Since it is entirely about building degradation curves, readers should first read Parts 6, 7, and 9 and be comfortable with the Excel walkthroughs presented there.

What Is a Response-Time Degradation Curve?
"Although Internet bandwidth and web server capacity have improved greatly in recent years, web server performance problems continue to challenge developers and testers. The combination of complex web-based applications and the dynamic nature of Internet traffic leads to very noticeable degradation of web site performance," write Steven Splaine and Stefan P. Jaskiel in The Web Testing Handbook.

I recently had the opportunity to attend a presentation by Steven Splaine. In it, he showed a simple chart that he called a "performance graph." I was surprised and pleased to discover that it was the very chart I call a degradation curve. Whatever name you give it, by the end of this article I think you will agree with me: for a tester, this is one of the most powerful charts you can show your boss. Its value lies in answering questions of the form "how many..." and "how fast...".

Figure 1 shows a relatively simple example of a response-time degradation chart. It depicts the user experience under load: the vertical axis is end-to-end response time in seconds, while the horizontal axis across the bottom is the total number of users accessing the system. What makes this chart special is that it also includes a table of the underlying data. The chart shows that as users are added, response times keep growing; in other words, the user experience degrades, which is exactly what we expect.

Figure 1: A basic response-time degradation curve
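Since Figure 1 itself cannot be reproduced here, the following sketch shows the kind of users-versus-response-time data table that accompanies such a chart. All numbers are hypothetical placeholders, not values taken from the article.

```python
# Hypothetical load-test results: (total users, end-to-end response time in s).
# In practice these pairs come from your load-test tool's summary output.
rows = [
    (1, 1.2), (25, 1.3), (50, 1.4), (75, 1.5),
    (100, 1.8), (125, 2.6), (150, 4.5), (175, 8.0), (200, 15.0),
]

def format_table(rows):
    """Render the users/response-time pairs as the data table that
    accompanies a degradation chart."""
    lines = [f"{'Users':>6} | {'Resp (s)':>8}"]
    lines.append("-" * len(lines[0]))
    for users, secs in rows:
        lines.append(f"{users:>6} | {secs:>8.1f}")
    return "\n".join(lines)

table = format_table(rows)
print(table)
```

Plotting these pairs with users on the horizontal axis and response time on the vertical axis reproduces the shape described in the text: flat at first, then climbing ever more steeply.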

I chose this particular chart because its data produces the most common shape of response-time degradation curve. The curve in Figure 1 is the one you will see the vast majority of the time, better than 95% of it, when you plot this kind of chart. If you have never seen a curve like it, I would suggest that one of the following is true:
· Your user model is not accurate enough
· The test scripts implementing the user model are not representative
· The system under test does not support multiple users at all
· The system under test was never genuinely put under load

Regions of the Curve

A typical response-time degradation curve can be divided into four regions:
· The single-user region
· The performance-plateau region
· The stress region
· The yield region

Each of these regions contains a wealth of useful information about the system under test, and the sections that follow discuss them in detail; this is where the chart's real value lies. Once you understand these regions, before testing even begins, you can make an accurate preliminary assessment of the system under test without any additional analysis.
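One rough way to see where these regions fall is to look at the slope of the curve, that is, how much response time grows per added user. The sketch below is purely illustrative: the data and the threshold values are assumptions that would be tuned per system, the first data point (one user) stands for the single-user region, and "plateau", "stress", and "yield" label the remaining three regions.

```python
# Hypothetical load-test data: total concurrent users vs. end-to-end
# response time in seconds (illustrative numbers, not from the article).
user_loads = [1, 25, 50, 75, 100, 125, 150, 175, 200]
response_times = [1.2, 1.3, 1.4, 1.5, 1.8, 2.6, 4.5, 8.0, 15.0]

def classify_segments(loads, times, plateau_max=0.02, stress_max=0.10):
    """Label each segment of the curve by the growth rate of response
    time per added user: near-flat -> plateau, moderate -> stress,
    steep -> yield. Thresholds are assumptions, tuned per system."""
    labels = []
    for i in range(1, len(loads)):
        slope = (times[i] - times[i - 1]) / (loads[i] - loads[i - 1])
        if slope <= plateau_max:
            labels.append("plateau")
        elif slope <= stress_max:
            labels.append("stress")
        else:
            labels.append("yield")
    return labels

segments = classify_segments(user_loads, response_times)
print(segments)
```

With these sample numbers the curve stays nearly flat through moderate loads, steepens in the stress region, and climbs sharply once the system yields, mirroring the shape the article describes.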

The topics in the User Experience, Not Metrics series are as follows:

Part 1: Introduction

Modeling Real Users
Part 2: Modeling Individual User Delays
Part 3: Modeling Individual User Patterns
Part 4: Modeling Groups of Users

Meaningful Times
Part 5: What should I time and where do I put my timers?
Part 6: What is an outlier and how do I account for one?
Part 7: Consolidating and interpreting Times

Reports to Stakeholders
Part 8: What Tests add value to stakeholders?
Part 9: Summarizing across multiple tests with accuracy
Part 10: Creating a Degradation Curve

Advanced Topics
Part 11: Handling Secure Session ID’s
Part 12: Conditional user path navigation (intelligent surfing)
Part 13: Working with Unrecognized Protocols

About the author:
Scott Barber is the CTO of PerfTestPlus (www.PerfTestPlus.com) and co-founder of the Workshop on Performance and Reliability (WOPR – www.performance-workshop.org). Scott's particular specialties are testing and analyzing performance for complex systems, developing customized testing methodologies, testing embedded systems, testing biometric identification and security systems, group facilitation, and authoring instructional and educational materials. In recognition of his standing as a thought-leading performance tester, Scott was invited to be a monthly columnist for Software Test and Performance Magazine, in addition to his regular contributions to this and other top software-testing print and online publications. He is regularly invited to participate in industry-advancing professional workshops and to present at a wide variety of software development and testing venues; his presentations are well received by industry and academic conferences, college classes, local user groups, and individual corporations. Scott is active in his personal mission of improving the state of performance testing across the industry by collaborating with other industry authors, thought leaders, and expert practitioners, as well as by volunteering his time to establish and grow industry organizations. His tireless dedication to the advancement of software testing in general, and of performance testing in particular, is often described as a hobby as much as a job because of the enjoyment he gains from his efforts.

(To be continued)
