+===================================================+
+==========  Quality Techniques Newsletter  ========+
+==================  August 2003  ==================+
+===================================================+
QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to
subscribers worldwide to support the Software Research, Inc. (SR),
eValid, and TestWorks user communities and to other interested
parties to provide information of general use to the worldwide
internet and software quality and testing community.
Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN, provided that the
entire document/file is kept intact and this complete copyright
notice appears in all copies. Information on how to subscribe or
unsubscribe is at the end of this issue. (c) Copyright 2003 by
Software Research, Inc.
========================================================================
Contents of This Issue
o Real-Time Software Testing, by Boris Beizer
o SQRL Report Descriptions
o eValid: A Quick Summary
o Workshop on Media and Stream Processors (MSP-5)
o Special Issue Proposal: Breakthroughs and Challenges in
Software Engineering
o Conference on Automated Software Engineering (ASE2003)
o QTN Article Submittal, Subscription Information
========================================================================
Real-Time Software Testing
by Boris Beizer
Note: This article is taken from a collection of Dr.
Boris Beizer's essays "Software Quality Reflections" and
is reprinted with permission of the author. We plan to
include additional items from this collection in future
months.
Copies of "Software Quality Reflections," "Software
Testing Techniques (2nd Edition)," and "Software System
Testing and Quality Assurance," can be purchased
directly from the author by contacting him at
.
One of the most frequent sets of misconceptions in testing has to do
with testing so-called "real-time" software. Concomitantly, one of
the most frequent excuses we hear for writing untestable software is
the claim that this software is "real-time" and all the special
tricks played and all the programming abuses seen are essential in
the name of performance. The third piece of poppycock on this
subject is that most testing of "real-time" systems must be done in
a realistic simulation -- e.g., on the live system. These are
common myths, and they get in the way of doing good testing of
real-time systems. More important, they make such testing far more
expensive and far less effective than it should be.
1. Let's distinguish between static testing, in which we are not
concerned with so-called real-time issues (e.g., in a test rig
with a simulated environment) and dynamic testing, in which the
software is tested under realistic or extreme load. Proper unit
and static testing should precede any kind of dynamic testing.
2. Let's also distinguish between synchronization bugs and timing
bugs. Most claimed timing bugs are not timing bugs at all, but
synchronization bugs. A synchronization bug concerns the order
in which events occur: for example, event A before event B, B
before A, A and B simultaneously (within a defined time window),
A without B and B without A. By contrast, a true timing bug
actually depends on time: for example, event B must follow A no
sooner than 25 milliseconds and no later than 200 milliseconds.
Timing bugs are rare. Synchronization bugs are common. If the
application runs under an operating system, timing bugs are
virtually impossible. Synchronization bugs can, and should, be
found by static testing. Timing bugs require dynamic testing.
(A small sketch of a synchronization bug follows this list.)
3. If the software doesn't work in a static environment, it surely
won't work in a dynamic environment. If routines don't work
under unit testing, they surely won't work under real loads.
Testing in a static environment may seem redundant because many
of the tests should be rerun in the realistic dynamic
environment. It isn't redundant, however. Proper unit, all
levels of integration testing, and static system testing should
be done first in order to get as many bugs out as possible
before dealing with the so-called "real-time" bugs. Once you're
doing dynamic testing, many bugs, even low level unit bugs,
appear to be timing dependent and only erratically reproducible.
If you don't get the static bugs out (the bugs that can be
caught in static testing) first, you'll just have to get them
out in the dynamic environment or after installation, at a much
higher cost.
4. Most questions of "real-time" and "hard real-time" are academic
if the application is written over an operating system.
Interrupt handlers and device drivers are obvious exceptions.
Today, the operating system obviates many "real-time" options.
For example, a system's response time may crucially depend on
disc latency, but the disc optimizer is handled by the operating
system and any attempt by the programmer to second-guess those
algorithms is likelier to result in longer, rather than shorter
response times. Also, there may be further disc latency
optimizers at work in the disc controller and in the disc drive
itself, all of which are unknown to the programmer and
uncontrollable. Another example, which will be more with us in
the future: in processors that do speculative execution and have
large cache memories, how the software is compiled and how it is
laid into cache, RAM, or virtual memory on disc crucially
affects execution time. The execution time difference could be
100:1 or worse. Since the application programmer does not know
the operating system's virtual memory replacement policies
and/or the various proprietary compiler and/or dynamic
optimizers used, there is no control over execution time.
5. Most questions of real-time have become academic because of the
current crop of high-performance computer chips. Usually, it is
cheaper to upgrade to a faster CPU than it is to do the analysis
needed to determine if the CPU can handle the load.
6. "But what if it's not possible to change the CPU or increase the
RAM?," I am often asked. For every time I hear that cry, in
only one out of a hundred is it legitimate. For small embedded
systems produced in the hundreds of thousands or the millions of
units, this is a legitimate excuse. For example, automobile
power-train controllers, diabetic sugar analyzer, computer in a
consumer's VCR.... However, even with production runs of 10,000
it may not be valid. You must take into account the cost of
developing and testing the software for such stringently
constrained platforms. For example, in one consulting
assignment, developers were constrained to work in one of those
horrid little 4-bit jobs. There was no decent development
environment, no debugger, and no test tools. I suggested to
management that they do some real cost accounting to see how
much this was costing them. The choice was to stick with the 75
cent chip, or as I suggested, upgrade to a 286 at $15. They did
the analysis and found that the 75 cent chip was costing them
$450 per system in additional software development and testing
costs. They didn't take my advice: they upgraded to a 486 for
the next generation product.
7. If written over an operating system, then 90% - 95% of the
software can be tested in static mode without risk. That is,
repeating the tests under load is unlikely to reveal bugs not
revealed by thorough, earlier static testing. If this is an
operating system or an embedded system which is its own
operating system, the comparable figure is 85% - 90%.
8. The cry of "REAL-TIME!" or "HARD REAL-TIME!" is often used by
developers to avoid unit testing or to undercut the work of an
independent test group. The claim goes that any form of static
testing, because it is not "realistic," is worthless. This claim
is most often used as a cop-out to avoid unit testing.
Interestingly enough, when you do apply dynamic testing
techniques such as stress testing, they will then also cry
"unrealistic," especially if your tests result in copious
embarrassing crashes.
9. Proper implementation of finite-state behavior should never be
time-critical. If it is time-critical, then the finite-state
design is probably improper, probably incomplete, and it
probably did not pay attention to potential critical race
conditions.
10. Real, real-time issues (as contrasted to real-time excuses) are
difficult to analyze and even more difficult to test. It is
almost impossible to generate the timing and synchronization
situations involved unless you have a total environment
simulator. Proper testing of synchronization problems (event
ordering) can rarely be done in the real environment. Only an
environment simulator can adjust those critical timing and race
conditions. The cost of such simulators can easily exceed the
cost of the application. If the external environment is not
totally controlled or simulated, most notions of "real-time"
testing and real-time behavior are meaningless self-deception.
11. Real-time systems should be designed, not tested. By which I mean
that all real-time issues should be encapsulated and forced to
have an extremely limited scope (e.g., interrupt handler). If
you don't do this, then the potential for synchronization
problems is so extreme, and the number of test cases needed to
probe all the variations so huge, that whatever you actually can
test is such a statistically insignificant part of the whole
that it is again merely a more elegant form of self-deception.
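To make the distinction in point 2 concrete, here is a minimal
sketch of a synchronization bug, written in Python; the names and
the scenario are hypothetical, invented only for illustration.
Correctness depends on the ORDER of two events, never on elapsed
time, so the bug can be provoked deterministically in a static test
rig with no load at all:

    import threading

    class Device:
        """Assumes event A ("configure") always precedes event B ("start")."""
        def __init__(self):
            self.configured = False
            self.lock = threading.Lock()

        def on_configure(self):          # event A
            with self.lock:
                self.configured = True

        def on_start(self):              # event B
            with self.lock:
                if not self.configured:
                    # The synchronization bug surfaces if B arrives first.
                    raise RuntimeError("started before configured")

    def static_order_test():
        # Force each event ordering deterministically -- no load needed.
        for order in [("A", "B"), ("B", "A")]:
            dev = Device()
            try:
                for event in order:
                    dev.on_configure() if event == "A" else dev.on_start()
                print(order, "-> ok")
            except RuntimeError as exc:
                print(order, "->", exc)

    static_order_test()

Note that no clock appears anywhere: the B-before-A failure is found
by enumerating orderings, which is exactly the kind of static
testing the essay prescribes. A true timing bug would additionally
constrain the interval between A and B.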
In summary, here is real-time testing in a nutshell:
1. Proper unit and integration testing.
2. No tricks in the name of performance and real-time.
3. Encapsulate all real, real-time software. If that's more
than 5% of the total, you probably can't make the
application work at all.
4. Let the operating system do it whenever you can.
5. Complete static behavioral testing (both positive and negative)
of all functionality before you do one dynamic test.
6. Early stress testing to find the easy synchronization and timing
bugs.
7. External environment simulators if you must actually test
synchronization and timing problems. But do a prior analysis of
such potential problems (e.g., Petri nets, timed Petri nets,
temporal logic) to assure that the protocol or algorithm is
stable (e.g., can't lock up) and if possible, to avoid such
problems altogether.
8. Upgrade your processor chip. Increase RAM.
9. Do inverse-profile-based stochastic testing to probe low-
probability paths. That is, instead of the user profile
distributions, you generate tests using 1-P, the inverse
distribution. (A sketch of this idea follows this list.)
10. Don't play any tricks in the code, whatsoever, not one, in the
name of performance improvement. Code in the cleanest,
simplest, most straightforward way you can. That will probably
result in the most highly optimized code and be faster than any
tricky code. Then measure the system's performance and the
routines' execution times using a statistical profiler. If any
routine is found to be a hot-spot that consumes excessive CPU
time, reprogram that unit and only that unit. If you don't have
a faster algorithm for that task, don't expect any improvement
from recoding.
11. Don't accept developers' cries of "real-time" and "unrealistic"
as excuses to avoid the static testing that must be passed
before any dynamic testing makes sense.
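The inverse-profile idea in point 9 can be sketched in a few lines
of Python. This is only an illustration under assumed numbers (the
profile values and input-class names are hypothetical, not from any
particular tool): given an operational profile P over input classes,
generate tests from weights proportional to 1-P so the rarely
exercised paths get probed the hardest.

    import random

    # Hypothetical operational profile: probability that users
    # exercise each input class in the field.
    profile = {"login": 0.60, "search": 0.30, "export": 0.08, "admin": 0.02}

    # Inverse profile: weight each class by (1 - P) and renormalize,
    # so the least likely user paths are tested most heavily.
    weights = {k: 1.0 - p for k, p in profile.items()}
    total = sum(weights.values())
    inverse = {k: w / total for k, w in weights.items()}

    def draw_tests(distribution, n, seed=42):
        # Draw n test-case classes from the given distribution.
        rng = random.Random(seed)
        classes = list(distribution)
        return rng.choices(classes,
                           weights=[distribution[c] for c in classes],
                           k=n)

    print(draw_tests(inverse, 10))  # mostly "export" and "admin" cases

Under the normal profile, "login" would dominate the generated
tests; under the inverse distribution it becomes the least likely
draw, so the seldom-traveled paths get the bulk of the testing.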
In short, do everything you can to avoid the real-time testing
issues and then do it right!
========================================================================
SQRL Report No. 10
Verification of the WAP Transaction Layer
Using the Model Checker SPIN
Yu-Tong He
Abstract: This report presents a formal methodology for formalizing
and verifying the Transaction Layer Protocol (WTP) design in the
Wireless Application Protocol (WAP) architecture. Corresponding to
the Class 2 Transaction Service (TR-Service) definition and the
Protocol (TR-Protocol) design, two models at different abstraction
levels are built with a finite state automaton (FSA) formalism. By
using the model checker SPIN, we uncover defects in the latest
approved version of the TR-Protocol design, which can lead to
deadlock, channel buffer overflow, and unfaithful refinement of the
TR-Service definition. As an extended result, a set of safety,
liveness, and temporal properties is verified for the WTP operating
in a more general environment which allows for loss and re-ordering
of messages.
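For readers who have not used a model checker, the following toy
sketch (in Python; the protocol and state names are hypothetical and
far simpler than the report's WTP models, which are written for
SPIN) shows the essence of what such a tool automates: exhaustively
exploring a protocol's reachable states and flagging deadlocks.

    from collections import deque

    # Toy protocol state: (sender, receiver, channel), where the
    # channel holds at most one in-flight message. A deadlock is a
    # reachable state with no enabled transition.
    def transitions(state):
        sender, receiver, chan = state
        succs = []
        if sender == "idle" and chan is None:
            succs.append(("wait_ack", receiver, "invoke"))
        if receiver == "idle" and chan == "invoke":
            # Seeded bug: receiver consumes the invoke, never acks.
            succs.append((sender, "processing", None))
        if sender == "wait_ack" and chan == "ack":
            succs.append(("idle", receiver, None))
        return succs

    def find_deadlocks(initial):
        # Breadth-first search of the reachable state space.
        seen, frontier, deadlocks = {initial}, deque([initial]), []
        while frontier:
            state = frontier.popleft()
            succs = transitions(state)
            if not succs:
                deadlocks.append(state)
            for s in succs:
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
        return deadlocks

    print(find_deadlocks(("idle", "idle", None)))
    # -> [('wait_ack', 'processing', None)]: sender waits forever

SPIN performs this kind of search, plus far more (temporal-logic
properties, channel models, partial-order reduction), over models
written in its Promela language.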
SQRL Report No. 11
Modelling Concurrency by Tabular Expressions
Yuwen Yang
Abstract: Tabular expressions (Parnas et al.) [26,27,29,31,32] are a
software specification technique that is becoming increasingly
popular in the software industry. The current state of the technique
is restricted to sequential systems only. In this thesis we show how
concurrency can be treated systematically in the framework of
tabular expressions.
A precise notion of composite global automata (compare [20,36]) will
be defined. The tabular expressions [24,26,28] used by the Software
Engineering Research Group will be slightly extended to deal with
the transition function/relation of concurrent automata.
In the thesis, each sequential process is viewed as a Finitely
Defined Automaton with Interpreted States [18], and all of the
processes in the system are composed into a global finite state
automaton to model the concurrent system. The thesis starts with the
common model of a nondeterministic finite automaton, extends the
traditional automaton, and associates two sets, called the
synchronization set and the competition set, with each action of the
individual processes. Finally, all processes in the system are
composed, and the action dependence of the individual processes is
eliminated in the global action. A simple example of Readers-
Writers is given to illustrate this method.
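The composition step the abstract describes, building a global
automaton out of individual processes, is essentially a product
construction over shared actions. Here is a minimal sketch in Python
(a hypothetical two-process example; this is not the thesis's
tabular-expression formalism, only the underlying idea):

    from itertools import product

    # Two toy sequential processes as (state, action) -> next-state maps.
    P1 = {("p0", "a"): "p1", ("p1", "sync"): "p0"}
    P2 = {("q0", "b"): "q1", ("q1", "sync"): "q0"}

    SYNC = {"sync"}  # actions both processes must take together

    def global_moves(p, q):
        # Enabled moves of the composed (global) automaton.
        moves = []
        for (s, act), nxt in P1.items():      # P1 moves alone
            if s == p and act not in SYNC:
                moves.append((act, nxt, q))
        for (s, act), nxt in P2.items():      # P2 moves alone
            if s == q and act not in SYNC:
                moves.append((act, p, nxt))
        for act in SYNC:                      # joint synchronized moves
            if (p, act) in P1 and (q, act) in P2:
                moves.append((act, P1[(p, act)], P2[(q, act)]))
        return moves

    # Enumerate the whole global state space.
    for p, q in product(["p0", "p1"], ["q0", "q1"]):
        print((p, q), "->", global_moves(p, q))

The synchronization set plays the role described in the abstract: a
"sync" action fires only when every process that shares it is ready
to take it.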
========================================================================
eValid: A Quick Summary
http://www.e-valid.com
Readers of QTN probably are aware of SR's eValid technology offering
that addresses website quality issues.
Here is a summary of eValid's benefits and advantages.
o InBrowser(tm) Technology. All the test functions are built into
the eValid browser. eValid offers total accuracy and natural
access to "all things web." If you can browse it, you can test
it. And, eValid's unique capabilities are used by a growing
number of firms as the basis for their active services
monitoring offerings.
o Functional Testing, Regression Testing. Easy to use GUI based
record and playback with full spectrum of validation functions.
The eVmanage component provides complete, natural test suite
management.
o LoadTest Server Loading. Multiple eValids play back multiple
independent user sessions -- unparalleled accuracy and
efficiency. Plus: No Virtual Users! Single and multiple
machine usages with consolidated reporting.
o Mapping and Site Analysis. The built-in WebSite spider travels
through your website and applies a variety of checks and filters
to every accessible page. All checking is done entirely from the
users' perspective -- from a browser, just as your users will
see your website.
o Desktop, Enterprise Products. eValid test and analysis engines
are delivered at moderate costs for desktop use, and at very
competitive prices for use throughout your enterprise.
o Performance Tuning Services. Outsourcing your server loading
activity can surely save your budget and might even save your
neck! Realistic scenarios, applied from multiple driver
machines, impose totally realistic -- no virtual user! -- loads
on your server.
o HealthCheck Subscription. For websites up to 1000 pages,
HealthCheck services provide basic detailed analyses of smaller
websites in a very economical, very efficient way.
o eValidation Managed Service. Being introduced this Fall, the
eValidation Managed WebSite Quality Service offers comprehensive
user-oriented detailed quality analysis for any size website,
including those with > 10,000 pages.
Resellers, Consultants, Contractors, OEMers Take Note
We have an active program of product and service resellers, and
we'd like to hear from you if you are interested in joining the
growing eValid "quality website" team. Also, we provide OEM
solutions for internal and/or external monitoring, custom-faced
testing browsers, and a range of other possibilities. Let us hear
from you!
========================================================================
5th Workshop on Media and Stream Processors (MSP-5)
San Diego, CA, December 1, 2003
http://www.pdcl.eng.wayne.edu/msp5/
The Fifth Workshop on Media and Stream Processors (previously known
as the Workshop on Media Processors and DSPs), held in conjunction
with MICRO-36 in San Diego, brings together researchers and
engineers in computer architecture and compiler design working in
different fields in the area of multimedia, communications, and
digital signal processing. The workshop will provide a forum for
presenting and exchanging new ideas and experiences in this area.
The Workshop on Media and Stream Processors has grown significantly
in the last couple of years, with full-day workshops containing an
average of 9 papers and 2-3 invited talks per year. This year we
are continuing to encourage topics on network processors and active
networking.
Topics of interest include, but are not limited to, the following
list. Again, we are increasing the workshop's scope to include
network processors and active networks, so we strongly encourage the
submission of papers related to that topic.
* Network processors and active networking
* Optimization/performance analysis of processor
architectures for media, network, and stream-based
processing
* Compiler/software optimizations for subword parallelism
and media, network, and stream-based processing
* Hardware/Compiler techniques for improving memory
performance of media, network, and stream-based processing
* Workload characterization of media and streaming applications
* Low-power design for media and stream-based processors
* Application-specific hardware architectures for graphics,
video, audio, communications, and other streaming applications
* System-on-a-chip architectures for media, network, and
stream processors
* Hardware/Software Co-Design of media, network, and
stream processors
* Integration of media, network, and stream
processors/co-processors with general-purpose processors
* Desktop/Set-top media and stream-based system architectures
* Case studies of commercial media, network, and stream-based
processing products and solutions
PROGRAM CO-CHAIRS:
Vipin Chaudhary
Wayne State University, Cradle Technologies
vipin@wayne.edu
Alex Dean
North Carolina State University
alex_dean@ncsu.edu
Jason Fritts
Washington University
jefritts@cs.wustl.edu
========================================================================
Journal of Universal Computer Science
Special Issue On
Breakthroughs and Challenges
In Software Engineering
http://www.tdg-seville.info/cfp/jucs04/
The Journal of Universal Computer Science (J.UCS) is a high-quality
publication that has covered all aspects of computer science since
1995. It is currently indexed by the ISI and the
CompuScience/CompuTex databases, amongst others, and it is published
in co-operation with Springer Verlag.
J.UCS is pleased to announce a special issue on Software Engineering
that will be published in April, 2004. Please continue reading or
take a look at the web page above to find further details.
Topics of Interest
We are chiefly interested in the research areas and topics listed
below, although others might well be considered:
- Project Management: Cost estimation, time estimation, scheduling,
productivity, programming teams, configuration management,
software process models, software quality assurance, maintenance,
metrics.
- Requirements Elicitation: Elicitation methods, interviews,
languages, methodologies, rapid prototyping.
- Software Analysis and Design: Computer-aided software engineering,
evolutionary prototyping, component-oriented design methods,
domain-specific architectures, architectural languages,
architectural patterns, design patterns.
- Testing and Verification: Certification, code inspections and
walk-throughs, diagnostics, symbolic execution, input generation
approaches, error handling and recovery, coverage testing.
- State-of-the-art Technologies: Web services standards, advanced
run-time platforms.
========================================================================
18th IEEE International Conference on
Automated Software Engineering
October 6-10, 2003
Montreal, Quebec, Canada
http://www.ase-conference.org
Register now via the conference website - early registration ends
September 5th.
The IEEE International Conference on Automated Software Engineering
brings together researchers and practitioners to share ideas on the
foundations, techniques, tools, and applications of automated
software engineering technology. Both automatic systems and systems
that support and cooperate with people are within the scope of the
conference, as are models of software and software engineering
activities. Papers on applications of Artificial Intelligence
techniques to automating software engineering, empirical evaluation
of automated software engineering techniques and tools, and
automation applied to new directions in software engineering methods
and technologies are all welcome.
It is clear that software technology will play an increasingly
important role in society's future. However, the bursting of the
Internet bubble has shown that this future will not be built upon
buzz words and promises. As future investments in software
technology will likely be more conservative, the economic incentives
for automating aspects of the software engineering lifecycle
increase. However, as a community, we face many research challenges
in scaling up and extending ASE technology to support open,
component-based systems while providing the flexible balance of
discipline and agility required in today's development processes.
Papers traditionally presented at the conference cover topics
including:
* Automated reasoning techniques
* Category & Graph-theoretic approaches to software engineering
* Component-based systems
* Computer-supported cooperative work
* Configuration management
* Domain modeling and meta-modeling
* Human computer interaction
* Knowledge acquisition
* Maintenance and evolution
* Modeling language semantics
* Ontologies and methodologies
* Open systems development
* Program understanding
* Re-engineering
* Reflection- and Metadata approaches
* Requirements engineering
* Reuse
* Specification languages
* Software architecture
* Software design and synthesis
* Software visualization
* Testing
* Tutoring, help, documentation systems
* Verification and validation
Invited speakers this year are John Mylopoulos (University of
Toronto) and Anthony Finkelstein (University College London).
Paper evaluation this year was very thorough and highly selective.
The Program Committee chose 22 full technical papers from 170
submissions for presentation at the conference. Each paper was peer
reviewed by at least three members of the International Program
Committee. The Program Committee also selected another 25 papers for
inclusion as short papers in the proceedings. These papers represent
work that the Program Committee feels is interesting and novel but
not mature enough to warrant a full ASE paper at this stage.
Previous ASE conferences have shown that some of the most
interesting and entertaining topics are discussed in the less
formal and highly interactive poster sessions. Visit the
conference website for a preliminary program.
In addition to the technical papers and demonstrations, 6 tutorials
are scheduled (see the conference web page for details) and several
colocated workshops are offered: the 3rd International Workshop on
Formal Approaches To Testing Of Software (FATES), the 2nd
International Workshop on MAnaging SPecialization/Generalization
HIerarchies (MASPEGHI), and the 2nd International Workshop on
Traceability in Emerging Forms of Software Engineering (TEFSE).
General Chair
Houari Sahraoui
Universite de Montreal, Canada
sahraouh@iro.umontreal.ca
Program Chairs
John Grundy,
University of Auckland, New Zealand
john-g@cs.auckland.ac.nz
John Penix
NASA Ames Research Center, USA
jpenix@email.arc.nasa.gov
========================================================================
------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================
QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide. To have your event listed in an upcoming
issue E-mail a complete description and full details of your Call
for Papers or Call for Participation to .
QTN's submittal policy is:
o Submission deadlines indicated in "Calls for Papers" should
provide at least a 1-month lead time from the QTN issue date. For
example, submission deadlines for "Calls for Papers" in the March
issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
(about four pages). Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
Inc., and may be edited for style and content as necessary.
DISCLAIMER: Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.
TRADEMARKS: eValid, HealthCheck, eValidation, TestWorks, STW,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR, eValid,
and TestWorks logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.
========================================================================
-------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================
To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:
<http://www.soft.com/News/QTN-Online/subscribe.html>.
As a backup you may send E-mail directly to as follows:
TO SUBSCRIBE: Include this phrase in the body of your message:
subscribe
TO UNSUBSCRIBE: Include this phrase in the body of your message:
unsubscribe
Please, when using either method to subscribe or unsubscribe, type
the E-mail address exactly and completely. Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.
QUALITY TECHNIQUES NEWSLETTER
Software Research, Inc.
1663 Mission Street, Suite 400
San Francisco, CA 94103 USA
Phone: +1 (415) 861-2800
Toll Free: +1 (800) 942-SOFT (USA Only)
FAX: +1 (415) 861-9801
Email: info@soft.com
Web: <http://www.soft.com/News/QTN-Online>