+===================================================+
+======= TeSting TechniqueS NewSletter (TTN) =======+
+=======           ON-LINE EDITION           =======+
+=======              March 1996             =======+
+===================================================+
TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is E-Mailed
monthly to support the Software Research, Inc. (SR) user community and
provide information of general use to the world software testing commun-
ity.
(c) Copyright 1995 by Software Research, Inc. Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition pro-
vided that the entire document/file is kept intact and this copyright
notice appears with it.
TRADEMARKS: STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
STW/Advisor and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.
========================================================================
INSIDE THIS ISSUE:
o Quality Week 1996: FINAL PROGRAM ANNOUNCED
o Software Negligence and Testing Coverage (Part 3 of 4), by Cem
Kaner
o Frequently Asked Questions about the Space Shuttle Computers (Part
2 of 3)
o 18th International Conference on Software Engineering
o Correct Correction
o TTN SUBMITTAL POLICY
o TTN SUBSCRIPTION INFORMATION
========================================================================
Ninth International Software Quality Week (QW'96)
21-24 May 1996
Sheraton Palace Hotel, San Francisco, California
CONFERENCE THEME: QUALITY PROCESS CONVERGENCE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advances in technology have swept the computing industry to new heights
of innovation. The astonishing growth of the InterNet and the WWW, the
maturation of client-server technology, and the emerging developments
with C++ and Sun's Java Language (tm) are three illustrations of the
rapid deployment we are seeing in the 1990s. For software quality to
keep pace, existing methods, approaches and tools have to be organized
into well-structured ``process models'' that apply quality control and
test methods in a reasoned, practical way. Quality Process Convergence -
making sure that applied quality techniques produce real results at
acceptable costs - is the key to success. The Ninth International
Software Quality Week focuses on software testing, analysis, evaluation
and review methods that support and enable process thinking. Quality
Week '96 brings the best quality industry thinkers and practitioners
together to help you keep the competitive edge.
CONFERENCE SPONSORS
^^^^^^^^^^^^^^^^^^^
The QW'96 Conference is sponsored by SR/Institute, in cooperation with the
IEEE Computer Society (Technical Council on Software Engineering) and in
cooperation with the ACM. Members of the IEEE and ACM receive a 10%
discount off all registration fees.
TECHNICAL PROGRAM DESCRIPTION
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Pre-Conference Tutorial Day offers expert insights on ten key topic
areas. The Keynote presentations give unique perspectives on trends in
the field and recent technical developments in the community, and offer
conclusions and recommendations to attendees.
The General Conference offers four track presentations, mini-tutorials
and a debate:
Technical Track Topics include:
OO Testing
Specifications
Ada
Statistical Methods
Rule-Based Testing
Class Testing
Testability
Applications Track Topics include:
Decision Support
Mission-Critical
Innovative Process
Internal Risk
GUI Testing
New Approaches
Management Track Topics include:
QA Delivery
Testing Topics
Process Improvement - I
Process Improvement - II
Metrics to Reduce Risk
Process Improvement - III
Success Stories
Quick-Start Mini-Tutorial Track includes:
An Overview of Model Checking
Software Reliability Engineered Testing Overview
Teaching Testers: Obstacles and Ideas
Testing Object-Oriented Software: A Hierarchical Approach
Best Current Practices in Software Quality
A History of Software Testing and Verification
Software Testing: Can We Ship It Yet?
Q U A L I T Y W E E K ' 9 6
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
C O N F E R E N C E P R O G R A M
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TUESDAY, 21 MAY 1996 (TUTORIAL DAY)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tutorial Day offers ten lectures in two time slots on current issues and
technologies. You can choose one tutorial from each of the two time
slots.
Tuesday, 21 May 1996, 8:30 - 12:00 -- AM Half-Day Tutorials
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mr. Robert V. Binder (System Consulting Inc.) "Object-Oriented System
Testing: The FREE Approach (Tutorial A)"
This tutorial presents a complete and coherent approach to system
testing based on object-oriented software requirements. Participants
will learn how to develop an efficient and effective system test plan
from use cases, scenarios and object-interaction diagrams.
Dr. Boris Beizer (ANALYSIS) "An Overview Of Testing: Unit, Integration,
System (Tutorial B)"
This overview of software testing introduces newcomers to software
testing to the technical and conceptual vocabulary of testing in
order to prepare them to understand the conference material. In the
past, this has been one of the most popular pre-conference tutorials.
It assumes only basic programming knowledge and no prior experience
with formal testing methods. It is updated each year to assure
currency.
Dr. Walt Scacchi (University of Southern California) "Understanding
Software Productivity (Tutorial C)"
This tutorial examines what is currently known and unknown about
software productivity through: (a) review and comparative analysis of
published empirical studies and measures of software productivity, and
what affects it; as a basis for (b) synthesizing what can be done to
better measure, understand and improve software productivity.
Mr. Lech Krzanik (CCC Software Professionals Oy) "BOOTSTRAP: A European
Software Process Assessment and Improvement Method (Tutorial D)"
BOOTSTRAP, which was developed based on the experience of SEI and ISO
9001/9000-3, is expected to become the first complete, widely used
methodology and tool suite for process assessment and improvement that
is fully SPICE compatible (SPICE, Software Process Improvement
and Capability determination, is an ISO standard initiative to be
published next year). Against the background of an up-to-date com-
parative review of the principles and practices of other software
process assessment and improvement approaches, the BOOTSTRAP metho-
dology, tools and experiences are demonstrated.
Mr. John D. Musa (AT&T Bell Labs) "Software Reliability Engineered Test-
ing (Tutorial E)"
Software-reliability-engineered testing (SRET) is engineered to test
software as efficiently and reliably as possible. This tutorial
teaches the major activities of SRET: developing an operational pro-
file, defining "failure" with severity classes, setting system
failure intensity objectives, allocating system failure intensity
objectives among components, certifying failure intensities of
acquired software components, testing the system to the failure
intensity objectives, and rehearsing customer acceptance tests.
Tuesday, 21 May 1996, 1:30 - 5:00 -- PM Half-Day Tutorials
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mr. Hans-Ludwig Hausen (GMD Gesellschaft fuer Mathematik und Datenver-
arbeitung mbH) "Software Quality Evaluation and Certification (Tutorial
F)"
This tutorial will make the attendees aware of the development of the
technology of software quality evaluation and certification, and pro-
vide them with practical approaches to the problem of quality. The
results and findings of several European projects on software quality
and productivity are presented specifically to meet the needs of
software managers and developers.
Dr. Norman F. Schneidewind (Naval Postgraduate School) "Software Relia-
bility Engineering for Client-Server Systems (Tutorial G)"
This tutorial addresses the increasing use of multi-node client-
server and distributed systems, in which software entities executing
on multiple nodes must be modeled as systems if realistic reliability
predictions and assessments are to be made. The following topics are
covered: specifying client-server software reliability requirements,
identifying critical and noncritical client and server functions;
specifying a client- server architecture to meet software reliability
requirements; modeling and predicting client-server software relia-
bility, and integrating modeling and prediction with software testing
of client-server systems.
Mr. William J. Deibler, Mr. Bob Bamford (Software Systems Quality Con-
sulting) "Models for Software Quality -- Comparing the SEI Capability
Maturity Model (CMM) to ISO 9001 (Tutorial H)"
Based on an in-depth analysis of the relationship between ISO 9001 and
the SEI CMM, this course provides overviews and detailed examinations
of both models. Participants will learn to determine criteria for
applying each model to the engineering practices of a particular
organization; to understand how ISO 9000-3 supports the application of
each clause in practice; to avoid time-consuming misinterpretations; to
organize the Key Process Areas in Version 1.1 of the CMM; to define
areas of overlap and difference between the two models; and to
anticipate the impact of specific quality assessment programs.
Mr. Dan Craigen, Mr. Ted Ralston (ORA Canada) "An Overview of Formal
Methods (Tutorial I)"
This tutorial provides a high-level briefing about Formal Methods,
without focusing on mathematical minutiae or parochial arguments
about which formal method is "best." Formal Methods can be used to
extend our capability to predict the behavior of systems and to com-
plement the analyses of conventional approaches to software quality
(testing and inspection). The tutorial presents the basic concepts
of Formal Methods. Some major successes of Formal Methods in indus-
try are summarized, and popular myths are addressed. The overview of
the capabilities of the technology includes what is currently feasi-
ble and what is being investigated. The tutorial concludes with gui-
dance about how to get started with Formal Methods and where to find
further resources.
Mr. Tom Gilb (Independent Consultant) "Software Inspection (Tutorial J)"
This tutorial focuses on correcting misconceptions about software
inspection and on updating participants to a more advanced level of
practice. Advances made over the last 20 years will be discussed,
and participants will hear how to get the most out of inspections,
how to move on from our present state of inspections, and whether
inspections relate to tests as complement or as competition.
22-24 MAY 1996 -- QUALITY WEEK '96 CONFERENCE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Wednesday, 22 May 1996, 8:30 - 12:00 -- OPENING KEYNOTES
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mr. Walter Ellis (Software Process and Metrics) "NSC:
A Prospectus And Status Report (Keynote) (1-1)"
Mr. Tom Gilb (Independent Consultant) "The `Result Method' for Qual-
ity Process Convergence (Keynote) (1-2)"
Prof. Leon Osterweil (University of Massachusetts Amherst) "Perpetu-
ally Testing Software (Keynote) (1-3)"
Dr. Watts Humphrey (Carnegie Mellon University) "What if Your Life
Depended on Software? (Keynote) (1-4)"
Wednesday, 22 May 1996, 1:30 - 5:00 -- PM Parallel Tracks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TECHNOLOGY TRACK
Mr. T. Ashok, Mr. K. Rangaraajan, Mr. P. Eswar (VeriFone Inc.)
"Retesting C++ Classes (2T1)"
Mr. John D. McGregor, Mr. Anuradha Kare (Department of Computer Sci-
ence, Clemson University) "PACT: An Architecture for Object-Oriented
Component Testing (2T2)"
Mr. Shane McCarron (X/Open Company Ltd.) "The Assertion Definition
Language Project: A tool for Automated Test and Documentation Gen-
eration (2T3)"
Mr. Daniel Jackson (Carnegie Mellon University) "New Technology For
Checking Software Specifications (2T4)"
APPLICATIONS TRACK
Prof. Vic Basili, Mr. Zhijun Zhang (University of Maryland) "A Frame-
work for Collecting and Analyzing Usability Data (2A1)"
Dr. Peter Liggesmeyer (Siemens AG) "Selecting Test Methods, Tech-
niques, Metrics, and Tools Using Systematic Decision Support (2A2)"
Mr. Lorenzo Lattanzi, Mr. Francesco Piazza (Alenia Spazio) "Testing
of a Mission Critical Real-Time Software for Space Application"
(2A3)"
Dr. Jacob Slonim, Mr. Michael Bauer, Ms. Jillian Ye (IBM Canada Lab)
"Structural Measurement of Functional Testing: A Case Study in an
Industrial Setting (2A4)"
MANAGEMENT TRACK
Mr. Dave Duchesneau, Mr. Jay G. Ahlbeck (The Boeing Company) "The
Secret to Installing Valued-Added SQA (2M1)"
Dr. Walt Scacchi (University of Southern California) "Knowledge-Based
Software Process (Re)Engineering (2M2)"
Mr. Tilmann Bruckhaus (School of Computer Science) "How Tools, Pro-
ject Size and Development Process Affect Productivity (2M3)"
Mr. Otto Vinter (Bruel & Kjaer) "Experience-Driven Process Improve-
ment Boosts Software Quality (2M4)"
QUICK START TRACK MINI-TUTORIALS
Mr. Daniel Jackson (Carnegie Mellon University) "An Overview of Model
Checking (Q2)"
Mr. John D. Musa (AT&T Bell Labs) "Software Reliability Engineered
Testing Overview (Q1)"
Thursday, 23 May 1996, 8:30 - 12:00 -- AM Parallel Tracks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TECHNOLOGY TRACK
Mr. Franco Mazzanti, Mr. Consolata Marzullo (IEI-CNR) "The Need and
Feasibility of the Static Detection of Erroneous Executions in Ada95
(3T1)"
Dr. Sandro Morasca, Mr. Mauro Pezze, Mr. Sergio Silva (Dipartimento
di Elettronica e Informazione, Politecnico di Milano) "Mutation
Analysis For Concurrent Ada Programs (3T2)"
Mr. Joseph Huey-Der Chu, Mr. John Dobson (University of Newcastle
upon Tyne) "A Statistics-Based Framework for Automated Software Test-
ing (3T3)"
Ms. Gwendolyn Walton, Mr. James A. Whittaker (University of Central
Florida; soon President, Software Engineering Technologies) "Software
Technology Based On A Usage Model (3T4)"
APPLICATIONS TRACK
Mr. Michael Deck (Cleanroom Software Engineering, Inc.) "Cleanroom
Practice: A Theme and Variations (3A1)"
Ms. Ilene Burnstein, Mr. Taratip Suwannasart, Mr. C. Robert Carlson
(Illinois Institute Of Technology) "The Development of a Testing
Maturity Model (3A2)"
Mr. Jarrett Rosenberg (Sun Microsystems) "Linking Internal and Exter-
nal Quality Measures (3A3)"
Mr. Staale Amland (Avenir A.S.) "Risk Based Testing of Large Finan-
cial Application (3A4)"
MANAGEMENT TRACK
Mr. Chuck House (Centerline Software) "The Development Dilemma of the
SEI Model: Case Studies in Software Process Improvement (3M1)"
Ms. Barb Denny (Rockwell - Collins Commercial Avionics) "Achieving
ISO-9001: A Software Perspective (3M2)"
Captain Brian G. Hermann (U.S. Air Force) "Software Maturity Evalua-
tion: When Is Software Ready for Operational Testing or Fielding?
(3M3)"
Mr. Robert A. Martin, Ms. Mary T. Drozd (The Mitre Corporation)
"Using Product Quality and Level of Integration to Assess Software
Engineering Capability and Focus Process Improvement (3M4)"
QUICK START TRACK MINI-TUTORIALS
Mr. James Bach (STL) "Teaching Testers: Obstacles and Ideas (Q3)"
Mr. Shel Siegel (Objective Quality Inc.) "Testing Object Oriented
SW: A Hierarchical Approach (Q4)"
Thursday, 23 May 1996, 1:30 - 5:00 -- PM Parallel Tracks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TECHNOLOGY TRACK
Dr. Alberto Avritzer, Dr. Elaine Weyuker (AT&T Bell Labs) "Testing a
Rule-Based System (4T1)"
Ms. Valerie Barr (Hofstra University) "Rule-Based System Testing with
Control and Data Flow Techniques (4T2)"
Dr. T. H. Tse, Mr. Zhinong Xu (University of Hong Kong) "Test Case
Generation for Class-Level Object-Oriented Testing (4T3)"
Mr. Biju Nair, Kenneth R. Gulledge, Ramona F. Lingevitch (SAFCO Cor-
poration) "Using OLE Automation for Efficiently Automating Software
Testing (4T4)"
APPLICATIONS TRACK
Prof. Lee J. White (Case Western Reserve University) "Automated GUI
Testing for Static or Dynamic Interactions (4A1)"
Ms. Carolyn L. Fairbank (The Maryland Insurance Group) "Moving up to
Test Automation - Mainframe to GUI (4A2)"
Dr. Boris Beizer, Mr. Tom Gilb (Independent Consultants) "Testing Vs.
Inspection -- THE GREAT DEBATE (4A3)"
MANAGEMENT TRACK
Mr. Steven L. Dodge (Naval Surface Warfare Center Division) "Focusing
Testing Efforts: Software Metrics in Test Planning (4M1)"
Ms. Johanna Rothman (Rothman Consulting Group) "Measurements to
Reduce Risk in Product Ship Decisions (4M2)"
Dr. Bob Birss (Moderator), Mr. Robert Hodges, Mr. Cem Kaner, Mr.
Brian Marick, Ms. Melora Svoboda (AT&T Bell Laboratories) "How To
Save Time And Money In Testing: A PANEL DISCUSSION (4M3)"
Dr. Matthias Grochtmann (Daimler-Benz AG) "Testing Software is Okay;
But Testing Machines is Fun Too (4M4)"
QUICK START TRACK MINI-TUTORIALS
Mr. Tom Drake (NSA Software Engineering Center) "Best Current Prac-
tices In Software Quality Engineering (Q5)"
Prof. Leon Osterweil, Dan Craigen (University of Massachusetts
Amherst) "A History of Software Testing and Verification (Q6)"
Friday, 24 May 1996, 8:30 - 10:00 -- AM Parallel Tracks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TECHNOLOGY TRACK
Prof. Antonia Bertolino & Lorenzo Strigini (IEI-CNR) "Predicting
Software Reliability From Testing Taking Into Account Other Knowledge
About a Program (5T1)"
Mr. Bernd Otto (Condat GmbH Berlin) "Design For Testable Telecom-
munications Software -- A Practical Approach (5T2)"
APPLICATIONS TRACK
Mr. William (Bill) Farr (Naval Surface Warfare Center, Dahlgren Divi-
sion) "A Tool For Software Reliability Assessment (5A1)"
Mr. Shankar L. Chakrabarti, Mr. Rajeev Pandey (Hewlett-Packard Com-
pany) "Testing The WEB We Weave (5A2)"
MANAGEMENT TRACK
Mr. Roger Drabick (Eastman Kodak Company) "Testing Experiences on an
Imaging Program (5M1)"
Mr. Bret Pettichord (BMC Software, Inc.) "Success with Test
Automation (5M2)"
QUICK START TRACK MINI-TUTORIALS
Mr. Roger W. Sherman, Mr. Stuart Jenine (Microsoft Corporation)
"Software Testing: Can We Ship It Yet? (Q7)"
Friday, 24 May 1996, 10:30 - 1:00 -- CLOSING KEYNOTES
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mr. Guenther R. Koch (European Software Institute) "The European
Software Institute As A Change Agent (KEYNOTE) (6-1)"
Mr. Clark Savage Turner (Software Engineering Testing) "Legal Suffi-
ciency of Safety-Critical Testing Process (Keynote) (6-2)"
Dr. Boris Beizer (ANALYSIS) "Software *is* Different (Keynote) (6-3)"
Dr. Edward Miller (Software Research) "Conference Conclusion"
R E G I S T R A T I O N F O R Q U A L I T Y W E E K
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Contact SR/Institute at qw@soft.com or FAX +1 (415) 550-3030 or register
electronically at http://www.soft.com/QualWeek/.
========================================================================
SOFTWARE NEGLIGENCE AND TESTING COVERAGE (Part 3 of 4)
by
CEM KANER, J.D., PH.D., ASQC-C.Q.E.
Editor's Note: This article was first published in the Software QA Quar-
terly, Volume 2, No. 2, 1995, pp. 18-26. Copyright (c) Cem Kaner, 1995.
All rights reserved. It is reprinted in TTN-Online, in four parts, by
permission of the author.
(Part 3 of 4)
10. Relational coverage. [Checks whether the subsystem has been exer-
cised in a way that tends to detect off-by-one errors] such as errors
caused by using < instead of <=. (Footnote 11)
------------
Footnote 11 B. Marick, The Craft of Software Testing, Prentice Hall,
1995, p. 147.
------------
This coverage includes:
* Every boundary on every input variable. (Footnote 12)
* Every boundary on every output variable.
* Every boundary on every variable used in intermediate calculations.
------------
Footnote 12 Boundaries are classically described in numeric terms, but
any change-point in a program can be a boundary. If the program works
one way on one side of the change-point and differently on the other
side, what does it matter whether the change-point is a number, a state
variable, an amount of disk space or available memory, or a change in a
document from one typeface to another, etc.? See C. Kaner, J. Falk, and
H.Q. Nguyen, Testing Computer Software (2nd Ed.), Van Nostrand Reinhold,
1993, pp. 399-401.
------------
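To make the off-by-one idea concrete, here is a minimal sketch in
Python; the discount rule, its $100.00 threshold, and the function name
are all invented for the illustration:

    # Hypothetical rule: a discount is supposed to apply to orders of
    # $100.00 or more, but the programmer typed > instead of >=.
    def qualifies_for_discount(order_total):
        return order_total > 100.00   # BUG: should be >= 100.00

    # Boundary-focused cases: just below, exactly at, and just above
    # the boundary. Only the "exactly at" case exposes the error.
    for total, expected in [(99.99, False), (100.00, True), (100.01, True)]:
        actual = qualifies_for_discount(total)
        flag = "  <-- FAILS" if actual != expected else ""
        print(f"total={total}: expected {expected}, got {actual}{flag}")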
11. Data coverage. At least one test case for each data item / variable
/ field in the program.
12. Constraints among variables: Let X and Y be two variables in the
program. X and Y constrain each other if the value of one restricts the
values the other can take. For example, if X is a transaction date and Y
is the transaction's confirmation date, Y can't occur before X.
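A minimal sketch of checking such a constraint, assuming hypothetical
records that pair a transaction date with its confirmation date:

    from datetime import date

    # Hypothetical records: (transaction_date, confirmation_date).
    # Constraint: the confirmation can never precede the transaction.
    records = [
        (date(1996, 3, 1), date(1996, 3, 4)),   # satisfies the constraint
        (date(1996, 3, 10), date(1996, 3, 8)),  # violates it
    ]

    for x, y in records:
        if y < x:
            print(f"violation: confirmation {y} precedes transaction {x}")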
13. Each appearance of a variable. Suppose that you can enter a value
for X on three different data entry screens, the value of X is displayed
on another two screens, and it is printed in five reports. Change X at
each data entry screen and check the effect everywhere else X appears.
14. Every type of data sent to every object. A key characteristic of
object-oriented programming is that each object can handle any type of
data (integer, real, string, etc.) that you pass to it. So, pass every
conceivable type of data to every object.
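In a dynamically typed language this is easy to sketch. The class and
method below are hypothetical; the point is that each input should be
either handled or rejected cleanly, never allowed to crash the program:

    # Hypothetical object under test.
    class Accumulator:
        def __init__(self):
            self.total = 0
        def add(self, value):
            self.total += value   # fine for numbers; raises otherwise

    # Pass every conceivable type of data to the object and record
    # whether each is handled or rejected cleanly.
    for sample in [3, 2.5, "seven", None, [1, 2], {"n": 1}]:
        acc = Accumulator()
        try:
            acc.add(sample)
            print(f"{type(sample).__name__}: accepted, total={acc.total}")
        except TypeError as exc:
            print(f"{type(sample).__name__}: rejected cleanly ({exc})")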
15. Handling of every potential data conflict. For example, in an
appointment calendaring program, what happens if the user tries to
schedule two appointments at the same date and time?
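A toy sketch of that calendaring conflict (the booking policy and all
names are invented; the test's job is to discover what the real
program's policy actually is):

    booked = {}   # slot -> appointment

    def schedule(when, who):
        if when in booked:
            # One reasonable policy: refuse and report the conflict.
            return f"conflict: {when} already booked for {booked[when]}"
        booked[when] = who
        return f"booked {when} for {who}"

    print(schedule("1996-03-15 10:00", "dentist"))
    print(schedule("1996-03-15 10:00", "staff meeting"))  # the conflict case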
16. Handling of every error state. Put the program into the error state,
check for effects on the stack, available memory, handling of keyboard
input. Failure to handle user errors well is an important problem, par-
tially because about 90% of industrial accidents are blamed on human
error or risk-taking. (Footnote 13) Under the legal doctrine of "forsee-
able misuse" (Footnote 14), the manufacturer is liable in negligence if
it fails to protect the customer from the consequences of a reasonably
forseeable misuse of the product.
------------
Footnote 13 B.S. Dhillon, Human Reliability With Human Factors, Pergamon
Press, 1986, p. 153.
Footnote 14 This doctrine is cleanly explained in S. Brown (Ed.) The
Product Liability Handbook: Prevention, Risk, Consequence, and Forensics
of Product Failure, Van Nostrand Reinhold, 1991, pp. 18-19.
------------
17. Every complexity / maintainability metric against every module,
object, subsystem, etc. There are many such measures. Jones (Footnote
15) lists 20 of them. (Footnote 16) People sometimes ask whether any of
these statistics are grounded in a theory of measurement or have practi-
cal value. (Footnote 17) But it is clear that, in practice, some organi-
zations find them an effective tool for highlighting code that needs
further investigation and might need redesign (Footnote 18).
------------
Footnote 15 C. Jones, Applied Software Measurement, McGraw-Hill, 1991,
pp. 238-341.
Footnote 16 B. Beizer, Software Testing Techniques (2nd Ed.), Van Nos-
trand Reinhold, 1990, provides a sympathetic introduction to these meas-
ures. R.L. Glass, Building Quality Software, Prentice Hall, 1992, and
R.B. Grady, D.L. Caswell, Software Metrics: Establishing a Company-Wide
Program, Prentice Hall, 1987, provide valuable perspective.
Footnote 17 For example, C. Kaner, J. Falk, and H.Q. Nguyen, Testing
Computer Software (2nd Ed.), Van Nostrand Reinhold, 1993, pp. 47-48;
also R.L. Glass, Building Quality Software, Prentice Hall, 1992, [Soft-
ware metrics to date have not produced any software quality results
which are useful in practice] p. 303.
Footnote 18 R.B. Grady, D.L. Caswell, Software Metrics: Establishing a
Company-Wide Program, Prentice Hall, 1987 and R.B. Grady, Practical
Software Metrics for Project Management and Process Improvement, PTR
Prentice Hall (Hewlett-Packard Professional Books), 1992, pp. 87-90.
------------
18. Conformity of every module, subsystem, etc. against every corporate
coding standard. Several companies believe that it is useful to measure
characteristics of the code, such as total lines per module, ratio of
lines of comments to lines of code, frequency of occurrence of certain
types of statements, etc. A module that doesn't fall within the [normal]
range might be summarily rejected (bad idea) or re-examined to see if
there's a better way to design this part of the program.
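As an illustration, here is a crude checker for two of the
characteristics just mentioned; the thresholds are invented for the
example and are not recommendations:

    # Measure total lines and the comment-to-code ratio of a module,
    # flagging outliers for re-examination (not summary rejection).
    def check_module(lines, max_lines=400, min_comment_ratio=0.10):
        comments = sum(1 for ln in lines if ln.lstrip().startswith("#"))
        code = sum(1 for ln in lines
                   if ln.strip() and not ln.lstrip().startswith("#"))
        problems = []
        if len(lines) > max_lines:
            problems.append(f"{len(lines)} lines exceeds {max_lines}")
        if code and comments / code < min_comment_ratio:
            problems.append(f"comment ratio {comments / code:.2f} "
                            f"below {min_comment_ratio}")
        return problems

    sample = ["x = 1\n"] * 50 + ["# lone comment\n"]
    print(check_module(sample))   # flags the low comment ratio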
19. Table-driven code. The table is a list of addresses or pointers or
names of modules. In a traditional CASE statement, the program branches
to one of several places depending on the value of an expression. In the
table-driven equivalent, the program would branch to the place specified
in, say, location 23 of the table. The table is probably in a separate
data file that can vary from day to day or from installation to instal-
lation. By modifying the table, you can radically change the control
flow of the program without recompiling or relinking the code. Some pro-
grams drive a great deal of their control flow this way, using several
tables. Coverage measures? Some examples:
* check that every expression selects the correct table element
* check that the program correctly jumps or calls through every table
element
* check that every address or pointer that is available to be loaded
into these tables is valid (no jumps to impossible places in memory, or
to a routine whose starting address has changed)
* check the validity of every table that is loaded at any customer
site.
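A minimal table-driven dispatch sketch; the handler names and the idea
of loading the table from a per-installation data file are illustrative
only:

    def open_account():  return "opened"
    def close_account(): return "closed"
    def audit():         return "audited"

    # In a real system this table might be loaded from a data file that
    # varies by installation; editing it redirects control flow without
    # recompiling or relinking anything.
    DISPATCH_TABLE = [open_account, close_account, audit]

    def run(selector):
        # Two of the coverage concerns listed above: does every selector
        # value pick the right element, and is every entry a valid target?
        if not 0 <= selector < len(DISPATCH_TABLE):
            raise IndexError(f"selector {selector} outside table")
        return DISPATCH_TABLE[selector]()

    for s in range(len(DISPATCH_TABLE)):   # exercise every table element
        print(s, run(s))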
20. Every interrupt. An interrupt is a special signal that causes the
computer to stop the program in progress and branch to an interrupt han-
dling routine. Later, the program restarts from where it was inter-
rupted. Interrupts might be triggered by hardware events (I/O or sig-
nals from the clock that a specified interval has elapsed) or software
(such as error traps). Generate every type of interrupt in every way
possible to trigger that interrupt.
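On a Unix system, POSIX signals make a convenient stand-in for hardware
interrupts in a small sketch like this (the handler and timing are
illustrative; this will not run on Windows):

    import os, signal, time

    def handler(signum, frame):
        # The handler can fire between any two lines of the mainline code.
        print(f"interrupt {signum} handled near line {frame.f_lineno}")

    signal.signal(signal.SIGALRM, handler)
    os.kill(os.getpid(), signal.SIGALRM)   # one way: raise it directly
    signal.alarm(1)                        # another way: a timer interrupt

    time.sleep(1.5)   # mainline work during which the alarm also fires
    print("mainline resumed after interrupts")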
21. Every interrupt at every task, module, object, or even every line.
The interrupt handling routine might change state variables, load data,
use or shut down a peripheral device, or affect memory in ways that
could be visible to the rest of the program. The interrupt can happen at
any time -- between any two lines, or when any module is being executed.
The program may fail if the interrupt is handled at a specific time.
(Example: what if the program branches to handle an interrupt while
it's in the middle of writing to the disk drive?)
The number of test cases here is huge, but that doesn't mean you don't
have to think about this type of testing. This is path testing through
the eyes of the processor (which asks, [What instruction do I execute
next?] and doesn't care whether the instruction comes from the mainline
code or from an interrupt handler) rather than path testing through the
eyes of the reader of the mainline code. Especially in programs that
have global state variables, interrupts at unexpected times can lead to
very odd results.
22. Every anticipated or potential race. Imagine two events, A and B.
Both will occur, but the program is designed under the assumption that A
will always precede B. (Footnote 19) This sets up a race between A and B
-- if B ever precedes A, the program will probably fail. To achieve race
coverage, you must identify every potential race condition and then find
ways, using random data or systematic test case selection, to attempt to
drive B to precede A in each case.
Races can be subtle. Suppose that you can enter a value for a data item
on two different data entry screens. User 1 begins to edit a record,
through the first screen. In the process, the program locks the record
in Table 1. User 2 opens the second screen, which calls up a record in
a different table, Table 2. The program is written to automatically
update the corresponding record in Table 1 when User 2 finishes data
entry. Now, suppose that User 2 finishes before User 1. Table 2 has been
updated, but the attempt to synchronize Table 1 and Table 2 fails. What
happens at the time of failure, or later if the corresponding records in
Table 1 and 2 stay out of synch?
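The heart of that scenario can be sketched with two unsynchronized
threads; the table names and timing are invented, and the randomized
delays are one way to drive B ahead of A:

    import random
    import threading
    import time

    record = {"table1": None, "table2": None}

    def user1_edit():                  # event A
        time.sleep(random.uniform(0, 0.01))
        record["table1"] = "edited"

    def user2_finish():                # event B: assumes A already ran
        time.sleep(random.uniform(0, 0.01))
        if record["table1"] is None:
            print("race detected: Table 2 synchronized before Table 1")
        record["table2"] = "synchronized"

    for _ in range(20):                # repeated runs to provoke B before A
        record.update(table1=None, table2=None)
        a = threading.Thread(target=user1_edit)
        b = threading.Thread(target=user2_finish)
        a.start(); b.start(); a.join(); b.join()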
------------
Footnote 19 Here as in many other areas, see Appendix 1 of C. Kaner,
J. Falk, and H.Q. Nguyen, Testing Computer Software (2nd Ed.), Van
Nostrand Reinhold, 1993, for a list and discussion of several
hundred types of bugs, including interrupt-related, race-condition-
related, etc.
------------
( T O B E C O N C L U D E D )
========================================================================
FREQUENTLY ASKED QUESTIONS ABOUT THE SPACE SHUTTLE COMPUTERS
Part 2 of 3
This is the FAQ list just for the Space Shuttle computer systems. The
information here was collected by Brad Mears during several years of
working in the shuttle flight software arena, then expanded by Ken Jenks
with major assistance from Kaylene Kindt of the NASA/Johnson Space
Center's Engineering Directorate. If you believe any part of this docu-
ment is in error, contact me and I'll try to resolve the issue thru
further research. My email address is kjenks@gothamcity.jsc.nasa.gov.
The latest version of this document is available via anonymous FTP at
ftp://explorer.arc.nasa.gov/pub/SPACE/FAQ/shuttle-GPC-FAQ.txt
<>
8) Why five computers? Redundancy. Every aspect of the shuttle is
designed with redundant systems (when possible). The goal of the shut-
tle is to withstand any two failures and still maintain crew safety.
This goal is met with very few exceptions. For the computer system,
there are two levels of redundancy. The first level of redundancy is
thru the use of multiple computers running the same software. The
second level is thru the use of two completely independent sets of soft-
ware.
The five General Purpose Computers (GPCs) are all running simultaneously
during Ascent and Entry. Four of them are running the Primary Avionics
Shuttle Software (PASS) and one of them is running the Backup Flight
Software (BFS). The four PASS machines run the exact same software,
using the exact same inputs. Each of the four has equal responsibility
for controlling the vehicle. If any of them fail, the others will over-
ride it and the vehicle is still safe.
The four PASS GPC's have a unique voting scheme. The four computers in
the "redundant set" process the same inputs and (usually) give the same
outputs. Periodically, they compare their outputs. If a conflict
arises, the computers vote on each other. If one computer gives outputs
which contradict the outputs of the majority of the computers in the
redundant set, that computer puts itself in a wait state, and the
remaining computers carry on. If the remaining three computers
disagree, the majority rules again, and another computer could be booted
out of the redundant set. When only two computers remain in the redun-
dant set, if there is a conflict, it's called a "split" and each GPC will
command its own buses, and they won't be in synch any more. At this
point, the ground controllers and the crew will then be working very
hard to figure out what to do next, but there's always the BFS.
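The majority-vote idea can be illustrated with a toy model; this is
only a sketch of the voting principle described above, not the actual
shuttle logic:

    from collections import Counter

    def vote(outputs):
        """outputs maps computer name -> output value. Returns the
        surviving redundant set, or None on a split (no majority)."""
        value, count = Counter(outputs.values()).most_common(1)[0]
        if count <= len(outputs) / 2:
            return None   # e.g. a two-on-two split
        # Dissenters take themselves out of the redundant set.
        return {name: v for name, v in outputs.items() if v == value}

    print(vote({"GPC1": 42, "GPC2": 42, "GPC3": 42, "GPC4": 41}))
    # -> GPC4 voted out; three computers remain
    print(vote({"GPC1": 42, "GPC2": 42, "GPC3": 41, "GPC4": 41}))
    # -> None: a two-on-two split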
So if all four PASS machines have failed for some reason (hardware or
software), or if there is a two-on-two split or a one-on-one split in
the redundant set, the crew can engage the Backup Flight Software. Dur-
ing normal flight, the BFS is running and watching the same inputs as
the PASS GPC's, but the settings of certain hardware switches prevent it
from taking control of the vehicle. After BFS engage, this changes --
the four PASS machines are prevented from issuing commands to the vehi-
cle and the BFS machine is now in full control.
The previous discussion applies only to Ascent and Entry. While On-
orbit, the computers are put into a different configuration. As soon as
the Ascent phase is over and the vehicle is safely in orbit, the crew
will perform an operation called "freeze-drying" a GPC. As discussed
earlier, this operation consists of configuring one GPC for De-orbit and
Entry and then putting it in Sleep mode. This computer can be used for
a safe return to earth, even if all the other GPC's and MMU tape drives
have failed. In addition to the freeze-dried GPC, a copy of the Entry
software is contained in the upper memory of the four PASS GPC's. This
allows quick access to the Entry software without using the MMU's. The
BFS computer also contains Entry software and can be configured for
Entry without accessing the MMU's.
9) How does the crew engage the Backup Flight Software? The Commander
and Pilot each have a bright red switch located prominently in front of
them on their Rotational Hand Controller (joystick). At any time during
flight, either or both of them can depress this switch. At that time,
an irreversible process begins.
Some special purpose boxes called Backup Flight Controllers detect this
event and turn certain discrete signals to the GPCs off or on. These
hardware signals are what actually allow any given GPC to issue control
commands to the vehicle.
Within 40 milliseconds, all of the PASS computers will be disabled from
sending commands to the vehicle and the BFS computer will be in control.
Other than some changes on the CRTs, the transition to BFS is mostly
transparent.
There is a reluctance to actually engage BFS and the crew will not do so
until some very specific conditions arise. The reason is that the BFS
has never been proven in flight. Yes, it has been tested in simulators
a zillion times and people have a fairly high confidence in it. But a
warm feeling is no substitute for actual flight tests.
10) Who wrote the software and what language is it in? The PASS was
written by IBM Federal Systems Division. Most of it is written in an
obscure language called HAL/S and some of it in assembly. HAL/S was
developed specifically for the Shuttle program by Intermetrics, Inc.,
which was founded by five MIT programmers who had worked on software for
the Apollo program. HAL/S is something like a cross between Pascal and
FORTRAN. It has built-in support for vector and matrix arithmetic.
This is useful since all guidance & navigation calculations make heavy
use of these structures. Another nice feature is very strong support
for real-time scheduling. Contrary to legend, the name of the language
is not related to IBM ("add one to each letter position"), but is really
a tribute to J. Halcombe Laning, who invented an algebraic compiler for
MIT's "Whirlwind" computer in 1952.
The BFS was written by three companies: Rockwell International, Charles
S. Draper Lab, and Intermetrics. It is also largely HAL/S with some
assembly.
<>
References:

Newsgroups: sci.space.shuttle
From: taft@mv.us.adobe.com (Ed Taft)
Subject: Re: I'd like a copy of the GPC FAQ...
Organization: Adobe Systems Incorporated, Mountain View, CA
Date: Fri, 29 Apr 1994 17:47:19 GMT
Just as a footnote to this excellent FAQ, I'd like to mention that there
were several interesting in-depth articles on the development of the GPC
software in the September 1984 issue of Communications of the ACM. This
is a publication of the Association for Computing Machinery which should
be available in any technical library. Perhaps this bibliographic refer-
ence should be added to the FAQ. -- Ed Taft taft@adobe.com
----------------------
"Space Shuttle: The History of Developing the National Space
Transportation System." Dennis R. Jenkins (djenkins@iu.net), (c) 1992,
ISBN 0-9633974-1-9, pp. 158-163.
----------------------
Also check out the WWW page about the GPC's:
http://www.ksc.nasa.gov/shuttle/technology/sts-newsref/sts-av.html#sts-dps
----------------------
P. Newbold, et al.; HAL/S Language Specification; Intermetrics Inc.,
Report No. IR-61-5 (Nov. 1974)
( T O B E C O N T I N U E D )
========================================================================
18th International Conference on Software Engineering
25-30 March 1996
Technische Universitaet Berlin, Germany
PRELIMINARY PROGRAM
ICSE is the flagship conference of the international software engineer-
ing community. The objectives are to provide a forum to present new
software engineering research results, to exchange experience reports
regarding the use of software engineering technologies in industry, to
expose practitioners to promising new technologies, to expose researchers
to the problems of industrial software development, and to encourage the
transfer of advanced software engineering technologies from research
into practice.
ICSE 18 is the main event of the International Software Engineering Week
'96 (ISEW 96). The ISEW 96 in Berlin, Germany, is the premier interna-
tional software engineering meeting in 1996. The venue of ISEW 96 is
the Technische Universitaet Berlin.
ICSE 18 will feature invited keynote presentations, parallel conference
sessions, tutorials, workshops, plenaries, workshop presentations,
reports from industrial experiences, mini-tutorials, and other events.
INVITED SPEAKERS
o Chip Anderson, Microsoft, USA
o Victor Basili, University of Maryland, USA
o Bo Hedfors, Ericsson, Sweden
o Anthony Hoare, Oxford University, UK
o Tom de Marco, Atlantic Systems Guild, USA
o Hasso Plattner, SAP, Germany
CONFERENCE SESSIONS
o Understanding and Analysis
o Supporting Requirements
o Testing and Analysis
o Object Orientation in Use
o Analysis of Distributed Systems
o Measurement
o Component-based Software
o Formal Design
o Configuration Management and Reuse
o Process Effectiveness
o System Validation
o Environments
o System Generation
o Data Flow Testing
o Maintenance and Evolution
o Testing Algorithms
WORKSHOPS
o Third Annual European Symposium on Cleanroom Software Engineering; Tuesday
o Fourth IEEE Workshop on Program Comprehension; Friday - Sunday
o Workshop on Multimedia Software Engineering; Monday+Tuesday
o Sixth Workshop on Software Configuration Management; Monday+Tuesday
o Workshop on Technology Transfer; Friday+Saturday
o 8th International Workshop on Software Specification and Design; 22+23 March
o Metrics Symposium METRICS 96; Monday+Tuesday
o First International Workshop on Software Engineering for Parallel
and Distributed Systems; Monday+Tuesday
o Third International Workshop on Software Engineering Education; Friday
SPONSORS
Gesellschaft fuer Informatik, ACM SIGSOFT, and IEEE Computer Society TCSE.
REGISTRATION AND CONTACT
This is only a very short description of the program. For more informa-
tion or registration access: "http://www.gmd.de/Events/ICSE18/"
General Chair:
H. Dieter Rombach
Fachbereich Informatik
Universitaet Kaiserslautern
67653 Kaiserslautern, Germany
========================================================================
C O R R E C T E D C O R R E C T I O N N O T E D !
Yes, we know: the E-mailed copies of the February 1996 issue were
labeled January 1995. We did it as a joke, to see if anyone would catch
that we didn't succeed in correcting the software error. Nobody caught
it -- or, if they did, they didn't tell us. We won't do it again.
========================================================================
------------>>> TTN SUBMITTAL POLICY <<<------------
========================================================================
The TTN On-Line Edition is forwarded on the 15th of each month to Email
subscribers via InterNet. To have your event listed in an upcoming
issue, please Email a description of your event or Call for Papers or
Participation to "ttn@soft.com". The TTN On-Line submittal policy is as
follows:
o Submission deadlines indicated in "Calls for Papers" should provide
at least a 1-month lead time from the TTN On-Line issue date. For
example, submission deadlines for "Calls for Papers" in the January
issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
(about four pages).
o Length of submitted calendar items should not exceed 68 lines (one
page).
o Publication of submitted items is determined by Software Research,
Inc., and may be edited for style and content as necessary.
========================================================================
----------------->>> TTN SUBSCRIPTION INFORMATION <<<-----------------
========================================================================
To request your FREE subscription, to CANCEL your subscription, or to
submit or propose a calendar item or an article send E-mail to
"ttn@soft.com".
TO SUBSCRIBE: Use the keyword "subscribe" in front of your Email address
in the body of your Email message.
TO UNSUBSCRIBE: please use the keyword "unsubscribe" in front of your
Email address in the body of your message.
TESTING TECHNIQUES NEWSLETTER
Software Research, Inc.
901 Minnesota Street
San Francisco, CA 94107 USA
Phone: +1 (415) 550-3020
Toll Free: +1 (800) 942-SOFT (USA Only)
FAX: +1 (415) 550-3030
Email: ttn@soft.com
WWW URL: http://www.soft.com
## End ##