+===================================================+
+======= Testing Techniques Newsletter (TTN) =======+
+=======           ON-LINE EDITION           =======+
+=======            November 1995            =======+
+===================================================+
TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is E-Mailed
monthly to support the Software Research, Inc. (SR) user community and
provide information of general use to the world software testing commun-
ity.
(c) Copyright 1995 by Software Research, Inc. Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition pro-
vided that the entire document/file is kept intact and this copyright
notice appears with it.
TRADEMARKS: STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
STW/Advisor and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.
========================================================================
INSIDE THIS ISSUE:
o 9th INTERNATIONAL SOFTWARE QUALITY WEEK -- CALL FOR PAPERS
o THE PENTIUM BUG -- AN INDUSTRY WATERSHED (Part 3 of 4)
Dr. Boris Beizer
o BOOK REVIEW: UNIX TEST TOOLS AND BENCHMARKS
by Rodney C. Wilson
o UPDATED FAQ OF STW PRODUCTS
o CALENDAR OF EVENTS
o 1996 INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS
ADVANCED PROGRAM
o TTN SUBMITTAL POLICY
o TTN SUBSCRIPTION INFORMATION
========================================================================
NINTH INTERNATIONAL SOFTWARE QUALITY WEEK 1996 (QW'96)
Conference Theme: Quality Process Convergence
San Francisco, California -- 21-24 May 1996
QW'96 is the ninth in a continuing series of International Software
Quality Week Conferences focusing on advances in software test technol-
ogy, quality control, risk management, software safety, and test automa-
tion. Software analysis methodologies, supported by advanced automated
software test methods, promise major advances in system quality and
reliability, assuring continued competitiveness.
The mission of the QW'96 Conference is to increase awareness of the
importance of software quality and methods used to achieve it. It seeks
to promote software quality by providing technological education and
opportunities for information exchange within the software development
and testing community.
The QW'96 program consists of four days of mini-tutorials, panels,
technical papers and workshops that focus on software test automation
and new technology. QW'96 provides the Software Testing and QA/QC
community with:
o Quality Assurance and Test involvement in the development process.
o Exchange of critical information among technologists.
o State-of-the-art information on software test methods.
o Analysis of method and process effectiveness through case studies.
o Vendor Technical Presentations.
o Two-Day Vendor Show.
QW'96 is soliciting 45- and 90-minute presentations, half-day standard
seminar/tutorial proposals, 90-minute mini-tutorial proposals, or
proposals for participation in panels and "hot topic" discussions on any
area of testing and automation, including:
Cost/Schedule Estimation
ISO-9000 Application and Methods
Test Automation
CASE/CAST Technology
Test Data Generation
Test Documentation Standards
Data Flow Testing
Load Generation and Analysis
SEI CMM Process Assessment
Risk Management
Test Management Automation
Test Planning Methods
Test Policies and Standards
Real-Time Software
Real-World Experience
Software Metrics in Test Planning
Automated Inspection
Reliability Studies
Productivity and Quality Issues
GUI Test Technology
Function Point Testing
New and Novel Test Methods
Testing Multi-Threaded Code
Integrated Environments
Software Re-Use
Process Assessment/Improvement
Object Oriented Testing
Defect Tracking / Monitoring
Client-Server Computing
IMPORTANT DATES:
Abstracts and Proposals Due: 15 December 1995
Notification of Participation: 15 February 1996
Camera Ready Materials Due: 15 March 1996
FINAL PAPER LENGTH:
Papers should be limited to 10-20 pages, including text, slides,
and/or viewgraphs.
SUBMISSION INFORMATION:
Abstracts should be 2-4 pages long, with enough detail to give
reviewers an understanding of the final paper, including a rough
outline of its contents. Indicate if the most likely audience is
technical, managerial or application-oriented.
In addition, please include:
o A cover page with the paper title, complete mailing and e-mail
address(es), and telephone and FAX number(s) of each author.
o A list of keywords describing the paper.
o A brief biographical sketch of each author.
Send abstracts and proposals including complete contact information to:
Ms. Rita Bral
Quality Week '96 Director
Software Research Institute
901 Minnesota Street
San Francisco, CA 94107 USA.
For complete information on the QW'96 Conference, send E-mail to
qw@soft.com, phone SR Institute at +1 (415) 550-3020, or send a FAX to
SR Institute at +1 (415) 550-3030.
========================================================================
THE PENTIUM BUG -- AN INDUSTRY WATERSHED (Part 3 of 4)
Dr. Boris Beizer
------------------------------------------------------------------------
Copyright, Boris Beizer, 1995. Permission is granted to print this docu-
ment for personal use and to redistribute it for internal corporate use
subject to the provision that this entire copyright notice must appear.
Use of this document or parts thereof in any commercial publication or
document without the written permission of the author is strictly prohi-
bited.
------------------------------------------------------------------------
1. General
Two guys in the Australian Outback, a dozen polar Eskimos and a tribe in
the Amazon jungle have not heard about the Pentium bug. People oblivious
to floating point arithmetic before INTEL's customer relations fiasco
now speak angrily about this bug and how it will affect their lives.
There are a half-dozen class action suits afoot claiming warranty
breaches and false advertising, as well as a few stockholders' suits
[INTE95]. This bug has probably been blamed for every human and techni-
cal misery from anorexia to zygopteran infestations.
Was this bug serious? Yes and no. Technically, it wasn't the first, it
wasn't the worst, and it won't be the last of its kind. But from the
point of view of its impact on the computer industry, it is probably the
most significant bug ever. It has forever transformed the industry. It
is a watershed for all of us.
Some of you who are in software might mistakenly believe that this was a
hardware bug and that it therefore doesn't affect you: if so, you're
wrong on both counts. It was a software bug, but one that happened to
be compiled into silicon rather than RAM. But it doesn't matter if it
was hardware or software because the users, as we have just learned, do
not make that distinction.
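The flaw itself was easy for any user to reproduce, which is part of why it could not be dismissed as a technicality. A minimal sketch of the widely circulated arithmetic check (the operands 4195835 and 3145727 come from the bug's public reports, not from INTEL's validation suite):

```python
# The classic check for the Pentium FDIV flaw: on an affected chip,
# 4195835 / 3145727 came back slightly wrong, so the "remainder"
# computed below was 256 rather than 0.
x, y = 4195835.0, 3145727.0
remainder = x - (x / y) * y

# On a correct FPU this is zero (up to double-precision rounding).
print(remainder)
```

On any correct implementation the printed value is zero to within rounding error; the flawed lookup table in the chip's SRT divider produced a visibly wrong quotient for such operand pairs.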
<>
3.4. Gunning For Microsoft
An interesting sidelight of the entire situation is that many articles
on the Pentium chip, both in the trade and the general press, mentioned
Windows 95 (Chicago) and its possible bugs. What had Windows 95 to do
with it? Nothing! But article after article in prestigious newspapers
such as The Wall Street Journal and The New York Times as well as the
trade press mentioned Windows 95 and the fact that its delivery had been
delayed. The implication of these articles was that the delay was
caused by bugs. The possibility that the delay was caused by a desire
for more thorough testing (bugs or no bugs) was never mentioned. Again,
in the public's mind, you're damned if you test and delay and you're
damned if you don't. The message to Microsoft was clear: "We're out to
get you!"
3.5. The Impact on Software Quality
The impact of all of the above is contrary to the consumer's interests.
The result will be poorer hardware and software quality, not better.
Valuable, limited human and financial resources will be squandered on
useless testing (one trillion-- think of it-- one trillion tests: even
though fully automated, it's an impressive number, comparable to all the
software testing done before by everybody on everything). INTEL had no
choice but to run those tests, but what did those tests prove? Nothing
that the prior mathematical model hadn't shown. And were all those
tests needed to validate the model? Absolutely not. But statistically
ignorant consumers don't accept T-tests and similar technical niceties.
Such wasteful efforts that pander to consumers' ignorance mean higher
prices, lower profits for stockholders, and inevitably, lower quality
products at any given cost.
4. What To Do
4.1. What We Can't Do
We cannot allow ourselves, in a panic, to let uninformed consumers dic-
tate engineering practices: their best interests would not be served by
such capitulation. We cannot sign an implied contract to give them the
theoretically impossible bug-free product. We can't change the industry
norms and offer absolute, no questions asked, free replacement hardware
or software for every bug. We have to do the technically and economi-
cally correct things, no matter how strange such things may seem to the
public, and no matter how stridently consumer activists, muckraking
reporters, venal lawyers, and glory hounds yell. That means we have to
do the best engineering we can.
However, there are some other things we can't do. We can't continue to
appear to be patronizing them. We can't publicly say "We know better
than you what you need", even if it's the truth. We can't ignore their
very real fears, anxieties, and anger. We can't ignore the fact that
computers and software are now a consumer rather than a technical pro-
duct.
4.2. Watch Our Own Propaganda
Why do users expect impossible perfection? Because for over 40 years
we've been telling them that that's what we were going to give them.
The users did not create the infallible computer myth-- we did. You
did. I did it whenever I told my nontechnical friends that "even if one
character is wrong, the whole program is garbage." Star Trek did it
with the infallible Mr. Data. They know that their computer can execute
millions of instructions per second without error. It isn't a long leap
from that truth to the belief that every instruction will always be
correctly executed and the further leap to the belief that every
instruction will be correct.
We correct our propaganda best by being candid about our bugs. It
really galls me when I report a bug or ask for a workaround for a hard
bug, and the regular customer service channels mostly reflect the atti-
tudes of their law firm, "Stonewall and Denial". Flood the trade and
general press with releases about every bug we find and fix. Provide
easily searched on-line indexes of bugs on bulletin boards. Provide
free downloads of fixes and workarounds. Provide an easy way for users
to report bugs and other problems.
4.3. Kill The Perfect Software Myth.
The public believes that they are entitled to bug-free software. This
belief is reinforced by the general press, unfortunately by the trade
press, and worst of all, by us. This false belief is the most damaging
one of all. We have known for over forty years that bug-free software
is a theoretical impossibility. Forty years of software development has
also confirmed that theoretical knowledge: even if it were theoreti-
cally possible, it would be practically impossible. But we should ask
if the search for "bug-free" software, whether achievable or not, is in
the public's interest.
We act as if it is. We proudly talk about our "zero-defect" objectives,
as if they could be achieved if we worked harder and better. We have
executives who believe that it is achievable. We have managers who believe
that, too. No wonder the public and the press believe it-- we have prom-
ised it.
This belief and our implicit promise damage software quality. It
results in higher costs, worse products, and more bugs. Let us not
forget what quality is all about. Quality is not our most important
product-- our product is. Quality is not a God-given right. It is not
even, in an abstract sense, necessary. We put quality into our products
because it improves profitability. Quality means lower service costs,
lower legal exposure, customer loyalty, and increased market share.
Quality is but one important component of a product strategy. Excessive
quality does not serve the public's interest. The unachievable search
for perfect quality (i.e., bug-free software) diverts scarce human
resources. Time to market is increased, time and resources are wasted
on useless massive testing of the wrong kind. Valuable human resources
are wasted apologizing for even harmless bugs. The result is higher
cost to the public, less functionality, and lower quality.
I do not expect the public to really understand this. But they will
never change their counter-productive expectations as long as we con-
tinue to feed them this myth.
4.4. Stop Beta Testing and Amateurism
The users' perception about testing and quality assurance is also based
on myths. For most users, to the extent that they know anything about
software engineering at all, they believe that our primary mode of
quality assurance is to have hundreds or thousands of users try the
product before its official release-- that is, they believe that we rely
mainly on beta testing. Putting this myth another way, it is equivalent
to saying that our primary mode of quality assurance is to use
supposedly talented amateurs. That leads to a further misconception on
their part: because beta testers are amateurs, because users can
identify with them, and because they could be beta testers themselves if
they chose to, users conclude that they are qualified to judge how to
test and how well tested a product is.
Although your products may not be subjected to beta testing, consumer
software products are, and it is with these products, and with the
perceived methods used to assure their quality, that users identify all
software products.
Beta testing is not cost-effective for well-crafted software. That is
known to many leading independent software vendors. As a result of
this, the leading software producers have been quietly dismantling and
downgrading their beta testing efforts-- the leaders stopped relying on
beta testing years ago. I say "quietly" because those vendors
acknowledge the strength of the beta-test myth, the misplaced trust the
trade and general press place in that myth, and the possible hysterical
reaction of a vindictive press if they were to announce that they would
quit beta testing.
We have to educate users and let them know that software testing and
quality assurance is a highly professional activity that cannot be done
by amateurs; that we use sophisticated technologies; that we use highly
automated tools. Let them know how we test, how many tests, what kind,
how often. Show them videos of our "test tracks" just like the auto-
mobile companies. Let us change their perception so that instead of
thinking that they could usefully test software, they realize they are
no closer to that than they are to a Mercedes-Benz test engineer, a pro-
fessional race car driver, or an aircraft test pilot. Let's put Test
and QA into the propaganda mill instead of just design hype. Let's make
reliability the desirable feature, not multimedia hyperbabble.
<>
5. References
ATKI68 Atkins, Daniel E. Higher Radix Division Using Estimates
of the Divisor and Partial Remainder. IEEE Transactions
on Computers, Vol. C-17, #10, Oct. 1968, pp. 925-934.
BEIZ90 Boris Beizer, Software Testing Techniques, 2nd Edition,
Van Nostrand Reinhold, 1990.
BEIZ95 Samuel Beizer. Private communication.
IBMS95 Pentium Study, IBM Research White Paper, December 12,
1994.
INTE95 Interview of Howard High, INTEL, February 2, 1995.
LOTU95 Interview of Peter Cohen, Lotus Development, February 2,
1995.
NADL56 Nadler, Morton. A High Speed Electronic Arithmetic Unit
for Automatic Computing Machines, Acta Tech (Prague), #6,
1956, pp. 464-478.
SASS95 McGrath, Sue. Private Correspondence, SAS Institute,
February 9, 1995.
SHAR94 Sharangpani, H.P., and Barton, M.L., Statistical Analysis
of Floating Point Flaw in the Pentium Processor. INTEL
Corporation white paper, November 30, 1994.
------------------------------------------------------------------------
Boris Beizer, PhD
ANALYSIS
1232 Glenbrook Road
Huntingdon Valley, PA 19006
PHONE: 215-572-5580
FAX : 215-886-0144
Email: BBEIZER@MCIMAIL.COM
========================================================================
BOOK REVIEW: UNIX TEST TOOLS AND BENCHMARKS
by Rodney C. Wilson
Prentice Hall PTR, Upper Saddle River, NJ 07458, 1995.
ISBN 0-13-125634-3.
There is so much of it going around! Lists of tools and lists of
companies and lists of tools and companies and methodologies. Even
lists of lists (e.g., Yahoo!). This offering by long-time QA guru
Wilson is, in a way, a list of things, but here in a relatively compact
book is a list, a compendium, and a survey of the relevant technologies
that covers quite a bit of ground.
Included (here the emphasis is our own but the comments come in part
from the blurb and other related matter) are:
o Best practices for unit, integration, and system testing
o A detailed outline & checklist for V&V (testing across the life
cycle)
o A detailed checklist for successful beta testing
o Practical methods and useful tools for functional and structural
testing
o Methods and tools for regression, reliability, stress, and load test-
ing
o A catalog (including other references) for leading technology sup-
pliers
o Testing and benchmarking technologies (both licensed and public
domain)
o Quality management (e.g., from independence to interdependence)
o Key quality metrics for release engineering management
Overall, a pretty handy body of facts and related analyses in a compact
package.
If there is a drawback to this book it is only that the material might
not be 100% current, but this is like complaining that it is not a
hypertext document on the WWW updated every month! [Well, it's an
idea!]
Overall, 1.75 thumbs up!
-EFM
EDITOR'S NOTE: Mr. Wilson wears two hats, quality architect and QA
manager, at Cadence Design Systems, Inc., San Jose, California. You can
reach him by Email at: rodney@cadence.com.
========================================================================
UPDATED FAQ OF STW PRODUCTS
For those of you who may not have seen the most recent FAQ about testing
tools, here is a partial listing from our most recent version:
------------------------------------------------------------------------
Name of Tool : Software TestWorks (STW(tm))
Kind of Tool : Automated Testing Tools Suite
Company Name : Software Research, Inc.
Address : 901 Minnesota Street
San Francisco, CA 94107 USA
Internet Addr : info@soft.com
Phone and Fax : (415) 550-3020; USA Only: (800) 942-SOFT
FAX: (415) 550-3030
Description : Automate and streamline your testing process with
Software Research's Software TestWorks (STW(tm)).
STW/Regression automates test execution and
verification for GUI and Client/Server applications.
STW/Coverage measures how well test cases exercise
a program at unit, system and integration levels.
STW/Advisor analyzes source code, providing insight
into resource management, quality and predictability.
Platforms : DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
SGI; Sun SPARC; x86 SCO, Solaris; x86 MS-DOS/MS-Windows
------------------------------------------------------------------------
Name of Tool : STW/Regression
Kind of Tool : Test management, execution and verification toolset
Company Name : Software Research, Inc.
Address : 901 Minnesota Street
San Francisco, CA 94107 USA
Internet Addr : info@soft.com
Phone and Fax : (415) 550-3020; USA Only: (800) 942-SOFT
FAX: (415) 550-3030
Description : STW/Regression automates and manages tests on both text
and GUI-based applications. It increases test speed and
accuracy, improving cycle time and quality.
STW/Regression works for host and client-server
applications with automated load generation for
multi-user client-server applications; it also employs
a test management component for automated test
execution and management.
Platforms : DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
SGI; Sun SPARC; x86 SCO, Solaris; x86 MS-DOS/MS-Windows
------------------------------------------------------------------------
Name of Tool : STW/Coverage
Kind of Tool : Code coverage analysis toolset
Company Name : Software Research, Inc.
Address : 901 Minnesota Street
San Francisco, CA 94107 USA
Internet Addr : info@soft.com
Phone and Fax : (415) 550-3020; USA Only: (800) 942-SOFT
FAX: (415) 550-3030
Description : The STW/Coverage multi-platform suite of testing tools
measures how well test cases exercise a program and
identifies what code has not been exercised at unit,
system and integration levels. STW/Coverage can be
used with GUI and Client/Server development tools, and
is available for C, C++, Ada, COBOL, and FORTRAN.
Platforms : DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
SGI; Sun SPARC; x86 SCO, Solaris
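As a rough illustration of what statement-coverage measurement does (this sketch is ours, for explanation only, and says nothing about how STW/Coverage is actually implemented), a trace hook can record which lines of a function a given test actually exercises:

```python
import sys

def trace_lines(func, *args):
    """Toy statement-coverage recorder: return the set of line offsets
    of `func` (relative to its `def` line) touched while executing it."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def sample(x):               # offset 0
    if x > 0:                # offset 1
        return "positive"    # offset 2
    return "non-positive"    # offset 3

# sample(1) never reaches the final return, so offset 3 stays uncovered.
print(trace_lines(sample, 1))
```

A real coverage tool reports exactly this kind of gap -- the branch no test ever reached -- at unit, system, and integration levels.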
------------------------------------------------------------------------
Name of Tool : STW/Advisor
Kind of Tool : Advisor tool suite
Company Name : Software Research, Inc.
Address : 901 Minnesota Street
San Francisco, CA 94107 USA
Internet Addr : info@soft.com
Phone and Fax : (415) 550-3020; USA Only: (800) 942-SOFT
FAX: (415) 550-3030
Description : STW/Advisor provides static source code analysis and
measurement, using seventeen metrics to measure a
program's data, logic and size complexity. Test
data/file generation more fully tests applications by
creating additional tests from existing tests. Static
Analysis is available for C; metrics are available for
C, C++, Ada and FORTRAN.
Platforms : DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
SGI; Sun SPARC; x86 SCO, Solaris
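To illustrate the kind of "logic complexity" figure such metrics produce (a hypothetical sketch: STW/Advisor's seventeen metrics are not enumerated here, and it targets C-family languages rather than Python), McCabe's cyclomatic complexity can be approximated by counting decision points in the parse tree:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 plus the number of
    decision points (a deliberately simplified node set)."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

src = """
def classify(x):
    if x > 0:
        return "positive"
    if x < 0:
        return "negative"
    return "zero"
"""
print(cyclomatic_complexity(src))  # two `if` decisions -> 3
```

Numbers like this feed test planning: a routine with a high logic-complexity score needs more test cases to cover its paths than a straight-line one.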
------------------------------------------------------------------------
========================================================================
International Symposium on Software Testing and Analysis (ISSTA'96)
January 8-10, 1996
Workshop on Formal Methods in Software Practice (FMSP'96)
January 10-11, 1996
Hyatt Islandia, San Diego, California, USA
Sponsored by ACM SIGSOFT
ISSTA'96 brings together researchers and practitioners to present and
discuss research in software testing and analysis. Presentations will
cover a wide range of topics, including new theoretical models and tech-
niques, empirical results and experience, and software tools. This year,
a special workshop track within the symposium allows timely presentation
of work in progress, and of analyses, reviews, and opinions on the state
of software testing and analysis.
The purpose of FMSP'96 is to bring together experts in formal methods
technology and the early innovators in industry who have adopted formal
methods. Discussions will focus on the impact of formal methods on soft-
ware practice, as well as on strategies to further this impact in the
future. This workshop is being co-located with ISSTA'96 to encourage the
cross-pollination of ideas between the formal methods and the testing
communities.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ISSTA'96 ADVANCE PROGRAM
Monday, January 8:
8:00-8:30 WELCOME AND GENERAL INFORMATION
8:30-10:00 KEYNOTE: Test and Analysis of Software Architectures.
Will Tracz, Loral Federal Systems
10:30-12:00 CONCURRENT & REAL-TIME
Efficient state space generation for analysis of real-time systems.
I. Kang & I. Lee
An incremental approach to structural testing of concurrent soft-
ware. P. V. Koppol & K. C. Tai
Improving the accuracy of Petri Net-based analysis of concurrent
programs. A. T. Chamillard & L. A. Clarke
2:00-3:30 SPECIFICATION-BASED TESTING
Generating functional test cases in-the-large for time-critical
systems from logic-based specifications. S. Morasca, A. Morzenti, &
P. SanPietro
Daistish: Systematic algebraic testing for OO programs in the
presence of side effects. M. Hughes & D. Stotts
Structural specification-based testing with ADL. J. Chang, D. J.
Richardson, & S. Sankar
4:00-5:00 ISSTA WORKSHOP SESSION I
Scale-up issues in the use of formal methods for automated testing.
J. F. Leathrum & K. A. Liburdy
Experiences and lessons from the analysis of TCAS II. M. P. E.
Heimdahl
Predicting dependability by testing. D. Hamlet
Using perturbation analysis to measure variation in the informa-
tion content of test sets. L. Morell & B. Murrill
5:00-6:00 RECEPTION
7:30 NEW RESULTS SESSION BEGINS
Tuesday, January 9:
8:30-10:00 ANALYSIS
Linear and structural event sequence analysis. W. E. Howden &
G. M. Shi
Separate computation of alias information for reuse. M. J. Harrold
& G. Rothermel
Critical slicing for software fault localization. R. A. DeMillo,
H. Pan, & E. H. Spafford
10:30-12:00 IMPLEMENTATION-BASED TESTING
The path-wise approach to data flow testing with pointer variables.
D. I. S. Marx & P. G. Frankl
Unconstrained duas and their use in achieving all-uses coverage.
M. Marre & A. Bertolino
Software error analysis: A real case study involving real faults
and mutations. M. Daran & P. Thevenod-Fosse
1:30-2:30 ISSTA WORKSHOP SESSION II
Generation of multi-formalism state-space analysis tools. M. Pezze &
M. Young
Beyond traditional program slicing. A. M. Sloane & J. Holdsworth
Approaches to verification and validation of a reliable multicast-
ing protocol. J. R. Callahan & T. Montgomery
A semantic model of program faults. A. J. Offutt & J. Hayes
2:45-3:30 ISSTA WORKSHOP SESSION III
Towards a structural load testing tool. C-S. D. Yang & L. L. Pollock
Automated test data generation for programs with procedures. B.
Korel
Reachability analysis of feature interactions: A progress report.
K. P. Pomakis & J. M. Atlee
4:00-5:30 MODELING
Compositional verification by model checking for counter-examples.
T. Bultan, J. Fischer & R. Gerber
Elements of style: Analyzing a software design feature with a
counterexample detector. D. Jackson & C. A. Damon
Constructing abstract models of concurrent real-time software. J.
C. Corbett
6:00-7:00 RECEPTION
Wednesday, January 10:
8:30-10:00 VERIFICATION & VALIDATION
Using partial-order methods in the formal validation of industrial
concurrent programs. P. Godefroid, D. Peled, & M. Staskauskas
Formal specification and verification of the kernel functional
unit of the OSI session layer protocol and service using CCS. M.
Barjaktarovic, S-K. Chin, & K. Jabbour
A logic-model semantics for SCR requirements. J. M. Atlee & M. A.
Buckley
10:30-12:00 JOINT ISSTA/FMSP PANEL DISCUSSION: Formal Methods and
Testing: Why State-of-the-Art is not State-of-the-Practice
Richard Denney, Quality Assurance, Landmark Graphics
Dick Kemmerer, Professor, University of California, Santa Barbara
Nancy Leveson, Professor, University of Washington
Alberto Savoia, Director, Software Research, Sun Microsystems Labs
12:00-12:30 CLOSING REMARKS
For more information on the conference, and how to register, visit the
following World Wide Web site:
ISSTA'96: http://www.cs.ucsb.edu/Conferences/ISSTA96
========================================================================
------------>>>CALENDAR OF EVENTS<<<---------------
========================================================================
Here is a list of upcoming events of interest.
"o" indicates that Software Research, Inc. will lead or participate in
these events.
20-21 Nov 1995 Systems Testing & QA Techniques, Holiday Inn, 1500
Washington Street, Minneapolis, MN: Tel: 800-573-6333
or 201-256-1909 Fax: 201-256-2115
o 27-30 Nov 1995 EuroSTAR'95, Contact: EuroSTAR Conferences Limited,
4th Floor, 7 Hanover Square, London W1R 9HE, Tel:
44-171-4934229 Fax: 171-3553738, Email:
eurostar@evolutif.demon.co.uk
04-06 Dec 1995 Software Acquisition, Washington DC: Contact: Dana
Marcus, Tel: (310) 534-3922, Fax: (310) 534-0743
04-06 Dec 1995 Software Configuration Management, Phoenix, AZ:
Tel:301-445-4400, Fax:301-445-5722, E-Mail:
uspdi@clark.net, WWW: http://www.clark.net/pub/uspdi
07-08 Dec 1995 Process Mapping (Portland, OR), UC Berkeley Extension,
Business & Management Dept P, 1995 University Avenue,
Suite 300, Berkeley, CA 94704: Tel: 510-642-6117
========================================================================
------------>>> TTN SUBMITTAL POLICY <<<------------
========================================================================
The TTN On-Line Edition is forwarded on the 15th of each month to
subscribers via the Internet. To have your event listed in an upcoming issue,
please e-mail a description of your event or Call for Papers or Partici-
pation to "ttn@soft.com". The TTN On-Line submittal policy is as fol-
lows:
o Submission deadlines indicated in "Calls for Papers" should provide
at least a 1-month lead time from the TTN On-Line issue date. For
example, submission deadlines for "Calls for Papers" in the January
issue of TTN On-Line would be for February and beyond.
o Length of submitted items should not exceed 68 lines (one page).
o Publication of submitted items is determined by Software Research,
Inc., and may be edited as necessary.
========================================================================
----------------->>> TTN SUBSCRIPTION INFORMATION <<<-----------------
------------------->>>NEW INSTRUCTIONS!!<<<-------------------
========================================================================
To request a FREE subscription or submit articles, please send E-mail to
"ttn@soft.com".
TO SUBSCRIBE: please use the keywords "Request-TTN" or "subscribe" **AND
INCLUDE YOUR EMAIL ADDRESS** in the Subject line of your E-mail header.
To have your name added to the subscription list for the biannual hard-
copy version of the TTN -- which contains additional information beyond
the monthly electronic version -- include your name, company, and postal
address in the body of the mail message.
TO CANCEL: include the phrase "unsubscribe" or "UNrequest-TTN" **AND
YOUR EMAIL ADDRESS** in the Subject line.
Note: To order back copies of the TTN On-Line (August 1993 onward),
please use the keywords "Back issue request" in the Subject line, and
please specify the month(s) and year(s) in the body of your message when
E-mailing requests to "ttn@soft.com".
TESTING TECHNIQUES NEWSLETTER
Software Research, Inc.
901 Minnesota Street
San Francisco, CA 94107 USA
Phone: (415) 550-3020
Toll Free: (800) 942-SOFT
FAX: (415) 550-3030
E-mail: ttn@soft.com