+===================================================+
+=======   Quality Techniques Newsletter    =======+
+=======          March 2002                =======+
+===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to subscribers worldwide to support the Software Research, Inc. (SR), TestWorks, QualityLabs, and eValid user communities, and to provide information of general use to the worldwide internet and software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary circulation is encouraged by recipients of QTN, provided that the entire document/file is kept intact and this complete copyright notice appears with it in all copies. Information on how to subscribe or unsubscribe is at the end of this issue. (c) Copyright 2002 by Software Research, Inc.

========================================================================

Contents of This Issue

   o  QWE2002 Conference Wrapup
   o  Qualities of a Good Tester -- A Baker's Dozen, by Boris Beizer
   o  3D-SiteMaps: A New Information Presentation Method
   o  QW2002 Call for Papers/Presentations
   o  QTN Article Submittal, Subscription Information

========================================================================

QWE2002 Conference Wrapup

We were delighted that so many could attend QWE2002, and we believe it was an effective meeting. Here's a rundown of important post-conference information.

* Best Paper Award... This year the Advisory Board's votes resulted in a tie for first place, and the Board recommended honoring two presentations as the co-Best Papers of QWE2002. The Co-Best Paper Awards, each of which carries a cash prize of $500, were awarded to:

  > Dr. Christian Bunse & Dr. Oliver Laitenberger (Fraunhofer Institute for Experimental Software Engineering, Germany) for their paper (8T) "Improving Component Quality Through the Systematic Combination of Construction and Analysis".

  > Mr. Guillermo Pastor (INAD, Spain) & Prof. Antonio de Amescua (Carlos III University of Madrid, Spain) for their paper (12I) "Web Development: A New Quality Paradigm."

* Best Presentation Award... The Best Presentation Award, which includes a certificate and an invitation to present the same talk at QW2002, went to Mr. Michael Hillelsohn (Software Performance Systems, USA) for his talk (6M) "Using SPICE as an Internal Software Engineering Process Improvement Tool."

* Conference Photos... A gallery of Conference Photos is available now on the QWE2002 website. If you don't see the "Photos" button on the navigation bar, go directly to: <http://www.qualityweek.com/QWE2002/pictures/12mar/indextue1.html>

* Keynote Presentations on the WebSite in PDF Format... Two keynote speakers' presentations are now on the website.

  > Prof. Koenrad DeBackere (Organizing for High Tech Innovation). Go to: <http://www.qualityweek.com/QWE2002/Papers/K21.html>

  > Dr. Rik Nuytten (Building the Infrastructure for the Future). Go to: <http://www.qualityweek.com/QWE2002/Papers/K32.html>

* Expo Guide in PDF Downloadable... If for some reason you didn't have time to get to the QWE2002 expo, or if you've misplaced your Expo Guide, we've made a PDF version of the guide available for download.
You can get this by clicking on "Expo Guide" on the navigation bar, or click here: <http://www.qualityweek.com/QWE2002/expoguide.phtml>

Edward Miller
Conference Chair
(miller@qualityweek.com)

========================================================================

Qualities of a Good Tester -- A Baker's Dozen
by Boris Beizer

Note: This article is taken from a collection of Dr. Boris Beizer's essays, "Software Quality Reflections," and is reprinted with permission of the author. We plan to include additional items from this collection in future months. You can contact Dr. Beizer at.

What makes a good software tester? Many myths abound, such as being creatively sadistic or able to handle dull, repetitive work. As a one-time test manager, and currently as a consultant to software development and testing organizations, I've formed a picture of the ideal software tester -- they share many of the qualities we look for in programmers, but there are also some important differences. Here's a quick summary of the sometimes contradictory lessons I've learned.

1. Know Programming.

Might as well start with the most controversial one. There's a popular myth that testing can be staffed with people who have little or no programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two main reasons why it doesn't work.

(1) They're testing software. Without knowing programming, they can't have any real insight into the kinds of bugs that come into software and the likeliest places to find them. There's never enough time to test "completely," so all software testing is a compromise between available resources and thoroughness. The tester must optimize scarce resources, and that means focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have useful intuition about where to look.

(2) All but the simplest (and therefore ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable. The tester who doesn't know programming will always be restricted to ad-hoc techniques and the most simplistic tools.

Does this mean that testers must have formal programmer training, or have worked as programmers? Formal training and experience is usually the easiest way to meet the "know programming" requirement, but it is not absolutely essential. I met a superb tester whose only training was as a telephone operator. She was testing a telephony application and doing a great job. Despite the lack of formal training, she had a deep, valid intuition about programming and had even tried a little of it herself. Sure she's good -- good, hell! She was great. But how much better would she have been, and how much earlier would she have achieved her expertise, if she had had the benefits of formal training and working experience? She would have been a lot better a lot earlier.

I like to see formal training in programming, such as a university degree in Computer Science or Software Engineering, followed by two to three years of working as a programmer in an industrial setting. A stint on the customer-service hot line is also good training. I don't like the idea of taking entry-level programmers and putting them into a test organization because:

(1) Loser Image.
Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're offered a job in a test group, they'll often look upon it as a failure on their part: they believe that they didn't have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.

(2) Credibility With Programmers. Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a co-op program as undergraduates, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative programming environment is all about. As such, they have no credibility with their programming counterparts, who can slough off their concerns with "Look, kid. You just don't understand how programming is done here, or anywhere else, for that matter." It sets up the novice tester for failure.

(3) Just Plain Know-How. The programmer's right. The kid doesn't know how programming is really done. If the novice is a "real" programmer (as contrasted with a "mere tester"), then the senior programmer will often take the time to mentor the junior and set her straight: but for a non-productive "leech" from the test group? Never! It's easier for the novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have to learn it, without actually doing it, as an entry-level tester.

2. Know the Application.

That's the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program's features and the kinds of cockpit errors that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood chemistry; testing an aircraft's flight control system requires control theory and systems engineering, and being a pilot doesn't hurt; testing a geological application demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist in programming than to train the programmer in the application. Here again, paralleling the programmer's qualifications, I'd like to see a university degree in the relevant discipline followed by a few years of working practice before coming into the test group.

3. Intelligence.

Back in the 60's, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage, and we were dipping into other fields for trainees. The most infamous of these was IBM's Programmers' Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned with follow-on research is that the single most important quality for programmers is raw intelligence -- good programmers are really smart people -- and so are good testers.

4. Hyper-Sensitivity to Little Things.

Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many different symptoms, ranging from innocuous to catastrophic.
We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently, there is no such thing as a minor symptom -- because a symptom isn't a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the right to say whether the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time -- a bug. The keyboard is a little sticky -- another bug. The account balance is off by 0.01 cents -- great bug. Good testers notice such little things and use them as an entree to finding a closely-related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this attribute can be learned through training.

5. Tolerance for Chaos.

People react to chaos and uncertainty in different ways. Some cave in and give up, while others try to create order out of chaos. If the tester waits for all issues to be fully resolved before starting test design or testing, she won't get started until after the software has been shipped. Testers have to be flexible and be able to drop things when blocked and move on to another thing that's not blocked. Testers always have many (unfinished) irons in the fire. In this respect, good testers differ from programmers. A compulsive need to achieve closure is not a bad attribute in a programmer -- it certainly serves them well in debugging -- but in testing, it means nothing gets finished. The testers' world is inherently more chaotic than the programmers'.

A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, correlates well with programmer and tester aptitude, and it is very similar to the kind of unresolved chaos with which the tester must deal daily. Here's the theory behind the notion. If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a compatible cross-word. So you keep a dozen tentative entries unmarked, and when, by some process or another, you realize that there is a compatible cross-word, you enter them both. You keep score by how many corrections you have to make -- not by merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my seminars and found a much higher percentage of crossword-puzzles-in-ink aficionados than you'd get in a normal population.

6. People Skills.

Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-social; that won't work for a tester. Testers can take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile -- all work to the independent tester's advantage. This may explain one of the (good) reasons that there are so many women in testing. Women are generally acknowledged to have more highly developed people skills than comparable men -- whether it is something innate on the X chromosome, as some people contend, or whether it is that without superior people skills women are unlikely to make it through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there, and those sharply-honed people skills are important.

7. Tenacity.
An ability to reach compromises and consensus can be at the expense of tenacity. That's the other side of the people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both -- socially adept and tenacious where it matters. The best testers are so skillful at it that the programmer never realizes that they've been had. Tenacious -- my picture is that of an angry pit bull fastened on a burglar's rear-end. Good testers don't let go. You can't intimidate them -- even by pulling rank. They'll need high-level backing, of course, if they're to get you the quality your product and market demand.

8. Organized.

I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim that there's a bug -- believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming evidence.

A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and editor, I've learned that the inability to express oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville. Good technical writing is well-organized, clear, and straightforward, and it doesn't depend on a 500,000-word vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall apart in an oral presentation, but they are pathological exceptions. Usually, a well-organized mind results in clear (even if not inspired) writing, and clear writing can usually be transformed through training into good oral presentation skills.

9. Skeptical.

That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and everything is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matters. While they may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I know where the bugs are.") -- and do it with a smile -- they ignore all such insubstantial assurances.

10. Self-Sufficient and Tough.

If they need love, they don't expect to get it on the job. They can't look to the interaction between them and the programmers as a source of ego-gratification and/or nurturing. Their ego is gratified by finding bugs, with few misgivings about the pain (in the programmers) that such findings might engender. In this respect, they must practice very tough love.

11. Cunning.

Or as Gruenberger put it, "low cunning." "Street-wise" is another good descriptor, as are insidious, devious, diabolical, fiendish, contriving, treacherous, wily, canny, and underhanded. Systematic test techniques such as syntax testing and automatic test generators have reduced the need for such cunning, but the need is still with us and undoubtedly always will be, because it will never be possible to systematize all aspects of testing. There will always be room for the offbeat kind of thinking that will lead to a test case that exposes a really bad bug. But this can be taken to extremes, and it is certainly not a substitute for the use of systematic test techniques.
The cunning comes into play after all the automatically generated "sadistic" tests have been executed.

12. Technology Hungry.

They hate dull, repetitive work -- they'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the capture/playback tool many times. I've seen dozens of home-brew test data generators. I've seen excellent test design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of bottles of white-out. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why they didn't automate such-and-such, the answer was never "I like to do it by hand." It was always one of the following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or, worst of all, (3) "Management wouldn't give me the time to learn how to use the tool."

13. Honest.

Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations as human beings. They accept the idea that they are no better and no worse, and therefore no less error-prone, than their programming counterparts. So they apply the same kind of self-assessment procedures that good programmers will. They'll do test inspections just as programmers do code inspections. The greatest possible crime in a tester's eyes is to fake test results.

========================================================================

3D-SiteMaps: A New Information Presentation Method

"One Picture is Worth A Thousand Words."

Introduction

eValid's 3D-SiteMap charts make it very easy to gain a deep understanding of how a WebSite is constructed. Viewing a WebSite structure in 3D is a remarkably easy way to learn about its structure and about the interdependence of the URLs (each WebSite page is a single URL). 3D-SiteMap displays are used to:

> Find which URLs are the focus of dependence (have a lot of parents): a problem on such a URL may have many ramifications.

> Find which URLs are sources of dependence (have a lot of children): a problem on such a URL may invalidate dependent URLs.

> Find all URLs that are related to any particular URL.

> Locate unwanted or unnecessary dependencies: you may not want as many links as there are.

> Help decide which URL to focus detailed testing attention on next: the most central URLs, with the most dependencies, have the greatest impact on quality because they are likely used most.

> Help identify Good and Bad WebSites.

(A sketch of this kind of dependence analysis appears below, after the display-adjustment notes.)

About 3D-SiteMap Charts

eValid 3D-SiteMaps are generated by eValid after each Site Analysis run. Each chart is a complete presentation of the site dependence information that is also given in tabular form in the Site Analysis Reports. The 3D-SiteMap display is completely dynamic, in full 3D, and adjustable under user control.

Making Display Adjustments

The 3D-SiteMap image shows only URLs, and their interconnecting links, that were visited during the Site Analysis run. To reduce the complexity of the 3D-SiteMap image for very large or very complex WebSites, you can add suffixes to the "Exclude URLs File". For example, you might wish to prevent mapping references to *.GIF or *.JPG files.
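To make the "focus of dependence" and "source of dependence" ideas concrete, here is a minimal sketch, in Python, of how such a dependence ranking might be computed from a list of parent-to-child links, with suffix exclusion applied. The link data, the EXCLUDED_SUFFIXES tuple, and the function names are all invented for illustration; this is not eValid's actual code or data format.

    from collections import defaultdict

    # Suffixes to skip, analogous to entries in the "Exclude URLs File".
    EXCLUDED_SUFFIXES = (".gif", ".jpg")

    def build_graph(links):
        """Build parent and child maps from (parent_url, child_url) pairs."""
        parents = defaultdict(set)   # url -> set of URLs that link TO it
        children = defaultdict(set)  # url -> set of URLs it links to
        for parent, child in links:
            if child.lower().endswith(EXCLUDED_SUFFIXES):
                continue  # reduce map complexity by skipping excluded files
            parents[child].add(parent)
            children[parent].add(child)
        return parents, children

    def rank_by_dependence(parents, children, top=5):
        # Focus of dependence: many parents, so a problem here has many
        # ramifications.  Source of dependence: many children, so a
        # problem here may invalidate the dependent URLs.
        focus = sorted(parents, key=lambda u: len(parents[u]), reverse=True)
        source = sorted(children, key=lambda u: len(children[u]), reverse=True)
        return focus[:top], source[:top]

    links = [("/", "/a.html"), ("/", "/b.html"),
             ("/a.html", "/b.html"), ("/b.html", "/logo.gif")]
    parents, children = build_graph(links)
    print(rank_by_dependence(parents, children))

In this toy run, /b.html ranks highest as a focus of dependence (two parents), so it would be a natural place to concentrate detailed testing first, just as the list above suggests.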
Mouse Control of the 3D-Image

Control of display orientation and size in the eValid 3D-SiteMap is done with the mouse:

> Hover On URL: Reveal the URL, the number of parents, and the number of children, plus show all immediate parents and children (one link away) that exist on the current display. (A detailed description is given below.) After mouse-over on one URL, the colors remain static until you mouse-over another URL. Click on any void area of the display to restore the image to the original.

> Left Button, Click On Void Area: Restore the full set of links. (Cancels parent/child highlighting.)

> Left Button, Horizontal Motion: Rotate the 3D-SiteMap display around the vertical axis.

> Left Button, Vertical Motion: Rotate the 3D-SiteMap display around the horizontal axis.

> Right Button, Vertical Motion: Scale the 3D-SiteMap display up and down (Zoom In and Zoom Out).

> Right Button, Horizontal Motion: Adjust URL box size (smaller to the left, larger to the right).

> Left + Right Buttons Combined: Reposition the current 3D-SiteMap display. (Tracks mouse movement.)

> Left Button, Click On URL: Redraw the 3D-SiteMap with this URL as the base URL.

> Right Button, Click On URL: Invoke View Options as follows:

  > View this item as the base URL in a NEW 3D-SiteMap display. (Note: Your existing 3D-SiteMap will be preserved.)

  > View the URL in a browser window.

> Right Button, Click On Void Area: Invoke Display Options as follows:

  > Click Include TAGs to choose for display only a selected [sub-]set of HTML TAGs.

  > Click Include Extensions to choose for display only a selected [sub-]set of filename extensions.

  > Click Select URL Height to reflect either URL size or measured URL download time.

  > Click Apply Changes to apply the changes selected.

  > Click Reset to restore the 3D-SiteMap display to its original form.

3D-SiteMap URL Colors and Link Sense Indication

The eValid 3D-SiteMap is color coded and annotated as follows:

> The base URL -- the one from which the current 3D-SiteMap is drawn -- is GREEN.

> Children of the base URL are BLUE. The shade is lighter the further away from (below) the base URL.

> Parents of the base URL are RED. The shade is lighter the further away from (above) the base URL.

> Unavailable links (with a return code of 400 or greater) are YELLOW.

Dependency Relationships

When using the Hover On URL feature, the dependency relationships for the temporary base URL (the one you are hovering over) are color coded as follows (a sketch of these rules appears at the end of this section):

> The moused-over item becomes GREEN (this is the temporary base URL of the display).

> The temporary base's parent URLs are shown in RED.

> The temporary base's child URLs are shown in BLUE.

> Any of the temporary base's parents that happen also to be its children are shown in MAGENTA.

> Unavailable links (with a return code of 400 or greater) are shown in YELLOW.

Live Examples

You can see how these 3D-SiteMaps work with four examples found at: <http://www.e-valid.com/Promotion/3DSiteMaps/examples.html>. Contact us to obtain an EVAL key for eValid.
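The color-coding rules above amount to a simple classification of each URL relative to the (temporary) base. Here is a minimal Python sketch, using the same invented graph representation as the earlier example. It covers only immediate (one-link-away) parents and children; the precedence of YELLOW over the other colors and the GRAY fallback are assumptions for illustration, not documented eValid behavior.

    def url_color(url, base, parents, children, status):
        """parents/children: url -> set of URLs; status: url -> HTTP code."""
        if status.get(url, 200) >= 400:
            return "YELLOW"              # unavailable link
        if url == base:
            return "GREEN"               # the (temporary) base URL
        is_parent = url in parents.get(base, set())
        is_child = url in children.get(base, set())
        if is_parent and is_child:
            return "MAGENTA"             # both parent and child of the base
        if is_parent:
            return "RED"
        if is_child:
            return "BLUE"
        return "GRAY"                    # unrelated to the base (assumed)

For instance, with "/b.html" as the base in the earlier toy graph, "/" and "/a.html" would both color RED (parents), while any URL that returned a 404 would color YELLOW regardless of its relationship to the base.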
========================================================================

QW2002 Call for Papers/Presentations
<http://www.qualityweek.com/QW2002>

QW2002 is the 20th in the continuing series of International Internet & Software Quality Week Conferences, which focus on advances in software test technology, reliability assessment, software quality processes, quality control, risk management, software safety and reliability, and test automation as it applies to client-server applications and WebSites. Software analysis and verification methodologies and processes, supported by automated software analysis and test tools, promise major advances in system software and WebSite quality, reliability, and availability.

The Mission of the QW2002 Conference is to increase awareness of the entire spectrum of methods used to achieve internet and software quality. QW2002 provides technical education, with opportunities for practical experience exchange, for the software development and testing community.

ABOUT QW2002's THEME: The Wired World...

Change is very rapid in the new wired world, and the wave of change brought about by the Internet affects how we approach our work and how we think about the quality of software and its main applications in IT and E-commerce. QW2002 aims to tackle internet and related issues head on, with special presentations dealing with changes in the software quality and internet areas.

QW2002 OFFERS...

The QW2002 program consists of four days of mini-tutorials, panels, technical papers, and workshops that focus on software and internet test technologies. QW2002 provides the Software Testing and Web Quality community with:

> Real-World Experience from Leading Industry and Government Practitioners.
> Quality Assurance and Test involvement in the development process.
> Lessons Learned & Success Stories.
> Latest Tools and Trends.
> State-of-the-art information on software quality and Web methods.
> Vendor Technical Presentations and Demonstrations.
> Carefully chosen 1/2-day and full-day tutorials from well-known technical experts.
> Three-Day Conference, including Five Tracks: Technology, Web/Internet, Applications, Process/Management, Quick-Start.
> Two-Day Vendor Show/Exhibition.
> Analysis of method and process effectiveness through case studies.
> Over 80 Presentations.
> Meetings of Special Interest Groups and ad hoc Birds-Of-A-Feather Sessions.
> Exchange of critical information among technologists, managers, and consultants.
QW2002 is soliciting 45- and 90-minute technical presentations, tutorial proposals, quick-start proposals, and panel discussion proposals on all areas of internet and software quality, including these topics:

   WebSite Monitoring                  E-Commerce Reliability/Assurance
   Application of Formal Methods       Software Reliability Studies
   Client/Server Testing               CMM/PMM Process Assessment
   Cost / Schedule Estimation          Test Data Generation and Techniques
   Automated Inspection Methods        Test Documentation Standards
   GUI Test Technology                 Integrated Test Environments
   Quality of Service (QoS) Matters    WebSite Load Generation and Analysis
   Object Oriented Testing             Test Management Automation
   Process Improvement                 GUI Test Management
   Productivity and Quality Issues     Real-Time Software
   New and Novel Test Methods          Test Automation Technology and
   WebSite Testing Real-World             Experience
      Experience                       Defect Tracking / Monitoring
   Risk Management                     Test Planning Methods
   Test Policies and Standards         WebSite Quality Issues
   Test Outsourcing

IMPORTANT DATES:

   Abstracts and Proposals Due:    30 April 2002
   Notification of Participation:  15 June 2002
   Presentation Materials Due:     15 July 2002

SUBMISSION INFORMATION...

Here are the steps for submitting material for QW2002:

1. Prepare your QW2002 Abstract as an ASCII file, a Microsoft Word document, or in PostScript or PDF format. Abstracts should be 1-2 pages long, with enough detail to give members of QW2002's International Advisory Board an understanding of the final paper/presentation, including a rough outline of its contents. Send it by Email (as a MIME attachment) to: . Please include in your Email:

   a. A brief biographical sketch of each author.
   b. A photo of each author.
   c. The complete contact coordinates of the primary author.

2. Fill out the Speaker Data Sheet on the QW2002 WebSite, giving some essential facts about you and about your proposed presentation. The URL for the form is: <http://www.qualityweek.com/QW2002/speaker-data.phtml>

3. If you prefer, you may send material by postal mail to:

   Ms. Rita Bral
   Software Research Institute
   1663 Mission Street, Suite 400
   San Francisco, CA 94103 USA

QW2002 AWARDS...

> Best Paper Award: The winner receives wide recognition in the QA community and a $1,000 grant.

> Best Presentation Award: The winner is invited to present the winning talk at Quality Week Europe 2003 (QWE2003), set for March 2003 in Brussels, BELGIUM, EU.

For complete information on the QW2002 Conference, e-mail your request to , call SR/Institute at [+1] (415) 861-2800, or FAX SR/Institute at [+1] (415) 861-9801. Prospective product/service exhibitors should contact the QW2002 team early because Expo space is strictly limited.

========================================================================
------------>>>  QTN ARTICLE SUBMITTAL POLICY  <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 10,000 subscribers worldwide. To have your event listed in an upcoming issue, E-mail a complete description and full details of your Call for Papers or Call for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at least a 1-month lead time from the QTN issue date. For example, submission deadlines for "Calls for Papers" in the March issue of QTN On-Line should be for April and beyond.

o Length of submitted non-calendar items should not exceed 350 lines (about four pages). Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.

o Publication of submitted items is determined by Software Research, Inc., and items may be edited for style and content as necessary.

DISCLAIMER: Articles and items appearing in QTN represent the opinions of their authors or submitters; QTN disclaims any responsibility for their content.

TRADEMARKS: eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF, STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are trademarks or registered trademarks of Software Research, Inc. All other systems are either trademarks or registered trademarks of their respective companies.

========================================================================
-------->>>  QTN SUBSCRIPTION INFORMATION  <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, or to CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined), please use the convenient Subscribe/Unsubscribe facility at: <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message: subscribe

   TO UNSUBSCRIBE: Include this phrase in the body of your message: unsubscribe

Please, when using either method to subscribe or unsubscribe, type the Email address exactly and completely. Requests to unsubscribe that do not match an email address on the subscriber list are ignored.

        QUALITY TECHNIQUES NEWSLETTER
        Software Research, Inc.
        1663 Mission Street, Suite 400
        San Francisco, CA 94103 USA

        Phone:     +1 (415) 861-2800
        Toll Free: +1 (800) 942-SOFT (USA Only)
        Fax:       +1 (415) 861-9801
        Email:     qtn@soft.com
        Web:       <http://www.soft.com/News/QTN-Online>