+===================================================+
+======= Testing Techniques Newsletter (TTN) =======+
+=======           ON-LINE EDITION           =======+
+=======             July 1996               =======+
+===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is Emailed monthly to support the Software Research, Inc. (SR) user community and provide information of general use to the worldwide software testing community.

(c) Copyright 1996 by Software Research, Inc. Permission to copy and/or re-distribute is granted to recipients of the TTN On-Line Edition provided that the entire document/file is kept intact and this copyright notice appears with it.

========================================================================

INSIDE THIS ISSUE:

o  Software Process: Improvement and Practice (New Journal, Pilot Issue Contents)

o  TCAT C/C++ for Windows 95, NT and 3.1x Available

o  Software Investments Strategy (Part 3 of 3), by L. Bernstein and C. M. Yuhas

o  Software Quality Journal (Vol. 5, No. 1, March 1996), Table of Contents

o  Call for Papers: 1st EuroMicro Conference on Software Maintenance and Re-Engineering

o  Six Sigma: Hardware Si, Software No! (Part 2 of 2), by Robert V. Binder

o  Call for Papers: Formal Methods in Software Practice, Special Issue of IEEE Transactions on Software Engineering (April 1997)

o  TTN Printed Edition Eliminated

o  TTN SUBMITTAL POLICY

o  TTN SUBSCRIPTION INFORMATION

========================================================================

Software Process: Improvement and Practice
Pilot Issue 1995

NOTE: Here are the contents of the first issue of this new journal, which should be of interest to testing and quality professionals everywhere.

"The Evolution of the SEI's Capability Model for Software" by M. C.
Paulk

"Two Case Studies in Modeling Real, Corporate Processes" by N. S. Barghouti, D. S. Rosenblum, D. G. Belanger, and C. Alliegro

"Return on Investment (ROI) from Software Process Improvement as Measured by US Industry" by J. G. Brodman and D. L. Johnson

"Software Process Improvement by Business Process Orientation" by V. Gruhn and S. Wolf

"SPICE: A Framework for Software Process Assessment" by T. P. Rout

Contact Information:

Dewayne E. Perry, Editor in Chief
AT&T Bell Labs
600 Mountain Avenue
Murray Hill, NJ 07974
Email: dep@research.att.com

Wilhelm Schaefer
Universitaet Paderborn
FB 17 Informatik
D-33095 Paderborn, Germany
Email: wilhelm@uni-paderborn.DE

========================================================================

TCAT C/C++ for Windows 95, NT and 3.1x Available

Porting work and several technical revisions to SR's earlier-released Windows 95/NT version of its sophisticated, professional-level Test Coverage Analysis Tool (TCAT) for C and C++ are now complete. This new release of TCAT, Version 1.3 for Windows 95, NT and 3.1x, features true 32-bit native executables, advanced compiler-based source-language scanning technology, improved runtime support, and full GUI access to a project's call trees, function and method digraphs, and current and past-test coverage data at both the branch (C1) and call-pair (S1) level.

FEATURES of TCAT C/C++ Version 1.3 include:

o  Full processing for ANSI C, K&R C, "standard C++" and Microsoft VC++ through Version 4.x.

o  Complete support for Microsoft Foundation Classes (MFCs), including the most recently released revisions in VC 4.1.

o  A new standardized and easy-to-use automated product installation and de-installation process.

o  Simplified user-based licensing that also supports group, department-wide and/or site licenses.
o  Easy-to-use point-and-click coverage reporting with full reflection of coverage data to original source texts, plus point-and-click display of call-tree, class-hierarchy, and individual function digraphs with immediate back-reference to source code.

o  Separable C1 (very high detail, at the segment level) and S1 (lower detail, at the call-pair level) coverage measurement.

o  Complete C and C++ language support combined.

o  Complete support for C++ templates, in-line functions, and exception handlers.

o  Support for DLLs, multi-threaded code, device drivers and other time-critical applications.

o  Easy interfaces to handle multiple large, complex projects, without capacity limitations.

o  Improved and fully-indexed user documentation, available in both hard-copy and on-line versions.

o  Improved GUI that permits easy integration with popular C++ compilers, including MS VC 4.x.

o  A fully worked example using "step8" of Scribble, a sample application program delivered with VC++ that employs a wide range of Microsoft Foundation Class features.

o  A common user interface and unified work flow on all three Windows platforms.

o  Improvements in instrumentation efficiency, runtime data-collection efficiency, and source-viewing capability.

BENEFITS of TCAT C/C++ use as a quality control filter include:

o  Highly reliable, low-overhead calculation of test suite completeness, suitable for use by developers as well as testers.

o  Rapid identification of untested logical segments and/or call-pairs helps you pinpoint untested functions, segments, classes, modules and units.

o  Early detection of latent defects due to untested or poorly tested software.

o  Enhanced program understanding from detailed system-level (call tree) and module-level structure (digraph) displays.

APPLICATIONS and intended uses of TCAT C/C++ include:

o  "Industrial strength" applications which are very large and highly complex and which stress C++ to its limits.
o  Test suite completeness checking (to determine how to expand/extend incomplete suites).

o  Unit/module-level and system-level (integration) testing support.

o  Modification analysis and re-testing in maintenance/upgrade modes.

Complete information about TCAT C/C++ for Windows 95, NT and 3.1x, including how to download a trial copy by Anonymous FTP, is available from our Web site at http://www.soft.com, on request from SR, or via E-mail to info@soft.com.

========================================================================

Software Investments Strategy (Part 3 of 3)

by Lawrence Bernstein and C. M. Yuhas

Now for some reasons NOT to do OO:

1. Bjarne points out that every significant change carries risk. "My #1 reason to worry about OO is that middle managers will have to learn new tricks and techniques to manage OO development well. This is not easy: A development manager, say, is often in a position to get blame for whatever goes wrong, yet gives most of the credit to their people when things go well. This doesn't encourage risk-taking or innovation. Unless managers somehow find the time to get a basic understanding and a minimum level of comfort with OO, it will seem only a risk, and token efforts will abound." He goes on to warn, "It's popular, which means there are a lot of charlatans out there. As ever, there is no substitute for thinking; if someone tries to sell you something that sounds too good to be true, it probably is."

2. Others warn that the tools, design, analysis, and management techniques for OO are still not fully mature, and that OO requires investment (primarily in education and tools).

3. If you have a legacy system, there is a paradigm shift to make it OO. There will be much rework in architecture, design, code and testing. All the rework provides no short-term benefits to the developers or customers.

4.
Unless managers somehow find the time to get a basic understanding and a minimum level of comfort with OO, it will seem only a risk, and token efforts will abound.

The question of execution-time performance penalties with the use of C++ often arises. People fear that the size of C++ source code, object code or execution times will not be competitive with those of C. Bjarne's experience is that comparable designs give close to identical performance in time and space in C and C++. Massive libraries requiring megabytes of overhead are more common with C++ than with C; however, that is an effect of using first-generation off-the-shelf components and not a fundamental one.

There is general agreement that the use of OO leads to more concise programs and maps the solution domain more easily into the problem domain. This results in a threefold increase in productivity, and we have measured large projects with excellent class designs gaining productivity factors of six even for the first release, and then factors of ten in later releases. But do not expect to gain these improvements in the first attempt at OO, as our experience shows that it is at best a break-even proposition.

Object Oriented Programming/Testing: a ten-point program for OOT:

1. Pilot tools and processes before widely deploying them.

2. Have OOT consultants jump-start the first job.

3. Invest $10k per OO designer in tools and equipment.

4. Train good procedural programmers in OOT for 6 months; expect only two months of useful work from them during this training cycle.

5. Limit the number of object classes to 0.5% of the project's function points, with no more than one third having inheritance depth greater than 3.

6. Spend 30% of the total development cycle in design, with a heavy emphasis on object modeling. Be particularly concerned with the generality of the objects. Limit the number of object architects to no more than 10% of the staff, and empower them and only them to create object classes.

7.
Model performance as soon as design starts. Calibrate and analyze the model frequently.

8. Invest heavily in unit test drivers, and design tests for feature and data relationships.

9. Assign work by feature teams, NOT by class or methods. Do not let the tool move you one more level of abstraction from the problem domain.

10. Do not expect productivity gains from the first release by a team new to OOT. But thereafter, expect 3:1 gains.

Conclusion

The preceding litany of woes is not a condemnation of the new advances in software technology. Object-oriented programming and large-scale reuse are the next high-payoff leaps forward, but they are also high risk. The problems cited here are consistent with those discussed elsewhere in the literature, most recently by Richard Fairley [Fair94], who also discusses ways to manage that risk.

So where should you put your bucks? You should invest in master programmers and equip them with the very best tools and apprentices, prototyping, customer teaming, reuse and object-oriented technology. But most of all you need to hire good, experienced management.

The PC software industry has already committed major investment to allow the reuse of components. Object-oriented programming and large-scale reuse offer the most promising near-term breakthroughs, and successful companies will embrace them. The trick is to bound your investment with good management. For example, those who have experience have discovered that reuse librarians and object engineers who can jump-start projects are invaluable.

Contain your enthusiasm for the new toys. If a project can have 1,000 objects, limit the number to 100. The number of objects for a program should be no more than 1% of the total number of lines of code, and no more than one third of those objects should have an inheritance level of more than 3.
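The object budget above is simple enough to check mechanically. Here is a minimal sketch of such a check; the function name and return format are illustrative, not from the article:

```python
# Hedged sketch of the article's rule of thumb: object classes capped
# at 1% of total LOC, and at most one third of objects with an
# inheritance level greater than 3.

def check_object_budget(total_loc, objects, deep_objects):
    """objects: total object classes; deep_objects: those with
    inheritance level > 3. Returns a list of violated guidelines."""
    problems = []
    if objects > 0.01 * total_loc:
        problems.append("too many object classes (> 1% of LOC)")
    if deep_objects > objects / 3:
        problems.append("too many deep hierarchies (> 1/3 of objects)")
    return problems

# A 100,000-LOC project gets a budget of 1,000 objects, at most ~333
# of them deep; 1,200 objects would violate the first guideline.
print(check_object_budget(100_000, 1_200, 100))
```

Such a check is, of course, only a budget gate; it says nothing about whether the classes themselves are well designed.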
It is otherwise too easy to fall into the trap of obfuscating the language, creating objects willy-nilly rather than using those that have already been identified.

Advancing new technology is a heady business. Prof. Ed Richter of USC points out that the more technologically competent an organization, the harder it is for it to adopt new technology. Explore a new tool or process in a prototype; if it looks promising, try it out on three projects. Then deploy it by insisting that production projects 'prove it out' rather than 'proving it in.' There is considerable risk in being a leading-edge adopter, but vast rewards are available to those who meet the challenge.

References

Arthur, Lowell Jay. Programmer Productivity: Myths, Methods, and Murphology: A Guide for Managers, Analysts, and Programmers, John Wiley & Sons, New York, 1983, pp. 25-27.

Boehm, Barry W., Gray, T. E., and Seewaldt, T. "Prototyping Versus Specifying: A Multiproject Experiment," IEEE Transactions on Software Engineering, Vol. SE-10, No. 3, May 1984.

Desmond, John. "IBM's Workgroup Hides Repository," Application Development Trends, April 1994, p. 25.

Dijkstra, E. "The Humble Programmer," 1972 ACM Turing Award Lecture, in Classics in Software Engineering, ISBN 0-917072-14-6, 1979.

Fairley, Richard. "Risk Management for Software Projects," IEEE Software, May 1994, pp. 57-67.

Jones, Capers. Programming Productivity, McGraw-Hill Book Company, New York, 1986, pp. 83-210.

Mills, Harlan. Software Productivity, Dorset House Publishing, New York, 1988, pp. 13-18.

Poulin, J. S., Caruso, J. M. and Hancock, D. R. "The business case for software reuse," IBM Systems Journal, Vol. 32, No. 4, 1993, pp. 567-594.

Pressman, Roger. "Hackers in a Decade of Limits," American Programmer, January 1994, pp. 7-8.

Selby, R. "Empirically Analyzing Software Reuse in a Production Environment," in Software Reuse: Emerging Technology, W. Tracz (Ed.), IEEE Computer Society Press, 1988, pp. 176-189.

Walston, C. E. and Felix, C. P.
"A method of programming measurement and estimation," IBM Systems Journal, No. 1, 1977, pp. 54-60.

Yourdon, Edward Nash. Classics in Software Engineering, Yourdon Press, New York, 1979, p. 122.

LAWRENCE BERNSTEIN
Chief Technical Officer
Operations Systems Business Unit
Room 4WD-12C
184 Liberty Corner Road
Warren, New Jersey 07059
He can be reached at: lbernstein@attmail.com

Editors' Note: Lawrence Bernstein is a frequent contributor to TTN-Online. C. M. Yuhas is a freelance writer whose articles have appeared in many IEEE publications, UNIX Review, DATAMATION, COMPUTERWORLD, and the International Journal of Systems and Network Management.

CONTACT INFORMATION: C. M. Yuhas, Freelance Writer, 4 Marion Ave., Short Hills, NJ 07078.

========================================================================

Software Quality Journal
Volume 5, Number 1, March 1996
ISSN 0963-9314

Table of Contents:

H. SAIEDIAN and L. M. MCCLANAHAN: Frameworks for quality software process: SEI Capability Maturity Model versus ISO 9000

F. M. HOWENDEN, S. D. WALKER, H. C. SHARP and M. WOODMAN: Building quality into scientific software

P. JOHNSON: Design for instrumentation: high quality measurement of formal technical review

For subscription orders contact: Subscription Dept., Chapman & Hall, International Thomson Publishing Services Ltd., Cheriton House, North Way, Andover, Hampshire SP10 5BE, UK. Tel: 01264 342713. Fax: 01264 342807. Email: chsub@itps.co.uk

========================================================================

Call for Papers

FIRST EUROMICRO WORKING CONFERENCE ON SOFTWARE MAINTENANCE AND REENGINEERING
Berlin, GERMANY, 17-19 March 1997

The purpose of the working conference is to promote discussion and interaction about a series of topics which are as yet underrepresented. We are particularly interested in exchanging concepts, prototypes, research ideas, and other results which should not only contribute to the academic arena but will also benefit the business community.
Topics of interest include but are not limited to: Maintenance and Reengineering Tools (CARE tools), Reverse Engineering Tools, Support of Reengineering Tasks by CASE Tools, Software Reusability, Tele-Maintenance (Concepts, Experiences, Use of New Technologies), Maintainability of Programming Languages (e.g. OOPLs), Models and Methods for Error Prediction, Measurement of Software Quality, Maintenance Metrics, Formal Methods, Reengineering and Reverse Engineering Concepts, Experiences from Redesign and Reengineering Projects, Millennium Problem (Year 2000), Organizational Frameworks and Models for "RE" Projects, Software Evolution, Migration and Maintenance Strategies, Design for Maintenance, Preventive Maintenance, Personnel Aspects of Maintenance (Motivation, Team Building), Third-Party Maintenance, Empirical Results about the Maintenance Situation in Businesses, Version and Configuration Management, Legal Aspects and Jurisdiction, Organization and Management of Large Maintenance Projects, Software Offloading, and Related Areas such as Software Documentation.

Submission of papers:

There are two types of papers: full-length papers (30 minutes presentation, not exceeding 4000 words in length and including a 150-200 word abstract) and short papers (15 minutes presentation, not exceeding 2000 words in length and including a 75-100 word abstract).

Prospective authors are strongly encouraged to send a PostScript version of their paper by anonymous ftp to ftp.ifi.unizh.ch and put this file into the directory pub/CSMR97/incoming (in order to avoid overwriting, the PostScript file should be named: .ps). In addition, they should send by Email to CSMR97@ifi.unizh.ch the title of the paper and the full names, affiliations, postal and Email addresses, and fax and telephone numbers of all authors. Alternatively, the paper can be sent by postal mail. In that case, five copies of all the above items should be sent to the program chairman.
The following signed statement should be included in the submission: "All necessary clearances have been obtained for the publication of this paper. If accepted, the author(s) will prepare the camera-ready manuscript in time for inclusion in the proceedings, and will personally present the paper at the working conference."

The proceedings will be published by the IEEE Computer Society. Full papers exceeding 8 pages (short papers 4 pages) will be charged DM 100 per page in excess.

Important dates:

The deadline for submissions is Sept. 15, 1996. Authors will be notified of acceptance by Nov. 21, 1996. The camera-ready version of the paper will be required by Dec. 20, 1996.

Special sessions:

Sessions of special interest proposed by delegates will be welcome. Please send suggestions to the program chairman before the closing date of submissions.

General information:

The working conference will take place at the Fraunhofer Institute for Software and Systems Engineering (ISST). Enquiries about the working conference arrangements should be directed to the organizing chairman or to the local chairman. For more information access the following Web pages:

URL: http://rrws12.wiwi.uni-regensburg.de/~c1389/confcall.html
URL: http://www.isst.fhg.de/csmr

Program Chair and Contact:

Lutz Richter
Dept. of Computer Science
University of Zurich
Winterthurerstrasse 190
CH-8057 ZURICH, SWITZERLAND
Tel: +41-1-257-4330 (4331)
Fax: +41-1-363-0035
Email: richter@ifi.unizh.ch

Program Committee:

V. Ambriola, University of Pisa (I)
I. Baker, IBM Europe (UK)
K. Bennett, University of Durham (UK)
D. Glatthaar, IBM Germany (D)
J.-L. Hainaut, University of Namur (B)
W. Kirchgaessner, SAP AG (D)
F. Lehner, University of Regensburg (D)
M. Loewe, HIF Hannover (D)
P. Nesi, University of Florence (I)
Z. Oery, Siemens-Nixdorf Informationssysteme AG (D)
G. Sechi, IFCTR-CNR Milano (I)
H. Sneed, SES Software (D)

Organizing Chair:

Franz Lehner
Institute for Business Informatics
University of Regensburg
Universitatsstr.
31
D-93040 REGENSBURG, Germany
Tel.: +49-941-943-2734
Fax: +49-941-943-4986
Email: Franz.Lehner@wiwi.uni-regensburg.de

Local Chair:

Ingo Classen
Fraunhofer Institute for Software and Systems Engineering (ISST)
Kurstrasse 33
D-10117 BERLIN, Germany
Tel.: +49-30-20224-700
Fax: +49-30-20224-799
Email: Ingo.Classen@isst.fhg.de

========================================================================

Six Sigma: Hardware Si, Software No! (Part 2 of 2)

Robert V. Binder
Copyright 1996, Robert V. Binder. All Rights Reserved

SUMMARY: Six sigma is a parameter used in statistical models of the quality of manufactured goods. It is also used as a slogan suggesting high quality. Some attempts have been made to apply 6 sigma to software quality measurement. This essay explains what 6 sigma means and why it is inappropriate for measuring software quality.

2.2 Software Characteristics of Merit are not Ordinal Tolerances

Six sigma applies to linear dimensions and counts of the outcomes of identical processes. The ordinal quantification of physical characteristics of merit cannot be applied to software without a wild stretch of imagination. The characteristic of merit is _implicitly_ redefined from ordinal (dimensional measurement) to cardinal (unit count) in every discussion applying 6 sigma to software I've encountered. This is problematic; the analytical leverage of the ordinal model is lost and it is unclear what is being counted.

* A "software" either conforms or it doesn't. It is meaningless to attempt to define greater or lesser conformance: what would the units be? What would be the upper bound of conformance -- would we reject too much conformance? What is the lower bound? The fact that people (and systems) will tolerate egregiously buggy software is not the same thing as establishing a quantitatively meaningful scale. Performance profiles (e.g.
"fast busy no more than 1 minute in 10,000") are the only truly quantifiable design tolerance for software, but are never the whole requirement for any real system or single "part".

* A physical measurement provides an unambiguous value (limited to the known precision of the measuring mechanism). In contrast, attempts to count faults in software (by testing) can only give weak estimates. Exhaustive testing could provide such a failure count with certainty, but is impossible for any practical purpose. The number of faults *not* revealed by feasible testing strategies cannot be given with any certainty. Proving correctness doesn't help, since any single fault is sufficient to disprove it.

* Voas argues convincingly that it is not the gross number of detected software faults that is important, but the ability of the system as a whole to hide as-yet-undetected faults [3].

Even if a cardinal interpretation were valid, more problems remain. The mapping between faults and failures is not one-to-one. So, 6 sigma of what? A fault density of _x_ or less per instructions, NCSLOC, function points, or ...? A failure rate of _x_ or less per CPU seconds, transactions, interrupts, or ...? Shall we use 3.4E-6 (the "drift" number) or 2.0E-9 (the absolute number) for _x_? Other interpretations are certainly possible -- this is exactly the problem.

As a point of reference, here are some reported defect densities for released software:

   NASA Space Shuttle avionics         0.1   Failures/KLOC  [4]
   Leading-edge software companies     0.2   Failures/KLOC  [5]
   Reliability survey                  1.4   Faults/KLOC    [6]
   Best, military system survey        5.0   Faults/KLOC    [7]
   Worst, military system survey      55.0   Faults/KLOC    [7]

For the sake of argument, assume that a six sigma software standard calls for no more than 3.4 failures per million lines of code (0.0034 failures per KLOC). This would require a software process roughly two orders of magnitude better than current best practices.
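The size of that gap is easy to verify from the table. A back-of-the-envelope sketch (the densities are the reported figures above; the target is the essay's for-the-sake-of-argument assumption):

```python
# Sketch: how far reported defect densities sit from a hypothetical
# six-sigma target of 3.4 failures per million LOC.

six_sigma_per_kloc = 3.4 / 1000  # = 0.0034 failures/KLOC

reported_per_kloc = {
    "NASA Space Shuttle avionics": 0.1,
    "Leading-edge software companies": 0.2,
    "Reliability survey": 1.4,
    "Best, military system survey": 5.0,
    "Worst, military system survey": 55.0,
}

for name, density in reported_per_kloc.items():
    ratio = density / six_sigma_per_kloc
    print(f"{name}: {density} per KLOC is {ratio:.0f}x the target")

# Even the best reported practices (0.1-0.2/KLOC) sit roughly 30x-60x
# above the target; the worst survey figure is over 16,000x above it.
```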
It is hard to imagine how this could be attained, as the average cost of the Shuttle code is reported to be $1,000 per line [4].

2.3 Software is not Mass Produced

Even if software components could be designed to ordinal tolerances, they'd still be one-off artifacts. It is inconceivable that one would attempt to build thousands of identical components with an identical development process, and then, post hoc, try to fix the process if it produces systems which don't meet the constraint(s), or just throw away the defective systems. (We can produce millions of _copies_ by a mechanical process, but this is irrelevant with respect to software defects.) Quantification of reliability is a whole 'nother ballgame.

3. Six Sigma as Slogan/Hype

I'm not against very high quality software (my consulting practice exists because very high quality software is hard to produce but nearly always worth the effort). However, slogans like "six sigma" can confuse and mislead, even when applied to manufacturing [8]. Used as a slogan, "six sigma" simply means some subjectively (very) low defect level. The precise statistical sense is lost. Discussions of "six sigma" software based on this vague sloganeering ignore the fundamental flaws of applying a model of physical ordinal variance to the non-physical, non-ordinal behavior of software systems.

This is not to say there are no useful applications of statistical process control to software quality assurance (my favorite is to use u-charts for inspection and test results). I say we leave 6 sigma to the manufacturing guys. Let's figure out what we need to do to routinely get very high field reliability in software-intensive systems, and agree on an operational definition for reliability measurement.

References/Notes

[1] Morris Hamburg, _Statistical Analysis for Decision Making_ (Harcourt, Brace & World, 1970).

[2] Bill Smith, "Six-Sigma Design," _IEEE Spectrum_, September 1993, v 30, n 9, pp 43-46.

[3] Michael A. Friedman and Jeffrey M.
Voas, _Software Assessment: Reliability, Safety, and Testability_ (New York: John Wiley & Sons, 1995).

[4] Edward Joyce, "Is Error-free Software Possible?," _Datamation_, February 18, 1989.

[5] Leading-edge software companies are achieving 0.025 user-reported failures per function point or better (Capers Jones, _Applied Software Measurement_ (McGraw-Hill, 1991), p 177). Assuming Jones' conversion factor of 128 lines of C source per function point, we get 0.2 = 0.025 * (1000/128).

[6] John D. Musa, Anthony Iannino, and Kazuhira Okumoto, _Software Reliability: Measurement, Prediction, Application_ (New York: McGraw-Hill Publishing Company, 1990), p 116.

[7] Joseph P. Cavano and Frank S. LaMonica, "Quality Assurance in Future Development Environments," _IEEE Software_, September 1987, pp 26-34.

[8] Jim Smith and Mark Oliver, "Six Sigma: Realistic Goal or PR Ploy?," _Machine Design_, September 10, 1992, pp 71-74.

Contact Information:
----------------------------------------------------------------------
Bob Binder                          rbinder@rbsc.com
RBSC Corporation                    312 214-3280 tel
http://www.rbsc.com                 312 214-3110 fax
3 First National Plaza
Suite 1400                          847 475-3670 tel/fax
Software Engineering Lab
Chicago, IL 60602-4205
----------------------------------------------------------------------

========================================================================

CALL FOR PAPERS

IEEE Transactions on Software Engineering
Special Issue on Formal Methods in Software Practice
April 1997

Deadline for submissions: September 30, 1996

(This call for papers is also available on the World Wide Web at:
URL: http://www.cs.ucsb.edu/~dillon/TSE/)

After more than two decades of research in formal methods, it is time to take stock of our position. Founded in the area of mathematical specification and verification, formal methods have become cornerstones in many other areas, such as synthesis, transformation, prototyping, reverse engineering, and testing.
A variety of formal methods are gaining industrial acceptance with the emergence of production-quality software tools; nevertheless, many state-of-the-art formal methods have yet to become state-of-the-practice.

The purpose of this special issue of "IEEE Transactions on Software Engineering" is to catalyze a narrowing of the gap between the state of the art in research on formal methods and the state of the practice in the application of these methods. The special issue will feature papers that attest to the impact of formal methods on software practice, and that present strategies and techniques to further this impact in the future. Papers are solicited both from experts in formal methods technology and from early innovators in industry who have adopted formal methods.

Submissions should focus on the application of formal methods to software practice. They should cover topics related to the transition of formal methods technology to industry --- reports on experience in using formal methods and how emerging technology addresses the needs of industry and society. Specific topics of interest include, but are not limited to:

* Specification and Verification Technology: Languages and tool support, model checking, etc.

* Software Processes Based on Formal Methods: Design, prototyping, development, testing, analysis, verification, refinement, etc.

* Evolving Systems: The role of formal methods and specification technology in the design and development of evolving systems.

* Architecture Specifications: Specification of software architectures and conformance checking of implementations.

* Reverse Engineering: Generation of documentation and requirements from existing formal specifications or implementations.

* Partial Approaches: The use of formal methods to focus on practical verification of specific program properties (e.g., array bounds, memory allocation).

* Security/Safety: Formal methods-based approaches to enhance various security and safety properties of programs.
* Case Studies: Use of formal methods in real-life software projects. What kinds of software projects are suited for formal methods?

* Education and Training: Experiences with formal methods technology transfer to software engineers and managers.

INSTRUCTIONS FOR SUBMITTING MANUSCRIPTS

Authors are invited to submit six copies of their contributions to either of the special issue editors. Submissions should be received by SEPTEMBER 30, 1996. Manuscripts must be in English (double-spaced; 12 pt; 30 pages max.). Each copy should have a cover page with the title, name, and address (including e-mail address) of the author(s), an abstract of no more than 200 words, and a list of identifying keywords.

IMPORTANT DATES

September 30, 1996: Due date for six (6) copies of full manuscript
January 31, 1997: Notification of acceptance/rejection
February 28, 1997: Due date for final version
April 1997: Tentative date for publication of special issue

EDITORS OF THE SPECIAL ISSUE

Laura K. Dillon
Computer Science Department
University of California
Santa Barbara, CA 93106
Phone: (805) 893-3411
Fax: (805) 893-8553
Email: dillon@cs.ucsb.EDU

Sriram Sankar
Sun Microsystems Laboratories
2550 Garcia Ave., MTV29-112
Mountain View, CA 94043
Phone: (415) 336-6230
Fax: (415) 969-7269
Email: sankar@anchor.Eng.Sun.COM

THE IEEE TRANSACTIONS ON SOFTWARE ENGINEERING

This Transactions is a monthly archival journal. It publishes well-defined theoretical results and empirical studies that have potential impact on the construction, analysis or management of software. The scope of the Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. It is assumed that the ideas presented are important, have been well analyzed and/or empirically validated, and are of value to the software engineering research or practitioner community.
========================================================================

TTN Printed Edition Eliminated

Because of the much-more-efficient TTN-Online version, and because of the widespread access to SR's WWW site, we have discontinued distribution of the Printed Edition (Hardcopy Edition) of Testing Techniques Newsletter. The same information that had been contained in the Printed Edition will be available monthly in TTN-Online, issues of which will be made available 2 to 4 weeks after electronic publication at the WWW site:

URL: http://www.soft.com/News/

========================================================================
------------>>> TTN SUBMITTAL POLICY <<<------------
========================================================================

The TTN On-Line Edition is forwarded on approximately the 15th of each month to Email subscribers worldwide. To have your event listed in an upcoming issue, please Email a complete description of your event or a copy of your Call for Papers or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o  Submission deadlines indicated in "Calls for Papers" should provide at least a 1-month lead time from the TTN On-Line issue date. For example, submission deadlines for "Calls for Papers" in the January issue of TTN On-Line would be for February and beyond.

o  Length of submitted non-calendar items should not exceed 350 lines (about four pages).

o  Length of submitted calendar items should not exceed 68 lines (one page).

o  Publication of submitted items is determined by Software Research, Inc., and items may be edited for style and content as necessary.

TRADEMARKS: STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF, CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage, STW/Advisor and the SR logo are trademarks or registered trademarks of Software Research, Inc.
All other systems are either trademarks or registered trademarks of their respective companies.

========================================================================
----------------->>> TTN SUBSCRIPTION INFORMATION <<<-----------------
========================================================================

To request your FREE subscription, to CANCEL your current subscription, or to submit or propose any type of article, send Email to "ttn@soft.com".

TO SUBSCRIBE: Send Email to "ttn@soft.com" and include in the body of your letter the phrase "subscribe ".

TO UNSUBSCRIBE: Send Email to "ttn@soft.com" and include in the body of your letter the phrase "unsubscribe ".

TESTING TECHNIQUES NEWSLETTER
Software Research, Inc.
901 Minnesota Street
San Francisco, CA 94107 USA

Phone: +1 (415) 550-3020
Toll Free: +1 (800) 942-SOFT (USA Only)
FAX: +1 (415) 550-3030
Email: ttn@soft.com
WWW URL: http://www.soft.com

                               ## End ##