Description

Book Synopsis
Dorothy Graham and Mark Fewster are the principal consultant partners of Grove Consultants, which provides consultancy and training in software testing, test automation, and software inspection. Mark Fewster developed the test automation design techniques that are the primary subject of this book, and he has been refining and applying his ideas through consultancy with a wide variety of clients since 1991. Dorothy Graham is the originator and co-author of the CAST Report (Computer Aided Software Testing tools), published by Cambridge Market Intelligence, and co-author of Software Inspection (Addison-Wesley, 1993). Both authors are popular and sought-after speakers at international conferences and workshops on software testing.

Table of Contents
Preface
Part One: Techniques for Automating Test Execution
1 Test automation context
1.1 Introduction
1.2 Testing and test automation are different
1.3 The V-model
1.4 Tool support for life-cycle testing
1.5 The promise of test automation
1.6 Common problems of test automation
1.7 Test activities
1.8 Automate test design?
1.9 The limitations of automating software testing
2 Capture Replay is Not Test Automation
2.1 An example application: Scribble
2.2 The manual test process: what is to be automated
2.3 Automating Test Execution: inputs
2.4 Automating Test Result Comparison
2.5 The next steps in evolving test automation
2.6 Conclusion: Automated is not automatic
3 Scripting techniques
3.1 Introduction
3.2 Scripting techniques
3.3 Script pre-processing
4 Automated comparison
4.1 Verification, comparison and automation
4.2 What do comparators do?
4.3 Dynamic comparison
4.4 Post-execution comparison
4.5 Simple comparison
4.6 Complex comparison
4.7 Test sensitivity
4.8 Comparing different types of outcome
4.9 Comparison filters
4.10 Comparison guidelines
5 Testware Architecture
5.1 What is testware architecture?
5.2 Key issues to be resolved
5.3 An Approach
5.4 Might this be Overkill?
6 Automating Pre- and Post-Processing
6.1 What are Pre- and Post-Processing?
6.2 Pre- and Post-Processing
6.3 What should happen after test case execution?
6.4 Implementation Issues
7 Building maintainable tests
7.1 Problems in maintaining automated tests
7.2 Attributes of test maintenance
7.3 The conspiracy
7.4 Strategy and tactics
8 Metrics
8.1 Why measure testing and test automation?
8.2 What can we measure?
8.3 Objectives for testing and test automation
8.4 Attributes of software testing
8.5 Attributes of test automation
8.6 Which is the best test automation regime?
8.7 Should I really measure all these?
8.8 Summary
8.9 Answer to DDP Exercise
9 Other Issues
9.1 Which Tests to Automate (first)?
9.2 Selecting which tests to run when
9.3 Order of test execution
9.4 Test status
9.5 Designing software for (automated) testability
9.6 Synchronization
9.7 Monitoring progress of automated tests
9.8 Tailoring your own regime around your tools
10 Choosing a tool to automate testing
10.1 Introduction to Chapters 10 and 11
10.2 Where to start in selecting tools: your requirements, not the tool market
10.3 The tool selection project
10.4 The tool selection team
10.5 Identifying your requirements
10.6 Identifying your constraints
10.7 Build or buy?
10.8 Identifying what is available on the market
10.9 Evaluating the shortlisted candidate tools
10.10 Making the decision
11 Implementing tools within the organization
11.1 What could go wrong?
11.2 Importance of managing the implementation process
11.3 Roles in the implementation/change process
11.4 Management commitment
11.5 Preparation
11.6 Pilot project
11.7 Planned phased installation or roll-out
11.8 Special problems in implementing
11.9 People issues
11.10 Conclusion
12 Racal-Redac Case History
12.1 Introduction
12.2 Background
12.3 Solutions
12.4 Integration to Test Automation
12.5 System Test Automation
12.6 The Results Achieved
12.7 Summary of the case history up to 1991
12.8 What happened next?
13 The Evolution of an Automated Software Test System
13.1 Introduction
13.2 Background
13.3 Gremlin 1
13.4 Gremlin 2.0: A Step Beyond Capture/Replay
13.5 Finding The Real Problem
13.6 Lesson Learned
14 Experiences with Test Automation

Software Test Automation

Paperback / softback by Mark Fewster and Dorothy Graham

Publisher: Pearson Education Limited
Publication Date: 28/06/1999
ISBN13: 9780201331400
ISBN10: 0201331403
