SLIDE 1

T6

Concurrent Session

Thursday 10/25/2007 11:15 AM JUMP TO: Biographical Information The Presentation Related Paper

A “Framework for Testing” for Repeatable Success

Presented by: Randy Slade, Kaiser Permanente HMO

Presented at: The International Conference on Software Testing Analysis and Review October 22-26, 2007; Anaheim, CA, USA 330 Corporate Way, Suite 300 , Orange Park, FL 32043 888-268-8770 904-278-0524 sqeinfo@sqe.com www.sqe.com

SLIDE 2

Randy Slade

Randy Slade and Viktoriya Kozlova are Test Managers at Kaiser Permanente HMO. Together they led the development and implementation of the “Framework For Testing” for their testing groups. They jointly developed this presentation and white paper. Viktoriya graduated from Kiev National University and was previously a teacher. She has 15 years of experience in Quality Assurance and Testing, with 9 years at Kaiser; prior to that she was with IBM. Viktoriya describes herself as pro-active, not re-active. She believes in managing chaos, tracking deliverables, and using repeatable processes. Randy has 30 years of experience in software development, Quality Assurance, and Testing across various industries. He is a graduate of California State University at Chico, and attended Duke University Fuqua Graduate School’s Management Development Program. Randy has been a Test Lead, Test Manager, and Testing and Quality Assurance Consultant at various companies including Pacific Bell, BellCore Communications, Kaiser Permanente, California State Automobile Association, Pacific Gas & Electric, Wells Fargo Bank, and Patotech. Randy is an advocate for continuous improvement, repeatable processes, building quality into a product from the beginning, and implementing lessons learned.

SLIDE 3

08.07.2007

A “Framework for Testing” for Repeatable Success

by: Randy Slade and Viktoriya Kozlova, Kaiser Permanente HMO

SLIDE 4

The History

Testing Team of 60 to 125 resources
Resource pool management
Delivery of 60 – 80 projects per year
Deployments of in-house developed, off-the-shelf stand-alone, and hybrid integrated applications
Collaborative effort with applications development and business teams
Collaborative effort with applications vendors

SLIDE 5

The Challenge

No standards or repeatable processes
Incoherent approach and inconsistent results
Communication gaps
Dependencies on individual expertise, knowledge, and heroic efforts
Manage workload
Manage peaks and valleys

SLIDE 6

Desired Goals

Develop clearly defined, standardized, repeatable processes
Identify a set of critical deliverables for each testing phase
Develop standardized, re-usable templates
Define guidelines to manage the resource pool and project workload
Develop guidelines to maintain project historical data

SLIDE 7

Benefits_Processes

Implementation of “best practices” on all projects, regardless of resource staffing
Achievement of predictable and manageable results, including historical data collection
Development of Subject Matter Expertise for applications and utilization of Cross Knowledge practices for all projects
Leverage new improvements across all projects using a unified approach

SLIDE 8

Benefits_Resource Management

Develop a pool staffing model of highly professional testing resources with proficiency in testing systems/components/areas
Flexibility with resource allocation, assignments, and re-assignments to projects, with the ability to sustain commitment continuity
Leverage workload with re-usability of testing artifacts
Motivate resources’ professional and leadership development
Support individual initiatives for process improvements

SLIDE 9

Approach_Initiation

Identify key stakeholders from impacted areas to get multiple perspectives based on different experiences
Clearly define goals and benefits
Formulate a desired outcome statement
Obtain overall group commitment
Form a Core Team

SLIDE 10

Approach_Core Team

Review existing processes for common phases (engagement, planning, preparation, execution, deployment)
Conduct gap analysis and identify missing processes
Prioritize based on work impact/criticality/effectiveness
Define process development/improvement guidelines
Assign a Lead for each sub-team for process development
Develop an overall workplan

SLIDE 11

Approach_Process Development/Improvement Sub-Team

Form a Sub-Team
Schedule an introduction meeting for the Sub-Team with a clearly defined agenda
Outline process development tasks and milestones
Develop implementation strategy and delivery timeline
Develop the process with the designated Sub-Team
Send for review to other Sub-Teams
Collect and incorporate feedback

SLIDE 12

Approach_Process Development/Improvement Sub-Team

Identify testing artifact(s) that should support new/improved process implementation
Assign a sub-team resource for artifact(s) template development
Conduct sub-team internal and Core Team external artifact(s) template reviews; incorporate feedback
Pilot the new/improved process/artifact(s) template on selected projects
Obtain approval from the Core Team on the new/improved process and/or artifact(s) template
Publish and implement the new/improved process and/or artifact(s) template

SLIDE 13

Implementation Requirements

Process should be documented
Process might have a supporting presentation and/or supporting workflow
Process might have a supporting artifact(s) template
Process should be piloted
Process and/or artifact(s) should be approved for implementation

SLIDE 14

Process Workflow_ Example (Overview)

SLIDE 15

How To Read A Process

Start at the top
Each numbered box has an associated detailed sub-process
Bracket on a side with blue colored text indicates the pertinent template location for the designated box
Bracket on a side with black colored text indicates the pertinent document for the designated box
Step-by-step road map from the beginning to the end of the specific process

SLIDE 16

Sub-Process Workflow_ Example

SLIDE 17

Framework For Testing

“Framework For Testing” is the outcome of a six-month Process Development/Improvement effort
Implementation of this framework has resulted in high quality processes, templates, tools and presentations that support the implementation of a standardized testing methodology
Testing deliverables were met and customer satisfaction exceeded expectations; this framework supported timely deployments within the defined budget
This methodology worked effectively in the current and planned information technology infrastructure

SLIDE 18

Methodology Workbook

Consolidate new/improved processes organized by the five phases of a software testing life-cycle
Guide resources through the “what, when, how and why” steps
Develop Entrance and Exit Criteria for each phase as a check point
Provide step-by-step workflows for each process and sub-process
Provide narratives for each workflow and references for supporting templates

SLIDE 19

Testing Life Cycle

Testing Life Cycle consists of five phases:

Engagement
Planning
Preparation
Execution
End of Engagement

SLIDE 20

Engagement Phase

Collections of:

Workflows – Overall Phase Workflow, Sub-flows for each sub-process
Presentations – Testing Services Introduction
Templates – Engagement Form
Charge Agreement
Testing WorkPlan
Project Status Report
Tools – Early Engagement Estimation
Time Reporting Tracking

SLIDE 21

Planning Phase

Collections of:

Workflows – Overall Phase Workflow, Sub-flows for each sub-process
Presentations – Readiness Assessment For Testing (RAFT)
Offshore Readiness Assessment for Testing (ORAFT)
Templates – RAFT
ORAFT
Requirements Review Log
Test Plan
Test Coverage & Traceability Matrix

SLIDE 22

Planning Phase

Tools – RAFT Score Calculation and Recommendations
ORAFT Score Calculation and Recommendations
Requirements Upload into Quality Center Repository
Risk Reporting
Lessons Learned Data Collection
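The deck does not publish the RAFT/ORAFT scoring formula, but the idea of rolling yes/no assessment answers up into per-area scores and flagging deficient areas can be sketched. The threshold, area names, and equal-weight scoring below are all invented assumptions, not the actual Kaiser tool:

```python
# Hypothetical RAFT-style readiness scoring. Each assessment area has a list
# of yes/no answers; the area score is the fraction answered "yes", and areas
# below an assumed cutoff are flagged as risks for mitigation discussions.
RISK_THRESHOLD = 0.75  # assumed cutoff; the real tool's threshold is not published


def score_areas(answers: dict) -> dict:
    """answers: area name -> list of booleans (one per question)."""
    return {area: sum(qs) / len(qs) for area, qs in answers.items()}


def flag_deficient(scores: dict) -> list:
    """Areas scoring below the threshold, sorted for stable reporting."""
    return sorted(area for area, s in scores.items() if s < RISK_THRESHOLD)


answers = {
    "Requirements": [True, True, True, True],
    "Environment readiness": [True, False, False, True],
}
scores = score_areas(answers)
print(flag_deficient(scores))  # ['Environment readiness']
```

As the paper notes, presenting the result as a computed score rather than a personal judgment makes this a non-threatening way to start risk-mitigation discussions early.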

SLIDE 23

Preparation Phase

Collections of:

Workflows – Overall Phase Workflow, Sub-flows for each sub-process
Presentations – Cross Knowledge
Templates – Test Cases
Peer Review
Business & Development Review
Cross Knowledge
Tools – Test Cases Upload into Quality Center Repository

SLIDE 24

Execution Phase

Collections of:

Workflows – Overall Phase Workflow, Sub-flows for each sub-process
Templates – Execution Status Report
Defect Status Report
Consolidated Status Report
Test Summary Report
Tools – Test Readiness

SLIDE 25

End Of Engagement Phase

Collections of:

Workflows – Overall Phase Workflow, Sub-flows for each sub-process
Presentations – Lessons Learned
Templates – Lessons Learned
End of Engagement Plan

SLIDE 26

Standards Outside of Testing Life Cycle

Standard Interview Approach
New Employees Testing Orientation
Project Transition
Resource Transition
Projects & Resources Allocation Tracking
Cross Area Resources Allocation
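The Projects & Resources Allocation Tracking standard listed above amounts to keeping a current picture of who is on which project and who is free for cross-area loans. A minimal sketch of that bookkeeping; the class, method, and field names are invented for illustration and are not the actual Kaiser tool:

```python
from collections import defaultdict


# Hypothetical sketch of pool-model allocation tracking: tester -> projects.
class AllocationTracker:
    def __init__(self):
        self.assignments = defaultdict(set)  # tester name -> set of project names

    def assign(self, tester: str, project: str) -> None:
        self.assignments[tester].add(project)

    def release(self, tester: str, project: str) -> None:
        self.assignments[tester].discard(project)

    def unassigned(self, pool: list) -> list:
        # Testers in the pool with no current project are candidates for
        # cross-area allocation before resorting to short-term contractors.
        return [t for t in pool if not self.assignments[t]]


tracker = AllocationTracker()
tracker.assign("tester_a", "claims-system")
tracker.assign("tester_b", "hr-portal")
tracker.release("tester_b", "hr-portal")
print(tracker.unassigned(["tester_a", "tester_b"]))  # ['tester_b']
```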

SLIDE 27

Key Learnings

Continuous and consistent process maintenance – you are never done
Constant improvement can be driven across the organization with common processes
The best way to grow an organization is to develop and implement unified processes, tools and templates


STARWEST 2007
A “Framework For Testing” for Repeatable Success
Authored by: Randy Slade and Viktoriya Kozlova

Viktoriya Kozlova and I are Test Managers at Kaiser Permanente HMO. I would like to tell you about the situation we found ourselves in a few years ago and the solution we developed to improve the quality of the software being delivered and reduce the heroic effort levels that were needed to produce success. Our hope is that some of these challenges may resonate with you, and if they do, that you will find our solutions beneficial. A few years ago we found ourselves in charge of a testing group of 60 people who were burned out from never-ending heroic challenges; had no standards or repeatable processes; no common ways of doing things; no shared best practices; and who were dramatically under-staffed and constantly overwhelmed by more work than they could do. The types of projects they were testing varied from medical systems to human resource systems, technology support systems, finance systems, etc. They included in-house developed software solutions, vendor products, off-the-shelf products, and combinations of the above.

Our challenge was to provide the testing for all of these applications before they went into the Kaiser production environment. This included requirements-based functional testing, traditional system testing, end-to-end testing, workflow integration testing, regression testing, production support testing, etc. To tell you a little more about our operation, we were using the resource pool model. This meant that during the course of a year everyone worked on multiple applications and projects. Our work, with few exceptions, was not on-going. It came, we tested it, then it was gone and might not come back any time soon.

We tested 60 to 80 projects in a year, and grew our team from 60 people to 125 in a little over six months.

With no standards or repeatable processes, and an incoherent approach which yielded inconsistent results, we had to manage an ever-changing workload while growing the staff to double its beginning size. We sat down and laid out our objectives. Our organizational environment was that we were part of an independent test organization. We were not part of the development group. It was up to us to define and communicate the types of testing services we would provide. We wanted to have clearly defined, standardized, repeatable processes that would be applicable across our varied projects. We needed to identify a set of critical deliverables and determine when they should be delivered. It became immediately clear that we needed to have standardized re-usable templates for each of our deliverables. We


also needed to define guidelines to manage the resource pool and project workloads. Eventually we realized the need for guidelines to maintain project historical data. What we hoped to accomplish was to be able to implement best practices on all of our projects, without regard to who was performing the testing. We wanted to have predictable, manageable results and collect historical data which could be learned from and perhaps re-used later. We wanted to develop a broader pool of subject matter expertise for applications that re-occurred, and utilize cross knowledge sharing practices for all projects. In short, we wanted to build robustness into our processes, and wanted to have ways to offset the dependence on specific critical knowledge residing in only one resource. Another benefit we sought was to be able to leverage improvements across all projects by using a unified approach. We also wanted to develop processes which would be transferable to all of the new people coming on board so that they would have the same approach, tools, procedures, and expectations about doing their job that the current team had. The benefits we received from implementing these changes were:

  • We developed a pool staffing model of highly professional testing resources with proficiency in testing multiple systems, components, and areas.
  • This gave us flexibility in assigning resources to projects, and re-assigning them due to changing needs, with the ability to still meet our commitments. It enabled us to roll people on and off of projects as needed, with little interruption and minimal spin-up time.

  • Using common templates for testing artifacts increased re-usability.
  • Creating an environment of constant improvement, with standard processes and tools, motivated our testers and increased their professional development.

  • The work on process improvements led to individual initiatives to improve processes.

The Approach

This section will describe how we began this effort. We identified key stakeholders from impacted areas to get multiple perspectives based on their different experiences. We clearly defined our goals and desired benefits. We formulated a desired outcome statement. We obtained overall group commitment to this plan, because it was going to require additional effort on everyone’s part. We had to convince them that even though they were drowning now, it would pay off in the long run if they invested effort now to make it better tomorrow. Once we had the commitment, we formed the core working team. The next steps were:

  • Review any existing processes for common testing phases (engagement, planning, preparation, execution, deployment).

  • Conduct a gap analysis and identify missing processes.
  • Prioritize needed processes based on work impact, criticality, effectiveness.
  • Define process development/improvement guidelines.
  • Assign a lead for each sub-team for process development.
  • Develop an overall work plan.
  • Form individual sub-teams.
  • Schedule an introduction meeting for each sub-team.
  • Outline process development tasks and milestones.
  • Develop implementation strategy and delivery timeline.
  • Develop the needed process with the sub-team.
  • Send the process to other sub-teams for review and feedback.
  • Collect and incorporate feedback.
  • Identify the testing artifact that will support the new/improved process when it is implemented.
  • Assign sub-team resources to develop the artifact template.
  • Conduct sub-team internal and Core Team reviews on the artifact template; incorporate feedback as needed.
  • Pilot the new/improved process and artifact templates on selected projects, incorporating feedback as appropriate.
  • Obtain approval from the Core Team on the new/improved process and artifact template.
  • Publish and implement the new/improved process and artifact template.

That is the process we followed for developing all of our processes and templates. It may sound long and arduous, but it allowed us to keep control of the work of multiple teams that were developing different processes and templates at the same time, and to produce standard deliverables that conformed to the same criteria. Some of those standard criteria are:

  • The process must be documented, using the common formats.
  • Processes may have to be supported by a presentation and/or workflow.
  • A process might have a supporting artifact template.
  • Processes should be piloted before they are put in general use and considered “final”.
  • A process and/or artifact should be approved by the core team before it is considered “final” and put into general use.

Reading a Process Workflow

Refer to the PowerPoint presentation for a sample workflow to review. Here are some tips on reading the workflows we created. Reading a process workflow is pretty easy once you understand a few of the basics.

  • Start at the top.
  • Each numbered box has an associated detailed sub-process.
  • The bracket on the side with blue colored text indicates the location of a template, presentation, or folder that is pertinent to the box.
  • The bracket on the side with black colored text indicates a document that is pertinent to the box.
  • This provides you a step-by-step road map for a process from beginning to end.

Next is an example of a sub-process workflow. This is an expansion of the box numbered 1.2, showing what all it entails. When all of the processes are put together, you have the “Framework For Testing”. It was the outcome of a six-month process development/improvement effort. Implementation of this framework has resulted in high quality processes, templates, tools, and presentations that support the implementation of a standardized testing methodology. With the implementation of these processes, testing deliverables were met and customer satisfaction has greatly improved. We have been able to support timely deployments within the defined budget. This methodology has worked effectively on our varied and diverse projects and on a changing IT landscape.

Testing Workbook

The next step was to consolidate the new and improved processes and organize them by the five phases of the software testing life-cycle. This workbook can now guide our resources through the “what, when, how, and why” steps of testing for all of their projects. As they move through the testing phases they have Entrance and Exit Criteria which act as a checkpoint to assure that they are ready to begin the next phase. The workbook provides step-by-step workflows for each process and sub-process which can be followed from the beginning of a project through to its conclusion. The workbook also provides narratives for each workflow and has a reference for supporting templates. Many of the tools, templates, and deliverables are negotiable. They may not be applicable to all projects. They identify the maximum set that could be needed; it is up to the lead to negotiate the services and deliverables that will be used on each project. We have broken the Testing Life Cycle down into the following phases:

  • Engagement
  • Planning
  • Preparation
  • Execution
  • End of Engagement

Engagement

The Engagement phase is basically when the client engages us for testing services. There is a narrative description of the Engagement phase, including a description of the duties the test lead will be expected to perform. There are also “average duration estimations” for the major deliverables. Those major deliverables are: Early Engagement Estimate (if applicable); Charging Financial Agreement (including a Statement Of Work); and a Testing Work Plan (like an MS Project work breakdown structure for testing activities). In addition to the narrative, there is a workflow and sub-flows for the processes covering the Engagement phase. There is one presentation, the Testing Services Introduction. This presentation is given by the test lead to the Project Manager and/or Client Manager. It describes who we are, what services we provide, what deliverables they can expect from


us, and how we perform our work. There are four (4) templates used in the Engagement Phase. They are: Engagement Form; Charge Agreement; Testing Work Plan; and Project Status Report. There are two tools used during this phase, the Early Engagement Estimation tool and the Time Reporting Tracking tool. The Engagement Phase Exit Criteria includes the following description: Internal Engagement Phase Exit Criteria for Testing Deliverables is a standard that must be met to consider the work for the Engagement Phase completed. The following checklist is an evaluation point for the Test Lead prior to moving to the Test Planning Phase. Items that were not met as Exit Criteria for Testing Deliverables for the Engagement Phase should be escalated to the management level and may be presented as a risk factor to the project team.

  • NCAL Regional Testing Engagement Form (completed)
  • NCAL Regional Testing Introduction Presentation (conducted)
  • Charge Agreement (approved and signed off)
  • Testing Work Plan (approved)
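The exit-criteria evaluation described above is mechanical enough to sketch in code. In practice this was a manual checklist, not a script; the item strings below abbreviate the checklist above for illustration:

```python
# Sketch of an Engagement Phase exit-criteria check. Every item must be
# satisfied before moving to Planning; unmet items are escalated to
# management and may be presented as a risk factor to the project team.
ENGAGEMENT_EXIT_CRITERIA = [
    "Engagement Form completed",
    "Testing Introduction Presentation conducted",
    "Charge Agreement approved and signed off",
    "Testing Work Plan approved",
]


def evaluate_exit(status: dict) -> list:
    """Return the unmet criteria; an empty list means the phase may close."""
    return [item for item in ENGAGEMENT_EXIT_CRITERIA if not status.get(item)]


status = {item: True for item in ENGAGEMENT_EXIT_CRITERIA}
status["Testing Work Plan approved"] = False
unmet = evaluate_exit(status)
if unmet:
    print("Escalate to management as project risk:", unmet)
```

The same pattern applies to every phase: the exit criteria of one phase become, in effect, the entrance criteria of the next.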

Test Planning

As with each test phase, there is a description to assist the test lead with their activities during this phase. This is where planning starts. It begins with reviewing the Entrance Criteria for the phase, which is similar to the exit criteria for the previous phase. It also describes which documents should be reviewed, revised if necessary, and then signed off by the funding authority. The Test Lead then completes the Readiness Assessment for Testing (RAFT) form and evaluation. This consists of a series of questions that the test lead asks the Project Manager and/or Client Manager to determine if the project is following the Kaiser software development life cycle (SDLC), and if so, where the project is in the SDLC. The project will be evaluated in several areas, and areas where it is deficient will be highlighted in the tool. This is a non-threatening way of identifying risk early on and beginning risk mitigation discussions. During planning, an initial requirements verification process is performed. The first draft of the Test Plan is begun. A Test Coverage and Traceability Matrix is begun. Finally, the Planning Phase Exit Criteria (Testing Deliverables) Checklist is reviewed. In the workbook there is a narrative description of what the Test Lead needs to do for each of the activities contained in the phase, and tips on how they can best be accomplished. These narratives are followed by process workflows which describe step by step what needs to be done by whom, the locations of templates, and where to store or submit documents. Each phase contains lists of the presentations which may be pertinent, templates to be used, and tools that may be used. To give you an idea of what these could include, the following are from the Planning Phase Exit Criteria (Testing Deliverables):

Charge Agreement Addendum (complete, if applicable)
RAFT and RAFT Recommendations (presented and approved)
Testing Work Plan Addendum (approved, if applicable)
Initial Requirements Review (completed)


Requirements Repository (stored in the tool selected by the project)
Test Coverage and Traceability Matrix (initiated and reviewed)
Test Plan (signed off)
Lessons Learned Initial Data Collection Matrix (if applicable)

Preparation Phase

If you consider that the test planning process is similar to peeling an onion, this is the next level of detail. In this phase, as more information becomes available, test artifacts created in previous phases may need revision. In this phase there is a review of the logical and physical solution design. The Test Coverage and Traceability Matrix is finalized. Test Cases are developed and reviewed. The review process consists of an internal peer review of test cases, and an external client review and signoff, which includes client prioritization of the test cases. The internal review is one of several steps used to share knowledge with other test team members so that no one team member is the only person knowledgeable enough to perform the testing in that area. When a peer signs off on a test case review, they are saying not only that they agree that the test cases are adequately developed and contain sufficient detail, but also that they could execute them and evaluate the results. This also allows flexibility in moving resources. The workbook contains narrative descriptions of what the Test Lead needs to do to accomplish each of these tasks, with tips and estimates of average durations for many of the deliverables.

Test Execution Phase

The Test Execution phase is where the documented, reviewed, and approved test cases are executed in the manner described in the test plan. This is what most people think is the only thing we do – testing. Actually, as testing professionals we know that this is only one phase in the testing process. As with all other test phases, the workbook contains narrative descriptions of each of the phase deliverables. There are workflows for the process and sub-processes. There are templates for the testing artifacts, and there are tools. Test Execution begins with a Test Readiness Review. This review is conducted by the test lead and is attended by core project members. The purpose of the review is to determine if all is in place and ready for the test execution to begin. Its purpose is described in the workbook as:

  • To confirm that all testing entrance criteria are met.
  • To ensure that all dependencies and deliverables from other teams are completed.
  • To have an agreement that testing can be conducted.
  • To identify associated risks and document them in the Risk Reporting form for tracking.

The Test Execution Phase ends with the Test Summary Report. This report describes what was accomplished in testing and the major results of what we found. It is a formal description of what occurred during testing, including variances of test items from their design specifications. It explains any variances from the test plan, test designs, or test procedures.
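The Test Coverage and Traceability Matrix initiated in Planning and finalized in Preparation is, at its heart, a mapping from requirements to test cases. A minimal sketch of the two questions such a matrix answers; the requirement and test-case IDs are invented for illustration:

```python
# Sketch of a Test Coverage & Traceability Matrix: each requirement maps to
# the test cases that cover it. Two uses: find coverage gaps before
# execution, and trace which cases to re-run when a requirement changes.
matrix = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test case yet: a coverage gap to close before execution
}


def coverage_gaps(matrix: dict) -> list:
    """Requirements with no covering test case, sorted for reporting."""
    return sorted(req for req, cases in matrix.items() if not cases)


def cases_for(matrix: dict, requirement: str) -> list:
    """Forward trace: test cases to re-run if this requirement changes."""
    return matrix.get(requirement, [])


print(coverage_gaps(matrix))         # ['REQ-003']
print(cases_for(matrix, "REQ-001"))  # ['TC-101', 'TC-102']
```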


End Of Engagement Phase

The End Of Engagement Phase has two main testing artifacts created or completed. The first is Lessons Learned. Often Lessons Learned input is created by the test team during each of the testing phases. In this phase, they combine the previously collected input and conduct a final Lessons Learned with the test team. Please keep in mind that this is an “internal to testing” Lessons Learned. There is often also a project Lessons Learned which is conducted by the Project Manager. The final testing artifact created is the End of Engagement Plan. This is an internal document and does not have to be sent to the project team. The purpose of this document is to provide a summary of all project information related to the testing engagement, and to list all activities and deliverables with their indicated location. The objective of this document is to provide generic project information; to provide historical information on project stakeholders, their roles, and responsibilities; and to specify all testing activities, deliverables, knowledge transfer, timelines, risks, and assumptions to assure a smooth re-engagement and re-use of pertinent information, documentation, and testing artifacts.

Standards Outside of the Testing Life Cycle

There were some standards, tools, and templates that we also created which fall outside of the Testing Life Cycle. They are helpful to us, and may be of value to you if you determine that you need to add more process rigor to your testing approach. They include:

Standard Interview Approach – Remember that we had to grow the team from 60 to 125 in six months. This meant that we needed to have a standard approach to interviewing, because we would not always have the same interviewers. We developed lists of questions, areas to score, a scoring template, a rating scale, etc.

New Employees Testing Orientation – This was to indoctrinate each new tester and lead into our approach for testing; to give them the same mindset our current team had on the processes we had developed and how they were used. It also included the usual things a new employee needs to know that are not taught by HR.

Project Transition – We found that sometimes testing responsibilities were passed to us from other groups. This was a challenging area, since everyone approaches testing differently and passes knowledge on in different ways. We developed this area out of self-defense, and to standardize best practices.

Resource Transition – Because we were using the “pool model”, we had to be adept at transitioning resources on and off projects effectively.

Project & Resource Allocation Tracking – With 60 to 125 people working on 20 to 40 projects at any given time, we had to come up with a way to track the projects and the resources to make sure we maximized efficiency, kept everyone busy, and didn’t drop the ball on any projects. This is our major tool for management reporting on resource utilization.

Cross Area Resource Utilization Tracking – As if it was not complicated enough to keep track of all of the work on our plate, we came up with a process to share resources across the different Kaiser regions and departments. This was to meet the needs of peak test loads in one region with resources from another region that was not fully utilized at the moment. This reduced the need for short-term contractors and greatly improved the ramp-up time to get additional testing resources up and running on a project.

Key Learnings

Some of the key learnings we have from having gone through this process are:

  • Continuous and consistent process maintenance is required. You are never done.
  • Constant improvements can be driven across the organization with unified processes.
  • The best way to grow an organization is to implement common processes, tools, and templates.

  • The road to success is to learn, develop, improve, share, and implement.

Summary

This has been a description of what one testing group did when they found they had no formal processes or standards and needed to institutionalize their best practices in a way that could be used by all of the testers and leads on all of their projects. They also needed to meet the needs of new people who would be brought in at future dates, and who would need these same uniform ways of performing the testing job. As the processes were identified and implemented, they were put together into the “Framework For Testing”. It was the outcome of a six-month process development/improvement effort. Implementation of this framework has resulted in high quality processes, templates, tools, and presentations that support the implementation of a standardized testing methodology. With the implementation of these processes, testing deliverables were met and customer satisfaction has greatly improved. We have been able to support timely deployments within the defined budget. This methodology has worked effectively on our varied and diverse projects and on a changing IT landscape. The final thought I would like to leave with you is that what we have done here can be done in any organization. It does not matter how large the test team is, what type or how many applications they test, or what type of business they support. You too can develop and implement a “Framework For Testing” which will lead to repeatable success!