User Interface Testing with Microsoft Visual C#


  1. T13 Concurrent Session, Thursday 10/25/2007, 1:30 PM
     User Interface Testing with Microsoft Visual C#
     Presented by: Vijay Upadya, Microsoft
     Presented at: The International Conference on Software Testing Analysis and Review, October 22-26, 2007; Anaheim, CA, USA
     330 Corporate Way, Suite 300, Orange Park, FL 32043
     888-268-8770 · 904-278-0524 · sqeinfo@sqe.com · www.sqe.com

  2. Vijay Upadya
     Industry Experience: I have been involved in software testing for over 9 years, the last 7 at Microsoft. I am currently working in the Microsoft Visual C# group, where I primarily focus on test strategy, test tools development, and test process improvements for the team.
     Speaking Experience: Speaker at QAI International Quality Conference, Toronto, 2006.

  3. User Interface Testing with Microsoft Visual C#
     A case study on the Visual C# team's approach to UI testing
     Vijay Upadya, Microsoft

  4. Agenda
     - Introduction
     - Problem definition
     - Path to solution
     - Testability
     - Layered approach to testing
     - Results
     - Conclusion

  5. Objectives
     - How to test UI-centric components by bypassing the UI
     - How to design software for testability
     - How to leverage testability to adopt data-driven testing

  6. Overview of SUT
     - System under test (SUT) = Visual Studio .NET
     - Component under test = Visual C# code editor
     - Features under test = Refactoring, Formatting, etc.
     - Test data = C# source code

  7. System Under Test (SUT)

  8. The way we tested before
     [diagram: each test (Test 1, Test 2, ...) bundles its own execution steps and test data and drives the SUT directly through its UI]

  9. Code example – UI test

        // Invokes rename dialog (an external dependency on the UI)
        public void InvokeRenameDialog(string oldString, string newString)
        {
            // Search for the string to be renamed
            if (Utilities.Find(oldString) != FindResult.Found)
                throw new Exception(oldString + " not found!");

            Utilities.DoRename(newString);
            Utilities.Sleeper.Delay(waitDuration);  // UI sync

            // Confirm the preview changes window loaded
            if (!PreviewChangesRenameDialog.Exists(app))
                throw new Exception("PreviewChanges window did not load!");
        }

  10. Problem definition
      - All test automation was written through the UI
      - The product UI changed constantly until very late in the product cycle
      - Tests took unnecessary dependencies on other features
      - The larger surface area in tests increased the probability of false failures
      - Many test failures were caused by UI synchronization issues

  11. Consequence
      - Automation generated high noise
      - The team spent a lot of time investigating non-product-related test failures

  12. Path to solution
      - Investigated ways of testing the core functionality behind the UI by bypassing it
      - Came up with a list of test hooks in the product that tests could call into directly
      - Wrote a minimal set of targeted UI tests

  13. What is testability?
      - Testability is the characteristic of a piece of software that enables all of its code paths to be exercised by an automated test in an efficient manner.
      - In other words: "How expensive is it to test?"
      - Testability is determined by the SOCK analysis: Simplicity, Observability, Control, and Knowledge of expected results.

  14. Testability example – Rename refactoring
      - Simplicity: Have a clear separation between the UI code and the code that actually performs the refactoring
      - Observability: Need the following testability hook in the product to get validation information:

            HRESULT RenameAndValidate(
                /* in */          BSTR oldName,
                /* in */          BSTR newName,
                /* out, retval */ BSTR* pValidationResult);

  15. Testability example (cont...)
      - Control: Need programmatic access for invoking the feature:

            HRESULT Rename(
                /* in */ BSTR oldName,
                /* in */ BSTR newName);

        This should provide the same functionality as the rename refactoring dialog, but in a programmatic way.
      - Knowledge of expected results: The HRESULT above should contain error information for 'Rename' failure cases
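     To make these hooks concrete: a test written in C# could reach them through COM interop along these lines. This is a minimal sketch; the interface name, the GUID, and the way the test obtains the hook are assumptions for illustration, and only the two method shapes come from the slides above.

        using System;
        using System.Runtime.InteropServices;

        // Assumed interop shape for the hooks on slides 14-15; the GUID is a placeholder.
        [ComImport, Guid("00000000-0000-0000-0000-000000000001")]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        interface IRenameTestHook
        {
            // Control: programmatic equivalent of the rename refactoring dialog.
            // A failure HRESULT surfaces as a .NET exception through interop.
            void Rename([MarshalAs(UnmanagedType.BStr)] string oldName,
                        [MarshalAs(UnmanagedType.BStr)] string newName);

            // Observability: performs the rename and returns validation information.
            [return: MarshalAs(UnmanagedType.BStr)]
            string RenameAndValidate([MarshalAs(UnmanagedType.BStr)] string oldName,
                                     [MarshalAs(UnmanagedType.BStr)] string newName);
        }

        class RenameHookTest
        {
            // How 'hook' is obtained from the editor under test is not shown in the deck.
            static void Run(IRenameTestHook hook)
            {
                string validation = hook.RenameAndValidate("Class1", "NewClass1");
                Console.WriteLine("Validation result: " + validation);
            }
        }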

  16. Why is testability important?
      - Reduces the cost of testing in terms of time and resources
      - Reduces the time to diagnose unexpected behavior
      - Increases the effectiveness of tests and the quality of the product

  17. Testability - Best practices
      - Add a "Testability" section to feature spec templates
      - Ask "How are we going to test this?" in spec reviews
      - Understand why and how a testability hook will be used before asking for it
      - Prioritize testability hooks based on milestone exit criteria
      - Legacy code: it's never too late to add testability

  18. Layered approach to testing
      [diagram: test types mapped to SUT levels - scenario tests run at the UI level and the object model level (integration), component tests at the component level, and unit tests at the unit level]

  19. Layer definitions
      - UI level: Features are automated by manipulating UI elements such as windows and controls
      - Object model level: Functionality is accessed programmatically, bypassing the UI
      - Component level: Multiple components are tested together, but without the entire system being present
      - Unit level: Individual APIs are tested in isolation
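     One way to realize these layers in test code is to hide the target level behind a common interface, so the same test logic can be bound to either the UI or the object model. This is a sketch; the driver names are assumptions, not the team's actual types.

        // Common contract for executing a rename, whatever the target level.
        interface IRenameDriver
        {
            void Rename(string oldName, string newName);
        }

        // UI level: drives the actual dialog, as in the code example on slide 9.
        class UiRenameDriver : IRenameDriver
        {
            public void Rename(string oldName, string newName)
            {
                // Find the identifier, invoke the dialog, synchronize with the UI...
            }
        }

        // Object model level: calls the testability hook directly, bypassing the UI.
        class ObjectModelRenameDriver : IRenameDriver
        {
            private readonly IRenameTestHook hook;  // interop sketch after slide 15
            public ObjectModelRenameDriver(IRenameTestHook hook) { this.hook = hook; }

            public void Rename(string oldName, string newName)
            {
                hook.Rename(oldName, newName);
            }
        }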

  20. Layered approach – In action
      [diagram: a test engine reads a test intent file plus test data 1..n and executes the same test against a selected target level of the SUT - UI level or object model level - without changing the test itself]

  21. Sample test intent file (XML) - Rename

        <action name="CreateProject" />
        <action name="AddFile" fileName="TC1.cs" />
        <action name="OpenFile" fileName="TC1.cs" />
        <action name="Rename" startLoc="Begin" oldName="Class1" newName="NewClass1" />
        <action name="AddConditionalSymbol" symbol="VERIFY" />
        <action name="Build" />
        <action name="Run" />
        <action name="CleanProject" />
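     The intent file stays declarative; a small engine turns each action into a call on the driver for the chosen target level. The sketch below assumes the driver interface from the previous sketch and a root element wrapping the action list; the engine itself is illustrative, not the team's actual implementation.

        using System;
        using System.Xml.Linq;

        class TestEngine
        {
            private readonly IRenameDriver renameDriver;  // bound per target level

            public TestEngine(IRenameDriver renameDriver)
            {
                this.renameDriver = renameDriver;
            }

            public void Execute(string intentFile)
            {
                // Assumes the <action> list above is wrapped in a single root element.
                foreach (XElement action in XDocument.Load(intentFile).Root.Elements("action"))
                {
                    switch ((string)action.Attribute("name"))
                    {
                        case "Rename":
                            renameDriver.Rename((string)action.Attribute("oldName"),
                                                (string)action.Attribute("newName"));
                            break;
                        // "CreateProject", "AddFile", "Build", "Run", ... dispatch
                        // to their own drivers in the same way.
                    }
                }
            }
        }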

  22. Sample test data file

        class /*Begin*/Class1 {
            public void Method() {
            }
        }

        class Test {
            static int Main() {
        #if VERIFY
                NewClass1 c = new NewClass1();
                c.Method();
                return 0;
        #endif
                return 1;
            }
        }

      The /*Begin*/ marker is the startLoc that the intent file's Rename action points at. After Class1 is renamed to NewClass1 and the VERIFY symbol is defined, building and running this file returns 0 only if the rename propagated correctly; otherwise the build fails or the program returns 1.

  23. Layered approach – Best practices
      - Keys to the success of this approach:
        - Separation of "test intent" from "test execution"
        - Leveraging testability hooks
      - This enables:
        - Running the same test on multiple test data
        - Running the same test at multiple target levels

  24. Results
      - Test robustness: ~100% test robustness
      - Test authoring: Easier and faster to write tests
      - Coverage: High coverage, as the same tests run on multiple targets
      - Performance: Tests run 70% faster than the previous UI tests
      - Communication: Increased interaction with developers and helped testers understand the components better

  25. Conclusion
      - Test UI-centric components without going through the UI
      - Investing early in testability really pays off
      - Focus on automating at non-UI levels to get highly maintainable tests
      - UI testing doesn't go away completely
      - Separating test intent from test execution helps achieve high coverage

  26. Questions?

  27. User Interface Testing with Microsoft Visual C#
      A case study on the Visual C# team's approach to UI testing
      Vijay Upadya, Microsoft
      08/07/2007

  28. Abstract
      Manually testing software with a complex user interface (UI) is time-consuming and expensive. Historically, the development and maintenance costs associated with automating UI testing have been very high. This paper presents a case study on the approaches and methodologies the Visual C# test team adopted in answer to the testing challenges that had plagued the team for many years. The paper explains how the team designed testability into the product, Microsoft Visual Studio 2005. These testability features allowed the test team to create a robust and effective test suite that bypasses the UI completely. However, testing through the UI remains important for uncovering integration issues, so the paper also explains how the team developed an approach that reuses the same tests to exercise features both through the UI and through the testability APIs. This resulted in a dramatic reduction in the costs of developing and maintaining tests.
