January 2015

Volume 30 Number 1


Visual Studio 2015 - Web-Based Test Case Management with TFS

By Manoj Bableshwar | January 2015

Application Lifecycle Management with Team Foundation Server (TFS) is all about leveraging an integrated toolset to manage your software projects, from planning and development through testing and deployment. As a core part of Team Foundation Server, the Test Hub enables you to create and run manual tests through an easy-to-use Web-based interface that can be accessed via all major browsers on any platform. In this article, I’ll delve into the phases of manual testing—planning and creating tests, reviewing them with stakeholders, running the tests, and tracking the test progress of the team. I’ll touch on different value propositions, such as the flexibility to customize workflows; end-to-end traceability; criteria-based test selection; change tracking and audit; sharing test steps and test data; stakeholder review; and, most important, ease of use, especially for testers who’ve been using Excel-based frameworks for manual testing. To access the Test Hub in on-premises TFS, click the Test tab, just the way you access the Work tab to manage backlogs or the Build tab to monitor builds. Alternatively, you can sign up for a free Visual Studio Online (VSO) account at visualstudio.com and activate the 90-day trial to try out the Test Hub.

Plan Test Activity for the Sprint

Sprints or iterations are units of planning for teams that practice Agile or Scrum methodologies. It makes sense to plan test efforts for a sprint, just as it’s done for user stories. To get started with test planning, create a test plan by providing a name and associating it with a team and a sprint. The test plan can have an owner and test cycle dates for out-of-band test activity such as a beta release sign-off or a user acceptance test cycle. Test plans in TFS are work items, so you get all the benefits of work items, such as change tracking with work item history; permissions based on area paths; rich-text summary fields; file attachments and more. However, the most important benefit of work items is customization. Work item customization makes it possible to align the workflows and fields of artifacts used for tracking activities with the business processes used by the organization. This concept can be extended to better reflect the test activities practiced as part of your software development model, by customizing test plan work items. Moreover, the process of customizing test plan work items is similar to that of other work items, such as bugs or user stories. For example, the default states of a test plan can be changed from Active and Inactive to, say, Authoring, Testing and Archived. Additional user fields, such as reviewers, approvers, sign-off owner and so forth, needed for accountability or audit requirements, can be added to the test plan. As you integrate your processes into the test plan, you may want to restrict access to it, so that only certain people, such as team leads or test managers, can create and modify test plans. The Manage Test Plans permission can be used to moderate access to test plans at a user or team level.
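On-premises, test plan states and fields are customized the same way as any other work item type: export the XML definition with witadmin, edit it and import it back. The fragment below is a hedged sketch of what the Authoring/Testing/Archived states and a custom sign-off field might look like; the refname MyCompany.SignOffOwner and the transitions shown are illustrative, not a complete definition:

```xml
<!-- Export with: witadmin exportwitd /collection:<url> /p:<project>
         /n:"Test Plan" /f:TestPlan.xml
     then re-import the edited file with witadmin importwitd.
     This is a fragment; a full definition has more transitions and a FORM. -->
<WORKITEMTYPE name="Test Plan">
  <FIELDS>
    <!-- Custom field for audit/sign-off; the refname is a made-up example -->
    <FIELD name="Sign-off Owner" refname="MyCompany.SignOffOwner" type="String" />
  </FIELDS>
  <WORKFLOW>
    <STATES>
      <STATE value="Authoring" />
      <STATE value="Testing" />
      <STATE value="Archived" />
    </STATES>
    <TRANSITIONS>
      <TRANSITION from="" to="Authoring">
        <REASONS><DEFAULTREASON value="New" /></REASONS>
      </TRANSITION>
      <TRANSITION from="Authoring" to="Testing">
        <REASONS><DEFAULTREASON value="Authoring complete" /></REASONS>
      </TRANSITION>
    </TRANSITIONS>
  </WORKFLOW>
</WORKITEMTYPE>
```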

Once you’ve set up a test plan, you’ll be eager to create and run tests. But before that, it’s important to think about the best way to organize those tests to enable reuse and end-to-end traceability of test efforts. Test suites are artifacts that are contained in a test plan and enable grouping of test cases into logical units. Test suites are of three types: requirement-based test suites (RBS), query-based test suites (QBS) and static test suites. Static test suites work like folders to organize RBS and QBS. If you want to group test cases yourself, you can manually pick and add test cases to static test suites.

Like test plans, test suites are work items, so all the customization benefits mentioned earlier apply to test suites. Some examples of custom fields for a test suite are summary fields describing instructions to set up the test application and fields to describe the nature of tests such as functional or integration, test complexity, and so on. Just as with test plans, you can moderate access to test suites at a user or team level with the Manage Test Suites permission. Changes to the test cases contained in the suite, as well as to the owner, state or other fields, can be tracked in the test suite work item history.

End-to-end Traceability with Requirement-Based Suites

Requirement-based suites correspond to user stories (or product backlog items for scrum and requirements for CMMI-based projects) that the team is working on in the current sprint. The goal of creating an RBS by picking a user story is to enable traceability. Test cases created in an RBS are automatically linked to a user story, making it easy to find the scenarios covered to test the user story. Bugs, if any, that are filed while running these test cases are also linked to the user story and the test case, thus providing end-to-end visibility of a user story, its test scenarios and open bugs. This helps you measure the quality and ship-readiness of a feature.  

Criteria-Based Testing with Query-Based Suites

Regression-test coverage is as important as test coverage for new features. Teams typically set up regression-test coverage based on criteria—all priority 1 tests, all end-to-end scenario tests, all automated tests and so forth. Test Hub supports criteria-based testing with QBS; these suites are created by defining a query on test cases. Test cases that match the query criteria are automatically populated in the QBS, without any need to manually refresh the QBS. QBS can also be used in other scenarios, such as tracking test cases for bugs that are being fixed in the current sprint.
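Under the covers, the criteria of a QBS are just a work item query over test cases. For example, a regression suite of all priority 1 test cases under a team's area path might be defined by a WIQL query along these lines (the area path and tag values are placeholders):

```sql
SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
  AND [Microsoft.VSTS.Common.Priority] = 1
  AND [System.AreaPath] UNDER 'FabrikamFiber\Web'
  AND [System.Tags] CONTAINS 'Regression'
```

Any test case that starts or stops matching these conditions enters or leaves the suite automatically, which is what makes a QBS self-maintaining.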

Creating Test Cases with an Excel-Like Grid

Test cases are the basic units of testing, each containing test steps that describe a set of actions to be performed, and expected results that describe what has to be validated at each test step. Each test step can have an optional attachment, for example, a screenshot that illustrates the output. Like test plans and test suites, test cases are work items, so all benefits of work item customization apply to test cases, as well.

There are two ways to create test cases. The first option is to use the test case work item form, which lets you create one test case at a time. The second option, and the one that really lets you breeze through creating test cases, is the Excel-like grid shown in Figure 1. The grid resonates very well with manual testers, who typically would’ve written and tested their test cases in Excel. With the grid, testers can create multiple test cases at a time, fluently typing test titles, steps and expected results while navigating the grid with the Tab, arrow and Enter keys. It’s a simple experience to insert, delete, cut, copy and paste rows. What’s more, the grid can display all test case fields, such as state, tags, automation status and so on, and these fields can be bulk-marked for multiple test cases. If you have an intermittent Internet connection or are just more comfortable writing test cases in Excel, you’re welcome to do that. Just copy and paste all the test cases you’ve written in Excel into the grid and save them to populate them into the system. In fact, if your team is just adopting the TFS Test Hub for testing, the grid can help you import your test cases from Excel. Check out the Test Case Migrator Plus utility at tcmimport.codeplex.com for advanced import requirements from Excel.

Figure 1 The Excel-Like Grid Can Be Used to Create Multiple Tests

Share Test Steps and Test Data

Some test scenarios need specific test data as input to be meaningfully tested. Also, it makes sense to repeat tests with different variants of test data, for example, valid and invalid input sets or different combinations of items in a shopping basket. Parameters can be used to associate a test case with test data. With mature test teams that cover large and complex test scenarios, it’s quite possible that many test cases use similar test data to drive testing. Shared parameters can help you consolidate and centrally manage such test data. You can also import test data from Excel and use it to drive tests through shared parameters.
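To make the data-driven behavior concrete, here's a small sketch (with illustrative names, not the product's internals) of how parameter rows map to test iterations: each @parameter token in a step is bound to one row of test data, and each row produces one iteration of the test:

```python
def bind_iterations(steps, data_rows):
    # For every data row, substitute @name tokens in each step's action
    # and expected result, yielding one bound iteration per row.
    iterations = []
    for row in data_rows:
        bound = []
        for action, expected in steps:
            for name, value in row.items():
                action = action.replace("@" + name, value)
                expected = expected.replace("@" + name, value)
            bound.append((action, expected))
        iterations.append(bound)
    return iterations

steps = [("Enter @username and @password, click Sign in",
          "User @username lands on the home page")]
data = [{"username": "alice", "password": "Pa55w0rd"},
        {"username": "bob", "password": "wrong"}]
runs = bind_iterations(steps, data)
# One iteration per data row: len(runs) == 2
```

Shared parameters centralize the `data` side of this picture, so many test cases can reference one maintained table of rows.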

Just as with the test data, it’s possible the test steps are common across multiple test cases, for example the steps to log into an application or navigate to a form. Such common test steps can be consolidated into shared steps. The advantage of using shared steps is that a change, such as an updated application URL or an additional authentication step while logging in, can be updated in the shared step. Changes to shared parameters or shared steps will reflect across all referenced test cases instantly.

Review Tests with Stakeholders

Before running tests, it’s a good idea to share the tests with stakeholders, such as product managers or business analysts, to solicit their comments. In cross-division or cross-organization development and test teams, such as outsourced test projects, a formal sign-off may be required before proceeding with test execution. To share tests with stakeholders for review, you can export a test plan or a set of test suites by e-mail, or print them to PDF or hard copy. The output of the e-mail dialog can be edited before sending it to stakeholders. You can also copy and paste into Word documents when stakeholders are required to respond with inline review comments.

Running Tests with the Web-Based Test Runner

To prepare the team to run tests, the test lead can assign tests to team members. The owner of a test case and its tester can be different people; the test lead has the flexibility to shuffle testers or even enlist vendors to execute tests. The most valuable feature of the Web-based Test Runner, which is used to run manual tests, is its cross-platform support. Because the Test Runner is browser-based, you can run it on any platform with a major browser—Internet Explorer, Chrome, Firefox or Safari.

The Test Runner presents the test steps and expected results in a narrow window, making it easy to read and execute the steps on the application being tested (see Figure 2). Image attachments that were created while writing the test case are visible and can be zoomed into. If your test case is driven by test data, then each row of parameter values included in the test case will correspond to one iteration of the test.

Figure 2 Web-Based Test Runner

A test can have different outcomes—Passed, Failed, Blocked and Not Applicable. The Blocked state can be used when tests are waiting on an external dependency, such as a bug fix, and Not Applicable is useful when a test doesn’t apply to the current feature—a service release, for example. As you walk through validating the test steps, you mark them passed or failed. For failed steps, you can jot down comments about the issues you observed while testing. You can report a failure to developers by creating a bug, right in the context of the Test Runner session. The bug is auto-populated with all the steps performed before you encountered the issue, and it can be updated with additional comments and screenshots before filing. The bug is linked to the test case that was being run and to the requirement being tested, thus enabling end-to-end traceability. On the other hand, if you find that the discrepancy between the expected results and the application exists because the application was recently updated, you can fix the test case inline, while it’s running. If you’re in a really long test session running many tests and need to take a break, you can pause the tests and resume them later. And if a test is failing for you and you want to know when it last passed, or which team member last ran it successfully, the test case’s recent results will answer those questions.

While the Test Runner helps you walk through each test step of a test case in detail, the bulk-mark feature helps you pass or fail multiple tests at once. If you’re validating high-level test scenarios highlighted by the test case title, but not actually walking through detailed test steps, you can quickly mark each test’s outcome, without launching the Test Runner. The bulk-mark feature is particularly helpful when a large number of tests have been executed offline and their status has to be reflected back in the system.
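Conceptually, bulk-marking is one outcome update applied to every selected test at once. A minimal sketch of that idea over plain local records (the field names here are illustrative, not the service's schema):

```python
def bulk_mark(test_points, ids, outcome):
    # Apply a single outcome to every selected test point; reject
    # anything outside the four outcomes the article describes.
    allowed = {"Passed", "Failed", "Blocked", "Not Applicable"}
    if outcome not in allowed:
        raise ValueError("unsupported outcome: " + outcome)
    for point in test_points:
        if point["id"] in ids:
            point["outcome"] = outcome
    return test_points

points = [{"id": 1, "outcome": "Active"},
          {"id": 2, "outcome": "Active"},
          {"id": 3, "outcome": "Active"}]
bulk_mark(points, {1, 2}, "Passed")  # points 1 and 2 pass; 3 is untouched
```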

Track Test Progress with Charts

“Is my feature ship-ready?” “Is my team on track to complete testing this sprint?” “Are all the test cases I planned for this sprint ready to run?” These are some of the questions test leads, test managers and stakeholders want answered. The Test Hub lets you create a rich set of charts to help answer such questions (see Figure 3). Charts come in two sets: test case charts that can be used to track the progress of test authoring activities, and test result charts that can be used to track test execution activities. These charts can be different kinds of visualizations—pie, column, stacked bar, pivot table and so forth. Test case fields, such as owner, state, priority and the like, can be used as pivots for test case charts. Test result charts offer test suite, outcome, tester, run by, priority and more as pivots. For example, to find the test status of user stories, you can create a stacked bar chart with test suite and outcome as pivots for all the requirement-based suites being tested in the current sprint. Charts can be created either for a set of test suites or for a test plan, to roll up information for the entire plan. You can also share the insights with stakeholders by pinning the charts to the homepage. Finally, all the charts display real-time metrics, without any lag or processing delay.
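The data behind such a chart is a simple pivot: count test results per pair of pivot values. A minimal sketch, with illustrative field names, of the numbers behind a stacked bar of outcome by suite:

```python
from collections import Counter

def pivot_results(results, row_field, col_field):
    # Count results for each (row, column) pivot pair, e.g. (suite, outcome);
    # each pair's count becomes one segment of a stacked bar.
    return dict(Counter((r[row_field], r[col_field]) for r in results))

results = [
    {"suite": "Story 101", "outcome": "Passed"},
    {"suite": "Story 101", "outcome": "Passed"},
    {"suite": "Story 101", "outcome": "Failed"},
    {"suite": "Story 102", "outcome": "Passed"},
]
chart = pivot_results(results, "suite", "outcome")
# chart[("Story 101", "Passed")] == 2
```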

Figure 3 Tracking Test Results

Wrapping Up

The Test Hub isn’t just for manual testers. It’s a tool that product owners and business analysts can use to gauge how their features measure up against the acceptance criteria. The grid can be used to keep track of acceptance criteria for requirements, and can later be used for sign-off. To summarize, the Test Hub offers:

  • Customization of workflows with test plan, test suite and test case work items.
  • End-to-end traceability from requirements to test cases and bugs with requirement-based test suites.
  • Criteria-based test selection with query-based test suites.
  • Excel-like interface with the grid for easy test case creation.
  • Reusable test steps and test data with shared steps and shared parameters.
  • Sharable test plans, test suites and test cases for reviewing with stakeholders.
  • Browser-based test execution on any platform.
  • Real-time charts for tracking test activity.

Test Hub provides an easy yet comprehensive way to test the user stories you plan to release in a sprint. Test Hub is available on-premises with TFS, as well as in the cloud with Visual Studio Online. Get started with a free 90-day trial right away at visualstudio.com. To see Test Hub in action, watch the demo at aka.ms/WebTCMDemo.


Manoj Bableshwar is a program manager with the Visual Studio Online team at Microsoft. His team ships Manual Testing Tools to Visual Studio Online.

Thanks to the following Microsoft technical expert for reviewing this article: Ravi Shanker
Ravi Shanker is a principal program manager with the Visual Studio Testing Tools team.