Oct-28-2008

Get Smart: Storyotypes in your backlog

Post written by Chris Spagnuolo.

As I’ve been looking through backlogs from various organizations and teams, I’ve started to notice a trend. Well, less a trend than a set of recurring similarities. The similarities I’m seeing are in the user stories in the backlogs. Many of the backlogs I’ve seen have user stories that say something like “Get smart on the dojo toolkit” or “Find out more about ASP.NET MVC“. I call them the “Get Smart” stories. The Get Smart stories are stereotyped stories that contain little or no detail. Storyotypes. It’s not that I don’t believe in stories for research spikes. What I don’t like is that these stories don’t follow the simple INVEST rule of user stories. If you’ve never heard of the INVEST rule, it basically says that user stories should be:

  • Independent
  • Negotiable
  • Valuable
  • Estimable
  • Small
  • Testable

Thinking back to the Get Smart stories, I think they are independent and valuable, but they fall short of being negotiable, estimable, sized appropriately, and especially testable. Now, I know that these stories are generally used to define research spikes and therefore to produce better estimates for other user stories in the backlog. So I can even let go of estimable, negotiable, and small. What I want to focus on is the “T”: Testable. If all your story says is “Get smart on the dojo toolkit“, how does your team test whether you “got smart“? There are no specifics in the story about why your team needs to get smart. What is the reason for the research? As my colleague Ben Carey has described it, the testable attribute ensures that our acceptance criteria are not ambiguous. To make something testable, we need to make it measurable and specific. And I believe this has to apply to the Get Smart stories as well.

So, instead of “Get smart on the dojo toolkit“, try rewriting the story to say something like “In order to send complex data between the client and server as JSON, as a delivery team we want to research the dojo toolkit and ASP.NET MVC.” And if you want to go with the traditional user story format, you could write it as “As a delivery team, we want to research the dojo toolkit and ASP.NET MVC so that we can figure out a way to send complex data between the client and server as JSON“. These stories are much more testable because they’re specific and measurable. When the team completes one of these stories, they should be able to describe how to send complex data between the client and server as JSON using either or both of the dojo toolkit and ASP.NET MVC. In fact, they would probably be able to demo a prototype of the functionality based on the stories. And because these stories are specific, they prevent the team from wandering through documentation, tutorials, and code samples without any aim or goal. The research in this story is confined only to understanding how to send data as JSON. Nothing else. So, be specific and avoid storyotypes. Do that and you and your team will get smart!
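
To make the outcome concrete, here’s a minimal sketch of what the demoable prototype from such a spike might look like: an ASP.NET MVC controller action that serializes a complex object to JSON. The SurveyPoint class, the controller, and the sample values are all hypothetical, invented for illustration; only the Json() helper and the AcceptVerbs attribute come from the actual ASP.NET MVC API.

    // Hypothetical spike prototype: an ASP.NET MVC action that returns a
    // complex object graph as JSON using the framework's Json() helper.
    // SurveyPoint and its values are invented for illustration.
    using System.Web.Mvc;

    public class SurveyPoint
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public string[] Tags { get; set; }
    }

    public class SurveyController : Controller
    {
        // POST /Survey/Point returns the object serialized as JSON,
        // ready for a dojo client to parse on the other end.
        [AcceptVerbs(HttpVerbs.Post)]
        public JsonResult Point()
        {
            var point = new SurveyPoint
            {
                Latitude = 39.74,
                Longitude = -104.99,
                Tags = new[] { "benchmark", "control" }
            };
            return Json(point);
        }
    }

On the client side, the team could demo fetching this response with something like dojo’s xhrPost and walking the resulting object, which gives the product owner a concrete result to accept against the story’s stated goal.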


  1. Ben Carey said,

Great point. If you do make the stories testable, then you could go as far as including sample tests as an outcome of the story. In your example with MVC, the acceptance criteria could certainly still be automated, and the resulting tests could be leveraged as documentation.

If a team member were to play the ASP.NET MVC story, they could write tests that demonstrate the data exchange and distribute the tests as a representation of what was learned. Visibone does this on their quick-reference cards. I think the tests end up being a great way to distribute the knowledge to the team, in addition to the original learning.
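
    (A minimal, hypothetical sketch of the kind of test Ben describes, reusing the made-up SurveyPoint class from the sketch in the post above; NUnit and the round-trip scenario are assumptions, not something from the comment. ASP.NET MVC’s JsonResult serializes with JavaScriptSerializer under the hood, so a test against that serializer documents the same behavior the spike explored.)

        // Capture what the spike taught as an executable test that doubles
        // as documentation for the team. Reuses the hypothetical SurveyPoint
        // class defined in the earlier sketch.
        using System.Web.Script.Serialization;
        using NUnit.Framework;

        [TestFixture]
        public class JsonExchangeLearningTests
        {
            [Test]
            public void ComplexObject_SurvivesJsonRoundTrip()
            {
                var original = new SurveyPoint
                {
                    Latitude = 39.74,
                    Longitude = -104.99,
                    Tags = new[] { "benchmark", "control" }
                };

                // JavaScriptSerializer is the serializer behind ASP.NET MVC's
                // Json() helper, so this records exactly what was learned.
                var serializer = new JavaScriptSerializer();
                string json = serializer.Serialize(original);
                var restored = serializer.Deserialize<SurveyPoint>(json);

                Assert.AreEqual(original.Latitude, restored.Latitude);
                Assert.AreEqual(original.Longitude, restored.Longitude);
                Assert.AreEqual(original.Tags, restored.Tags);
            }
        }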

  2. Oana Juncu said,

Hi Chris, we do struggle a little bit with UI stories that are hard to test. In fact, they are submitted only to a classic acceptance test by the product owner. We tried to plan the implementation of UI-testing tools, but the high degree of set-up complexity vs. the recognized value/priority to the business left it in an endlessly postponed state.
    By the way, I liked the “get smart” stories; I’ve seen something similar in my project experience as “POC stories”.

  3. Venu Tadepalli said,

    Yes, we had a story “convert data.” But, laying out the acceptance criteria helped us understand the depth of the problem.

  4. Linda Cook said,

Yes, teams sometimes have ‘placeholder’ stories. These stories are not estimated or testable in their current state. They are intended to remind everyone of important features that are date sensitive. Instead of working on these vague stories, the teams will spike the analysis so that the real stories, with test criteria, can be created and estimated. Caution is warranted to be sure not to overload (read: create waste in) the backlog with features that have little or unknown value.

  5. Michael Bolton said,

    I’m not sure I understand clearly the notion that a story is “not testable”. I think it’s because we have different notions of “testability”. To me, testability is “anything that helps us to ask or answer a question that we might have about a product”. Do you believe that “testable” means a) something that can be made subject to a Yes or No answer (or a set of them) posed by a human; b) something that can be made subject to a Yes or No answer /as asked and answered by a computer program/; or c) what I said; or d) something else?

This is important to me, a tester, because testing is not merely a process of confirmation, verification, and validation. To me, testing is a process of exploration, discovery, investigation, and learning. That learning is about test coverage–the extent to which we have tried the program in order to learn everything that matters about how the product can work and how it might fail. A program that has been subject only to confirmatory tests (to me) isn’t a program that has been tested; it’s been checked.

    Cheers,

    —Michael B.

  6. Scott Barnes said,

This is why we need strong CSMs. If the story can’t be tested, it shouldn’t be a story. If you can’t test whether it’s done, you may never be done. I have seen those as well. Good coaching quickly remedies this type of issue.

  7. Anand Kothari said,

    Great point.

With TDD (Test-Driven Development), one can make sure that the story has its test case(s) written first and that the validation criteria are free from ambiguity, which helps the development team overcome the lack of specificity. It also demands that the definition of done is crisp and measurable. This is something that an Agile team should practice and refine over time.

  8. Sharan Karekatte said,

User Stories (US) are meant to be a little generic. They are meant to spawn discussion within the team to come up with concrete Acceptance Criteria (AC). These specific (and testable) AC need to be recorded, since they form the basis for all work going forward. The Product Owners and Testers should agree on the AC. Since much thought has gone into these detailed AC, the US eventually becomes testable and preempts (for the most part) wandering/drifting.

  9. Chris said,

@Sharan K. Yes, stories are meant to be generic, but not so generic that you can’t derive value from them. Especially when you’re writing research spike stories: if these aren’t specific, teams can end up wandering too much and exploring too many non-value-oriented items.

  10. Hal Arnold said,

I agree with VenuT; we drive all our stories with story tests. When we feel comfortable with the tests [which means they’re testable], then we know we have something we can get started on, and that will usually be fairly unambiguous.

I’ve given over to calling them story tests, where we used to call them either user acceptance or customer acceptance tests. Both of these terms really confused the teams, both the customer and the technical guys. Often the tests are ‘doable’ as unit or integration tests; that is, they prove the point and provide the basis for a satisfying and successful TDD session focused on ‘customer value’, but they aren’t demo-able in a fashion that a normal non-technical customer would understand. So I have given up calling them CATs, even though they are business-level tests.
