Nov-5-2008

QA and Testing in an Agile Environment

Post written by Chris Spagnuolo.

In the past few weeks I’ve been asked about, and have been considering, exactly how to fit QA and testing into a two-week iteration. A primary concern of the folks I’ve been talking with is that QAs and testers on an agile team have nothing to do at the start of an iteration. The second concern is that we can’t keep writing new code up until the last minute of an iteration if QA is to adequately test the code; so what do the developers do at the end of the iteration? Of course, the underlying concern in both cases is keeping the QAs and the developers effectively utilized during an iteration. Software quality always seems to boil down to a utilization/cost equation, doesn’t it? Well, after giving it some thought, I think I’ve come up with a basic schedule for QAs and developers over a two-week iteration. Here’s the plan:

[Figure: Slide1.jpg, the proposed two-week iteration schedule for QAs and developers]

So, let’s take a walk through the schedule. On the first two days of the iteration, the developers get busy coding the stories in their backlog while the QAs start writing test cases/acceptance tests. The QAs should also be running these tests against the existing code base. This validates that the test harness is working correctly and that the new tests do not mistakenly pass without requiring any new code. Obviously, the new tests should also fail for the expected reason. This tests the tests themselves, in the negative: it rules out the possibility that the new tests will always pass, and therefore be worthless. OK, so that’s days one and two.
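
To make that day-one step concrete, here’s a minimal sketch in JUnit 4 of a “test the tests” run. Everything in it (the story, the Order class, the amounts) is hypothetical; the point is that the new test must go red against the existing code base, and red on the assertion we expect rather than on a missing class or an unrelated exception:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical story for this iteration: "Orders over $100 get a 10% discount."
    public class DiscountAcceptanceTest {

        // The existing implementation, before the story is played: no discount yet.
        static class Order {
            private final double subtotal;
            Order(double subtotal) { this.subtotal = subtotal; }
            double totalDue() { return subtotal; }
        }

        // Run on day one against the existing code base, this test must FAIL,
        // and fail on the assertion below (120.00 != 108.00), not on a compile
        // error or an unrelated exception. That proves the test can actually
        // detect the behavior it was written to check.
        @Test
        public void ordersOverOneHundredDollarsGetTenPercentOff() {
            Order order = new Order(120.00);
            assertEquals(108.00, order.totalDue(), 0.001);
        }
    }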

Over the next five days, the QAs begin testing any testable code that has been completed. At the same time, the developers continue coding and also start fixing any defects picked up in the testing. Pretty standard days. Code-test-fix.

OK, we’re deep into the iteration now. Days eight and nine see QA continuing to test all of the remaining code written during the iteration. Yes, ALL of the remaining code. At this point, the developers effectively stop writing “new” code and focus their energy on fixing all defects identified in testing. If the dev team has no defect work or finds itself with down-time while waiting for testing results, it can and should be helping the product owner develop and refine the user stories that are likely to be played in the next iteration. Additionally, developers can start considering the design aspects of the immediately upcoming stories and coordinating design decisions with the project architect.

Last day! Don’t start scrambling now; you’ve got a demo and review later today. So, on Friday, the developers focus on bashing any remaining defects and making the code bombproof. We’re shooting for zero defects here, folks. The QAs are spending most of their time doing some final acceptance testing (as is the product owner). That’s it. It’s noon and it’s time for your review and demo. You’ve tested and fixed everything, so you and your team can demo with total confidence. No surprises for your team: you’ve built serious quality into your software!

So, there’s my idea for a basic schedule that completely integrates QA and testing into your iteration. Now, this is only one way I think QA can be integrated into an iteration. There are probably dozens of other ways to do this and I think ultimately, the way teams work QA into their iterations really needs to be assessed on a case-by-case basis. So, I’d like to pose the question to agile teams out there: How do you currently fit QA/testing into your iterations?


  1. Dave Rooney said,

    Hi Chris,

    Absolutely, QA is part of “done”. I tell teams that I coach that the done state for a story is “done and tested by development, tested by QA and accepted by the business”. The closer you can get to doing this with individual stories, the better.

    That means you need sufficient QA capacity to handle the pipeline of completed stories coming from the developers. I also tell teams to work on one story until completion (or multiple stories if the development capacity is there), i.e. limit the work in process. After all, 2 stories completed to 100% in an iteration deliver more business value than 8 stories completed to 75%.

    For defects, I have the developers, QA and the business triage them – is it a technical issue or an unclear, misunderstood or missing requirement? Does it affect the done state of a story? If so, fix it immediately. If not, confer with the business about its priority and schedule it accordingly.

    This, of course, has the implication that the QA ‘team’ is probably larger than what would be considered normal. I don’t see that as a bad thing.

    Dave Rooney

  2. James Schmidt said,

    Another approach is to break down the QA and developer silos and create more diverse teams. Meaning, put one or more QA testers on a team with a few engineers. The definition of done becomes the team completing the item, including testing. This is simple, but it works for me.

    James Schmidt
    AdvancedMD Software

  3. Basim Baassiri said,

    I’ve posted this to the Agilistas group on LinkedIn, but I’ll repost here as well.

    I agree in theory that the philosophy of agile QA is to get testing started early and to shorten the feedback loop, and that QA ideally should start testing as soon as the code is completed. From a QA perspective, though, code complete has a different meaning than it does for development. What if the code completed by dev is a dependency for another set of features? What if the code completed by dev is not testable from an acceptance-test perspective? In practice, code gets delivered late. In my view, QA’s definition of code complete means that all of the development effort is done, not a subset of it. Testing a moving target or code base causes unnecessary churn. Here is the scenario:
    Dev completes story 1 –> QA tests it and marks as DONE
    Dev completes story 2 that affects story 1 –> QA tests it and marks as DONE

    The result here is that story 1 now has to be re-tested as a result of the changes made for story 2. Story 1 is in an unknown state of quality.

    We’ve tried this approach with some success. We also tried the approach whereby QA is an iteration behind. What I mean by behind is that the build candidate at the end of the iteration is taken and testing starts. Given that we are a small shop developing a web app and releasing very often (almost at the end of each iteration), this approach didn’t work either, as our SCM process wasn’t in place to handle it.

    As a result, we have opted for the following process:
    1. Test case definition is done early in the iteration
    2. Test case review (not being done yet, but ideally we solicit feedback from dev and story owners)
    3. We use keyword- and domain-based automation techniques, so test cases are written once and can be used with the automation framework (see the sketch after this list)
    4. Once dev signals the story is complete, start testing manually
    5. Code supporting features in the automation framework if required. This obviously can’t be done until the code is complete
    6. Verify defects and run regression testing activity
    7. Release
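
    For readers unfamiliar with keyword-driven automation, here is a minimal sketch in Java of the core idea; the keywords and actions are hypothetical, and a real framework would drive the application rather than print. The test case is plain data, so it can be written (step 1 above) and reviewed (step 2) before the code under test exists:

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        public class KeywordRunner {

            // Maps each keyword to an executable action. Domain keywords are
            // added once; test cases then reuse them freely.
            private final Map<String, Consumer<String[]>> actions = new HashMap<>();

            public KeywordRunner() {
                // Hypothetical actions; real ones would drive the application.
                actions.put("login", a -> System.out.println("log in as " + a[0]));
                actions.put("open_page", a -> System.out.println("open " + a[0]));
                actions.put("verify_title", a -> System.out.println("expect title " + a[0]));
            }

            // A test case is just rows of keyword + arguments.
            public void run(String[][] testCase) {
                for (String[] row : testCase) {
                    actions.get(row[0]).accept(Arrays.copyOfRange(row, 1, row.length));
                }
            }

            public static void main(String[] args) {
                new KeywordRunner().run(new String[][] {
                    { "login", "tester1" },
                    { "open_page", "/reports" },
                    { "verify_title", "Reports" },
                });
            }
        }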

    looking forward to your thoughts

  4. Tim Lesher said,

    That’s exactly what we’ve been doing, except that we’ve found this causes too much “crunch” on SQA and “slack” for developers in the last few days of the iteration.

    I think it’s psychologically hard for devs to avoid Parkinson’s Law (where the work expands to fill the remaining iteration time).

    To alleviate this, we’ve experimented with reassigning creation of automated tests to dev instead of SQA. SQA still designs and writes the tests, of course, and usually executes them; Dev just reduces the test to automated form at the end of the iteration (subject to SQA’s approval).

    This makes the per-story pipeline nice and uniform:

    1. Dev designs and codes; SQA designs tests and pokes holes in the design
    2. SQA executes tests; Dev addresses issues and pokes holes in the tests
    3. SQA executes acceptance tests and writes defects; Dev automates tests

    The acceptance-level and integration-level tests that SQA is running during iteration N get automated by development, and take effect as part of the automated suite beginning in iteration N+1.

    It’s still evolving, but it’s been working out well so far.

  5. Gunther Verheyen said,

    Chris, first of all, in my understanding of “Agile” all software activities need to be performed in every iteration (Sprint). So, certainly testing!

    The ideas and thoughts you describe here are close to what I’ve been practicing in my projects. It includes the use of eXtreme Programming practices as Scrum’s engineering standards, but also an additional role, complementary to the Product Owner. I’ve named this role the ‘guide’. While the essence of this role is functional testing, it is certainly more (including co-Product Ownership).

    As from day 2 of the Sprint (day 1 = Sprint Planning), the guide starts writing functional test scenarios for the Stories in the Sprint, beginning with the Stories that developers start to code.

    I’ve defined 4 Quality Loops that are all executed on a daily basis, every day of the Sprint (why wait until mid-Sprint?):
    Loop 1. Developers work in pairs on a test-first basis. By the way, test-first also includes some GUI testing (e.g. with Selenium; see the sketch after this list).
    Loop 2. Developers can refactor, i.e. rewrite working code to make it better, simpler, remove duplicate code, etc.
    When a test written first no longer fails, the code is checked in and gets picked up by a Continuous Integration system. The CI runs multiple times a day. Logs are checked regularly and rework is inserted into the team using a Kanban board.
    Loop 3. The guide deploys a stable version from the CI to perform the functional testing. Functional feedback is inserted into the team using a Kanban board (items are generally too small to make them Stories).
    Loop 4. (if applicable) Overnight performance tests are performed on a deployed version using scripts with ramp-ups, think time, etc.
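
    As an illustration of the GUI testing in Loop 1, a minimal Selenium check might look like the sketch below. It uses the Selenium WebDriver Java API; the URL, element IDs, and expected page title are hypothetical. Written first, it fails until the feature exists; once green, the CI system can run it on every build:

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.firefox.FirefoxDriver;

        public class LoginGuiTest {
            public static void main(String[] args) {
                WebDriver driver = new FirefoxDriver();
                try {
                    // Hypothetical application under test.
                    driver.get("http://localhost:8080/login");
                    driver.findElement(By.id("username")).sendKeys("guide");
                    driver.findElement(By.id("password")).sendKeys("secret");
                    driver.findElement(By.id("submit")).click();
                    // Fail loudly if login did not land on the dashboard.
                    if (!"Dashboard".equals(driver.getTitle())) {
                        throw new AssertionError("Login did not reach the dashboard");
                    }
                } finally {
                    driver.quit();
                }
            }
        }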

    Two days before the Sprint Review, base development stops (this is slack): code is refined, improved, whatever it takes to make sure that it meets ‘done’. Mostly this involves merely solving small functional issues.

    The discipline of applying this has made sure that we have never experienced major bugs or problems, not during Sprint Review, not in production.

  6. Stephen Nimmo said,

    I am currently engaged in an environment where UAT is considered to be a post iteration task. For every sprint and subsequent delivery, the product is then handed to the QA team as the team begins the new sprint. So at any given time, the QA team is not only working on test plans and scripting for the current sprint, but they are also finishing out the final UAT testing for the prior sprint.

    In my experience, the sprint delivery cannot be considered production-ready due to the usual gap between a sprint team and the separately managed business users, along with the usual bureaucracy involved in production deployments. But then again, I also believe these aspects of the software development cycle are out of the scrum team’s scope. A development team can only build and test – they can’t make the users sign off.

  7. Surendra Lingareddy said,

    In the traditional way of doing things, requirements were often handed down to the dev team for development; the QA team would lag behind, waiting for development to reach partial or full implementation, and would then write their test cases based on their understanding of the requirements and the implementation.

    Personally, the most benefit the organizations I have worked with have drawn from the Agile way is the tendency to naturally juxtapose QA and Dev. When you come into an iteration, you usually have an uncrystallized idea about the story you are implementing. The planning meeting serves to select the stories for the iteration as well as to flesh their details out. QA and Dev both walk out, along with the business (Prod Mgr and SCMs at least), with the same understanding of what the story is. The card, or wherever this information is captured, holds the detail, the high-level test cases and the acceptance criteria.

    While Dev implements along with their unit tests, QA writes their own tests or automates. Continuous Integration helps QA take an early shot at builds, and dev has a way to cross-check whether their implementation meets expectations. In other words, the QA tests serve as a detailed checklist to meet.

    It is easier said than done, but I am sure that in good time (with retrospectives) a team will naturally gravitate towards this.

    Surendra

  8. Patrick Curtain said,

    A briefer response… I’ve had QA following one iteration behind, focusing on black box test automation and maintenance of tests affected by change. That puts the focus (and personnel budgeting) on TDD in the dev team, continuous integration and flowing QA effort back into the team.

  9. Bob Galen said,

    Chris,

    I’m slightly confused here. So yes, QA and any other work that is required to achieve feature/story/task done-ness should be part of the teams’ responsibility. However, isn’t setting up some sort of intra-Sprint ‘schedule’ sort of defeating the purpose of self-direction?

    While what you propose isn’t “bad”, it does try to plan or schedule for the team. I’d rather that (1) the team buy into the notion of getting things ‘done’ and (2) they understand that they need to work efficiently and effectively to maximize how much they deliver and the quality of what they deliver – so the team should be figuring this out, sort of on their own.

    Another part of that is significantly blending the individual roles – so that it’s the team working, and not developers handing things over to testers. My initial thought when reading this was that it was a reaction to Waterfall workflow within the confines of a Sprint, and that doesn’t smell right to me either.

    There’s a concept called kanban that I’ve been exposed to lately in another group. The whole notion seems to be to throttle the work (cards) that a team can take on, to try and reinforce blended team behavior and drive work to done. So, fewer in-process tasks/features, and heavy collaboration to get them done as quickly as possible. It seems (so the claims go) that this leads to higher efficiency / effectiveness.

    Just some initial reactions, but I *may* be totally missing the point. Nice job putting food for thought out there Chris!

    Bob Galen.

  10. Chris said,

    @Bob Galen: I understand your concern. I’m not talking about directing work here. Many QAs and devs have asked me how to fit QA into an iteration. This is merely a suggestion for how teams might think about organizing their work to maximize the effectiveness and utilization of both the QAs and the devs. Teams can self-organize and self-direct, and that is a real key to agile success. However, having some guidance or an idea of how to attack the work of an iteration isn’t necessarily a bad idea.

    @Patrick Curtain: Be careful going down that path. In my opinion a team is not “done” with a story until it is tested. If testing isn’t complete until the “next” iteration, your team doesn’t get story point credit for the completed story until the next iteration. This has a serious impact on your team velocity. More importantly, if a story is not tested, it’s not potentially shippable and therefore has no value to your customer. If it’s not potentially shippable, that means it is work in progress, and work in progress = waste.

  11. Nick Geise said,

    We emphasize acceptance criteria (the last ‘C’ of the 3 C’s for user stories) for requirements. We document the acceptance criteria leveraging Fit. Our developers then have fixture-building (and unit test) tasks associated with each story they are working on. The Fit tests are then exercised as part of our continuous integration process. This addresses a lot of the functional testing needs and spreads the effort out throughout the iteration. An effect of this approach is a blending of the traditional BA/QC roles. We not only ask “what do you want?”, we also ask “what will you do to verify we delivered what you want?”.
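
    For readers who haven’t used Fit, a column fixture along the lines of the hypothetical sketch below is roughly what those fixture-building tasks produce. The business writes a table of inputs and expected outputs in the acceptance criteria; Fit binds the input columns to the fixture’s public fields and checks the output columns against its methods (in practice the method would delegate to real domain code):

        import fit.ColumnFixture;

        // Hypothetical fixture backing an acceptance-criteria table such as:
        //
        //   | quantity | unit price | total() |
        //   | 3        | 10.00      | 30.00   |
        //
        public class OrderTotalFixture extends ColumnFixture {
            public int quantity;        // bound to the "quantity" column
            public double unitPrice;    // bound to the "unit price" column

            // Checked against the "total()" column.
            public double total() {
                return quantity * unitPrice;
            }
        }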

    The User Interface (UI) typically undergoes more change across iterations, so we don’t start automating that testing for screens until they stabilize somewhat. We then automate the UI tests using QTP or Selenium and fold them into our CI process as well.

    Performance testing is typically handled in a different environment than our functional and UI testing and we do this after each iteration as-needed.

    Things that do not pass the acceptance criteria as defined at the beginning of the iteration, and committed to by the team, must be repaired by the team before the end of the iteration.

  12. Pingback said,

    Pingback from http://www.leadingsoftwaredevelopment.com/2008/11/article-on-dev-and-testing-schedules.html

    …This is a good article addressing developing features and QA testing them during the same iteration. Chris offers a good example process. I like the comments that followed because you see different points of views regarding the topic. This brings to light some of the issues with Dev and QA cohesion…

  13. Robin Stirling said,

    I’ve had “Testers” involved at the Product Backlog level, gaining understanding and assisting with the Acceptance Criteria. Within the Sprints they have assisted the Devs with the test criteria and performed validation of the unit tests’ completeness. They also participate in preparing the “Potentially Shippable Product” at the end of the sprint for the Product Owner and business representatives, and maintain the regression test suite.

    The Sprint prior to a Full Production Release focuses on Non-Functional Testing, Support and Documentation, with the Product Backlog owner representing the UAT/QA/Support teams.

    As with all Scrums, there is no one-size-fits-all project structure; it needs to match the nature of the project and the client’s organization.

  14. John Szurek said,

    Pretty much the way you laid it out. Not many options when you are dealing with a few weeks of runway. We find it is a challenge to do formal testing, but I don’t think that formal testing of each sprint is necessarily within the spirit of the agile concept. There are also no agile police around to stop you from starting parts of the next sprint while the previous one is being finished.

  15. Elizabeth Johnson said,

    Chris, we do exactly what you said in our iterations. We have been doing this for quite some time and everyone on the teams seems to do quite well this way. Everyone stays busy and all the committed work is completed, including testing!

  16. Bhardwaj Velamakanni said,

    I agree. In my teams, I treat the tester as another developer with dependencies. While the others plan their own work at the micro level, this tester starts off refining the test cases that drive the development. As soon as a chunk of code is deemed complete, testing goes on in parallel to the other tasks (often starting on the first day of the sprint itself). All the pending tasks, including the tester’s tasks and the bug-fix requests, go into the prioritized backlog.

    To answer the question, my teams don’t wait until the middle of the sprint to start test execution.

  17. James Herrmann said,

    QA is always 1 iteration behind development. Trying to fit them into the same iteration puts tremendous pressure on the QA group as they wait for development tasks to complete at iteration’s end.

    The final iteration before a release is dedicated to full regression testing with development in freeze. Defect fixes are permitted during this time.

  18. Ann Konkler said,

    In response to the last post, we address this issue by making sure no story is larger than 5 points – in other words, Business Analysis & Dev time (incl. automated unit tests; TDD) should take no more than half of the two-week iteration. QAs then have most stories ready to test at mid-iteration. BAs/devs spend the rest of the iteration debugging and planning for the upcoming iteration.

  19. Stephen Lawrence said,

    Ditto with Elizabeth. One point to make, though, is that the whole process works a lot more smoothly if the stories can be broken down into smaller units, i.e. 1 or 2 days for dev; this removes the bottlenecks and keeps a smooth transition from dev to test flowing. Defects are an interesting question: rather than leave a couple of days at the end, I believe you should still apply a value scale against the defects, i.e. what is the value of fixing several cosmetic defects when you could complete another high-value business story? Of course, this all depends on how close to a release you are.

  20. Dr. Frank Harper said,

    Chris, you’re spot on with your approach. I am always going over the concept of an increment of potentially shippable product functionality. Each Sprint (iteration), the team, under the direction of the Product Owner, commits to turning selected Product Backlog (user stories/cases) into such an increment. They quickly learn that in order to achieve this, QA is critical. The team and Product Owner experience the productivity gain in having QA write test cases that “test the request.”

    I coach the teams to organize in a manner that ensures clean code: it has no bugs, adheres to coding standards, is refactored to remove any duplicate or ill-structured code, contains no clever programming tricks, and is easy to read and understand – per Sprint (iteration). Of course, this is an evolutionary process, as the team learns that if code isn’t clean in all of these respects, developing functionality in subsequent Sprints (iterations) will be more time consuming. Eventually the light bulb comes on and the team organizes and manages their efforts to integrate QA through the entire process.

  21. Naresh Jain said,

    I might sound too judgmental here. Apologies for it in advance.

    Chris, this approach seems like a mini-waterfall inside your iteration: development, and then testing, and then fixing bugs. I’ve never seen this work effectively, and as Tim pointed out, it creates an uneven distribution of workload. Your approach also encourages code freezes, which feel scary to me. Remember the quote – “Walking on water and developing software is easy when things are frozen”.

    Nick has some good suggestions about Ron Jeffries’ 3 C’s.

    On large enterprise projects we have a deep-dive session during the iteration, where the business experts sit with the devs and QAs to write an acceptance test for the acceptance criteria, and then they split up and do the required work. By the time the devs say they are done, the testers are ready with their detailed tests. If the feature passes all those tests (most of which are automated), the dev quickly shows the business expert what was done. If all’s good, the story is considered to be completed.

    In essence, what I’m trying to say is that instead of waiting until the last few days of your iteration, you can start testing as soon as the devs say they are done. In some cases, the devs can ask the testers to do a smoke test on their machines midway through. This approach has worked well for large enterprise projects where the devs are not the domain experts.

    Personally, I prefer the devs to be the domain experts. That’s the case in my current company, so we have eliminated both the Tester and Business Expert roles. Devs play both those roles. The product lead does some exploratory testing at random points during the iteration. We also do a lot of hallway testing, which gives us good feedback about usability and user experience.

    Again, this model works well for small product companies. The moment it’s a large enterprise project, many roles kick in and some waterfallism comes in. But we try our best to minimize it.

    Last but not least, this has worked for us after inspecting and adapting how we work. So please don’t copy blindly (monkey see, monkey do). Figure out what your team needs by actually trying various things.

  22. Simon Droscher said,

    We added a couple of columns to our board – “tests ready” and “demoed and accepted”. Tests Ready sits between To Do and In Progress, and basically requires the dev to sit with a QA member and discuss what tests will be required for the story. Depending on the nature of the story, there may be a need to write manual test cases in our manual test case repository, FIT tests or other automated tests – the dev and QA agree on who will do this, and then the Tests Ready tick is given.

    In the daily scrum the team generally gives grief to anyone who has stuff in progress without a Tests Ready tick.

    After code is complete the dev is required to demo the completed story to QA and have it accepted. Having the tests agreed upon up front makes this a lot easier.

    It has helped a lot with quality as it ensures there is a good amount of testing and it makes it hard for devs to sneak anything through without QA knowing in advance.

  23. Vivekanand Ramaswamy said,

    Chris, getting the QA team into the SDLC early serves the purpose. Having said that, communication between the QA and dev guys during the unit testing phase remains imperative. Validation and verification during the iteration cycles still holds the trump card. The testers’ focus during the verification stage of the iteration should be meticulous.

  24. Declan Whelan said,

    Chris,

    With the teams I work with I use the Agile testing matrix that Brian Marick proposed. Within each quadrant there are different types of test tasks and different collaboration points within and outside the team.

    Having QA working one iteration behind is really not good because feedback is delayed and bugs are only found after the work is considered “done”. Even if testing is done on previous versions I think it is critical that the team plan and execute this work in the current iteration. This encourages a collective ownership and a whole team approach.

    I encourage writing *automated* story tests during the sprint, as well as scheduling exploratory sessions as time-boxed stories. By doing this, “doneness” is achieved by passing the automated story tests for each story. As risk increases in certain areas, session-based tests are scheduled to increase knowledge and decrease risk in those areas. We use a “low-tech” quality dashboard to capture and radiate this information.

    I highly recommend a combination of automated story tests with session-based exploratory tests. The focus absolutely needs to shift from finding bugs to bug prevention by providing a tester’s perspective early, both during sprint planning and throughout development. The team benefits in the long term from this collaboration, as testers learn more about unit testing and the scope of the unit tests, while the developers learn more about good testing strategies.

    A fundamental shift in thinking is needed. It’s no longer about having to “approve” stories and releases. Instead, it’s all about defect prevention and about gathering and broadcasting the information gleaned from testing.

    The Agile Journal will shortly publish an article that I wrote on this topic that will fill in a lot of the blanks.

  25. James Herrmann said,

    Hi Ann,

    I read your approach. Just to clarify: the devs are given 1 week’s worth of work in a 2-week iteration so as not to step on QA in week 2? If so, that would really seem to hurt productivity quite a lot.

    By staggering Dev and QA, the Devs can continue to work on new stories with QA unimpeded. It is a risk and defects may come up. That is expected in Agile and defects are addressed based on severity.

    We also have planning sessions and reflection between iterations: 1 day per # of weeks in the iteration (normally 2). This means that our iterations do not begin and end on the same day of the week from iteration to iteration.

    After several years of tweaking, we’ve really hit on something that works well for our organization. But, each shop is different.

  26. Hari Koorapati said,

    Hi,

    As you said, we can have the QA team writing test cases from day 1. I recommend the QA team always be one iteration behind the dev team. Once a feature is done after unit and integration testing by the dev team, it can be handed over to the QA team, which starts system and acceptance testing with it.

    If QA finds any bugs then, depending on severity, corrections can be made immediately or at the end of the second iteration.

    Thanks & Regards,
    Hari.

  27. Keith Sterling said,

    I smell a whiff of waterfall in the description of your process, especially the fact that “In the final days of the iteration, the devs stop writing new code and focus on fixing defects”.

    I’m guessing you probably have a rather flat burndown chart until towards the end of your iteration, which is never a good thing.

    I prefer to see my teams fixing defects as they occur, which means testers finding them as the developers write the code, which means a story not being signed off until both of the above are complete.
    There is nothing wrong with a developer moving on to the next story while the tester completes testing, but this natural stagger in the work should never be allowed to continue past 1 (possibly 2) stories, otherwise you are just building up a tsunami of defects at the end of each iteration.

    With this in mind, I have no problem with developers writing new code until almost the last day of the iteration, if the testers are keeping up with the pace. If you are doing TDD this should never be a problem; however, if testing is lagging, I generally encourage the developers in my teams to help with the testing so that we get as many stories as possible to a DONE/DONE state.

    Testers testing everything at the end of the iteration can, again, be interpreted as waterfall, so I try to avoid this and ensure I have good automated test coverage so that the entire test suite is being executed constantly throughout the iteration.

  28. Bob Marshall said,

    Write the tests first! And I’m not just talking about developers writing unit tests (e.g. TDD). Even more important, for those shops that have reached the necessary level of expertise, is having analysts, or better yet, customers, writing the acceptance tests up front too (cf. Story-Driven Development, a.k.a. SDD).

    In all the projects I’ve run over the past ten years and more, we have not had “testers” on the team – nor any need for them. Sufficiently professional and motivated development teams should (must) be able to produce defect-free code without the need for testing. All testing to find defects is waste!

    - Bob

  29. Tino Zottola said,

    Chris,

    The timeline described in your blog pretty well describes the optimum use of QA and development resources. During the last day of the cycle we also conduct a post-mortem session with some of our QA people and developers to learn from the mistakes of the current development cycle and apply the lessons to the next one.

    We have a distributed QA approach, which is split into two levels:

    Tier 1 testing is done on each module before it is submitted into the trunk load. We have a 100% pass test case requirement for each module submitted into the trunk. Our developers are responsible for Tier 1 testing, and the majority of this testing is automated in the development environment (e.g. NetBeans, Eclipse, etc.).
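
    As an illustration, a Tier 1 gate can be as simple as a JUnit 4 suite bundling a module’s tests, which the IDE runs with one click; submitting to the trunk then requires the whole suite to pass. The module and test below are hypothetical:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // Tier 1 gate for a (hypothetical) billing module: 100% of the
        // suite must pass before the module goes into the trunk.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({ BillingModuleTier1Suite.Rounding.class })
        public class BillingModuleTier1Suite {

            public static class Rounding {
                @Test
                public void amountsRoundToWholeCents() {
                    assertEquals(10.35, Math.round(10.349 * 100) / 100.0, 0.0001);
                }
            }
        }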

    Tier 2 testing is done on each completed trunk load (composed of fully tested Tier 1 modules). This testing is done by the QA people exclusively at the system level. This testing is both automated and manual depending on the product characteristics.

  30. Julian Harris said,

    What you’ve suggested is what I’ve done in the past successfully – you want to push as many of the quality measures into the sprint as possible. How many? Try to keep the release sprint as predictable as possible. If your release sprint grows linearly with the number of previous sprints, then you should consider embedding more of these quality activities into each sprint.

    In my projects, the QA team sits at the same table as the dev team and they communicate fluidly. A few retrospectives into one project, the team decided to formally agree on a number of checkpoints for each user story: 1. when the test script was written, the devs writing against it would do a once-over (15 minutes or so); 2. when the code was ‘ready for testing’, the QA and dev would do another quick once-over (another 15-30). This eliminated an enormous number of ‘test/fail/raise bug/code/retest’ cycles (90%) and ended up with more getting ‘done’.

    In some environments, QA means something closer to ‘business acceptance testing’: doing a number of things such as validating the system against an end-to-end business process, etc. We’d usually stagger this one sprint/iteration later, against what was delivered in the previous sprint, and some of these are very time-consuming activities that can indeed take more than a whole sprint to prepare (e.g. test data for some form of expert system). You’d want to do those as frequently as practicable, but accept that it may be every ‘few’ sprints.

  31. Sharan Karekatte said,

    I agree with you all. Here’s how we go about integrating QA/Testing tasks into each Sprint:
    • QA (or Tester, I’ll use these terms interchangeably) starts analyzing the User Stories and Acceptance Criteria and follows up with the Product Owner on any open questions, and meets with the Developers to ensure everyone’s on the same page.
    • Once everyone’s clear on the requirements, QA starts writing up the test cases.
    • Developers try and get a testable build ready in the next few days.
    • QA starts testing, and there’s back and forth between QA and the Developers.
    • QA or Developer tries to give a preliminary demo of what the team has been working on to the Product Owner just to ensure that the team is meeting all of the expectations.
    • When the complete functionality for a feature set has been delivered for testing, QA starts logging defects in the defect tracking tool from that point on.
    • When testing is complete for one feature set, QA schedules a QA Review with QA members of other teams. This helps Testers from other teams understand the feature set (under QA Review) in case there is overlap with their team’s feature set, and also lets them provide feedback to the Tester doing the QA Review.
    • In some cases, the Tester (doing the QA Review) may also request another Tester who is more familiar with the feature set to do a verification of test cases and possibly some validation.
    • The QA Review helps spread the knowledge and also allows for interaction between QA members on different teams. Sometimes, the Product Owner and Documentation Specialist may also attend the QA Review to get a better understanding of the feature set or functionality.
    • We follow a ‘zero defects’ policy, meaning, any defects (critical to cosmetic) found during the sprint are resolved. Any defects found after a sprint is over are entered in the defect tracking tool and put back on the Product Backlog.
    • QA walk-throughs of a feature set/functionality with the Product Owner are typically more thorough and detailed (you know how QA folks are).
    • We try to wrap up most of the walk-throughs before the final sprint review, leaving very little or nothing for the final sprint review, so that we can focus more on the Sprint Retrospective.

  32. David Lumsden said,

    It is easier and quicker for developers to fix bugs on the same day as they create them, or maybe the next day. The help TDD gives here is in proportion to the comprehensiveness of the tests. QA will pick up problems not detected by the developers, and the sooner they do this the better. Chris’ basic approach is good; two caveats:
    1. to keep developers busy, they have to be free to move on to new stories or the defect backlog
    2. the shift to agile can be a bigger hurdle for QA than for developers

  33. Chris said,

    Trackback to this post from Leading Software Development: http://www.leadingsoftwaredevelopment.com/2008/11/article-on-dev-and-testing-schedules.html

  34. Emmanuel Szabados said,

    Some outlines from my experience:

    - Moving teams to working on a few stories at a time was the best: in planning, teams define which stories, in priority order, they will start to work on first. Team members may panic on the first day of the iteration because some don’t know what to do, but after ad-hoc team sessions they find themselves useful and start writing test cases, reviewing automated scripts for any changes, pairing testers with developers, etc.

    - If you have a big organization, you can create a distinct QA Automation team and have them act as mercenaries in other teams, helping to implement automation like FitNesse. Then each team continues to manage its own test framework.

    - If you have a small group and a small budget, get a consultant to write and package simple automated tests using 4GL tools like WGET. Pair some of your engineers with the consultant to share knowledge.

    - If iterations require a lot of testing:
    + While we keep testing continuously, we plan to stop development one and a half weeks before the end and have the whole team focus on “bug-out” with very little new code introduced. Sometimes you just want to ensure a good quality delivery versus a good burndown graph :-)
    + At some point we call in other groups to help with testing (customer services, even business or directors). These are “blitz testing” sessions. We create small scenarios, ask people to test for 15-30 minutes max and log stuff in a Word doc with screen captures, or chat in Skype… Then we regroup…. Very efficient; people get to understand the product and new features better; indirect socialization between groups; etc.

    - We also have a strong unit testing framework, which includes more than just unit tests and is tied to the continuous build. Every time the “unit” tests break, we regroup and fix them, and the one who broke them gets a “Toilet Plunger” –> trust me, that makes the point :-)

    - To reduce product bugs, we also do “Kill Bugs” sessions: take a bunch of engineers, testers, product owners… every Friday afternoon for 2 months, bring beer and snacks into a big conf room, and have everyone working together in the conf room with their laptops… to:
    + review the general bug list (close, split, prioritize bugs…)
    + debug (code, test, close)
    + hit the product to death and continue recording bugs or fixing them

    - Since most testers are also coders today (scripts, Ruby….), promote cross-functionality by pairing devs and testers.

    - Since what developers do most is also testing their code, extend their testing by involving them in dependency testing, to let them get out of their own development box and see the big picture.

    Wishes:
    - I wish all of the above took less time for people to get into :-)
    - I’d love to do more TDD, but that’s very hard to do right.
    - Better 4GL automated testing tools.

  35. Tim Mackinnon said,

    Controversially, I disagree with you guys – on the best team I was ever on (Connextra) we weren’t allowed to have a tester. This forced us to properly write tests (and use TDD) and naturally made us feel responsible for the quality of the product. Too often I see teams loaded up with QAs, and developers throwing code over the wall to them. While the latter scenario sort of works – bug counts seem higher and story execution seems less focused – it’s a slippery slope to silo’d execution.

    I have lately seen some teams with 1 or 2 testers who actually champion the use of testing – they are like Scrum Masters for quality – challenging developers to write better tests (and the team/users to write better stories) and helping them slice stories into small pieces that can be verified (they tend to do exploratory testing to identify any testing holes). This seems like a much better way to go, and these teams have a natural flow that oozes efficiency.

  36. Eric Lakey said,

    These are all great points. I really like what Stephen said: “what is the value of fixing several cosmetic defects when you can complete another high value business story”. In my current project, we are doing something similar to what Julian suggested. We spend an hour or two at the end of each iteration with all parties to review the functionality being delivered, to make sure there were no major disconnects and to identify other bugs/tweaks to features. We then prioritize completion of new features, feature changes, and correction of defects into future iterations. It is important to set the expectation with upper management that the burn rate needs to include bug-fix time and feature clarification (aka allowable scope creep). That way, no one assumes that all development hours in future iterations will be dedicated to new feature development. That is an old-world belief that assumes you know everything up front. If your group is pretty experienced with Agile, this probably isn’t an issue, but many of my clients need extra communication about this.

  37. Asha Somayajula said,

    To everybody who has agreed with Chris’s post: I’d like to know if this would be possible on projects which are matrixed, with shared QA/developer resources and short sprints (maybe 2 weeks). Would we have time for reflection and all the completeness criteria that we have as developers/QA/analysts? Can anyone suggest a better approach to handling the aggressiveness of the delivery of the product increment here?

    Thanks,
    - Asha

  38. Alex Forbes said,

    I thought the following posts by Damon Poole at AccuRev would be of interest on this subject.

    http://damonpoole.blogspot.com/search/label/QA

    You may also be interested in viewing a short demonstration of how an early-stage integration between AccuRev and Rally provides agile requirements traceability.

    http://blog.accurev.com/2008/06/11/agile-requirements-traceability-with-accurev-and-rally/

  39. Marty Hollas said,

    I have been leading QA teams in iterative development for a couple of years now, and the most common frustration voiced by QA is the lack of work at the beginning, and then a flood of work at the end that is completely impossible to complete. QA becomes the point of failure in the eyes of others, people lose sight of the team focus and things start to break down.

    There are many reasons that a team hits this speed bump, but from my experience it starts with poor estimating and planning, poor communication within the team, or a team that hasn’t ‘gelled’.

    Typically, QA can start breaking down the stories on day 1 and most of the time can start creating test strategies or test cases. If the estimating and planning was done correctly, at least 1 or 2 dev tasks should be ready to test by the 3rd or 4th day of the iteration. QA first manually tests the features, then starts automating. From there the momentum builds and the flow steadies, until the iteration is complete.

    Daily updates in the scrum are key to staying on track. An estimate is just that – you may finish earlier, you may finish later, but you always know where you are each day. If dev gets too far ahead of QA, you will blow your iteration plan and set QA up as a bottleneck during your next iteration, while failing to complete stories during the current one.

    If your team is not getting along, or cannot seem to ‘gel’ after a couple iterations then something has to change. The most successful iterations that I have been involved with were with teams that knew what to expect from each other, had no problems sitting side by side with QA and Dev, and all of them understood that it is a team effort to reach the goal. I have been with teams where Dev and QA do not get along, where developers take each bug as an attack on their work and want no part in QA being a voice in their development approach, and these teams have the hardest times trying to meet iteration milestones.

    A successful iteration starts with a strongly knit team estimating and planning according to their skill sets, and will end successfully as long as they can work together every day side by side and utilize the leads and PM to remove roadblocks – it just takes time.

    My 2 cents.
    Marty

  40. Patrice Davan said,

    Hi Chris,

    It depends on how you position your QA and on the complexity of the project: is your focus on component integration/subsystem testing, or more on system integration/product acceptance?

    By complexity of the project I mean a multi-tier application, or an application doing a lot of third-party service integration.

    Component integration is indeed at the frontier between Dev and QA in terms of ownership of testing. For this type of validation there are strong “QA” requirements for test data sets, tools, and mock-ups/simulators to execute the testing.
    Typically that work is strongly linked to test plan creation and can be started from the beginning of the sprint. Development of those tools should be “delivered” with the Continuous Integration framework, in parallel with the code.

    Regarding system integration – and I know some will say it is not Agile ;-) – my view is that it would mainly happen in the second week of your sprint (although it could start earlier). Typically performance, stability and user acceptance testing would require several days on a stable “final” release, even if some of this was done as “exploratory” testing during the first week of your sprint.

    In terms of QA having nothing to do at the beginning of the sprint, I totally disagree, as they have several “high criticality” tasks, among others:
    - work with stakeholders to ensure we can create a test plan that maps to the acceptance criteria; in my experience, it is by doing this that we often find flaws in the requirements and the acceptance criteria :-)
    - anticipate needs in terms of libraries and tools for automation
    - set up the test environment
    - work on test data sets and the tools to generate them
    - define and implement mock-ups/simulators

    As for being part of the definition of Done: yes :-)

  41. Don Clarke said,

    One of the little things that we have implemented, and which has improved development productivity for us, is that we have pulled unit testing away from the developers. Our process works as follows:

    The developer works on the stories that are on their schedule for the sprint.
    The developer completes a story and assigns it to a tester who is dedicated to the development team.
    The tester unit-tests the functionality and either approves it or documents defects and assigns it back to the developer.
    If approved, the story is approved for release within the sprint.
    If defects are present, it is assigned back to the developer with the expectation that they resolve the defects within the sprint.

    We are experiencing a 90%+ success rate from the dev team since we implemented this process

  42. Keith Sterling said,

    Hi Don, does that mean your developers are not writing ANY unit tests? I’d be interested to know:
    a) How many testing resources does this take up? Do you have a 1:1 ratio?
    b) Do you find you are still completing stories within an iteration?
    c) How do the developers know when they have delivered a story (i.e. when to stop coding)?
    d) Can you expand further on what you mean by a 90% success rate from the dev team?

  43. Liza Ivinskaia said,

    Hi everyone,

    The best approach I’ve worked with was:

    1. When discussing features (we did it at sprint planning meetings), we defined which user acceptance test cases, performance tests, etc. should be done.

    2. At the beginning of the sprint we, together with the developers, identified which test cases could be automated, and automated them with the help of our test tool.

    3. After that, the developers coded and ran the automated tests. At the same time, the testers prepared manual test cases, test data, the test environment, etc.

    4. We did not wait to deliver to test until all the features were done. As soon as the developers had something that passed its user acceptance tests, they gave it to us so that we could do manual tests, exploratory tests, performance tests, etc.

    5. We “closed” the delivery-to-test “gates” 3-4 days before the sprint end so that the testers could finish all activities.

    6. We were done when we had passed all the test cases we planned.

    We used this approach in maintenance, where we often did not have more than one or two iterations before production deployment, and it helped us complete the test activities in time.

    I believe that the same approach, combined with one sprint at the end of the project for test completion (exploratory testing, performance testing, test reporting, etc.), may be used in other development projects even if you do not have automated tests (but in that case you have to accept that things will take longer).

  44. Sharan Karekatte said,

    Development without dedicated QA is dangerous, and though it may seem to work well initially, quality always suffers in the long term. I’m talking about a well-integrated team with absolutely no ‘throwing over the wall’. ‘Over the wall’ seems so ‘waterfall’. QA in our teams not only test but also look at the whole Scrum process and suggest process improvements. They also analyze user stories and acceptance criteria to determine value-addition, and question the Product Owners on them to ensure validity and ROI. Developers also practice TDD and write extensive unit tests.

  45. Therese Schoch said,

    In my Agile experience, the QA team is involved in the iteration planning and estimating from day one and walks in stride with Dev. At the end of a 2-week iteration, and/or when Dev is complete with new stories and moving on to the next iteration, QA is in the process of testing all functionality as well as doing overall performance testing. However, it is fair to say (at least in my experience) that inevitably a flood of work comes at the tail end of an iteration. Sometimes this is due to defects found, and sometimes it is due to a lack of planning and/or estimating early in the iteration.
    The key to our team’s success is communication, effective planning and accurate estimating. While we all know we can under- or over-estimate, we seem to always pull together to meet our date and deliver quality software to our business customers. :-)
    I have also seen that every IT shop differs, and I suggest that tailoring your Agile methodology to your company’s needs is the best approach to take. It may take some experimenting, but find what works for you so you can deliver a quality product to your business customer in a cost-effective manner.

  46. Carlton Nettleton said,

    Make the iterations longer. I have noticed that Agile has been trending towards shorter and shorter iterations, but there is a reason why Scrum started with (and still has) 30-day Sprints – it allows for a complete piece of work to be DONE during the span of the timebox. In many legacy systems in large enterprises, it is simply “not possible” to get something analyzed, coded, tested and deployed (and retested) in 2 weeks. Changing the cycle to 30 days improves your chance of having it “done, done”.

  47. Carlos Simonini said,

    I agree with the idea of cross-functional teams. Even so, take into account that the members in the team have different skills.
    A Quality Assurance role (QA) is mixed with a Business Analyst role or with a developer role.

    By the way, QA and QC (Quality Control/Testing) are not the same thing in a waterfall approach: QA focuses on prevention and QC focuses on detection.
    As a QA Analyst who works in an Agile frame, I’ve been included in the whole process, and, in my opinion, testers should be included in the whole process within the Agile practices too. In that way, the QA/Tester prevents issues arising from misinterpretation of the requirements.

  48. Rolf Barbakken said,

    I think it’s important to remember that

    1. Your developers should not be the testers. They should test their own code, of course, but the code/release should also be tested by someone else. As a programmer, you should know you are probably not the best person to find faults in your own work. TDD is good, but will not solve such problems.

    2. QA is not (just) testing code. Carlos is right here.

    3. QA and QC should be continuous tasks, and fit very easily into any development cycle if done correctly. With cross-functional teams, both QA and QC have a natural part in it.

    Sadly, many developers still feel QA/QC is annoying and that “it’s easy to point at faults”. As a leader, you should make sure the team takes ownership of its deliveries as a team.

  49. Bob Marshall said,

    Sorry, Carlton. Have to disagree.

    Of course, if an Agile team has no influence over what goes on “over the fence” and there’s no one in the wider organisation concerned with (or accountable for) concept-to-cash cycle times, then the team may have to accept the sub-optimisation due to the “impossibility” of quick deployment. I have run projects for many years very successfully on a two-week cycle – even for embedded systems development where the hardware and software evolve together – and have recently seen teams leveraging modern languages (e.g. Ruby, Python) and tools to run cycles as short as four *hours*! And yes, these folks’ output is “done, done”.

    When Shigeo Shingo first proposed S.M.E.D., people thought it was “impossible” too. See: http://en.wikipedia.org/wiki/SMED

    HTH

  50. Alan Griffiths said,

    It has already been mentioned that QA and Testing are part of the development process and thus part of the iteration, but one approach that hasn’t been mentioned is to have the testers pair with the developers and evolve the tests and code for each story together.

  51. Carlton Nettleton said,

    @Bob

    I am with you that we should have a preference for shorter cycles. I used to think that if a team could not do 2 week iterations, they should do 1 week iterations. Now I am not so sure. In many organizations I have observed this sort of rapid cycling is just thrashing for them and more destructive overall – the work is not done, the quality level is in the can and people end up burned out from the stress. If we were to extend the iteration from 2 weeks to 4 weeks, what would we have to do to make it a success?

  52. Adam Koszlajda said,

    Chris,

    My experiences are quite similar to those mentioned above. You actually have two possible approaches, depending on the organization and projects…

    Option A
    That is the perfect situation: you have testers within your team who write the test scenarios first, or at least you have ready-to-use UATs. It is quite close to what Gunther writes. In this case it is enough to establish daily builds and proper communication between developers and fully committed testers (e.g. via a bug tracking system and by organizing triage sessions). Unfortunately, there are still companies which “cannot afford testers”, and projects which do not fit this model.

    Option B
    QA is mostly on the developers’ shoulders, and they should write a number of unit tests. It is quite close to what Tim writes. It is also a good approach to leave them 1-3 days at the end of the sprint so they can stabilize the project and double-check that, after integration, everything works well together. Then you can engage the analyst or the customer, who has 1-3 days between sprints to double-check the functionality developed by the programmers. The time between sprints is necessary to see how many bugs are in your code and to estimate how much time will be needed in the next sprint to fix them.

    In each option, it is strongly recommended that the code created by one programmer is reviewed by another before it is checked in. Pair programming provides this, but my experience is that peer programming is a much better approach :D

  53. Bob Marshall said,

    @Carlton.

    Personally, I think that if a team can’t do two-week iterations, I’d be asking why not (i.e. root cause analysis using the 5 Whys technique, most likely). If it’s down to intractable external factors (i.e. not immediately fixable within and by the team), then I suspect the root causes would suggest iterations longer than two weeks, not shorter. Two-week or shorter iterations can result in stress, it’s true, but that generally results from trying to cram too much into an iteration. Establishing an early indication of team velocity, and then using that along with some more-or-less consistent sizing of backlog items, really helps in countering that tendency. The disadvantages of longer iterations include:
    - More work in progress (inventory)
    - Reduced focus (more things to test, work on, and ship at the end of the iteration)
    - Longer time between injection of defects and finding them (making process improvement less effective)
    - A general extension of the mean cycle time
    - etc.

    Of course, shorter iterations have some disadvantages too – I’ve found the optimal balance point to generally rest around the two week mark.

    Actually, these days I think that iterations are a half-way house for people just getting to grips with Agile. For me, single-piece continuous flow is the way to go for more mature teams and organisations (cf. FlowChain – http://www.linkedin.com/e/gis/121933 ).

    HTH!

    - Bob

  54. Ann Konkler said,

    James, let me attempt to clarify…

    Devs and QA are busy throughout the iteration. By limiting the story/feature size to no more than “5 points”, it ensures that DEV will have multiple stories in the pipeline ready for QA and that there’s time to fix any defects. It goes something like this: DEV applies TDD to story 1, they ‘stamp it’, which signals QA to perform more robust integration testing, acceptance testing, etc. on that particular story. While QA has story 1, DEV begins story 2, stamps it as a signal to QA, and then DEV starts story 3, and so on. Meanwhile, QA may have found bugs in story 1 and story 2; these stories then get ‘passed’ back to DEV (added to the iteration backlog) and, once fixed, they go back to QA for another pass through the cycle until QA signs off on each story. By the end of the iteration, each story may have gone through a couple of cycles of DEV (TDD) – QA (find defects) – DEV (fix defects) – QA (validate fix). It may sound more waterfall than it is — DEVs and QA are together as one team, looking at the same backlog, and often pairing together to ensure stories are completed within the iteration.

  55. Corey Baker said,

    To me, one of the fundamental tenets of Agile is that resources are cross-functional. This implies that estimates on a story can be done by a single development “unit” that is dedicated to the story until completion. These estimates cover all that is required to develop and ship code, namely requirements vetting, development, and testing.

    (Note: I have “unit” in quotes, because that will differ based upon organization, automation maturity, defect tolerance, etc.)

    Once you break out the responsibilities of the work, you can no longer fulfill the development requirements of a story (vetting/development/testing) via a single development unit. At this point, you have broken the development process into multiple sub-processes with implicit dependencies. Once that occurs, you have no capability of force-fitting these multiple, interdependent sub-processes into a set cycle AND maximizing resource usage. That’s not the way the requirements/code/testing cycle works, and trying to accommodate these cycles with intra-sprint schedules defeats the purposes of Agile.

    So, what is one to do in the typical organization in which software development and testing is split? IMO, you have three choices:

    1. Create a development unit that CAN take care of everything required to deliver code AND keep them dedicated to a story. In most places, that would mean a tester/developer pair together until a story is complete. Unfortunately, this will often result in alternating downtime as the code/test cycles fluctuate back-and-forth.

    2. Stagger QA and development, admitting that testing is dependent upon development completion. This requires embracing that “done” has different connotations for different stages of the development process. This shouldn’t be an issue, but many think it’s anathema to Agile. I argue that, in most organizations, even after testing there is a release preparation step required before code is deployed. Thus, even if code is tested successfully, it isn’t “done”; there are still pieces of the process that remain incomplete. Embracing software development as a more complex process is key to this strategy.

    3. Force intra-iteration schedules to try and allow for time needed to manage the ebb and flow of the code/test cycles. This is the suggestion of this post and I believe it can be successful if you have backlog of other activities that can be worked upon during downtime. Basically, this strategy is to use non-standard development activities (hardening of architecture, for example) to fill the gaps caused by the dependencies between testing and coding.

    Anybody using anything other than these three in shops where testers do not develop as well?
