Given the global distribution of software and how the internet is bringing the world together, community-based testing activities have gained a lot of momentum in recent years. Such activities include forum discussions, beta testing efforts, crowd sourced testing, and so on. Of specific interest in this blog is what crowd sourced testing is and when this model can be leveraged to yield success.
In simple terms, crowd sourced testing is leveraging the community at large to test a given product. This community spans people from diverse cultures, geographies, languages, and walks of life, who put the software to use under very realistic scenarios that a tester on the core test team may not be able to think of, given his or her limited bounds of operation. There are specific crowd sourcing companies, such as utest.com, that bring together crowd sourced testers and companies needing such testing to bootstrap and carry out the overall project.

Given the kind of bugs such test efforts surface, the short lead time within which the effort can yield productive results, and the reasonable costs involved (oftentimes the product company pays only for valid bugs reported), one would think the Return on Investment (ROI) is very high and be tempted to go this route. Like any other area, though, crowd sourced testing is not risk free. It has some inherent risks to consider and mitigate, failing which the test effort may turn out to be a very random one, affecting the overall project and product cost, timeline, and quality. I’ve delved into some core points below on when such an effort makes sense and when it does not, to help you make the most of a crowd sourced testing engagement.
When is Crowd Sourced Testing Likely to Work Well?
· There is a global user base for the product under development
  o The product under development is consumer centric rather than enterprise centric, e.g. gaming, mobile, and web-driven consumer products/applications, where global user feedback makes sense
· The product company is committed to working with a large group of people, understanding that such a decentralized test effort involves some amount of overhead, rather than containing everything in-house
· The product requires subject matter expertise in certain specialized areas and such expertise is widespread, e.g. content testing in specific domains, language testing, etc., for which gathering in-house expertise is expensive, if possible at all
· There is a need to simulate end user scenarios in testing, e.g. performance testing for the product needs to be done at the different internet bandwidths and on the devices available in various countries
· The product team does not have the time or resources to take on full-fledged testing in-house but has a good grasp of the product requirements, the test coverage to achieve, and an overall strategy with which it can engage a globally sourced team
When is Crowd Sourced Testing Not Likely to Work?
· The wrong team is assembled, with testers whose backgrounds do not fit the product being tested
· The test effort is left entirely to the community, and the product test management fails to tie those efforts into the overall test strategy
· There is no clear strategy on what to crowd source and what to keep in-house; e.g. some security-sensitive test areas and automation (both for re-usability and to provide opportunities to your in-house testers) are better retained in-house rather than handed to the community
· The product requires a lot of internal communication with cross groups to understand its interfaces; such high dependencies usually make it difficult to get external testers involved
Keeping the above points in mind and taking cues from your own scenario will help you determine whether crowd sourced testing really makes sense and, if so, what, when, and how to leverage from the crowd sourced community.