Archive for December, 2009

Take Breaks

The first week of my first QA job out of college, the guy training me, Steve, told me “remember to take breaks when doing manual tests.” I nodded, figuring it must be some kind of trick. Why would you take breaks when you are on deadline?  It took me a few development cycles to realize the value of what he had said, and to this day, it remains one of the best pieces of advice I’ve gotten. We are in the middle of testing right now, and in the middle of a full day of manual tests, I was reminded of Steve’s advice.

Take breaks because your mind can only focus like that for so long before your eyes start to glaze over and you make silly mistakes: going on autopilot instead of evaluating every interaction, or dismissing a possible bug as a routine web glitch when it might be something bigger. So this is my reminder for the day… go take a break. Go play Tetris or Bubble Shooter (my favorite), take a walk, close your eyes and listen to Pandora for a full song. Give yourself five or ten minutes to reset and refocus. Go look at something that is not a computer screen, talk to a coworker, or get up and stretch. The deadline will still be there, but you will be a more useful tool in meeting that deadline if you remember to take breaks.


Test Link Review


If you search for test tracking and management software, you will run into TestLink. TestLink, though far from perfect, does some pretty cool things. I love any excuse to try out a cool new program (which is the best part about working in a tech environment, isn't it?), so I decided to give it a shot. I used to keep track of my tests in proprietary software and OpenOffice spreadsheets, but the organization offered by TestLink moved my whole testing approach to a new level.

The Good:

Requirements management

Requirements-based testing

Result reporting

A central location for assigning, approving, and executing tests

Open source and written in PHP, so it can be adjusted for your specific project

The Bad:

It is a bit clunky to use

Minor bugs and unfinished features

Isn't set up for much integration out of the box, though there is an API

The Results:

The testing process has been opened up, so anyone on the project can see what QA is working on. Communication between QA and devs has improved, and the approval process is much easier and more organic than holding a big meeting. TestLink has the potential for seamless integration with bug filing and requirements management, helping to carefully order and maintain a tester's world. It doesn't come pre-linked to much, but the site offers some instruction for setting that up, along with a good user guide and a pretty active community for support.

You can input your documents (I haven't found a way to do this automatically, but putting them in manually does ensure a careful ambiguity review) and then link each requirement to a test case, or generate test cases right from the requirement. The execute section can accept XML results and notes. TestLink automatically generates test plans and test reports with data from the system, so you can put out a document that shows your test plan, gives overviews of your test cases, and tracks the requirements. It has been extremely useful for increasing transparency in the QA department and keeping everyone involved with the process. There is also a way to create custom fields, which I have been using for test plan approvals and for linking to bugs filed.
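Since the execute section accepts XML results and there is an API, result reporting can even be scripted. TestLink's API is XML-RPC based; here is a minimal sketch of pushing one execution result, assuming the standard tl.reportTCResult call. The server URL, developer key, and IDs are all placeholders you would replace with your own install's values:

```python
import xmlrpc.client

# Placeholder server URL and developer key -- substitute your own install's values.
TESTLINK_URL = "http://testlink.example.com/lib/api/xmlrpc.php"
DEV_KEY = "your-dev-key-here"

def build_result(testcase_id, testplan_id, status, notes=""):
    """Build the argument struct for TestLink's tl.reportTCResult call.

    status is a single letter: 'p' = pass, 'f' = fail, 'b' = blocked.
    """
    return {
        "devKey": DEV_KEY,
        "testcaseid": testcase_id,
        "testplanid": testplan_id,
        "status": status,
        "notes": notes,
    }

def report_result(testcase_id, testplan_id, status, notes=""):
    """Send one execution result to a live TestLink server."""
    server = xmlrpc.client.ServerProxy(TESTLINK_URL)
    return server.tl.reportTCResult(
        build_result(testcase_id, testplan_id, status, notes))

# Not run here -- requires a live TestLink install:
# report_result(101, 7, "p", "Automated smoke test passed")
```

The same API exposes calls for creating and querying test artifacts, which is where the deeper integration with other tools would come from.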

In all, it is a great program that could use some finishing tweaks and a UI overhaul. I can look past its flaws, though, for the functionality it provides. I'll admit it: I love TestLink.


QA Toolbox

There are a ton of tools out there for testing. Every project has a multitude of options to choose from and configure, from open source to subscription services to paid software. When I was trying to choose which programs to get the QA department started with, I spent hours researching what was available, demoing products, and trying to figure out the right match. It is an ongoing process, and I plan to keep adding to and adjusting our tools as I learn more.

My criteria were:

1. Open source, if possible

2. Easy to use: time is always short, and I don't want anyone to waste it on a difficult program

3. Able to integrate with tools we already use (IM, email, etc.) to make it as seamless as possible

4. Usable and viewable by the whole team, in keeping with agile transparency practices

So far, I have found some really great programs. I will go through each one in more detail in the coming days, but here is the list of what I'm currently using.


1. TestLink – love this program! It is an open source test tracking and test management tool.

2. Lighthouse – a very simple ticketing system with a good API

3. Litmus – a subscription-based browser compatibility program

4. Selenium – for our front-end web testing

…Stay tuned for full reviews!
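As a taste of what Lighthouse's API looks like, here is a rough sketch of building an authenticated request for a project's tickets. The account name, project id, and token are placeholders, and the endpoint shape follows Lighthouse's REST pattern of token-authenticated XML resources:

```python
import urllib.parse
import urllib.request

# Placeholder account, project id, and API token -- substitute your own.
ACCOUNT = "myaccount"
PROJECT_ID = 123
API_TOKEN = "your-token-here"

def build_tickets_request(query="state:open"):
    """Build an authenticated request for a project's tickets.

    Lighthouse exposes tickets as XML under /projects/{id}/tickets.xml
    and authenticates API calls with a token header.
    """
    url = "https://%s.lighthouseapp.com/projects/%d/tickets.xml?q=%s" % (
        ACCOUNT, PROJECT_ID, urllib.parse.quote(query))
    request = urllib.request.Request(url)
    request.add_header("X-LighthouseToken", API_TOKEN)
    return request

# To actually fetch (requires a real account and token):
# with urllib.request.urlopen(build_tickets_request()) as response:
#     print(response.read())
```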

Document Standards

Recently, I have read some conflicting reviews on using IEEE standards for documentation and test plans. I’ve heard people say it is a crutch, and claim that if you need the document to do your job, then using it is actually harming you. This makes sense in a lot of ways.

If you approach the template the way you would a worksheet in school (fill in the blanks with word-for-word answers from the book and don't give it much thought), of course it is a crutch. It gives you a false sense that you know things and are learning, when really you are just being prompted. I'd hardly call that the template's fault, though. A tester who put so little thought into their test plans would likely have trouble adapting with or without a template. Does this mean templates are not viable tools for test documentation or requirements specification?

The way I see it, IEEE templates are just that: templates. Of course no one template is going to work for every company, or even every project. A template should simply guide you to create a living, meaningful document for your project. Used that way, it is not a crutch but a standard to hold yourself to. You just need to apply some rational thought instead of using it blindly.

I have found IEEE 829 (test plan) and 830 (requirements document) to be very useful on my project. They have been a good starting point for building documentation and test plans as well as providing a cohesive feel and expected deliverables for the team. I find them very flexible and easy to adapt to every project we’ve tried so far.

Some great detail on IEEE 829 can be found here – I’ve included the basic outline below.

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

A nice overview of IEEE 830 for Requirements documents can be found here, the format is below:

1. Introduction
1.1 Purpose
1.2 Document conventions
1.3 Intended audience
1.4 Additional information
1.5 Contact information/SRS team members
1.6 References

2. Overall Description
2.1 Product perspective
2.2 Product functions
2.3 User classes and characteristics
2.4 Operating environment
2.5 User environment
2.6 Design/implementation constraints
2.7 Assumptions and dependencies

3. External Interface Requirements
3.1 User interfaces
3.2 Hardware interfaces
3.3 Software interfaces
3.4 Communication protocols and interfaces

4. System Features
4.1 System feature A
4.1.1 Description and priority
4.1.2 Action/result
4.1.3 Functional requirements
4.2 System feature B

5. Other Nonfunctional Requirements
5.1 Performance requirements
5.2 Safety requirements
5.3 Security requirements
5.4 Software quality attributes
5.5 Project documentation
5.6 User documentation

6. Other Requirements
Appendix A: Terminology/Glossary/Definitions list
Appendix B: To be determined

Agile vs. Waterfall Development

Being new to the Agile world, I am trying to figure out exactly where testing should fit in this model.

Waterfall testing is clear – you test the final product at the end of the development cycle, report bugs, test bug fixes and sign off on testing.  Agile approaches things a little differently. Documents aren’t always finished before the project starts, the product isn’t finished before you start testing, and the very things you are testing can change quickly. Agile Development is in vogue – but where does good testing fit in that schedule?

I am learning that effective agile testing rests on the same principle as waterfall: test quickly and early. The difference is that you arrange your test schedule around development sprints instead of grouping everything at the end. Break your test plan into mini test plans based on milestones, and treat each milestone as a whole project that needs to be tested. At the end, you can run through all the test cases again to test the complete functionality, but each step along the way has been tested as completely as if it were a standalone product.
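The mini-test-plan idea boils down to partitioning the suite by milestone: run each sprint's slice as its own plan, then everything as the final pass. A toy sketch, with all case names invented for illustration:

```python
# Tag each test case with the milestone it belongs to, test each sprint's
# slice as its own mini plan, then run everything as the final pass.
test_cases = [
    {"id": "TC-1", "milestone": "sprint-1", "name": "login works"},
    {"id": "TC-2", "milestone": "sprint-1", "name": "login rejects bad password"},
    {"id": "TC-3", "milestone": "sprint-2", "name": "profile page renders"},
]

def mini_plan(milestone):
    """The mini test plan for one milestone: just that sprint's cases."""
    return [case for case in test_cases if case["milestone"] == milestone]

def final_regression():
    """The end-of-project pass: every case, regardless of milestone."""
    return list(test_cases)
```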

Get involved in the process at the kickoff meeting, talk to the developers, pop into daily stand-ups when you have time, ask questions, and listen to how they talk about what they are working on. Does it seem like certain parts of the functionality are giving them a hard time? Take note. If you listen to your team, they will tell you the problem areas: the parts that were rushed, the developers who weren't saying much while they were working. That gives you red flags to consider when designing your test plans.

Agile Development promises more transparency in the development process – we can include QA in that. Open up test plans and test cases for review. Ask developers to sign off your coverage, give them a chance to make suggestions. It gives you a chance to learn from them, but it also makes them aware of how you test and what you look for. Quality becomes a group goal.

There is a great YouTube video on the differences between agile and waterfall development that gives a good overview. I initially found the video on this blog, which also has lots of info on agile testing.

The QA Brain

Software testers come from all kinds of backgrounds: computer science, biology, English, business, music… Some find the work tedious, while others find it enthralling. From the outside, filing bugs, executing tests, and planning QA schedules seem straightforward enough, but some people excel at what others can't bear. I've found that good testers develop (or inherently have) a common mindset.

A QA is:

Detail oriented. This goes without saying. A software tester is always aware of changes in UI or functionality, and of inconsistencies in the product they are testing. They are responsible for watching out for the minute details that often fall through the cracks. You might stare at the same text or the same few pages day after day, but you have to keep yourself focused enough to notice the details.

Destructive. Software development is very constructive: software engineers creatively design and build their systems to work. They create functionality where nothing existed before. Software testers need to be destructive. We need to test not only that the software works, but also how it behaves when it doesn't. Confirming that all functionality works is only half of our job, and arguably not even the most important part. We need to test what happens when the software encounters a situation no one has thought about. How does it fail? Where does it fail? How does it recover from failure? How far can it be pushed before it goes down? QAs go in to test and try to break the product in as many ways as possible. Users will find creative ways to use the system, and QA needs to make sure those creative ways have already been tested before it goes out. This is the biggest difference in mindset between a software engineer and a software tester. A good tester will think negatively, looking for all the loopholes, cracks, and oversights.

Creative. A software tester needs to look for bugs where you don't expect them. The issues are there; they just need to be found. There are always bugs in software, and the best QA is often the most creative at imagining crazy test cases.

A good communicator. A lot of how successful you are as a QA goes back to your language skills. Clearly communicating a bug can make the difference between getting it fixed and getting it rejected. A bug report needs to express its importance quickly, and give clear reproduction steps and reference documents in a way that anyone can understand. Good document ambiguity review requires you not just to know what questions to ask, but how to ask them well. If you are maintaining documents, they need to be easily accessible to many different types of people. A QA often needs to be understood by engineers, project managers, and clients, and sometimes (often) each of those groups speaks a very different language. QA documents, test plans, bugs, and contributions in meetings need to be understood and respected by each of those groups. Know at least enough tech to understand how the devs work and how your system works, and always be eager and open to learn more.

Patient. When it is your job to work with buggy, broken, early versions of software, you will need to be patient. Maybe you need to wait for documents to be ready, maybe you have to help an engineer understand why your bug is an issue, and maybe you need to debug an automated test that you just can't seem to fix. Software testers need to stay calm and rational, and dig through the problem instead of running up against it.

Organized. You are nowhere without organization. You will be responsible for keeping track of your own test cases, test results, bug statuses, and the product schedule. You might have help there, but there is a good chance you are going to have a lot on your radar. Test management programs are a lifesaver, but QAs successfully use Excel spreadsheets and online documents as well.

Self-Disciplined. The great thing about QA is that you often get a lot of freedom. You have schedules to adhere to and functionality to test, but you usually have a lot of leeway in how you go about getting your work done. This is great for people who can stay on track. You can be creative, ask questions, set your own pace, push yourself, and try new things. The challenge is to stay focused and keep your tasks organized and prioritized.

A Multitasker. This goes back to the organization piece. If you can't multitask, software testing is going to be very difficult. You need to juggle automation results, debugging, manual tests, and quick fixes. Some people don't work that way. Some people love to zoom in on one task and carefully complete it, and only it. When those people try to test, they often get distracted by weird corner-case bugs and miss the showstopper issues. Multitasking is essential to keep your testing moving forward, to keep you focused on what you need to do and what the rest of the team is doing, and to keep track of previously filed issues and upcoming features.

Stubborn. If you find a bug, and you know it is a bug, you need to be stubborn enough to push back when it is rejected. Devs can be intimidating: they wrote the code, so of course they know how it should work. But really, that isn't their job; it's yours. You are the one who finds out how the product actually works, so stand behind what you see.

An overachiever. You could stop testing when all your bugs are verified, but if you have time, why not try a little more? Why not ask questions about something that seems to be working strangely, just to confirm that it is okay? Why not actively hunt down bugs instead of passively testing functionality? A lot of testing is going out in search of things: ambiguities, proper behavior, bugs, rules, compatibility and connectivity constraints. You take all your data and roll it up into a picture of what needs to be tested and how urgently. Take the time to make sure you are looking in the right places and asking the right questions, and build your schedule around what you find.