May 28th, 2008 cschneid
After my half-assed post about RSpec testing the Sinatra framework got several Google hits, I figured I’d look into it a little further. Here’s what I found.
First, a very simple Sinatra application.
require 'sinatra'

get '/' do
  "Hello"
end

get '/:thingy' do
  "Hello #{params[:thingy]}"
end
Now a test script.
require 'rubygems'
require 'spec/interop/test'   # RSpec's Test::Unit interop
require 'sinatra/test/unit'   # pulls in sinatra/test/methods (get_it, post_it, ...)
# (assumes the app above is in the same file or already required)

describe 'Hello World' do
  specify "should render hello at /" do
    get_it '/'
    @response.body.should == "Hello"
  end

  specify "should render argument at /anything" do
    get_it '/foo'
    @response.body.should == "Hello foo"
    get_it '/bar'
    @response.body.should == "Hello bar"
  end

  specify "should not respond to nested paths" do
    get_it '/foo/bar'
    @response.status.should == 404
  end
end
What I did - basically, all that was needed was to bypass the ‘sinatra/test/spec’ file, since that file is what pulls in the test/spec library we want to avoid.
Instead, we require RSpec’s interop library and then the ‘sinatra/test/unit’ file, which sets up a reasonable environment to start testing, deferring to ‘sinatra/test/methods’ to define the helpers like ‘get_it’, ‘post_it’ and so on. The bonus: you skip the require of test/spec entirely and stick to pure RSpec.
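For a taste of the other helpers, here’s a rough sketch using post_it against a made-up /greet route; I’m assuming the path-plus-params signature here, so treat it as a sketch rather than gospel.

# Hypothetical route in the app:
#   post '/greet' do
#     "Hello #{params[:name]}"
#   end

describe 'Hello World via POST' do
  specify "should greet via POST" do
    post_it '/greet', :name => 'foo'   # assumed signature: post_it(path, params)
    @response.body.should == "Hello foo"
  end
end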
Where to go from here - I would like some input on what other helpers would be cool or useful. I haven’t worked with RSpec all that much (well… at all), so I don’t know what’s helpful and what’s not. Leave comments, or hit me up on IRC.
May 17th, 2008 cschneid
As a developer who moved over from a testing position, I have the background to understand just how hard testing really is. So many organizations force developers to do double duty as testers; hopefully this article can help streamline that process by focusing attention on what really matters. Testing is a large topic, so I’ll dive into each of these areas in more detail later.
Why Test?
Because humans aren’t perfect, not even close. There are so many places in the software development lifecycle where things can go wrong: from bad or underspecified requirements, to misunderstandings of the business domain by the developer, to off-by-one and null-pointer bugs in the code, to poor performance.
There is no way to ever catch all of these potential issues, since the number of possible testing permutations grows so quickly. What you can do as a developer turned tester is to use your analytic frame of mind to hunt down and kill bugs as quickly as possible.
What to Test?
Everything! Well, you don’t have time for that, so how about the most critical stuff?
My general approach to planning a project’s testing is:
1. Run through everything once. Walk the “happy path” of the code and verify that it at least does a close approximation of what it should. This step should probably happen during the initial building of the code.
2. Rank your components or areas by criticality and perceived bugginess. Step 1 will give you an idea that certain areas are buggier than others. By combining the impact of failure for an area with its likelihood of failure, you get the risk of each area (impact * likelihood = risk); a quick sketch of this ranking follows the list.
3. Test as much as you can in the time you have, starting with the riskiest areas.
4. Be as honest as possible with your boss about what’s good and what’s not. You are there to find bugs; they are there to decide what risk is acceptable.
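To make step 2 concrete, here’s a quick Ruby sketch of the ranking. The areas and the 1-5 scores are invented for illustration; only the impact * likelihood = risk formula is the real point.

# Hypothetical areas, each scored 1-5 for impact and likelihood of failure.
areas = {
  'checkout'     => { :impact => 5, :likelihood => 3 },
  'search'       => { :impact => 3, :likelihood => 4 },
  'profile page' => { :impact => 2, :likelihood => 2 },
}

# risk = impact * likelihood; sort highest risk first
ranked = areas.sort_by { |name, s| -(s[:impact] * s[:likelihood]) }
ranked.each do |name, s|
  puts "#{name}: risk #{s[:impact] * s[:likelihood]}"
end
# => checkout: risk 15, search: risk 12, profile page: risk 4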
Pessimism and Optimism
- Positive testing
- Test the ideal conditions: the user doesn’t do anything weird, and the input is correct. Verify the code returns correct values. Use your knowledge of what the code is doing to narrow down unique cases. Does the code handle Japan queries differently than US queries? Test both. US and Europe follow the same code path? Test one, and leave the other as a TODO item.
- Negative testing
- Now use your imagination. What happens if the user clicks stuff out of order? Enters a 0-length string? A 1000-character one? An email with two “@” signs? Cancels a process halfway through: does the program recover? Try unplugging your network cable halfway through an operation: does the program hang? Lots of weird cases can happen; try to reproduce the most likely ones (RAM, disk, network, bad input). A sketch of a few of these as specs follows this list.
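To give a taste of what a few of those negative cases look like as specs, here’s a minimal RSpec sketch against a hypothetical validate_email function; both the function and its rules are made up for illustration, not a real library’s behavior.

# Hypothetical validator: true only for a single well-formed address.
def validate_email(address)
  !!(address =~ /\A[^@\s]+@[^@\s]+\z/)
end

describe 'validate_email (negative cases)' do
  specify "rejects a 0-length string" do
    validate_email("").should == false
  end

  specify "rejects an address with two @ signs" do
    validate_email("foo@bar@baz.com").should == false
  end

  specify "still accepts a plain valid address" do
    validate_email("foo@example.com").should == true
  end
end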
Automated or Manual
Both! Use each where it makes sense. The cost-benefit curves over time for each type of test look entirely different. Automated tests are expensive up front but almost free to run over and over. Manual tests take about the same amount of work every time they’re run, no matter how many times they’ve been run before.
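To put made-up numbers on that: if an automated test takes two hours to write and essentially zero effort to rerun, while running the same test by hand takes fifteen minutes each time, the automation pays for itself after the eighth run, and every run after that is free.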
Remember that many types of tests are simply impossible to run without automation. Things like load testing, stress testing, and memory-leak testing are so tedious or so large that there’s no way to execute them manually.
Manual tests are great because humans never follow a script perfectly: by veering just slightly off course during a test run, they find errors that rigid automated tests never will. Automated tests suffer from running over the same path again and again. In addition, manual testing will find issues the automated tests weren’t even designed to look for, things like “clicking that button causes the one over there to change text… that’s not right.”
- Black Box
- Testing the software from the outside, using only the user interface and external API of the program.
- White Box
- Testing the program from the inside. You see the code, the branches, and the loops, giving you a better chance of forcing out bugs. Tools like code coverage come in at this point.
- Grey Box
- Now the guy is making up terms… I guess if I were forced to define it, it would be using white-box code analysis to determine useful black-box tests.
- Unit Testing
- Testing one method call, or one function, at a time. Very low-level testing, typically done by the developers themselves. This is very good at catching typos, off-by-one mistakes, and coding errors. It is almost worthless at catching assumption or understanding errors. (A small example follows this list.)
- Component Testing
- I made up this term, since I haven’t heard it put nicely elsewhere. This is just testing a single standalone module or component of the larger software system.
- Integration Testing
- Combine all the components, and you have the whole system. Test the integration between components; look for flaws at the junctions between them.
- User Acceptance Testing
- Show the technically clean application to your final users, have them go through the original set of requirements, and verify that each one is met. This type of testing may happen several times as major chunks of the application get completed.
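To illustrate the unit-testing entry above, here’s a minimal Ruby sketch. The sum_upto function is made up for illustration, seeded with a classic off-by-one bug of the exact kind unit tests are good at catching.

# Made-up function: should sum the integers 1 through n.
def sum_upto(n)
  total = 0
  (1...n).each { |i| total += i }   # bug: '...' excludes n itself; should be '..'
  total
end

describe 'sum_upto' do
  specify "sums 1 through 5" do
    sum_upto(5).should == 15   # fails: returns 10, exposing the off-by-one
  end
end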