Posts

Oracle deficiency?

We were in the middle of a release. It was late and we had just found a bug, a show stopper for a feature. Ben then said, "I always wonder what separates a good tester from an average tester? Most of the time I feel it's their ability to spot the important bugs at the right time." These words have stuck with me ever since. Then this happened at another site. I observed "attack-name": "HEADER_COUNT_EXCEEDED" in an API response when I used Postman to poke some of the APIs being built. The error was not consistent, but something about it did not feel right. I checked with the team testing the APIs (let's call them team A) and it appears they had never seen it before. So I checked with another team (let's call them team B) who were consuming the APIs but building something different. It was the same response. I was aware that both teams used JMeter to test the APIs, while I was poking them via Postman. Could that be the difference? I built a small collection to iterate the API ca
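For a flavour of the kind of iteration I had in mind, here is a minimal sketch written in Python with requests rather than as a Postman collection; the endpoint, the header names, and the idea of varying the header count are assumptions for illustration only, not the real system under test:

```python
# Hypothetical probe: add an increasing number of harmless custom headers
# to each request and watch for the "HEADER_COUNT_EXCEEDED" response.
# The URL and header names are placeholders, not the actual APIs.
import requests

API_URL = "https://example.test/api/resource"  # placeholder endpoint

for count in range(1, 60):
    headers = {f"X-Probe-{i}": "1" for i in range(count)}
    response = requests.get(API_URL, headers=headers)
    if "HEADER_COUNT_EXCEEDED" in response.text:
        print(f"Rejected once the request carried {count} extra headers")
        break
```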

Exploring Hypothesis with pytest - Part 1

I found the Hypothesis library very intriguing, so I have started exploring it. Here are my notes from the first Hypothesis exploratory session. What is Hypothesis? From their webpage - Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn’t have thought to look for. It is stable, powerful and easy to add to any existing test suite. You can read more about it and its claims here -> https://hypothesis.readthedocs.io/en/latest/index.html I am definitely not a fan of tall claims like "finding edge cases in your code you wouldn’t have thought to look for"! Anyway, I will come to that later. For now I wanted to explore its ability to be added to any existing test suite, so I picked pytest. What is pytest? Again from their webpage - The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries. More her
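To make that concrete, here is a minimal sketch of what a Hypothesis-powered test looks like when dropped into a pytest suite; the run-length encoder under test is just an illustrative example of mine, not code from the session:

```python
# A minimal Hypothesis + pytest sketch. Hypothesis generates many inputs
# (including edge cases such as the empty string) for the decorated test.
from hypothesis import given
from hypothesis import strategies as st


def encode(text):
    """Naive run-length encoding: 'aaab' -> [('a', 3), ('b', 1)]."""
    result = []
    for ch in text:
        if result and result[-1][0] == ch:
            result[-1] = (ch, result[-1][1] + 1)
        else:
            result.append((ch, 1))
    return result


def decode(pairs):
    return "".join(ch * count for ch, count in pairs)


@given(st.text())
def test_decode_inverts_encode(s):
    # Property: decoding an encoded string gives back the original.
    assert decode(encode(s)) == s
```

Running pytest picks this up like any other test; Hypothesis simply supplies the example inputs.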

Using Bug Magnet to test APIs

I get extremely irritated when people ask me which toolset I use to test without trying to understand what I am testing and how I intend to use a specific tool. My choice of tool, and how I use it, very much depends on the testing objective and the context. Below is one such example, when I used Bug Magnet, a Chrome/Firefox extension, to test APIs. Test objective/mission: explore the email field added to the JSON payload. I started the test session with a five-minute brainstorm, for which I mostly use a mind map:
- what regex is used in the code? what inputs can break it?
- what inputs are business critical?
- what response code is returned when it fails? 400? what error message is returned on failure?
- min/max length? multiple email addresses? duplicates? does it matter?
- do we care if it is a valid/fake email address? what about domains such as mailinator?
(Context: we are not going gung-ho on validation, so what could be useful at this moment?) From the brainstorming session I decided
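For a flavour of the checks this kind of session drives, here is a rough pytest + requests sketch along the same lines; the endpoint, payload shape and sample email values are assumptions for illustration only, not the actual system or Bug Magnet's own lists:

```python
# Hypothetical email-field probes in the spirit of Bug Magnet's edge cases.
# The URL, payload and expected status codes are placeholders.
import pytest
import requests

API_URL = "https://example.test/api/users"  # placeholder endpoint

EMAIL_SAMPLES = [
    "plain@example.com",                       # happy path
    "a" * 255 + "@example.com",                # very long local part
    "no-at-sign.example.com",                  # missing @
    "two@@example.com",                        # double @
    "user@mailinator.com",                     # disposable domain
    "first@example.com, second@example.com",   # multiple addresses
    "",                                        # empty string
]


@pytest.mark.parametrize("email", EMAIL_SAMPLES)
def test_email_field(email):
    response = requests.post(API_URL, json={"email": email})
    # Expecting either acceptance or a clear 400; anything else
    # (500s, vague errors) is worth a closer look.
    assert response.status_code in (200, 201, 400), response.text
```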

Coaching by doing

Recently, I was not happy with the bug reports from one of my teams. The titles were generic, there was too little or too much information in the descriptions, logs and responses were not formatted, and the attachments at times made very little sense. My initial thought was to forward 'how to write good bug reports' blog links from the web, but something in me did not want to do it. They explored well and the bugs they found were very good. Also, I did not feel a meeting or a workshop targeting only bug-writing skills would have the right impact on this team. So instead of direct coaching or sharing blog links, I started to reproduce the bugs locally and began editing the bug title/description/code formatting/attachments, etc. in the bug reports. I did not mention anything to the team. After a couple of weeks I started noticing a change. The bug reports got better! The titles were appropriate, descriptions much improved, code well formatted with screenshots, and there were also GIFs in attac

TOO BIG - the mindmap

I have always believed that testers should actively play a part on the left side of the scrum board, preventing defects, and not just on the right side of it where we find them. Gojko Adzic, one of my favourite agilists, introduces the 'TOO BIG' heuristic that can help us with that in his latest post. You can read the full post here: https://gojko.net/2017/01/05/user-stories-too-big.html I have created a mindmap [.PDF | .MUP | .MM] of it for my reference. Hope you find it useful too.

ET. My Way

Here is the link to my talk 'ET. My Way' at the London Tester Gathering.

Using the git 'pull'/'merge' principle for exploratory testing

I am a huge fan of git. I like its speed, ease of branching, offline capability, undo, and many, many more features. But the one I love the most is its ability to bring the team together. I see it as an amazing collaboration tool. The ability to tag others in the team to review a pull request before it is merged into master by the reviewer is so simple yet so powerful. (A quick summary for those new to this approach: devs create a new branch when they start working on a story and continuously update it; testers can pull code off this branch and test it to provide quick feedback; and when the code is ready to be merged, the dev can tag members of the team to review it, and the reviewer can merge the branch to master if he/she is happy with the code.) We have extended this to feature files too. When we can't get hold of our product owners for a three amigos session, we raise a pull request with our scenarios and tag them for review. They add comments. We wou