TestToTester

Exploring Hypothesis with pytest - Part 1

I found the Hypothesis library very intriguing, so I have started exploring it. Here are my notes from the first exploratory session.

What is Hypothesis?
From their webpage - Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn’t have thought to look for. It is stable, powerful and easy to add to any existing test suite.
You can read more about it and its claims here -> https://hypothesis.readthedocs.io/en/latest/index.html

I am definitely not a fan of tall claims like "finding edge cases in your code you wouldn’t have thought to look for"! Anyway, I will come to that later on.

For now, I wanted to explore its claim of being easy to add to any existing test suite. So I picked pytest.

What is pytest?
Again from their webpage - The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
More here -> https://docs.pytest.org/en/latest/

Now I wanted a program to explore this unit testing library with. So I picked the first Practice Python exercise from https://www.practicepython.org/exercise/2014/01/29/01-character-input.html

Here is my program for the exercise:
#Create a program that asks the user to enter their name and their age. Print out a message addressed to them that tells them the year that they will turn 100 years old.

Filename - whenYouAre100.py
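The program itself appears as a screenshot in the original post, so here is a minimal sketch of what it plausibly looks like. The Person class and year_when_100 method names are my assumptions (the self.age reference later in the post suggests a class), and the current year of 2019 is inferred from the expected values further down.

# whenYouAre100.py - a sketch, not the original code
CURRENT_YEAR = 2019  # inferred from "pass 100 -> expected 2019" below

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = int(age)

    def year_when_100(self):
        # the year this person turns 100 years old
        return CURRENT_YEAR + (100 - self.age)

if __name__ == "__main__":
    name = input("Enter your name: ")
    age = input("Enter your age: ")
    person = Person(name, age)
    print("Hi {}, you will turn 100 in the year {}.".format(name, person.year_when_100()))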

Then I started adding some unit tests in pytest for the code. I started with the below checks
  • What happens if I pass 100? expected = 2019
  • What happens if I pass 0? expected = 2119
  • What happens if I pass 101? expected = 2018
  • What happens if I pass a random number like 50? expected = 2069

Filename - test_whenYouAre100.py
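These tests are also a screenshot in the original; assuming the sketched Person class above, they would look roughly like this:

# test_whenYouAre100.py - a sketch of the four example-based checks
from whenYouAre100 import Person

def test_age_100():
    assert Person("a", 100).year_when_100() == 2019

def test_age_0():
    assert Person("a", 0).year_when_100() == 2119

def test_age_101():
    assert Person("a", 101).year_when_100() == 2018

def test_age_50():
    assert Person("a", 50).year_when_100() == 2069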

On running the tests (command: pytest -v test_whenYouAre100.py), all tests passed.


Time to bring in Hypothesis

Instead of all the above tests, I defined just this one test:
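The test is another screenshot in the original; here is a sketch of what it likely looked like, with the strategy bounds taken from the statistics output described below:

from hypothesis import given
import hypothesis.strategies as st
from whenYouAre100 import Person

@given(st.integers(min_value=0, max_value=101))
def test_year_when_100(i):
    assert Person("a", i).year_when_100() == 2019 + (100 - i)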


I then ran the test (command: pytest --hypothesis-show-statistics test_whenYouAre100.py).



It appears that Hypothesis ran 100 examples in the range min_value=0 to max_value=101.
I was not sure exactly which examples were run. I tried to get it to log all the examples (Hypothesis stores examples in what it calls the database), but for now I couldn't figure it out. So instead I started testing it.

I modified the function in the above code to plant a bug:
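The modified code is again a screenshot; this is only a guess at the kind of planted bug that fits the behaviour described below, where the four example values still pass but Hypothesis fails at exactly 25:

def year_when_100(self):
    if self.age == 25:  # hypothetical planted bug: one special-cased value
        return 2020     # wrong answer for just this one age
    return CURRENT_YEAR + (100 - self.age)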


With this change in the code, the four unit tests above (the ones with values 100, 101, 0 and 50) will not catch the bug. Hopefully Hypothesis can catch it?

So I ran the test written using Hypothesis again and boom. The test failed at the value i = 25.



I then changed the self.age check to 99 in the program, and again the result was a fail.
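Continuing the guess above, that change would simply move the special case:

    if self.age == 99:  # hypothetical planted bug, now at 99
        return 2020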



That was pretty cool. 

So, from this first bit of exploring, Hypothesis does look pretty handy: it is slick and easy to integrate, I like how it prints failing examples, and I can let a library pick example values for me.

But I absolutely hate the tall claims. Words like "finding edge cases in your code you wouldn’t have thought to look for". Really?

If I were to run with only st.integers() - without the min_value/max_value range defined - I might have had to run this test n times to get it to catch the bug!
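That unbounded version would look something like this (again a sketch, as the original shows a screenshot):

@given(st.integers())  # no bounds: Hypothesis has the whole integer space to search
def test_year_when_100(i):
    assert Person("a", i).year_when_100() == 2019 + (100 - i)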

Saying that, I do like the library for now. So stay tuned for more notes on it.



Using Bug Magnet to test APIs

I get extremely irritated when people ask me what toolset I use for testing, without trying to understand or learn what I am testing and how I intend to use a specific tool.
My choice of tool and how I use it very much depends on the testing objective and the context.

Below is one such example, from when I used Bug Magnet, a Chrome/Firefox extension, to test APIs.

Test objective/mission: Explore the email field added to the JSON payload.

I started off the test session with 5 minutes of brainstorming. I mostly use a mind map for this:

  • what regex is used in the code?
  • what inputs can break it?
  • what inputs are business critical?
  • response code when it fails? 400?
  • error message returned on failure?
  • min/max length?
  • multiple email addresses?
  • duplicates? does it matter?
  • do we care if it is a valid/fake email address?
  • what about domains such as mailinator?
  • (Context) Given we are not going gung-ho on validation, what could be useful at this moment?

From the brainstorming session I decided to break my test session into two.

  1. I wanted to first focus on just the validation around the regex, and
  2. have a follow-up session to gain information on the questions outside validation.


So I refactored the test objective to: Test the email field in the JSON payload for validation.

I started off the session by defining my test data:

  • Business critical test data: the email addresses that should definitely be accepted.
  • Pairing with the dev helped me learn the regex used to validate email addresses. This helped me add more tests around the boundaries of the regex.
  • Also, I was aware of the amazing list of valid/invalid email addresses in Bug Magnet and wanted to make use of it. I had previously used it for testing a UI functionality.


Yes, Bug Magnet, according to its webpage, is an exploratory testing assistant for Chrome and Firefox. But I wanted to make use of its valid/invalid email addresses to validate a field in a JSON payload.

Navigating to the Bug Magnet install folder revealed the below list in config.json:

"E-mail addresses": {
    "Valid" :{
      "Simple": "email@domain.com",
      "Dot in the address": "firstname.lastname@domain.com",
      "Subdomain": "email@subdomain.domain.com",
      "Plus in address": "firstname+lastname@domain.com",
      "Numeric domain": "email@123.123.123.123",
      "Square bracket around IP address": "email@[123.123.123.123]",
      "Unnecessary quotes around address": "\"email\"@domain.com",
      "Necessary quotes around address": "\"email..email\"@domain.com",
      "Numeric address": "1234567890@domain.com",
      "Dash in domain": "email@domain-one.com",
      "Underscore": "_______@domain.com",
      ">3 char TLD": "email@domain.name",
      "2 char TLD": "email@domain.co.jp",
      "Dash in address": "firstname-lastname@domain.com",
      "Intranet": "name@localhost",
      "Non-ascii Email" : "nathan@学生优惠.com"
    },
    "Invalid": {
      "No @ or domain": "plainaddress",
      "Missing @": "email.domain.com",
      "Missing address": "@domain.com",
      "Garbage": "#@%^%#$@#$@#.com",
      "Copy/paste from address book with name": "Joe Smith ",
      "Superfluous text": "email@domain.com (Joe Smith)",
      "Two @": "email@domain@domain.com",
      "Leading dot in address": ".email@domain.com",
      "Trailing dot in address": "email.@domain.com",
      "Multiple dots": "email..email@domain.com",
      "Unicode chars in address": "あいうえお@domain.com",
      "Leading dash in domain": "email@-domain.com",
      "Leading dot in domain": "email@.domain.com",
      "Invalid IP format": "email@111.222.333.44444",
      "Multiple dots in the domain": "email@domain..com"
    }
  },

I then added the email addresses (business critical and regex boundaries) to the above list and converted it into a CSV file. A small sample below

email_addresses
firstname+lastname@domain.com
email@123.123.123.123
xxx@yyy.com

The next step was to turn the email field in the JSON payload into a variable and run it via a Postman collection.

{
    "docEmail":"{{email_addresses}}"
}
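Postman's Collection Runner substitutes each CSV row into the {{email_addresses}} variable. For anyone unfamiliar with that flow, the equivalent loop in plain Python would look roughly like this; the endpoint URL here is a placeholder, and docEmail matches the payload above:

import csv
import requests

with open("email_addresses.csv") as f:
    for row in csv.DictReader(f):
        # substitute each CSV value into the payload, as the Collection Runner does
        payload = {"docEmail": row["email_addresses"]}
        response = requests.post("https://example.test/api/docs", json=payload)
        # log the status codes to spot 400-vs-500 inconsistencies
        print(row["email_addresses"], response.status_code)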

The test revealed many inconsistencies: some values were validated and returned a 400 as expected, but some just threw a 500. The generic error message was not helpful either.

--------------------------------------------end of session-------------------------------------------

So, yes, a tester could give you any of the below tools, plus more, if you ask them just for their toolset:

- Mindmup
- Bug Magnet
- Big list of naughty strings >> https://github.com/minimaxir/big-list-of-naughty-strings
- Atom
- Postman
- Fiddler
- Insomnia
- JMeter
- Charles Proxy
- pyresttest
- dev tools
- newman, etc.

but you will never learn how they use them!
