Unit testing is a central part of modern software testing and Test-Driven Development (TDD). Every language has some kind of unit testing support. Some are separate libraries or frameworks (such as JUnit or pytest), while others are integrated into the language or into an IDE (as with Microsoft Visual Studio). Many of the concepts and tools are similar across languages, though different language capabilities (objects, exceptions, etc.) result in different unit testing capabilities.
This exercise is written in Python using pytest, but you are free to use any language or test suite you prefer. These exercises as written should be doable on hopper.slu.edu without any extra setup on your part.
In this exercise you will practice writing unit tests with pytest, and then use test doubles (dummies, stubs, spies, mocks, and fakes) to test code that depends on other code.
Be warned that pytest by default will recursively search directories visible from the current directory when it looks for tests. Running the following exercises from your default home directory may take a long time or could result in unwanted behavior.
First, let's get familiar with pytest. Imagine you are writing a software library, meaning you are authoring a collection of functions, but you are not trying to write a finished executable. If you're like me, you traditionally test your software by running it in an executable and printing lots of input/output pairs. This works, but unit testing with pytest is a better solution: pytest lets you run tests independent of any particular finished program, and it assesses correctness for you. Moreover, a good set of unit tests then becomes a regression test suite: if you ever refactor your code you can re-run your unit tests and verify that the overall behavior has not changed.
Start by creating a file called myLibrary.py and writing a trivial function inside. For example, the contents of your file might be:
def myFunction(x):
    return x + 1
Remember: correct indentation in Python is not optional. The function definition should start at the beginning of a line and the return statement should be indented underneath it (a tab or four spaces).
Test your function by hand. Start the Python interpreter by typing "python" at the terminal, then type "import myLibrary", and finally you can run it with (for example) "myLibrary.myFunction(5)". In total, your output might look something like this:
[username@hopper directory]$python
Python 2.7.5 (default, Aug 7 2019, 00:51:29)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import myLibrary
>>> myLibrary.myFunction(5)
6
>>>
Make sure your code runs as expected. You will have to press CTRL-D to quit the Python interpreter.
Now we will use pytest to create a test suite for this function. There are several ways you can specify tests to be run, but for now we will separate all our tests into another file. However, be warned that there are some considerations when constructing test suites for large or complicated projects. See the following documentation on pytest test discovery for full details.
Create a file called test_example.py. The "test_" prefix tells pytest that this file contains unit tests. Then, we can create a unit test by defining a function within that file that also has the "test_" prefix. For example, we can create a unit test assertion with the following:
import myLibrary

def test_case1():
    assert myLibrary.myFunction(2) == 3
Then, since both the file and the test within the file should be automatically discoverable by pytest, we can run the unit test just by typing "pytest" at the terminal. Do so now, and the test should run successfully.
Next, let's see what a failing test looks like. Add the following function to test_example.py:
def test_case2():
    assert myLibrary.myFunction(5) == 7
Since both test cases are prefixed with "test_", we can execute the entire test suite just by typing "pytest" at the terminal.
[username@hopper directory]$pytest
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.1.2, py-1.8.0, pluggy-0.13.0
rootdir: /student/username/directory
collected 2 items

test_example.py .F                                                       [100%]

=================================== FAILURES ===================================
__________________________________ test_case2 __________________________________

    def test_case2():
>       assert myLibrary.myFunction(5) == 7
E       assert 6 == 7
E        +  where 6 = <function myFunction at 0x7f8779561e18>(5)
E        +    where <function myFunction at 0x7f8779561e18> = myLibrary.myFunction

test_example.py:7: AssertionError
========================= 1 failed, 1 passed in 0.41s ==========================
[username@hopper directory]$
pytest is trying to be as helpful as possible. The failure message indicates the test case that failed as well as the values in the assertion, and even gives us a trace of where the value "6" above came from.
Go ahead and fix the failing test, and confirm that you can run pytest without errors.
The file myLibrary.py can have more than one function, and the file test_example.py can have more unit tests. Write a few functions and try out various logical assertions (==, !=, <, >, etc.).
Why should a unit test exercise one piece of software in isolation, rather than together with the other software it depends on? First, that other software may not be written yet, especially if you are doing TDD. Second, there's no guarantee that other software is correct. Suppose that software A depends on software B to function, and software A's tests are failing. Where does the error come from? At first glance it's not obvious whether the error stems from A, or from B, or from the interface between them. However, if software B is actually a fake test double, then you can rule it out as a source of error. Moreover, you can do system integration tests later whose sole purpose is to test A with B, so it's just fine if the unit tests for A don't really test A with B.
The test doubles we will look at are called dummies, stubs, spies, mocks, and fakes. The first four are related to each other, and each builds upon the others (in the order given above). Using the simplest possible test double makes your tests easier to write, and makes it obvious exactly what is and is not tested.
We saw an article about these test doubles earlier, The Little Mocker, which describes each of them in a simple, conversational style. Go ahead and read it, if you have not already.
The rest of these exercises use car.py, a simple class for representing your everyday automobile. You can download car.py to the current directory of your Linux terminal with the following command:
wget https://cs.slu.edu/~dferry/courses/csci5030/notes/car.py
Or, you can access it directly with this link. Do so, and take a moment to read over the code. Hopefully nothing there is terribly surprising.
Suppose we want to assert that a newly constructed car has traveled a distance of zero:

assert myCarInstance.distance == 0
Easy to say, but we can't actually run this test yet. Why? First we need to call the constructor for the car class, and the constructor expects an engine object and a fuel_tank object. We don't have those objects yet... so what are we to do? We could give up on testing car until we actually have implementations for engine and fuel_tank, but that's a bad solution. This is exactly the situation for a test double, and in particular we need a dummy double.
A dummy double is used whenever we need to pass an argument, but we know that the argument is never actually used. That's the case here. If you look at the code for car.py you can see that the distance attribute is set in the constructor, and the engine and fuel_tank objects are not actually used.
Try to write a unit test using dummy objects. If you get stumped, or when you're done, you can see my solution.
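As a rough sketch (not necessarily identical to the linked solution, and assuming car.py defines a class named car whose constructor takes an engine and a fuel_tank, in that order), such a test might look like:

import car

class DummyEngine:
    pass          # never used, so an empty class is enough

class DummyFuelTank:
    pass          # never used either

def test_new_car_distance_is_zero():
    myCarInstance = car.car(DummyEngine(), DummyFuelTank())
    assert myCarInstance.distance == 0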
However, a second point is that we don't actually even need dummy classes to begin with, but only because we're using Python. Python is a very flexible, dynamically typed language, so it doesn't really care what we pass to the car constructor... we can pass anything we want and Python will just throw an error if we try to use it incorrectly. However, since engine and fuel_tank are never used, they can't be used incorrectly either. I would still suggest the first solution because it's only a few extra keystrokes and it makes it obvious what you're doing, but you could have gotten away with a solution along these lines.
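For example (again assuming the constructor is car(engine, fuel_tank) and that it merely stores whatever it is given), the shortcut might be as simple as:

import car

def test_new_car_distance_is_zero():
    # None works as a placeholder because the constructor never
    # actually uses the engine or fuel_tank arguments.
    myCarInstance = car.car(None, None)
    assert myCarInstance.distance == 0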
In a statically typed language, such as Java, you'd always have to create a proper dummy object.
Next, let's test the computeRange() function. Now mere dummy objects are no longer enough, because we need to access the fuel_tank.size and engine.MPG attributes. However, the actual values of those attributes can be chosen by you, the tester. This is the purpose of a stub test double.
Create stub test doubles for the engine and fuel_tank classes by giving them the appropriate attributes. Then, write a unit test or two for the computeRange() function. My solution is here when you're done.
In particular, note that an empty class is no longer acceptable for testing this new function. If you try to use the dummy classes from before, you'll get errors about accessing the size and MPG attributes.
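As a sketch, a stub-based test might look like the following. The expected value assumes computeRange() simply multiplies engine.MPG by fuel_tank.size; check car.py for the actual formula and adjust accordingly.

import car

class StubEngine:
    MPG = 30            # value chosen by the tester

class StubFuelTank:
    size = 10           # gallons, chosen by the tester

def test_compute_range():
    myCarInstance = car.car(StubEngine(), StubFuelTank())
    # Assumes range = MPG * tank size (30 * 10 = 300 miles).
    assert myCarInstance.computeRange() == 300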
Next, consider the fillGasTank() function. This is a difficult function to test, because all that happens is that we call a method in the fuel_tank class. From the point of view of the person testing car.py, the fillGasTank() function is essentially a black box. All we really can do is verify that fuel_tank.refill() is called. Hence, we use a spy test double to verify exactly that.
Now, we could implement our own spy similar to what is shown in The Little Mocker, but Python includes support for creating spies and other mock objects in the unittest module. This module is an alternative to pytest in many respects, and while pytest is perhaps more intuitive than unittest, the unittest module includes some indispensable functionality that is not provided by pytest. One of these features is the ability to create and use mock objects.
In particular, you can import the unittest.mock module; take a brief look at the documentation here. The basic workflow is to create an instance of the class you want, and then create mock methods using the unittest.mock module. Then, the module automatically logs if and how the mocked methods are called, and allows you to use a set of assert statements to confirm the behavior.
You can see my implementation of a spy object here. If you run this test case as-is, you should get an assertion error at line 19 because the call to fillGasTank() is commented out. Once you uncomment that line, the test case should run successfully.
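As a rough sketch of the same idea (not necessarily identical to the linked solution, and still assuming the car class described above), a spy test might look like:

import unittest.mock
import car

class StubEngine:
    MPG = 30

class StubFuelTank:
    size = 10

def test_fill_gas_tank_calls_refill():
    tank = StubFuelTank()
    # Replace refill() with a Mock so every call to it is recorded.
    tank.refill = unittest.mock.Mock()
    myCarInstance = car.car(StubEngine(), tank)
    myCarInstance.fillGasTank()
    tank.refill.assert_called_once()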
Next, let's spy on fuel_tank.subtract() in the drive() function. One feature of the unittest.mock module is that you can spy on what specific values a function is called with. Use the documentation above to write a set of stubs and spies that verifies the value passed to fuel_tank.subtract(). In particular, if miles = 60 and engine.MPG = 30, then the value passed to fuel_tank.subtract() should be 2.
Make sure that your test passes, and then fails if you change some of the numbers involved. My solution is here.
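A sketch of one way to do this (assuming drive() divides miles by engine.MPG and hands the result to fuel_tank.subtract()):

import unittest.mock
import car

class StubEngine:
    MPG = 30

class StubFuelTank:
    size = 10

def test_drive_subtracts_correct_fuel():
    tank = StubFuelTank()
    tank.subtract = unittest.mock.Mock()
    myCarInstance = car.car(StubEngine(), tank)
    myCarInstance.drive(60)
    # 60 miles at 30 MPG should consume 2 gallons.
    tank.subtract.assert_called_once_with(2)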
Note the distinction here: our earlier spy simply verified that fuel_tank.refill() was called. That spy looked at actions happening in the system, but it was oblivious to the actual behavior of the system. This last exercise created a mock object that expects a certain behavior... given a certain initial state (attribute values) and an input (drive(60)), the mock object knew that fuel_tank.subtract() should be called with the value 2.
The last method in car.py computes the total weight of the vehicle depending on how much gas is in the fuel tank. Suppose we want to validate that driving our vehicle does in fact consume gas, so that it weighs less after calling the drive() function than it did before. To do so we need some kind of logic implementing the computeWeight() function. We don't necessarily need the full or completely accurate implementation of the fuel_tank class; we just need something approximate. We need a fake.
Write a fake fuel_tank that includes an implementation of the subtract() and computeWeight() functions. Then, verify that the weight after driving is less than the weight before driving. You can see my solution here. If you modify the drive(60) line to drive a negative number of miles, then you can induce a test failure to prove to yourself that the test works. (And that we can magically grow gasoline just by driving in reverse. I guess we should have had a unit test for that, huh?)
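If you want a starting point, here is one possible shape for the fake. It assumes the car's total-weight method is named computeWeight() and delegates to fuel_tank.computeWeight(); the 10-gallon capacity and 6-pounds-per-gallon figure are arbitrary choices, not values taken from car.py.

import car

class StubEngine:
    MPG = 30

class FakeFuelTank:
    # A working but deliberately simplified fuel tank.
    def __init__(self):
        self.size = 10           # gallons (arbitrary)
        self.level = 10          # start full

    def subtract(self, gallons):
        self.level -= gallons

    def computeWeight(self):
        return self.level * 6    # roughly 6 pounds per gallon

def test_driving_reduces_weight():
    myCarInstance = car.car(StubEngine(), FakeFuelTank())
    weight_before = myCarInstance.computeWeight()
    myCarInstance.drive(60)
    assert myCarInstance.computeWeight() < weight_before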
Finally, practice TDD from scratch. Create a file called shapes.py. Then, create a class called rectangle. The constructor should take two arguments: the length and the width of the rectangle. You should implement functions called computeArea() and computePerimeter(). Develop this code using TDD. Remember that writing good test cases in TDD is tantamount to requirements analysis in traditional software development. Make sure to consider edge cases, such as what happens if a user passes negative numbers for length or width (or both), or if they pass non-numeric data such as None or a string. What happens if the user passes zeros to your constructor: does the code behave how you'd like, or were you surprised?
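One possible opening move for the TDD cycle, in a file such as test_shapes.py (the edge-case behavior asserted here is an assumption; deciding the actual requirements is part of the exercise):

import pytest
import shapes

def test_area():
    assert shapes.rectangle(3, 4).computeArea() == 12

def test_perimeter():
    assert shapes.rectangle(3, 4).computePerimeter() == 14

def test_negative_length_rejected():
    # Assumed requirement: negative dimensions should raise ValueError.
    with pytest.raises(ValueError):
        shapes.rectangle(-1, 4)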