Friday, June 20, 2014

Agile in a Flash 52

TDD Process Smells
Agile in a Flash by Jeff Langr and Tim Ottinger (card #52)

> Not failing first
> No green bar in last ten minutes
> Skipping the refactoring step
> Skipping easy tests
> Skipping hard tests
> Not writing the test first
> Setting code coverage targets

--

Not failing first. It’s tempting to go right for green (passing), but an initial red (failing) test tells you that the test works.
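
For instance, here is a minimal sketch of failing first (the LateFineCalculator class is hypothetical, not from the card): the first run is red because the production method is not yet implemented, which proves the test actually exercises the code.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LateFineTest {
    // Deliberately unimplemented production code: the test below stays red
    // until fineFor() is written, proving the test can actually fail.
    static class LateFineCalculator {
        int fineFor(int daysLate) {
            return 0; // not yet implemented
        }
    }

    @Test
    public void chargesTenPerLateDay() {
        LateFineCalculator calculator = new LateFineCalculator();

        assertEquals(30, calculator.fineFor(3)); // red first, then implement
    }
}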

No green bar in last ten minutes. You’re not relying on your tests anymore if it’s been a while since you’ve seen green. Is it ever a good time to not know whether you’ve broken something? Take smaller steps.

Skipping the refactoring step. It’s a truly bad idea to make a mess “for now” and promise to clean it up in the mythical “later” (Robert Martin has addressed this fallacy in his many keynote addresses).

Skipping easy tests. You won’t save much time by not testing simple code, and you’ll possibly create silly mistakes that go undetected in the code.

Skipping hard tests. Difficulty in testing drives us to reconsider and improve our design (to something that is also easier to test!).

Not writing the test first. TDD requires you to drive your code changes from the outside. Writing tests later leaves you with code that is harder to test and harder to refactor into something testable.

Setting code coverage targets. Coverage targets are a well-understood measurement failure: people deliver the number they are measured on (roughly, “you get what you measure”), so targets invite artificial inflation (gaming) of the metric rather than better tests.

--

“I have not failed, I’ve just found 1000 ways that won’t work.” ― Thomas Edison

~ Remember: failure is necessary before people can understand success.

Agile in a Flash 51

Test Double Troubles
Agile in a Flash by Jeff Langr and Tim Ottinger (card #51)

> Inhibited refactoring
> Tool complexity
> Passing tests that indicate nothing
> Mocking the SUT
> Low readability
> Ambitious mock implementations
> Vendor dependency

--

Inhibited refactoring. Test doubles exploit the linkages between classes, so refactoring class relationships may cause mock-based tests to fail.

Tool complexity. Mock tools have extensive APIs and can be idiosyncratic. It’s costly to understand both the tool and the code it produces.

Passing tests that indicate nothing. Fakes and stubs may not act quite like the classes they replace, leading to tests that pass and code that fails.

Mocking the SUT. Complex setup can bury the embarrassing fact that code we need to be real has been replaced with mock code, invalidating the test.
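
A hand-rolled sketch of this smell, reusing the Patron idea from card #46 below (the class here is hypothetical): the test double overrides the very method whose behavior we meant to verify.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MockingTheSutTest {
    static class Patron {
        private int balance;
        void applyFine(int amount) { balance += amount; }
        int fineBalance() { return balance; }
    }

    @Test
    public void looksGreenButProvesNothing() {
        // The "double" replaces the observation point of the SUT itself...
        Patron patron = new Patron() {
            @Override
            int fineBalance() { return 10; }
        };

        patron.applyFine(10);

        // ...so this assertion passes even if applyFine() is completely broken.
        assertEquals(10, patron.fineBalance());
    }
}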

Low readability. Mock setup can be dense and idiomatic, making it difficult to see what’s really being tested.

Ambitious mock implementations. A fake is an alternative implementation of the class it replaces and can have bugs of its own. Keep your test doubles small and purposed to a single test or test fixture.

Vendor dependency. All third-party tools eventually fall out of favor, replaced with something newer and better. Don’t get stuck with yesterday’s tool.

To remedy most of these challenges, keep your use of mocks isolated and minimal. Refactor tests to emphasize abstraction and eliminate redundant mock detail.

--

“If you could kick the person in the pants responsible for most of your trouble, you wouldn’t sit for a month.” ― Theodore Roosevelt

Agile in a Flash 50

Break Unit Test Writer’s Block
Agile in a Flash by Jeff Langr and Tim Ottinger (card #50)

> Test that you can call the method at all
> Pick the most interesting functionality
> Pick the easiest functionality
> Write an assertion first
> Rename or refactor something
> Switch programming partners
> Reread the tests and code

--

Software authors can get writer’s block, too. A good trick for beating it is to do some useful programming activity to get the creative juices flowing.

In legacy code, calling a method at all can require considerable setup, mocking, and dependency breaking. These activities immerse you in the code and awaken your urge to clean up the code base.

Let your curiosity guide you. Test the most interesting bit of the functionality you’re writing. You’ll have more energy to do work that interests you most.

Alternatively, pick the easiest functionality so that you experience some success and build momentum. “Simple is as simple does.”

When writing a test the usual way (beginning to end) isn’t working, try writing the assertion first and working backward from there.
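
A small sketch of working backward from the assertion (the Checkout class and its discount rule are hypothetical):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AssertionFirstTest {
    @Test
    public void appliesTenPercentDiscountAtOneHundredDollars() {
        // Step 1: write the assertion first to state the outcome you want,
        // then work backward to discover what "checkout" must be and do.
        Checkout checkout = new Checkout();
        checkout.scan(100.00);

        assertEquals(90.00, checkout.total(), 0.001);
    }

    // Hypothetical class, sketched only far enough to make the test pass.
    static class Checkout {
        private double subtotal;
        void scan(double price) { subtotal += price; }
        double total() { return subtotal >= 100.0 ? subtotal * 0.9 : subtotal; }
    }
}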

If you rename or refactor something, you’ll have your head wrapped around the problem more and end up with a code improvement to boot!

If the flow of ideas is getting stale, “a change is as good as a break,” so switch programming partners.

You may be stuck because you don’t really understand the code you need to change. Help your brain: reread the tests and code that already exist.

--

“Don’t waste time waiting for inspiration. Begin, and inspiration will find you.” ― H. Jackson Brown Jr.

Agile in a Flash 49

Field Guide to Mocks
Agile in a Flash by Jeff Langr and Tim Ottinger (card #49)

> Test double: Any object that emulates another
> Stub: Returns a fixed value to the SUT
> Fake: Emulates a production object
> Mock: Self-verifies
> Partial mock: Combines production and mock methods
> Spy: Records messages for later verification

--

Test doubles are emulation objects you build to simplify testing. Surrounding the System Under Test (SUT) with test doubles allows you to exercise and observe it in isolation. Otherwise, accessing databases, networks, external hardware, or other subsystems can slow or break the test run and cause false positives. Stubs, fakes, mocks (partial or not), and spies are nuanced kinds of test doubles. This field guide, based on xUnit Test Patterns, will help you sort out standard terminology (used in many mock tools).
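
To make two of the terms concrete, here is a small hand-rolled sketch (the MailServer, OverdueNotifier, and related names are hypothetical, not from the card): a stub that returns a canned value, and a spy that records calls for later verification. The stub is included only for contrast; the test exercises the spy.

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestDoubleFieldGuideTest {
    // Hypothetical collaborator the SUT depends on.
    interface MailServer {
        boolean send(String address, String body);
    }

    // Stub: returns a fixed value so the SUT can proceed without a real server.
    static class StubMailServer implements MailServer {
        public boolean send(String address, String body) { return true; }
    }

    // Spy: records the messages it receives for later verification.
    static class SpyMailServer implements MailServer {
        final List<String> sentTo = new ArrayList<>();
        public boolean send(String address, String body) {
            sentTo.add(address);
            return true;
        }
    }

    // Hypothetical SUT.
    static class OverdueNotifier {
        private final MailServer mail;
        OverdueNotifier(MailServer mail) { this.mail = mail; }
        void notifyPatron(String address) { mail.send(address, "Your book is overdue."); }
    }

    @Test
    public void notifiesPatronByMail() {
        SpyMailServer spy = new SpyMailServer();
        OverdueNotifier notifier = new OverdueNotifier(spy);

        notifier.notifyPatron("patron@example.com");

        assertEquals(1, spy.sentTo.size());
        assertEquals("patron@example.com", spy.sentTo.get(0));
    }
}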

Test doubles have changed the art of test-driving by allowing tests to be smaller, simpler, and faster running. They can allow you to have a suite of thousands of tests that runs in a minute or less. Such rapid feedback gives you the confidence to dramatically change your code every few minutes. But take care; you should do the following:

> Learn to use test doubles, but employ them only when you need the isolation.
> Use a mock tool (instead of hand-coding them) if it improves test quality.
> Learn the various types of mocks summarized in this field guide.[12]
> Read Card 51, Test Double Troubles, to avoid their many pitfalls.

--
12. If only to avoid being embarrassed by your peers. Or just keep this card handy.


"The secret of life is honesty and fair dealing. IF you can fake that, you've got it made." Groucho Marx

Thursday, June 12, 2014

Agile in a Flash 48

Refactoring Inhibitors
Agile in a Flash by Jeff Langr and Tim Ottinger (card #48)

> Insufficient tests
> Long-lived branches
> Implementation-specific tests
> Crushing technical debt
> No know-how
> Premature performance infatuation
> Management metric mandates

--

Agile’s demand for continual change can rapidly degrade a system’s design. Refactoring is essential to keeping maintenance costs low in an agile environment.

Insufficient tests. TDD gives you the confidence to refactor and do the right thing, when you would not otherwise for fear of breaking existing code.

Long-lived branches. Working on a branch, you’ll plead for minimal trunk refactoring in order to avoid merge hell. Most branches should be short-lived.

Implementation-specific tests. If small refactorings break many tests at once, you may be tempted to discard the tests, ultimately killing refactoring. Minimize test-to-System-Under-Test (SUT) encapsulation violations, which mocks can create.
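
One way to dodge this smell, sketched with hypothetical names: assert on observable state rather than on which collaborator methods were called, so small refactorings of the collaboration don’t break the test.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StateBasedTest {
    interface Ledger { void record(int amount); }

    static class Patron {
        private final Ledger ledger;
        private int balance;
        Patron(Ledger ledger) { this.ledger = ledger; }
        void applyFine(int amount) {
            balance += amount;
            ledger.record(amount); // an implementation detail mock-heavy tests tend to pin down
        }
        int fineBalance() { return balance; }
    }

    @Test
    public void survivesRefactoringOfTheLedgerCall() {
        Patron patron = new Patron(amount -> {}); // no-op ledger

        patron.applyFine(10);

        // Asserts observable state, not the exact ledger interaction, so renaming
        // or batching the ledger calls leaves this test green.
        assertEquals(10, patron.fineBalance());
    }
}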

Crushing technical debt. Insufficient refactoring lets rampant duplication and difficult code pile up until they overwhelm you. Refactor on every green to minimize debt.

No know-how. You can’t refactor if you don’t know which direction to take the code. Learn all you can about design, starting with simple design.

Premature performance infatuation. Don’t let baseless performance fear stifle cohesive designs. Make it run, make it right, and only then make it fast.

Management metric mandates. Governance by one-dimensional, insufficient metrics (such as “increase coverage”) can discourage refactoring.


-- Do not assume you wrote good code or refactored well. Test, test, and then test again.

Wednesday, June 11, 2014

Agile in a Flash 47

Prevent Code Rot Through Refactoring
Agile in a Flash by Jeff Langr and Tim Ottinger (card #47)

> Refactor only while green
> Test constantly
> Work in tiny steps
> Add before removing
> Test extracted functionality
> Do not add functionality

--

Start with all tests passing (green), because code that does not have tests cannot be safely refactored. If necessary, add passing characterization tests so that you have a reasonable green starting point.
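
A characterization test simply pins down whatever the code does today so that refactoring has a green starting point (the LegacyFormatter here is hypothetical):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LegacyFormatterCharacterizationTest {
    // Hypothetical legacy code we want to refactor.
    static class LegacyFormatter {
        String format(String name) {
            return name.trim().toUpperCase() + ".";
        }
    }

    @Test
    public void characterizesCurrentFormatting() {
        LegacyFormatter formatter = new LegacyFormatter();

        // Asserts what the code actually does now (observed by running it once),
        // not what we think it should do; the goal is a green safety net.
        assertEquals("ALICE.", formatter.format("  alice "));
    }
}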

By working in tiny steps and running tests constantly, you will always know whether your last change has broken anything. Refactoring is much easier if you insist on always having only one reason to fail.

By adding new code before removing old code (while testing constantly), you ensure you are not creating blocks of untested code as you work. For a while, your code will be half-refactored, with both the old and new implementations of a behavior present. Each act of refactoring takes several edit/test cycles to reach a clean end state. Work deliberately and incrementally.
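
Here is a sketch of that half-refactored intermediate state (class and method names are hypothetical): the new method is added first, callers are migrated while tests stay green, and only then is the old code deleted.

// Intermediate state during an add-before-remove refactoring (hypothetical example).
public class OrderTotals {
    // New implementation, added first and covered by tests.
    public int total(int[] prices) {
        int sum = 0;
        for (int price : prices) {
            sum += price;
        }
        return sum;
    }

    // Old implementation, still present and still passing its tests; delete it
    // only after every caller has been switched to total() and the bar is green.
    @Deprecated
    public int legacyTotal(int[] prices) {
        int sum = 0;
        for (int i = 0; i < prices.length; i++) {
            sum += prices[i];
        }
        return sum;
    }
}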

When extracting classes or methods, remember that they may need unit tests too, especially if they expose behaviors that were previously private or hidden. Be watchful of introducing changes in functionality. When tempted to add some bit of functionality, complete the current refactoring and move on to the next iteration of the Red/Green/Refactor cycle.


“Learning to write clean code is hard work. It requires more than just the knowledge of principles and patterns. You must sweat over it. You must practice it yourself, and watch yourself fail. You must watch others practice it and fail. You must see them stumble and retrace their steps. You must see them agonize over decisions and see the price they pay for making those decisions the wrong way.” - Martin, Robert C. (2008) Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall.

Friday, June 6, 2014

Agile in a Flash 46

Triple A for Tight Tests
Agile in a Flash by Jeff Langr and Tim Ottinger (card #46)

> Arrange all the things needed to exercise the code
> Act on the code you want to verify
> Assert that the code worked as expected

--

Arrange-Act-Assert (AAA), a simple concept devised by Bill Wake of xp123.com, provides subtle power by helping test readers quickly understand the intent and flow of a test. No more “What the heck is this test even testing?”

In a unit test, you first create, or arrange, a context; then execute, or act on, the target code; and finally assert that the System Under Test (SUT) behaved as expected. Thinking in terms of AAA has direct impact on the code’s visual layout.

The simplest (perhaps ideal) test has three lines, one for each A. Here’s an example where we must verify that the SUT applies late fines to a library patron:

@Test
public void applyFine() {
    Patron patron = new Patron();            // arrange the context
    patron.setBalance(0);

    patron.applyFine(10);                    // act

    assertEquals(10, patron.fineBalance());  // assert
}

You don’t need the comments—the blank lines alone are sufficient.

AAA (also known as given-when-then) isn’t an absolute rule. You might not need an Arrange, and it’s OK to combine an Act and an Assertion into a single-line test.
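
For example, a one-liner reusing the Patron class from the example above (assuming, hypothetically, that a new Patron starts with a zero balance):

@Test
public void newPatronOwesNothing() {
    assertEquals(0, new Patron().fineBalance()); // act and assert on one line
}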

“Reliability is the degree to which an assessment tool produces stable and consistent results” ~ Phelan, C & Wren, J. (http://www.uni.edu/chfasoa/reliabilityandvalidity.htm)

Testing methodologies:
http://www.guru99.com/testing-methodology.html