
Friday, June 20, 2014

Agile in a Flash 52

TDD Process Smells (The Code)
Agile in a Flash by Jeff Langr and Tim Ottinger (card #52)

> Not failing first
> No green bar in last ten minutes
> Skipping the refactoring step
> Skipping easy tests
> Skipping hard tests
> Not writing the test first
> Setting code coverage targets

--

Not failing first. It’s tempting to go right for green (passing), but an initial red (failing) test tells you that the test works.
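A quick sketch of what failing first catches, using JUnit 4 (the list-backed "stack" stands in for a hypothetical class under development). This test is green before the new behavior even exists, so it proves nothing; running it red first would have exposed the missing call.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class StackTest {
    @Test
    public void pushIncreasesSize() {
        List<String> stack = new ArrayList<>(); // stand-in for the class under test
        int sizeBefore = stack.size();
        // Bug: we never exercise the new behavior, e.g. stack.add("x").
        assertEquals(sizeBefore, stack.size()); // passes forever -- proves nothing
    }
}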

No green bar in last ten minutes. You’re not relying on your tests anymore if it’s been a while since you’ve seen green. Is it ever a good time not to know whether you’ve broken something? Take smaller steps.

Skipping the refactoring step. It’s a truly bad idea to make a mess “for now” and promise to clean it up in the mythical “later” (Robert Martin has addressed this fallacy in his many keynote addresses).

Skipping easy tests. You won’t save much time by not testing simple code, and silly mistakes in it may go undetected.

Skipping hard tests. Difficulty in testing drives us to reconsider and improve our design (to something that is also easier to test!).

Not writing the test first. TDD requires you to drive your code changes from the outside. Writing tests later leaves code that is harder to test and harder to refactor for testing.

Setting code coverage targets. This is a well-understood failure: you get what you measure (roughly, the Hawthorne Effect), so targets encourage artificial inflation (gaming) of the metric rather than better tests.

--

“I have not failed, I’ve just found 1000 ways that won’t work.” ― Thomas Edison

~ Remember: failure is necessary before people can understand success.

Thursday, June 12, 2014

Agile in a Flash 48

Refactoring Inhibitors (The Code)
Agile in a Flash by Jeff Langr and Tim Ottinger (card #48)

> Insufficient tests
> Long-lived branches
> Implementation-specific tests
> Crushing technical debt
> No know-how
> Premature performance infatuation
> Management metric mandates

--

Agile’s demand for continual change can rapidly degrade a system’s design. Refactoring is essential to keeping maintenance costs low in an agile environment.

Insufficient tests. TDD gives you the confidence to refactor and do the right thing, when you would not otherwise for fear of breaking existing code.

Long-lived branches. Working on a branch, you’ll plead for minimal trunk refactoring in order to avoid merge hell. Most branches should be short-lived.

Implementation-specific tests. If small refactorings break many tests at once, you may be tempted to discard tests, ultimately killing refactoring. Minimize test-to-System-Under-Test (SUT) encapsulation violations, which mocks can create (a sketch follows).
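A hedged sketch of the difference, using JUnit 4 and Mockito (PriceService and RateRepository are hypothetical). The first test pins the SUT's internal calls and breaks under internal refactoring; the second asserts only observable behavior.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Assert;
import org.junit.Test;

public class PriceServiceTest {
    interface RateRepository { double rateFor(String region); }

    static class PriceService {
        private final RateRepository rates;
        PriceService(RateRepository rates) { this.rates = rates; }
        double price(double base, String region) { return base * rates.rateFor(region); }
    }

    @Test
    public void overSpecified() { // brittle: breaks if the SUT starts caching the rate
        RateRepository rates = mock(RateRepository.class);
        when(rates.rateFor("EU")).thenReturn(1.2);
        new PriceService(rates).price(100, "EU");
        verify(rates, times(1)).rateFor("EU"); // encapsulation violation
    }

    @Test
    public void behaviorOnly() { // sturdier: survives internal refactoring
        RateRepository rates = region -> 1.2; // simple stub
        Assert.assertEquals(120.0, new PriceService(rates).price(100, "EU"), 0.001);
    }
}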

Crushing technical debt. Insufficient refactoring lets rampant duplication and difficult code accumulate. Refactor on every green bar to minimize debt.

No know-how. You can’t refactor if you don’t know which direction to take the code. Learn all you can about design, starting with simple design.

Premature performance infatuation. Don’t let baseless performance fear stifle cohesive designs. Make it run, make it right, and only then make it fast.

Management metric mandates. Governance by one-dimensional, insufficient metrics (such as “increase coverage”) can discourage refactoring.


-- Do not assume you wrote good code or did good refactoring. Test, test, and then test again.

Wednesday, May 7, 2014

Agile in a Flash 38

Stop the Bad Test Death Spiral (The Team)
Agile in a Flash by Jeff Langr and Tim Ottinger (card #38)

> Only integration tests are written → Learn TDD.
> Overall suite time slows → Break into “slow/fast” suites.
> Tests are run less often → Report test timeouts as build failures.
> Tests are disabled → Monitor coverage.
> Bugs become commonplace → Always write tests to “cover” a bug.
> Value of automated testing is questioned → Commit to TDD, acceptance tests (ATs), refactoring.
> Team quits testing in disgust → Don’t wait until it’s too late!

--

Each misstep with TDD may lead down a bad spiral toward its abandonment.

Learn TDD. Violating cohesion and coupling principles impedes unit testing and promotes integration tests instead. TDD drives a unit-testable, SOLID (object-oriented design principles) design.
Break into slow/fast suites. A fast suite runs in ten seconds. Keep the slow suite small (browser tests with tools such as Selenium belong there). Use a continuous test tool such as Infinitest for the fast suite. Keep coupling low. See the sketch after this list.
Report test timeouts as build failures. Continually monitor the health of your test suite. If the suite slows dramatically, developers soon skimp on testing.
Monitor coverage. Seek coverage above 90% on new code and stable or increasing coverage on existing code. Recast integration tests as unit tests or ATs.
Always write tests to cover a bug. Test first, of course. Defects indicate inadequate test coverage. Track and understand each defect’s root cause!
Commit to TDD, ATs, refactoring. Do TDD diligently. Many bugs are rooted in duplication that you must factor out. Quality problems slow production!
Don’t wait until it’s too late! If you admit defeat, it may be too late: managers rarely tolerate second attempts at what they think is the same thing.
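One possible way to split the suites, using JUnit 4 categories (the class, test, and marker names here are hypothetical):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class OrderTests {
    public interface Slow {} // marker for the slow suite

    @Test
    public void totalsLineItems() {
        // fast, in-memory unit test: runs on every green bar
    }

    @Category(Slow.class)
    @Test
    public void persistsOrderToDatabase() {
        // slow integration test: runs in the separate, small slow suite
    }
}

@RunWith(Categories.class)
@Categories.IncludeCategory(OrderTests.Slow.class)
@Suite.SuiteClasses(OrderTests.class)
class SlowSuite {} // run from CI or nightly; keep it small

The fast tests stay in the default run so you see green constantly; the slow suite runs on its own schedule.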

From experience: avoid building tests that encompass more than the subject under test, or the test becomes overly complex and “brittle.”

Friday, April 11, 2014

Agile in a Flash 21

Acceptance Tests
Agile in a Flash by Jeff Langr and Tim Ottinger (card #21)

Acceptance tests are used to verify that the team has built what the customer requested in a story.

> Are defined by the customer
> Define “done done” for stories
> Are automated
> Document uses of the system
> Don’t just cover happy paths
> Do not replace exploratory tests
> Run in a near-production environment
--

Are defined by the customer. Acceptance tests (ATs) are an expression of customer need. All parties can contribute, but ultimately, a single customer voice defines their interests as an unambiguous set of tests.
Define “done done” for stories. ATs are written before development as a contract for completion. Passing ATs tell programmers their work is done and tell customers that they can accept it.
Are automated. You can script, and thus automate, all tests that define expected system capabilities. Manually executed scripts are a form of abuse. (A sketch follows below.)
Document uses of the system. Design readable tests so they demonstrate, by example, valid uses of the system. Such documentation never becomes stale!
Don’t just cover happy paths. It’s hard to capture all alternate and exceptional conditions. Inevitably you’ll miss one. Add it to your AT suite.
Do not replace exploratory tests. Exploratory testing highlights the more creative aspects of how a user might choose to interact with a new feature. It also helps teach testers how to improve their test design skills.

Run in a near-production environment. ATs execute in an environment that emulates production as closely as possible. They hit a real database and real external APIs as much as possible. ATs are slow by definition.
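A minimal sketch of an automated AT at the HTTP level (the staging host and endpoint are hypothetical; Java 11's java.net.http client stands in for whatever driver your team actually uses):

import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class OrderAcceptanceTest {
    // Hypothetical near-production host; ATs exercise the real stack.
    private static final String BASE = "http://staging.example.com";

    @Test
    public void customerCanRetrieveAnExistingOrder() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/orders/42")).GET().build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode()); // part of the story's "done done" contract
    }
}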

Monday, April 7, 2014

Google Java Standards

Just came across this web site with the Google Java Standards.
Useful points:
  • Files are all in UTF-8
  • Braces always used: void method() {}
  • One statement per line: how many times have you seen someone cram a gigantic boolean condition onto one line?
  • Simple enums (no methods or docs) may be formatted as if they were an array initializer
  • In tests, a caught exception may be ignored if it is named "expected"
We can find issues with files that are not encoded correctly, especially when using tools like Apache Camel and BeanIO to convert files to Java objects. By standardizing on UTF-8, we keep stray characters from causing issues in the code.
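A small illustration of making the encoding explicit when reading a feed file (plain java.nio here, not Camel/BeanIO specifics; the file name is hypothetical):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class FeedLoader {
    // Read with an explicit charset rather than the platform default,
    // so a mis-encoded byte fails loudly here instead of corrupting parsed objects.
    static List<String> loadLines(Path feed) throws IOException {
        return Files.readAllLines(feed, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(loadLines(Paths.get("orders.csv")).size()); // hypothetical feed file
    }
}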
Braces are important for visually defining connected elements of the code and for asserting logic. It may be a couple of keystrokes less to exclude them, but someone working on that code may incorrectly change its logic and behavior because they did not see the connected operations. For example:
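(A self-contained sketch; the discount rules are made up.)

class Discounts {
    // Buggy: without braces, only the first statement is guarded; the indentation lies.
    static double applyBuggy(double price, boolean isMember) {
        if (isMember)
            price *= 0.9;
            price -= 5.0; // looks guarded, but always runs
        return price;
    }

    // Correct: braces make the connected operations explicit.
    static double apply(double price, boolean isMember) {
        if (isMember) {
            price *= 0.9;
            price -= 5.0;
        }
        return price;
    }
}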
One statement per line helps readability and debugging, and can help deter code smells. Consider:
return (x == y && x == z || x == a || x == b);
Remember that Java applies operator precedence here: && binds more tightly than ||, and evaluation short-circuits left to right. A failure or breakpoint on one gigantic line tells you little, and a boolean condition nested in the line can make the overall outcome hard to determine. By breaking the conditions onto individual lines, the debugger can reveal whether the logic is accurate and which comparison makes the whole expression true or false. Better still, this might be refactored into well-named methods where the boolean logic is reduced to one concise comparison rather than a chain (see the sketch after the snippet below).
return (x == y
        && x == z
        || x == a
        || x == b);
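One way that refactoring might look, extracting each clause into a well-named method (the names are invented for illustration):

class Matcher {
    static boolean matches(int x, int y, int z, int a, int b) {
        return allThreeAgree(x, y, z) || isKnownAlias(x, a, b);
    }

    // && binds tighter than ||, so the original chain groups as
    // (x == y && x == z) || (x == a) || (x == b).
    private static boolean allThreeAgree(int x, int y, int z) {
        return x == y && x == z;
    }

    private static boolean isKnownAlias(int x, int a, int b) {
        return x == a || x == b;
    }
}

Each method now holds one concise comparison, so a debugger or a unit test can pin down exactly which clause drives the result.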
Naming the caught or expected exception "expected" helps testers understand what the test is trying to accomplish or assert: something is thrown in order to prove that incorrect parameters or some other bad condition is rejected.
@Test(expected = Exception.class)
public void methodThatCausesException() {
    // operation that causes the exception
}