Friday, September 23, 2016

Log4Net - steps

Thought I would jot down some notes on log4net. There is not much to log4net beyond being a logging framework for .NET (the Java counterparts are log4j, or slf4j as a logging facade over it).

Three things need to be done to get it working with .NET 4.5+.

1. Use NuGet to install the log4net package into your projects.
2. Include log4net in the assembly and base configurations.
3. Create a logger instance per class to be logged and call it with the appropriate log levels.

After installing the package using NuGet, I find the core AssemblyInfo.cs file and add:
[assembly: log4net.Config.XmlConfigurator(Watch = true)]

In Web.config or app.config (depending on your application's startup configuration), declare the section handler inside the <configSections /> element:

    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />

Then create the log4net element with root and appender definitions (see below). The root element is where you define the logging level you want; levels can also be set per appender. For production and QA environments, avoid “ALL” and “DEBUG” unless you mean to use them for specific cases.

Logging levels (from least to most verbose):
1. OFF
2. FATAL
3. ERROR
4. WARN
5. INFO
6. DEBUG
7. ALL

Filters can be applied here, as can the various types of logging “appenders.” For MySystem, I defined a ConsoleAppender for debugging sessions and a FileAppender in order to grab logging events while the application is running in environments that we cannot debug. There are other types of appenders, but these are the most widely used (see the Apache website). The PatternLayout below produces log lines of the form:
“Date LogLevel Logger - Message, followed by a new line and the exception message (if found)”

2016-09-16 14:40:56,461 INFO MySystem.Areas.Jobs.Controllers.PostTicketsController - inside PostTicketsController index try else send email id :1201
2016-09-16 14:41:17,677 ERROR MySystem.Models.EmailModels - System.Net.Mail.SmtpException: Failure sending mail. ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.0.0.27:25
   at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)…

  <log4net>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="ConsoleAppender" />
      <appender-ref ref="MyFileAppender" />
    </root>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %level %logger - %message%newline%exception" />
      </layout>
    </appender>
    <appender name="MyFileAppender" type="log4net.Appender.FileAppender">
      <file type="log4net.Util.PatternString" value="App_Data\mysystem.log" />
      <appendToFile value="true" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %level %logger - %message%newline%exception" />
      </layout>
    </appender>
  </log4net>
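
If the log file must be kept from growing without bound, a RollingFileAppender can be swapped in for the FileAppender above. This is a sketch; the size limit and backup count are illustrative assumptions, not values from the original setup:

```xml
<appender name="MyRollingFileAppender" type="log4net.Appender.RollingFileAppender">
  <file value="App_Data\mysystem.log" />
  <appendToFile value="true" />
  <!-- Roll by size; keep at most 5 backup files of 10MB each -->
  <rollingStyle value="Size" />
  <maxSizeRollBackups value="5" />
  <maximumFileSize value="10MB" />
  <staticLogFileName value="true" />
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %level %logger - %message%newline%exception" />
  </layout>
</appender>
```

Remember to add a matching <appender-ref ref="MyRollingFileAppender" /> under the root element.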

In a class where I want a logger, I use the following code:

using log4net;
private static readonly ILog log = LogManager.GetLogger(typeof(Startup)); // using typeof(this class)

// OR

private static readonly ILog logger = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType); // generic logger for any class

// Since log4net is referenced by the assembly, you can also skip the using directive and fully qualify:
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(typeof(Startup));


Then, simply add a call at the line of code where I want to log:
            log.Info("MySystem Server app starting up ...");
            log.Debug("Why are you looking at my stuff ... object.property is " + object.property);
            log.Error("Exception occurred " + ex);
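
For exceptions, log4net's ILog also has two-argument overloads that take the exception separately, which lets the %exception conversion pattern render the full stack trace instead of relying on string concatenation. A sketch, where SendEmail and ticket are hypothetical names:

```csharp
try
{
    SendEmail(ticket); // hypothetical helper that may throw
}
catch (System.Net.Mail.SmtpException ex)
{
    // The exception goes in the second argument, not concatenated into
    // the message; %exception in the layout formats it with a stack trace.
    log.Error("Failure sending mail for ticket " + ticket.Id, ex);
}
```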

Deployment notes:
For one project, both the main Server project and the MySystem.Core project had to be updated for logging to work. Reminder: if the root log level is more restrictive than the level a class logs at, the message does not get written. In the examples above, if the configured level is ERROR or WARN, the Info and Debug calls will not be logged.
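
If only a particular namespace needs verbose output, log4net's <logger> element allows a per-logger override, so one namespace can log at DEBUG while the root stays quieter. A hedged sketch; the WARN/DEBUG split here is illustrative, not from the original config:

```xml
<log4net>
  <root>
    <level value="WARN" />
    <appender-ref ref="MyFileAppender" />
  </root>
  <!-- MySystem.Core still logs at DEBUG even though the root is WARN -->
  <logger name="MySystem.Core">
    <level value="DEBUG" />
  </logger>
</log4net>
```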

Some caveats:
Earlier versions of log4net required you to target a specific .NET Framework version; this is supposedly corrected in current releases.
Too much logging can cause performance issues.
Be careful about writing SQL messages or unsanitized input directly into a log (especially with a database-oriented appender such as AdoNetAppender), as that can be a path for SQL injection and becomes another system dependency at runtime.
RollingFileAppender and FileAppender require write access to the log directory.
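
On the performance point: ILog exposes IsDebugEnabled (and similar properties for the other levels), so expensive message construction can be guarded. A minimal sketch, where BuildDiagnosticDump is a hypothetical expensive call:

```csharp
// Skip the string building entirely when DEBUG is filtered out.
if (log.IsDebugEnabled)
{
    log.Debug("State dump: " + BuildDiagnosticDump());
}
```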

More Info (including more appenders and .NET framework specific notes):
http://logging.apache.org/log4net/
http://www.codeproject.com/Articles/140911/log-net-Tutorial


“The journey of a thousand miles begins with a single step.” (Lao Tzu)


Friday, June 20, 2014

Agile in a Flash 52

TDD Process Smells The Code
Agile in a Flash by Jeff Langr and Tim Ottinger (card #52)

> Not failing first
> No green bar in last ten minutes
> Skipping the refactoring step
> Skipping easy tests
> Skipping hard tests
> Not writing the test first
> Setting code coverage targets

--

Not failing first. It’s tempting to go right for green (passing), but an initial red (failing) test tells you that the test works.
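
A minimal sketch of failing first, in NUnit syntax with hypothetical names: run the test before the production code exists, watch it go red, then make it pass.

```csharp
[Test]
public void GoldCustomers_GetTenPercentDiscount()
{
    // Red first: DiscountCalculator does not exist yet, so this fails
    // (or does not compile), proving the test actually exercises something.
    var calc = new DiscountCalculator();
    Assert.AreEqual(0.10m, calc.For(CustomerTier.Gold));
}
```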

No green bar in last ten minutes. You’re not relying on your tests anymore if it’s been a while since you’ve seen green. Is it ever a good time to not know whether you’ve broken something? Take smaller steps.

Skipping the refactoring step. It’s a truly bad idea to make a mess “for now” and promise to clean it up in the mythical “later” (Robert Martin has addressed this fallacy in his many keynote addresses).

Skipping easy tests. You won’t save much time by not testing simple code, and you’ll possibly create silly mistakes that go undetected in the code.

Skipping hard tests. Difficulty in testing drives us to reconsider and improve our design (to something that is also easier to test!).

Not writing the test first. TDD requires you to drive your code changes from the outside. Writing tests afterward makes code harder to test and harder to refactor.

Setting code coverage targets. This is a well-understood failure because of the Hawthorne Effect (roughly, “you get what you measure”). It is better to discourage artificial inflation (gaming) of metrics.

--

“I have not failed, I’ve just found 1000 ways that won’t work.” ― Thomas Edison

~ Remember failure is necessary before people can understand success.

Agile in a Flash 51

Test Double Troubles
Agile in a Flash by Jeff Langr and Tim Ottinger (card #51)

> Inhibited refactoring
> Tool complexity
> Passing tests that indicate nothing
> Mocking the SUT
> Low readability
> Ambitious mock implementations
> Vendor dependency

--

Inhibited refactoring. Test doubles exploit the linkages between classes, so refactoring class relationships may cause mock-based tests to fail.

Tool complexity. Mock tools have extensive APIs and can be idiosyncratic. It’s costly to understand both the tool and the code it produces.

Passing tests that indicate nothing. Fakes and stubs may not act quite like the classes they replace, leading to tests that pass and code that fails.

Mocking the SUT. Complex setup can bury the embarrassing fact that code we need to be real has been replaced with mock code, invalidating the test.

Low readability. Mock setup can be dense and idiomatic, making it difficult to see what’s really being tested.

Ambitious mock implementations. A fake is an alternative implementation of the class it replaces and can have bugs of its own. Keep your test doubles small and purposed to a single test or test fixture.

Vendor dependency. All third-party tools eventually fall out of favor, replaced with something newer and better. Don’t get stuck with yesterday’s tool.

To remedy most of these challenges, keep your use of mocks isolated and minimal. Refactor tests to emphasize abstraction and eliminate redundant mock detail.

--

“If you could kick the person in the pants for most of your trouble, you wouldn’t sit for a month.” ― Theodore Roosevelt

Agile in a Flash 50

Break Unit Test Writer’s Block
Agile in a Flash by Jeff Langr and Tim Ottinger (card #50)

> Test that you can call the method at all
> Pick the most interesting functionality
> Pick the easiest functionality
> Write an assertion first
> Rename or refactor something
> Switch programming partners
> Reread the tests and code

--

Software authors can also get writer’s block. A good trick for beating it is to do some useful programming activity to get the creative juices flowing.

In legacy code, calling a method at all can require considerable setup, mocking, and dependency breaking. These activities get you entrenched in the code and activate your urge to clean up the code base.

Let your curiosity guide you. Test the most interesting bit of the functionality you’re writing. You’ll have more energy to do work that interests you most.

Alternatively, pick the easiest functionality so that you experience some success and build momentum. “Simple is as simple does.”

When writing a method normally (begin-to-end) fails, try writing the assertion first and working backward from there.

If you rename or refactor something, you’ll have your head wrapped around the problem more and end up with a code improvement to boot!

If the flow of ideas is getting stale, “a change is as good as a break,” so switch programming partners.

You may be stuck because you don’t really understand the code you need to change. Help your brain: reread the tests and code that already exist.

--

“Don’t waste time waiting for inspiration. Begin, and inspiration will find you.” ― H. Jackson Brown Jr.

Agile in a Flash 49

Field Guide to Mocks
Agile in a Flash by Jeff Langr and Tim Ottinger (card #49)

> Test double: Any object that emulates another
> Stub: Returns a fixed value to the SUT
> Fake: Emulates a production object
> Mock: Self-verifies
> Partial mock: Combines production and mock methods
> Spy: Records messages for later verification

--

Test doubles are emulation objects you build to simplify testing. Surrounding the System Under Test (SUT) with test doubles allows you to exercise and observe it in isolation. Otherwise, accessing database, network, external hardware, or other subsystems can slow or break the test run and cause false positives. Stubs, fakes, mocks (partial or not), and spies are nuanced kinds of test doubles. This field guide, based on XUnit Test Patterns, will help you sort out standard terminology (used in many mock tools).
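
To make the taxonomy concrete, here is a hand-rolled spy in the card's sense; the interface and System Under Test are hypothetical names for illustration:

```csharp
public interface IMailSender { void Send(string to, string body); }

// Spy: records messages for later verification.
public class SpyMailSender : IMailSender
{
    public readonly List<string> Recipients = new List<string>();
    public void Send(string to, string body) => Recipients.Add(to);
}

// In a test, the spy stands in for the real SMTP sender:
var spy = new SpyMailSender();
var sut = new TicketNotifier(spy);   // hypothetical System Under Test
sut.Notify("user@example.com");
Assert.AreEqual(1, spy.Recipients.Count);
```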

Test doubles have changed the art of test-driving by allowing tests to be smaller, simpler, and faster running. They can allow you to have a suite of thousands of tests that runs in a minute or less. Such rapid feedback gives you the confidence to dramatically change your code every few minutes. But take care; you should do the following:

> Learn to use test doubles, but employ them only when you need the isolation.
> Use a mock tool (instead of hand-coding them) if it improves test quality.
> Learn the various types of mocks summarized in this field guide. [12]
> Read Card 51, Test Double Troubles, to avoid their many pitfalls.

--
12. If only to avoid being embarrassed by your peers. Or just keep this card handy.


“The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made.” ― Groucho Marx

Thursday, June 12, 2014

Agile in a Flash 48

Refactoring Inhibitors
Agile in a Flash by Jeff Langr and Tim Ottinger (card #48)

> Insufficient tests
> Long-lived branches
> Implementation-specific tests
> Crushing technical debt
> No know-how
> Premature performance infatuation
> Management metric mandates

--

Agile’s demand for continual change can rapidly degrade a system’s design. Refactoring is essential to keeping maintenance costs low in an agile environment.

Insufficient tests. TDD gives you the confidence to refactor and do the right thing, when you would not otherwise for fear of breaking existing code.

Long-lived branches. Working on a branch, you’ll plead for minimal trunk refactoring in order to avoid merge hell. Most branches should be short-lived.

Implementation-specific tests. You may want to discard tests, ultimately killing refactoring, if small refactorings break many tests at once. Minimize test-to-System Under Test (SUT) encapsulation violations, which mocks can create.

Crushing technical debt. Insufficient refactoring creates rampant duplication and difficult code. Refactor on every green to minimize debt.

No know-how. You can’t refactor if you don’t know which direction to take the code. Learn all you can about design, starting with simple design.

Premature performance infatuation. Don’t let baseless performance fear stifle cohesive designs. Make it run, make it right, and only then make it fast.

Management metric mandates. Governance by one-dimensional, insufficient metrics (such as “increase coverage”) can discourage refactoring.


-- Do not assume you wrote good code or did good refactoring. Test, test, and then test again.

Wednesday, June 11, 2014

Agile in a Flash 47

Prevent Code Rot Through Refactoring
Agile in a Flash by Jeff Langr and Tim Ottinger (card #47)

> Refactor only while green
> Test constantly
> Work in tiny steps
> Add before removing
> Test extracted functionality
> Do not add functionality

--

Start with all tests passing (green), because code that does not have tests cannot be safely refactored. If necessary, add passing characterization tests so that you have a reasonable green starting point.

By working in tiny steps and running tests constantly, you will always know whether your last change has broken anything. Refactoring is much easier if you insist on always having only one reason to fail.

By adding new code before removing old code (while testing constantly), you ensure you are not creating blocks of untested code as you work. For a while, your code will be half-refactored, with the code of old and new implementations of a behavior present. Each act of refactoring takes several edit/test cycles to reach a clean end state. Work deliberately and incrementally.
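
The “add before removing” step can be pictured as a parallel change, sketched here with hypothetical names: the old path keeps the tests green while the new extraction proves itself.

```csharp
public decimal Total(Order order)
{
    // Old, tested implementation stays in place...
    var oldValue = TotalLegacy(order);
    // ...while the new extraction runs alongside it under the same tests.
    var newValue = TotalRefactored(order);
    System.Diagnostics.Debug.Assert(oldValue == newValue);
    return oldValue; // once green, delete TotalLegacy and return newValue
}
```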

When extracting classes or methods, remember that they may need unit tests too, especially if they expose behaviors that were previously private or hidden. Be watchful of introducing changes in functionality. When tempted to add some bit of functionality, complete the current refactoring and move on to the next iteration of the Red/Green/Refactor cycle.


“Learning to write clean code is hard work. It requires more than just the knowledge of principles and patterns. You must sweat over it. You must practice it yourself, and watch yourself fail. You must watch others practice it and fail. You must see them stumble and retrace their steps. You must see them agonize over decisions and see the price they pay for making those decisions the wrong way.” - Martin, Robert C. (2008) Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall.