Better Support Experience with Meaningful Logging

Published
June 21, 2022
Author
Chandra Wijaya
Tags
Web Dev
Java

A good log is like a fairytale

One of my colleagues once said, "A good log is like a fairytale." The phrase emphasizes the importance of creating effective logs that provide clear and concise information about events and issues within an application. A good log should be easy to read and understand, and should provide relevant context and additional details as needed.
Just as a fairytale tells a clear and concise story that is easy to understand and follow, a good log should provide the same quality of information about events and issues within an application. This can make it easier to manage and troubleshoot issues within your application, and can provide valuable information for tracking and analyzing specific events and issues within your system.
At a minimum, a good log should include the time the event occurred, the logging level (e.g. info, warning, error), a descriptive message about the event, and any relevant context or additional details.
For example, a simple good log might look like this:
[INFO] [2022-12-30 12:00:00] User logged in successfully.
This log includes the time the event occurred, the logging level (info), and a clear message about the event (user logged in successfully).
But to leverage this to be more informative, it's also helpful to include context within the log message, such as the user's username or the specific action they took. For example:
[INFO] [2022-12-30 12:00:00] User JohnDoe logged in successfully.
Including context within the log message can make it easier to understand the log and track specific events within the system.

Only Experienced Software Engineers Know The Importance of Logging

Let me start with a short fictional story:
Alice is a junior software engineer who is working on a new application. She is responsible for implementing a feature that allows users to search for products within the application.

Alice is so focused on getting the feature working as quickly as possible that she does not think about the importance of logging. As a result, she does not include any logging code in her implementation.

A few days later, the application is deployed to production and users start reporting errors when using the search feature. Alice is unable to reproduce the errors in her development environment, and is unable to understand what is causing the errors.

Bob, an experienced software engineer who is working on the same project, recognizes the importance of logging and suggests that Alice include logging code in her implementation to help understand and troubleshoot the errors.
Alice follows Bob's advice and adds logging code to the implementation.

With the help of the log data, Alice is able to understand the root cause of the errors and fix the issue. Without the log data, Alice would have had a much harder time understanding and troubleshooting the issue, and it might have taken significantly longer to fix the issue.
Now from this simple story, I assume we all agree that logging is an important aspect of software engineering. It is a critical tool for understanding and troubleshooting issues within an application, and can provide valuable information for tracking and analyzing specific events and issues within your system.
Experience is often an important factor in understanding the importance of logging. As software engineers gain experience working on different types of applications and systems, they often come to appreciate the value of logging as a tool for managing and troubleshooting issues.
Good logging requires a good understanding of the various types of logs that are available, as well as the layout and format of log messages. It also requires an understanding of how to effectively use logging tools and frameworks to manage and analyze log data.
These skills are often developed over time through experience working with different types of applications and systems.

Effective Logging

Now that we understand how important logging is, the next question is: how do we write a good log? The key, as I mentioned, is to be clear and concise. A log must provide valuable information about the event or issue being logged without overwhelming the reader with unnecessary details.
That raises another question: how do we decide whether a given piece of information is valuable enough to log? This is what we call effective logging.
There are a few key pieces of information that should be included in order to create effective logging:
Timestamps: Timestamps should be included in log messages to provide a reference point for when the event or issue being logged occurred. This can be particularly useful for understanding the sequence of events and for tracking the duration of events or issues.
Event or issue details: Log messages should include clear and concise information about the event or issue being logged. This can include details about the type of event or issue, as well as any relevant context or additional details that may be useful for understanding and troubleshooting the issue.
Relevant context: Log messages should include relevant context and additional details as needed, where "relevant" means no unnecessary information. For example: information about the user or system that triggered the event, the request and response involved, or any other details that may be useful for understanding and troubleshooting the issue.
Log levels: Log messages should include log levels to indicate the severity of the event or issue being logged. This helps you prioritize issues based on their severity, and can be particularly useful for filtering log data to focus on specific types of events or issues.
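The four ingredients above can be combined into a single line. Here is a minimal, framework-free sketch (the class and method names are illustrative; a real application would delegate this to a logging library):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LogLineDemo {

    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Builds a log line with all four pieces: level, timestamp, context (user), message.
    static String format(String level, LocalDateTime time, String user, String message) {
        return String.format("[%s] [%s] User %s %s", level, FMT.format(time), user, message);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2022, 12, 30, 12, 0, 0);
        System.out.println(format("INFO", t, "JohnDoe", "logged in successfully."));
        // [INFO] [2022-12-30 12:00:00] User JohnDoe logged in successfully.
    }
}
```

Notice that it reproduces exactly the sample line from the beginning of the article, with each of the four pieces coming from a separate parameter.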
So now, let's see how we would usually do the logging in our application.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleClass {

    private static final Logger logger = LoggerFactory.getLogger(ExampleClass.class);

    public void doSomething() {
        try {
            // perform some action
            logger.info("Action performed successfully.");
        } catch (Exception e) {
            logger.error("An error occurred while performing the action.", e);
        }
    }
}
There is nothing wrong with logging this way. But look more closely: the logging code is embedded directly in the doSomething() method, which means you would need to repeat similar logging code in every method where you want to log events. This is repetitive, and it makes logging harder to manage and maintain across your application.
This is the point where we should reconsider writing such repetitive code and take a look at Aspect Oriented Programming (AOP).

Aspect Oriented Programming (AOP)

Let's see how we would write the same functionality as above using AOP.
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    private static final Logger logger = LoggerFactory.getLogger(LoggingAspect.class);

    @Before("execution(* com.example.app..*(..))")
    public void logBefore(JoinPoint joinPoint) {
        // log the event
        logger.info("Entering {}", joinPoint.getSignature());
    }

    @After("execution(* com.example.app..*(..))")
    public void logAfter(JoinPoint joinPoint) {
        // log the event
        logger.info("Exiting {}", joinPoint.getSignature());
    }

    @AfterThrowing(pointcut = "execution(* com.example.app..*(..))", throwing = "exception")
    public void logAfterThrowing(JoinPoint joinPoint, Exception exception) {
        // log the event
        logger.error("Exception in {}", joinPoint.getSignature(), exception);
    }
}
and we can totally remove the logging code in our ExampleClass.
public class ExampleClass {

    public void doSomething() {
        try {
            // perform some action
        } catch (Exception e) {
            // whatever you want to do with the exception
        }
    }
}
Now you see that we no longer write logging code in the doSomething() method; we removed it entirely. But I assure you, it works the same. How is that possible?
Aspect Oriented Programming (AOP) is a programming paradigm that allows you to manage cross-cutting concerns (e.g. logging, security, transactions) in a centralized and modular way. Using AOP can help you create an effective logging system for your application that is easy to manage and maintain, and provides valuable context for tracking and analyzing specific events and issues within your system.
As you can see in the comparison above, using AOP can make it easier to manage and maintain logging in your application, as you only need to update the logging aspect rather than searching for and updating logging code throughout your application.
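Under the hood, Spring AOP applies this kind of advice through proxies. To demystify the idea, here is a stdlib-only sketch of the same mechanism using java.lang.reflect.Proxy; the SearchService interface and all names are hypothetical, and a real Spring aspect would do this for you:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class LoggingProxyDemo {

    // Hypothetical business interface, standing in for any Spring bean.
    interface SearchService {
        String search(String query);
    }

    // Wraps any SearchService so every call is logged, without touching its code.
    static SearchService withLogging(SearchService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("[INFO] before " + method.getName());
            try {
                Object result = method.invoke(target, args);
                System.out.println("[INFO] after " + method.getName());
                return result;
            } catch (InvocationTargetException e) {
                System.out.println("[ERROR] " + method.getName() + " threw " + e.getCause());
                throw e.getCause();
            }
        };
        return (SearchService) Proxy.newProxyInstance(
                SearchService.class.getClassLoader(),
                new Class<?>[] {SearchService.class},
                handler);
    }

    public static void main(String[] args) {
        SearchService plain = query -> "results for " + query;
        SearchService logged = withLogging(plain);
        System.out.println(logged.search("shoes"));
        // The before/after lines are printed by the proxy, not by the service itself.
    }
}
```

The business code stays free of logging, exactly as in the AOP version above; the "aspect" lives in one place, the invocation handler.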

Mapped Diagnostic Context (MDC)

In addition to using AOP to manage logging in a centralized and modular way, you can also use Mapped Diagnostic Context (MDC), a feature of SLF4J/Logback commonly used in Spring applications, to add context to your logs. MDC allows you to add key-value pairs to the current thread's context, which can then be included in your logs. This is very useful for carrying the valuable information we need into each log entry.
For example, you might use MDC to add the request ID and user information to your logs, like this:
@Aspect
@Component
public class LoggingAspect {

    private static final Logger logger = LoggerFactory.getLogger(LoggingAspect.class);

    @Before("execution(* com.example.app..*(..))")
    public void logBefore(JoinPoint joinPoint) {
        // requestId and user would typically be taken from the incoming request,
        // e.g. an HTTP header or the security context
        MDC.put("requestId", requestId);
        MDC.put("user", user);
        // log the event, for example:
        logger.info("Request ID {}: User {} retrieved data from the database.",
                MDC.get("requestId"), MDC.get("user"));
    }

    @After("execution(* com.example.app..*(..))")
    public void logAfter(JoinPoint joinPoint) {
        // log the event, then clean up the context
        MDC.clear();
    }

    @AfterThrowing(pointcut = "execution(* com.example.app..*(..))", throwing = "exception")
    public void logAfterThrowing(JoinPoint joinPoint, Exception exception) {
        // log the event, then clean up the context
        MDC.clear();
    }
}
The log generated will be like:
[INFO] [2022-12-30 12:00:00] Request ID 123456: User JohnDoe retrieved data from the database.
Using MDC in this way lets you add context to your logs in a centralized, modular fashion, just as AOP does. Use the MDC.put() method to add key-value pairs to the MDC, and the MDC.clear() method to remove them when they are no longer needed.
MDC provides a way to enrich log messages with information that may be unavailable in the scope where the logging actually occurs, but that is genuinely useful for tracking the execution of the program. 👍
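Conceptually, MDC is little more than a per-thread map. Here is a stdlib-only sketch of the mechanism (MiniMdc is a made-up name; the real org.slf4j.MDC exposes the same put/get/clear shape, plus more):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniMdc {

    // Each thread gets its own context map, which is what makes MDC values
    // safe to set per-request without interfering with other threads.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CONTEXT.get().put(key, value); }

    public static String get(String key) { return CONTEXT.get().get(key); }

    public static void clear() { CONTEXT.get().clear(); }

    public static void main(String[] args) {
        put("requestId", "123456");
        put("user", "JohnDoe");
        System.out.printf("[INFO] Request ID %s: User %s retrieved data.%n",
                get("requestId"), get("user"));
        clear(); // always clear, or values leak into the next request served by this thread
    }
}
```

The clear() call at the end of a request matters: threads are pooled in most servers, so stale context would otherwise bleed into unrelated requests, which is exactly why the aspect above calls MDC.clear() in both the @After and @AfterThrowing advice.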

Elastic Search ft. Structured Logging

Once the logs are written this way, the moment of truth arrives, the test of their effectiveness and usefulness: monitoring time. What is the purpose of all this, if not monitoring, right? Now, you might think that some of us still do monitoring like this.
Log monitoring through terminal tailing
Don't get me wrong. Monitoring our system's logs this way is absolutely fine. I still find that tailing logs in a terminal is the fastest approach, especially when you want to go hands-on on the server itself.
But what if I told you there is an easier and more comprehensive way to view this data, in a table-like format? It gives us a better point of view, especially when it comes to data analysis.
Kibana Dashboard
As you see in the picture above, that is the power of Elasticsearch. Elasticsearch is an open-source search and analytics engine that is commonly used to search, analyze, and visualize log data. Elasticsearch is designed to be highly scalable and efficient, and can handle a large volume of log data in real-time.
Using a table-like format for your logs can make it easier to read and analyze log data, especially when using tools like Kibana, which provides a visual interface for viewing and analyzing log data.
To create a table-like format for your logs, you can use a structured logging library or framework that allows you to include log data in a structured format, such as JSON. Structured logging libraries and frameworks provide a consistent and standardized way to include log data in a structured format, making it easier to read and analyze log data.
Here is an example of a log message in JSON format using Logback:
{
  "timestamp": "2022-12-30 12:00:00",
  "level": "INFO",
  "requestId": "123456",
  "user": "JohnDoe",
  "message": "User retrieved data from the database."
}
In this example, the log data is included in a structured format with clearly defined fields for the timestamp, logging level, request ID, user, and message. This makes it easier to read and analyze the log data, and also allows tools like Kibana to easily extract and display the log data in a table-like format.
By using a structured logging library or framework and including log data in a structured format, you can create logs that are easy to read and analyze, especially when using tools like Kibana.
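To make the idea concrete, here is a hand-rolled sketch that assembles the same fields into one JSON line; in a real project you would let a structured-logging encoder for Logback do this, and all names here are illustrative:

```java
public class JsonLogDemo {

    // Escapes backslashes and quotes so the values are safe to embed in JSON.
    static String esc(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // Builds one structured log event; field names mirror the example above.
    static String jsonLine(String ts, String level, String requestId, String user, String msg) {
        return String.format(
            "{\"timestamp\":\"%s\",\"level\":\"%s\",\"requestId\":\"%s\",\"user\":\"%s\",\"message\":\"%s\"}",
            esc(ts), esc(level), esc(requestId), esc(user), esc(msg));
    }

    public static void main(String[] args) {
        System.out.println(jsonLine("2022-12-30 12:00:00", "INFO", "123456",
                "JohnDoe", "User retrieved data from the database."));
    }
}
```

Because every event carries the same named fields, Elasticsearch can index them directly and Kibana can show each field as its own column, which is where the table-like view comes from.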

A Few Tips for Effective Logging

Now that we have learned how important logging is, how to do it effectively, and how to combine it with Elasticsearch, I hope this information helps fellow devs become better developers by strengthening this aspect of their work. To close, here are some tips for creating effective logs in your application:
  • Choose a logging library that meets your needs. Some popular logging libraries for Java include Logback, Log4j, and Java Util Logging.
  • Create a logging configuration file that is appropriate for the logging library you are using. This file will control how logging is configured, such as the logging level, output, and format. You can find more detailed documentation on how to configure this file on the website of the logging library you are using.
  • If you use Lombok (common in Spring projects), add the @Slf4j annotation to the class you want to log from. The annotation generates a logger field named log that you can use with methods like log.info() or log.error().
  • Use the proper logging level. The level indicates the importance of a log entry, such as info, warning, or error. Reserve the higher levels for events that are truly important, so you can filter the output and keep logging effective.
  • Use the appropriate format for your logs. A good format will include important information such as the time, logging level, and log message, as well as provide useful context for understanding the log. You can use placeholders like {} to display the desired information within your log.
  • Add a requestID or correlationID to the log. It lets you follow the whole flow of a given request, which gives a much better point of view when inspecting what happened in a specific error.
  • Finally, use AOP! It will surely help you a lot.
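As a small illustration of the level and placeholder tips above, here is a self-contained sketch using the JDK's built-in java.util.logging (one of the libraries mentioned earlier); the CapturingHandler is a made-up helper so the filtering is visible, and {0} plays the same role as SLF4J's {} placeholder:

```java
import java.text.MessageFormat;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LevelDemo {

    // Collects the formatted messages that pass the handler's level filter.
    static class CapturingHandler extends Handler {
        final List<String> lines = new ArrayList<>();
        @Override public void publish(LogRecord r) {
            if (isLoggable(r)) {
                lines.add("[" + r.getLevel() + "] "
                        + MessageFormat.format(r.getMessage(), r.getParameters()));
            }
        }
        @Override public void flush() {}
        @Override public void close() {}
    }

    static List<String> demo() {
        Logger logger = Logger.getLogger("level-demo");
        logger.setUseParentHandlers(false);
        CapturingHandler handler = new CapturingHandler();
        handler.setLevel(Level.WARNING);   // only WARNING and above get through
        logger.addHandler(handler);
        logger.setLevel(Level.ALL);

        logger.log(Level.INFO, "User {0} logged in", "JohnDoe");         // filtered out
        logger.log(Level.WARNING, "Slow query for user {0}", "JohnDoe"); // kept

        logger.removeHandler(handler);
        return handler.lines;
    }

    public static void main(String[] args) {
        demo().forEach(System.out::println);
    }
}
```

Only the WARNING line survives the filter, which is exactly the point of choosing levels carefully: the handler (or an aggregator like Elasticsearch) can cut the noise without any change to the logging call sites.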
