Logging with Log4j and log aggregation with Apache Flume


  1. Logging with Log4j and log aggregation with Apache Flume. By Arivoli K. (MDS201903), Naveen Kumar Reddy (MDS201909), Saager Babu NG (MDS201917), Suman Polley (MDS201935), Avinash Kumar (MDS201907)

  2. Why is logging necessary?

  3. Here comes log4j

  4. Overview • Log4j is a reliable, fast, and flexible logging framework (API) written in Java, distributed under the Apache Software License. • Log4j has been ported to the C, C++, C#, Perl, Python, Ruby, and Eiffel languages. • It views the logging process in terms of levels of priority.

  5. Components Log4j has three main components: • Loggers: Responsible for capturing logging information. • Appenders: Responsible for publishing logging information to various preferred destinations. • Layouts: Responsible for formatting logging information in different styles.
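To make the three components concrete, here is a minimal sketch that wires them together programmatically with the log4j 1.x API (the pattern string is just an illustration):

```java
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class ComponentsDemo {
    public static void main(String[] args) {
        // Layout: formats each event; Appender: publishes events to the console
        ConsoleAppender console = new ConsoleAppender(
                new PatternLayout("%d{ISO8601} [%t] %-5p %c - %m%n"));
        // Logger: captures the logging calls made by the application
        Logger root = Logger.getRootLogger();
        root.addAppender(console);
        root.info("logger, appender and layout wired together");
    }
}
```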

  6. History • Started in early 1996 as a tracing API for the E.U. SEMPER (Secure Electronic Marketplace for Europe) project. • After countless enhancements and several incarnations, the initial API has evolved to become log4j, a popular logging package for Java. • The package is distributed under the Apache Software License, a full-fledged open source license certified by the Open Source Initiative.

  7. Features • It is thread-safe. • It is optimized for speed. • It is based on a named logger hierarchy. • It supports multiple output appenders per logger. • It is fail-stop, but log4j does not guarantee that each log statement will be delivered to its destination. • And many more!

  8. Pros of logging • Quick debugging • Easy maintenance • Structured storage of an application's runtime information.

  9. Cons of logging • Slows down an application. • If too verbose, it can cause scrolling blindness. To alleviate these concerns, log4j is designed to be reliable, fast, and extensible.

  10. Logger object • The top-level layer is the Logger layer, which provides the Logger object. • The Logger object is responsible for capturing logging information, and loggers are stored in a namespace hierarchy.

  11. Appender object • This is a lower-level layer which provides Appender objects. • The Appender object is responsible for publishing logging information to various preferred destinations such as a database, file, console, UNIX Syslog, etc.

  12. Layout object • The Layout layer provides objects which are used to format logging information in different styles. • It provides support to appender objects before publishing logging information. • Layout objects play an important role in publishing logging information in a way that is human-readable and reusable.

  13. Framework of log4j

  14. Brief overview of the support objects 1) Level Object: The Level object defines the granularity and priority of any logging information. There are seven levels of logging defined within the API, in ascending order of priority: ALL, DEBUG, INFO, WARN, ERROR, FATAL, and OFF.

  15. Logging levels
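A small sketch of how the level threshold works in practice (log4j 1.x API; BasicConfigurator just installs a default console appender):

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class LevelDemo {
    private static final Logger log = Logger.getLogger(LevelDemo.class);

    public static void main(String[] args) {
        BasicConfigurator.configure();
        log.setLevel(Level.WARN);  // threshold: only WARN and above are logged
        log.debug("suppressed - below the WARN threshold");
        log.info("suppressed as well");
        log.warn("printed");
        log.error("printed");
    }
}
```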

  16. Support objects 2) Filter Object: The Filter object is used to analyze logging information and make further decisions on whether that information should be logged or not. 3) ObjectRenderer: The ObjectRenderer object is specialized in providing a String representation of different objects passed to the logging framework. 4) LogManager: The LogManager object manages the logging framework.

  17. Syntax
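The slide's original content is not preserved here; as an illustration, a typical log4j.properties configuration in the style the deck describes (the file path and appender name are placeholders):

```properties
# root logger: DEBUG level, writing through one appender named FILE
log4j.rootLogger = DEBUG, FILE

# appender: publishes events to a file (path is a placeholder)
log4j.appender.FILE = org.apache.log4j.FileAppender
log4j.appender.FILE.File = /tmp/log.out

# layout: pattern-based formatting of each event
log4j.appender.FILE.layout = org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern = %d %-5p %c - %m%n
```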

  18. Types of appenders • RollingFileAppender • SMTPAppender • SocketAppender • SocketHubAppender • AppenderSkeleton • AsyncAppender • ConsoleAppender
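As one concrete example, a RollingFileAppender can be set up programmatically like this (a sketch; the file path and size limits are arbitrary):

```java
import java.io.IOException;

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class RollingDemo {
    public static void main(String[] args) throws IOException {
        // rolls the log file over once it reaches 5MB, keeping 3 backups
        RollingFileAppender rolling = new RollingFileAppender(
                new PatternLayout("%d %-5p %c - %m%n"), "/tmp/app.log");
        rolling.setMaxFileSize("5MB");
        rolling.setMaxBackupIndex(3);

        Logger log = Logger.getLogger(RollingDemo.class);
        log.addAppender(rolling);
        log.info("written to the rolling file");
    }
}
```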

  19. Layout • We have used PatternLayout with our appender. • All the possible options are: DateLayout, HTMLLayout, PatternLayout, SimpleLayout, and XMLLayout.

  20. Logging methods • The Logger class provides a variety of methods to handle logging activities. • The Logger class does not allow us to instantiate a new Logger instance, but it provides two static methods for obtaining a Logger object: public static Logger getRootLogger() and public static Logger getLogger(String name). For example: static Logger log = Logger.getLogger(log4jExample.class.getName());
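Putting the two factory methods together, a minimal example class (expanding the snippet above):

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class log4jExample {
    // a named logger keyed to this class; requesting the same name again
    // returns the same instance
    static Logger log = Logger.getLogger(log4jExample.class.getName());

    public static void main(String[] args) {
        BasicConfigurator.configure();          // default console setup
        Logger root = Logger.getRootLogger();   // top of the logger hierarchy
        root.info("obtained the root logger");
        log.info("obtained a named logger");
    }
}
```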

  21. Logging methods • Once we obtain an instance of a named logger, we can use several methods of the logger to log messages. • The Logger class has the following methods for printing logging information, as shown on the next two slides.

  22. Logging methods

  23. Logging methods
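The method tables from the two slides above are not reproduced here; for reference, the standard printing methods are one per level, plus a generic log() that takes an explicit level:

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class PrintingMethods {
    public static void main(String[] args) {
        BasicConfigurator.configure();
        Logger log = Logger.getLogger(PrintingMethods.class);
        log.debug("debug message");   // fine-grained diagnostic detail
        log.info("info message");     // coarse-grained progress
        log.warn("warning message");  // potentially harmful situation
        log.error("error message");   // failure the application survives
        log.fatal("fatal message");   // failure that will likely abort the run
        log.log(Level.WARN, "generic form with an explicit level");
    }
}
```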

  24. Log4j appenders • Flume provides two Log4j appenders that can be plugged into your application: 1) one that writes data to exactly one Flume agent, and 2) one that chooses among many configured Flume agents in round-robin or random order.

  25. Flume appender
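The first of the two, the single-agent Log4jAppender, is configured entirely in log4j.properties, along the lines of the Flume user guide (host and port are placeholders):

```properties
# send INFO and above to a single Flume agent's Avro source
log4j.rootLogger = INFO, flume

log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
```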

  26. Load-balancing Log4j appender • The Log4j appender can be configured to load-balance between multiple Flume agents using a round-robin or random strategy. These appenders come bundled with Flume and don't require us to write any code, which is another reason why they are extremely popular. A configuration sketch follows.
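A sketch of the load-balancing variant, again configuration-only (the hosts are placeholders; the selector can be ROUND_ROBIN or RANDOM):

```properties
log4j.rootLogger = INFO, flume

log4j.appender.flume = org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
# space-separated host:port pairs of candidate Flume agents
log4j.appender.flume.Hosts = agent1:41414 agent2:41414
log4j.appender.flume.Selector = ROUND_ROBIN
```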

  27. Apache Flume and log aggregation ● Introduction ● Philosophy ● Apache Flume in the HDFS ecosystem ● Pros and cons - Suman Polley, MDS201935

  28. INTRODUCTION: ● Apache Flume is a distributed system for efficiently collecting, aggregating and moving large amounts of log data from many different sources to a centralized data store.

  29. ● The use of Apache Flume is not restricted to log data aggregation. Since data sources are customizable, Flume can be used to transport massive quantities of event data, including but not limited to network traffic data, social-media-generated data, email messages, and pretty much any data source possible.

  30. PHILOSOPHY: ● Distributed pipeline architecture. ● Pushing data into HDFS through an intermediate system is a common use case. Flume acts as a buffer between source and destination; by smoothing out inconsistencies in the data flow, it maintains a steady stream of data. ● Low cost of installation, operation, and maintenance. ● Highly customizable and extensible.

  31. Flume's position in the Hadoop ecosystem:

  32. ARCHITECTURE:
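The architecture diagram is not reproduced here; its essence is the source → channel → sink chain inside an agent. A minimal single-agent configuration in that style (agent and component names a1/r1/c1/k1 are arbitrary):

```properties
# one agent: netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```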

  33. PROS: ● RELIABILITY & RECOVERABILITY: The events are staged in a channel on each agent. The events are then delivered to the next agent or to a terminal repository (like HDFS) in the flow. The events are removed from a channel only after they are stored in the channel of the next agent or in the terminal repository. This ensures reliable data transfer and recoverability.

  34. PROS: ● DOWNSTREAMING: There could be hundreds or even thousands of sources, while HDFS requires that exactly one client write to a file at a time. This could be a problem, as it would put severe stress on the destination server.

  35. PROS: Solution: By connecting multiple agents to each other, Flume creates a data pipeline. It is possible to scale down the number of servers that write to HDFS by adding intermediate Flume agents. This structure has its own problems: if the n-th tier carries the same volume as the (n-1)-th tier, the n-th tier will easily overflow, creating back-pressure in the flow.

  36. Points to remember: ● Event volume is lowest in the outermost tier. ● Event volume increases as flows converge. ● Event volume is greatest in the innermost tier.

  37. PROS: ● HANDLING AGENT FAILURE: If a Flume agent goes down, all the flows hosted on that agent are aborted. Once the agent is restarted, flow resumes.

  38. PROS: A flow using a file channel or another durable channel will resume processing events where it left off.

  39. CONS: ● Channels in Flume act as buffers at the various hops. These buffers have a fixed capacity; once a buffer is full, it creates back-pressure on earlier points in the flow. If this pressure propagates all the way to the source of the flow, Flume will become unavailable and may lose data. Rule of thumb: channel capacity must cover the worst-case (maximum) data ingestion rate sustained over the worst-case downstream outage interval.
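A worked example of that rule of thumb, with illustrative numbers: at a worst-case ingest of 1,000 events/s and a worst-case downstream outage of one hour, the channel must hold 1,000 × 3,600 = 3,600,000 events:

```properties
# file channel sized for 1,000 events/s sustained over a 1-hour outage
# (paths and numbers are placeholders)
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data
a1.channels.c1.capacity = 3600000
```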

  40. A BETTER SOLUTION: What if the single node goes down? Adding another Flume agent balances the load and improves downstream failure handling.

  41. Summary: ● All of the above points make Apache Flume a great real-time log aggregator. ● Although it was created for log aggregation, it has since evolved to handle many types of streaming data. ● Weak ordering and proneness to duplication hinder Flume's application beyond logging (e.g., IoT, instant messaging services).

  42. NAME NODE CLOGGING: What if all the web servers collecting log data tried to connect to HDFS and write at the same time? (The diagram showed Spark, MapReduce, and Impala clients sharing the NameNode.)

  43. FLUME: EVENT ● An Event is the fundamental unit of data transported by Flume from its point of origination to its final destination. ● A Flume event is defined as a unit of data flow having a byte payload and an optional set of string attributes. ● The payload is opaque to Flume. ● Headers are specified as an unordered collection of string key-value pairs. ● These headers help in contextual routing.
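A sketch of building such an event with Flume's Java API (the header key and body content are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class EventDemo {
    public static void main(String[] args) {
        // headers: unordered string key-value pairs, usable for contextual routing
        Map<String, String> headers = new HashMap<>();
        headers.put("host", "web-01");  // hypothetical routing header

        // body: an opaque byte payload that Flume never inspects
        Event event = EventBuilder.withBody(
                "GET /index.html 200".getBytes(StandardCharsets.UTF_8), headers);
        System.out.println(event.getHeaders());
    }
}
```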
