Cassandra appender - a logback appender for distributed software logging

At the last Scala meetup of the lunar year, Liu Ying shared her professional software development experience, and I suddenly realized that I have never completely followed any standard development process. It is true that strict, standardized procedures are not required during technical research and learning; but once a technology is put into application and product development begins, the quality of software products can only be guaranteed by following professional development practices. In the meetup, Liu Ying mentioned exception handling and process tracing as important parts of those practices.

Let's start with logging. Logging is an indispensable part of any complete piece of software: by recording the running process, it helps developers track execution and analyze results or the causes of exceptions. Logback is probably the most popular and common logger in the Java ecosystem. Although logback already provides a variety of output modes for tracing information, such as STDOUT, FILE and DB, i.e. ConsoleAppender, FileAppender and DBAppender, an appender for distributed applications still needs to be customized. Because distributed software runs across systems, tracing information is naturally generated and stored on different systems, so distributed applications need distributed storage to manage tracing information globally. Logback is an open framework, and any customized appender can be integrated into it easily. In this post we will develop a logback appender based on cassandra.

First of all, let's get to know logback. The core concept to understand is the message level: logback filters the messages to be recorded according to their level. Logback supports the following levels, arranged from weak to strong according to the coverage of each record action:

TRACE -> DEBUG -> INFO -> WARN -> ERROR, corresponding to trace(msg), debug(msg), info(msg), warn(msg), error(msg) respectively.

The rule logback uses to filter by message level is as follows:

Suppose the level of the record function is p and the effective level of a logger is q: the message is recorded when p >= q. In other words, when error(msg) is called, logback records the message under every effective level (except OFF), whereas trace(msg) is recorded only when the effective level is TRACE. The table from the logback manual:

          TRACE  DEBUG  INFO  WARN  ERROR  OFF
trace()   YES    NO     NO    NO    NO     NO
debug()   YES    YES    NO    NO    NO     NO
info()    YES    YES    YES   NO    NO     NO
warn()    YES    YES    YES   YES   NO     NO
error()   YES    YES    YES   YES   YES    NO
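The filtering rule behind this table can be modeled in a few lines of plain Scala. This is only a sketch of the rule with hypothetical names, not logback API:

```scala
// A model of logback's level-filtering rule (hypothetical helper, not logback API):
// a call at level `call` is recorded when the logger's effective level `eff`
// is not OFF and the call's level is at least as strong as `eff`.
object LevelFilter {
  private val levels = List("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "OFF")
  private def weight(l: String): Int = levels.indexOf(l)

  def isRecorded(call: String, eff: String): Boolean =
    eff != "OFF" && weight(call) >= weight(eff)
}
```

Checking it against the table: error() is recorded under every level except OFF, while trace() is recorded only under TRACE.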

The effective message level of each logger in logback is inherited along the logger name hierarchy. When a logger does not define a level explicitly, it inherits the level of its nearest ancestor; that is, the default message level of X.Y.Z is inherited from X.Y.
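This inheritance rule can also be sketched in plain Scala: walk the dotted logger name from the most specific prefix upward and take the first explicitly configured level, falling back to the root level. Again, this is a model with hypothetical names, not logback API:

```scala
// Model of logback's effective-level lookup along the logger name hierarchy
// (hypothetical helper): X.Y.Z falls back to X.Y, then X, then the root level.
object LevelInheritance {
  def effectiveLevel(name: String, explicit: Map[String, String], rootLevel: String): String = {
    val parts = name.split('.')
    // prefixes from most specific ("X.Y.Z") down to least specific ("X")
    val prefixes = parts.indices.reverse.map(i => parts.take(i + 1).mkString("."))
    prefixes.collectFirst { case p if explicit.contains(p) => explicit(p) }.getOrElse(rootLevel)
  }
}
```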

The operations above are all built into logback and have nothing to do with the message-storage appender. Now let's build a cassandra based appender. As mentioned, logback is an open framework: any appender developed against logback's interfaces can be integrated easily. Here is the skeleton of a logback appender:

package com.datatech.logback

import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.UnsynchronizedAppenderBase
import com.datastax.driver.core.querybuilder.QueryBuilder

class CassandraAppender extends UnsynchronizedAppenderBase[ILoggingEvent] {
  override def append(eventObject: ILoggingEvent): Unit = {
    //write log message to cassandra
  }
  override def start(): Unit = {
    //set up cassandra
    super.start()
  }
  override def stop(): Unit = {
    super.stop()
    //clean up, closing cassandra
  }
}

First, we write a complete logback configuration file, logback.xml, including a ConsoleAppender, a FileAppender and the CassandraAppender:

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <Pattern>%d [%thread] %-5level %logger{36} - %msg%n</Pattern>
    </encoder>
  </appender>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <!-- path to your log file, where you want to store logs -->
    <file>/Users/Tiger/logback.log</file>
    <append>false</append>
    <encoder>
      <pattern>%d [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="cassandraLogger" class="com.datatech.logback.CassandraAppender">
    <hosts>192.168.0.189</hosts>
    <port>9042</port>
    <appName>posware</appName>
    <defaultFieldValues>{"app_customer":"bayakala.com","app_device":"1001"}</defaultFieldValues>
    <keyspaceName>applog</keyspaceName>
    <columnFamily>txnlog</columnFamily>
  </appender>
  <root level="debug">
    <appender-ref ref="cassandraLogger" />
    <appender-ref ref="STDOUT" />
  </root>
  <shutdownHook/>
</configuration>

The properties of the CassandraAppender in the configuration file, such as hosts, port and keyspaceName, are implemented in the scala code as setters:

private var _hosts: String = ""
def setHosts(hosts: String): Unit = _hosts = hosts

private var _port: Int = 9042   // default for the binary protocol; 9160 is the default for thrift
def setPort(port: Int): Unit = _port = port

private var _username: String = ""
def setUsername(username: String): Unit = _username = username

private var _password: String = ""
def setPassword(password: String): Unit = _password = password

These properties are then used when writing the log:

writeLog(eventObject)(optSession.get, _keyspaceName, _columnFamily)(_appName,ip,hostname,_defaultFieldValues)

In fact, these attributes from logback.xml can also be set at runtime, as follows:

//get appender instances
val log: Logger = LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).asInstanceOf[Logger]
val cassAppender = log.getAppender("cassandraLogger").asInstanceOf[CassandraAppender]
val stdoutAppender = log.getAppender("STDOUT").asInstanceOf[ConsoleAppender[ILoggingEvent]]
val fileAppender = log.getAppender("FILE").asInstanceOf[FileAppender[ILoggingEvent]]

if (cassAppender != null) {
  cassAppender.setHosts("192.168.0.189")
  cassAppender.setPort(9042)
  cassAppender.start()
}

This appender differs from generic appenders in one respect: the application needs to interact with logback, because we want to record tracking targets that are specific to the application as database fields. Targets such as a userid or productid are unique to the application's business and cannot be covered by a generic logger, so what we want is a logger that stays generic while supporting application-level fields. To achieve this, the business characteristics of the application should first show up in the database table schema. Here is an example:

CREATE TABLE IF NOT EXISTS applog.txnlog (
  class_name text,
  file_name text,
  host_ip text,
  host_name text,
  level text,
  line_number text,
  logger_name text,
  method_name text,
  thread_name text,
  throwable_str_rep text,
  log_date text,
  log_time text,
  log_msg text,
  app_name text,
  app_customer text,
  app_device text,
  PRIMARY KEY (app_customer, app_device, log_date, log_time)
);
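Given this primary key (partition key app_customer, clustering columns app_device, log_date and log_time), entries can be read back efficiently per customer. A hypothetical read query, using the default values from the configuration above as illustrative data, might look like:

```sql
-- hypothetical read path: one customer's entries for a given device and day,
-- returned in log_time order thanks to the clustering columns
SELECT log_time, level, log_msg
FROM applog.txnlog
WHERE app_customer = 'bayakala.com'
  AND app_device = '1001'
  AND log_date = '2020-02-12';
```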

In the schema above, app_customer and app_device are business attributes of the application, because we want to classify and manage messages per user or per device. For another application we would design a different schema covering its own business features. The values of such fields must be supplied when the record function is called in the application, because they are dynamic (for example, a server may serve hundreds of thousands of users); the only way to pass them is through the recorded message itself. Remember that logback can drive several appenders at once, such as ConsoleAppender, FileAppender and our CassandraAppender, and they all share the same msg, so the msg must carry the dynamic attribute values without hurting its readability. Encoding the msg as json achieves this, as follows:

var msg = event.getMessage()
try {
  val logMap = fromJson[Map[String,String]](msg)
  logMap.foreach( m => qryInsert = qryInsert.value(m._1, m._2))
} catch {
  case e: Throwable =>
    qryInsert = qryInsert.value(MESSAGE, msg)
    try {
      val dftMap = fromJson[Map[String,String]](default)
      dftMap.foreach( m => qryInsert = qryInsert.value(m._1, m._2))
    } catch { case e: Throwable => }
}
session.execute(qryInsert)

If the msg obtained by event.getMessage() is not in json format (for example, a message produced by a third-party library used by the application), the default values defined in the configuration file are used instead; they are also in json format, see the <defaultFieldValues> property in the configuration above.
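The JsonConverter trait that provides toJson and fromJson is referenced throughout but not shown in the post. Here is a minimal, dependency-free stand-in, assuming only flat Map[String,String] payloads (which is all the appender needs); a production version would use a real JSON library such as json4s or jackson:

```scala
// Hypothetical sketch of the JsonConverter trait used by the appender.
// Only handles flat {"key":"value"} objects; values must not contain ',' or ':'.
// fromJson throws on non-json input, which is what the appender's catch relies on.
trait JsonConverter {
  def toJson(m: Map[String, String]): String =
    m.map { case (k, v) => "\"" + k + "\":\"" + v + "\"" }.mkString("{", ",", "}")

  def fromJson[T](s: String): T = {
    val t = s.trim
    require(t.startsWith("{") && t.endsWith("}"), "not a json object")
    val body = t.substring(1, t.length - 1).trim
    val pairs =
      if (body.isEmpty) Map.empty[String, String]
      else body.split(",").map { kv =>
        val Array(k, v) = kv.split(":", 2)
        k.trim.stripPrefix("\"").stripSuffix("\"") -> v.trim.stripPrefix("\"").stripSuffix("\"")
      }.toMap
    pairs.asInstanceOf[T]   // caller always asks for Map[String,String] here
  }
}
```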

Cassandra usage here is simple: we only need the insert operation. The complete source code of the CassandraAppender follows:

package com.datatech.logback

import ch.qos.logback.classic.spi._
import ch.qos.logback.core.UnsynchronizedAppenderBase
import com.datastax.driver.core._
import com.datastax.driver.core.querybuilder._
import java.net.InetAddress
import java.time._
import java.time.format._
import java.util.Locale

class CassandraAppender extends UnsynchronizedAppenderBase[ILoggingEvent] {
  import CassandraAppender._

  private var _hosts: String = ""
  def setHosts(hosts: String): Unit = _hosts = hosts

  private var _port: Int = 9042   // default for the binary protocol; 9160 is the default for thrift
  def setPort(port: Int): Unit = _port = port

  private var _username: String = ""
  def setUsername(username: String): Unit = _username = username

  private var _password: String = ""
  def setPassword(password: String): Unit = _password = password

  private var _defaultFieldValues: String = ""
  def setDefaultFieldValues(defaultFieldValues: String) = _defaultFieldValues = defaultFieldValues

  private val ip: String = getIP()
  private val hostname: String = getHostName()

  // Keyspace/ColumnFamily information
  private var _keyspaceName: String = "Logging"
  def setKeyspaceName(keyspaceName: String): Unit = _keyspaceName = keyspaceName

  private var _columnFamily: String = "log_entries"
  def setColumnFamily(columnFamily: String): Unit = _columnFamily = columnFamily

  private var _appName: String = "default"
  def setAppName(appName: String): Unit = _appName = appName

  private var _replication: String = "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"
  def setReplication(replication: String): Unit = _replication = replication

  private var _consistencyLevelWrite: ConsistencyLevel = ConsistencyLevel.ONE
  def setConsistencyLevelWrite(consistencyLevelWrite: String): Unit = {
    try {
      _consistencyLevelWrite = ConsistencyLevel.valueOf(consistencyLevelWrite.trim)
    } catch {
      case e: Throwable =>
        throw new IllegalArgumentException("Consistency level " + consistencyLevelWrite + " wasn't found.")
    }
  }

  private var optCluster: Option[Cluster] = None
  private var optSession: Option[Session] = None

  def connectDB(): Unit = {
    try {
      val cluster = new Cluster.Builder()
        .addContactPoints(_hosts)
        .withPort(_port)
        .build()
      val session = cluster.connect()
      optCluster = Some(cluster)
      optSession = Some(session)
    } catch {
      case e: Throwable =>
        optCluster = None
        optSession = None
        println(s"error when logger connecting to cassandra [${_hosts}:${_port}]")
    }
  }

  override def append(eventObject: ILoggingEvent): Unit = {
    if (optSession.isDefined) {
      try {
        writeLog(eventObject)(optSession.get, _keyspaceName, _columnFamily)(_appName, ip, hostname, _defaultFieldValues)
      } catch { case e: Throwable => }
    }
  }

  override def start(): Unit = {
    if (!_hosts.isEmpty) {
      connectDB()
      super.start()
    }
  }

  override def stop(): Unit = {
    super.stop()
    if (optSession.isDefined) {
      optSession.get.closeAsync()
      optCluster.get.closeAsync()
    }
  }
}

object CassandraAppender extends JsonConverter {
  // CF column names
  val HOST_IP: String = "host_ip"
  val HOST_NAME: String = "host_name"
  val APP_NAME: String = "app_name"
  val LOGGER_NAME: String = "logger_name"
  val LEVEL: String = "level"
  val CLASS_NAME: String = "class_name"
  val FILE_NAME: String = "file_name"
  val LINE_NUMBER: String = "line_number"
  val METHOD_NAME: String = "method_name"
  val THREAD_NAME: String = "thread_name"
  val THROWABLE_STR: String = "throwable_str_rep"
  val LOG_DATE: String = "log_date"
  val LOG_TIME: String = "log_time"
  val MESSAGE: String = "log_msg"

  val dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS", Locale.US)
  def logDate: String = java.time.LocalDate.now.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"))
  def logTime: String = LocalDateTime.now.format(dateTimeFormatter).substring(11)

  def writeLog(event: ILoggingEvent)(session: Session, kspc: String, tbl: String)(appName: String, ip: String, hostName: String, default: String): ResultSet = {
    var qryInsert = QueryBuilder.insertInto(kspc, tbl)
      .value(APP_NAME, appName)
      .value(HOST_IP, ip)
      .value(HOST_NAME, hostName)
      .value(LOGGER_NAME, event.getLoggerName())
      .value(LEVEL, event.getLevel().toString)
      .value(THREAD_NAME, event.getThreadName())
      .value(LOG_DATE, logDate)
      .value(LOG_TIME, logTime)
    try {
      val callerData = event.getCallerData()
      if (callerData.nonEmpty) {
        qryInsert = qryInsert.value(CLASS_NAME, callerData.head.getClassName())
          .value(FILE_NAME, callerData.head.getFileName())
          .value(LINE_NUMBER, callerData.head.getLineNumber().toString)
          .value(METHOD_NAME, callerData.head.getMethodName())
      }
    } catch { case e: Throwable => println(s"logging event error: ${e.getMessage}") }
    try {
      if (event.getThrowableProxy() != null) {
        val throwableStrs = event.getThrowableProxy().getSuppressed().toList
        val throwableStr = throwableStrs.foldLeft("") { case (b, t) => b + "," + t.getMessage() }
        qryInsert = qryInsert.value(THROWABLE_STR, throwableStr)
      }
    } catch { case e: Throwable => println(s"logging event error: ${e.getMessage}") }
    var msg = event.getMessage()
    try {
      val logMap = fromJson[Map[String, String]](msg)
      logMap.foreach( m => qryInsert = qryInsert.value(m._1, m._2))
    } catch {
      case e: Throwable =>
        qryInsert = qryInsert.value(MESSAGE, msg)
        try {
          val dftMap = fromJson[Map[String, String]](default)
          dftMap.foreach( m => qryInsert = qryInsert.value(m._1, m._2))
        } catch { case e: Throwable => }
    }
    session.execute(qryInsert)
  }

  def getHostName(): String = {
    var hostname = "unknown"
    try {
      val addr: InetAddress = InetAddress.getLocalHost()
      hostname = addr.getHostName()
    } catch { case e: Throwable => hostname = "error" }
    hostname
  }

  def getIP(): String = {
    var ip: String = "unknown"
    try {
      val addr: InetAddress = InetAddress.getLocalHost()
      ip = addr.getHostAddress()
    } catch { case e: Throwable => ip = "error" }
    ip
  }
}

Here is the test code:

import ch.qos.logback.classic.Logger
import ch.qos.logback.core._
import com.datatech.logback._
import ch.qos.logback.classic.spi.ILoggingEvent
import org.slf4j.LoggerFactory
import ch.qos.logback.classic.LoggerContext
import java.time._
import java.time.format._
import java.util.Locale
import scala.io._
import com.datastax.driver.core._

object LoggingDemo extends App with JsonConverter {
  val log: Logger = LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).asInstanceOf[Logger]
  val cassAppender = log.getAppender("cassandraLogger").asInstanceOf[CassandraAppender]
  val stdoutAppender = log.getAppender("STDOUT").asInstanceOf[ConsoleAppender[ILoggingEvent]]
  val fileAppender = log.getAppender("FILE").asInstanceOf[FileAppender[ILoggingEvent]]

  /*
  val cluster = new Cluster.Builder()
    .addContactPoints("192.168.0.189")
    .withPort(9042)
    .build()
  val session = cluster.connect()

  val keyspace = getClass.getResource("/logger.schema")
  val table = getClass.getResource("/txnlog.schema")
  val qrykspc = Source.fromFile(keyspace.getPath).getLines.mkString
  session.execute(qrykspc)
  val qrytbl = Source.fromFile(table.getPath).getLines.mkString
  session.execute(qrytbl)
  session.close()
  cluster.close()

  val json = toJson(loggedItems)
  println(s"json = $json")
  val m = fromJson[Map[String,String]](json)
  println(s"map = $m")

  //stop the appenders
  if (stdoutAppender != null)
    stdoutAppender.stop()
  if (fileAppender != null)
    fileAppender.stop()
  */

  if (cassAppender != null) {
    cassAppender.setHosts("192.168.0.189")
    cassAppender.setPort(9042)
    cassAppender.start()
  }

  val dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS", Locale.US)
  val now = LocalDateTime.now.format(dateTimeFormatter)
  log.info("************this is a info message ..." + now)
  log.debug("***********debugging message here ..." + now)

  var loggedItems = Map[String, String]()
  // loggedItems += ("app_name" -> "test")
  loggedItems = loggedItems ++ Map(
    ("app_customer" -> "logback.com"),
    ("app_device" -> "9101"),
    ("log_msg" -> "specific message for cassandra ..."))
  log.debug(toJson(loggedItems))

  //stop the logger
  val loggerContext = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]
  loggerContext.stop()
}

12 February 2020, 08:46 | Views: 3834
