Syslog-ng – Part 3: Minimized logging to MySQL with dynamic tables & trimming



Revision: 7th-JAN-2020


Continuing the series of posts on syslog-ng, the following post illustrates how to log only particular messages using pattern matching, and how to let syslog-ng create dynamic tables based on dates so that searching/querying becomes easy.

This task was required in relation to CGNAT logging; you may want to read about that here:

https://aacable.wordpress.com/2020/01/01/mikrotik-cgnat/

Hardware/Software used in this post:

  • Mikrotik Routerboard – firmware 6.46.x
  • Ubuntu 16.04 Server x64 along with syslog-ng version 3.25.1 on some decent hardware

Requirements:

Ubuntu OS


Ref: Installing the latest version of syslog-ng

# Make sure to change the version. I used these commands on Ubuntu 16.04; for Ubuntu 18, change the repository path to xUbuntu_18.04.

touch /etc/apt/sources.list.d/syslog-ng-obs.list
echo "deb http://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/xUbuntu_16.04 ./" > /etc/apt/sources.list.d/syslog-ng-obs.list
apt-get update
apt-get -y install apache2 mc wget make gcc mysql-server mysql-client curl phpmyadmin libdbd-pgsql aptitude libboost-system-dev libboost-thread-dev libboost-regex-dev libmongo-client0 libesmtp6 syslog-ng-mod-sql libdbd-mysql syslog-ng

Note: during the above package installation you will be asked to set a mysql/phpmyadmin password; you can use your root password to continue. The installation may download a fair amount of packages. After it finishes, you can check the syslog-ng version.

At the time I did the installation, I got this:

syslog-ng -V

root@nab-syslog:~# syslog-ng -V
syslog-ng 3 (3.30.1)
Config version: 3.29
Installer-Version: 3.30.1
Revision: 3.30.1-2
Compile-Date: Nov 19 2020 16:33:22
Module-Directory: /usr/lib/syslog-ng/3.30
Module-Path: /usr/lib/syslog-ng/3.30
Include-Path: /usr/share/syslog-ng/include
Error opening plugin module; module='mod-java', error='libjvm.so: cannot open shared object file: No such file or directory'
Available-Modules: syslogformat,azure-auth-header,hook-commands,linux-kmsg-format,kafka,afmongodb,json-plugin,cef,secure-logging,afsocket,pseudofile,kvformat,add-contextual-data,afamqp,riemann,http,appmodel,stardate,tfgetent,redis,cryptofuncs,sdjournal,afuser,pacctformat,graphite,confgen,geoip2-plugin,affile,basicfuncs,xml,mod-python,examples,afsmtp,timestamp,map-value-pairs,disk-buffer,afsnmp,system-source,afsql,afstomp,csvparser,tags-parser,afprog,dbparser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on

Status:

root@nab-syslog:~# service syslog-ng status
syslog-ng.service - System Logger Daemon
Loaded: loaded (/lib/systemd/system/syslog-ng.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-01-25 00:20:55 EST; 1min 26s ago
Docs: man:syslog-ng(8)
Main PID: 21596 (syslog-ng)
CGroup: /system.slice/syslog-ng.service
21596 /usr/sbin/syslog-ng -F
 
Jan 25 00:20:55 nab-syslog systemd[1]: Starting System Logger Daemon...
Jan 25 00:20:55 nab-syslog systemd[1]: Started System Logger Daemon.

Create database in MySQL to store dynamic tables

Create the base database for storing the dynamically created, date-wise tables:

mysql -uroot -pXXX -e "create database syslog;"
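Optionally, instead of letting syslog-ng connect as root, you can create a dedicated MySQL user restricted to the syslog database. A minimal sketch; the 'syslogng' username and its password below are placeholders, and if you use this you must update the credentials in the destination later on:

# Hypothetical least-privilege user for syslog-ng (change the password)
mysql -uroot -pXXX -e "CREATE USER 'syslogng'@'localhost' IDENTIFIED BY 'ChangeMe'; GRANT ALL PRIVILEGES ON syslog.* TO 'syslogng'@'localhost'; FLUSH PRIVILEGES;"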

Now edit the syslog-ng configuration file:

nano /etc/syslog-ng/syslog-ng.conf

and use the following as a sample. I would recommend adding only the relevant parts; don't blindly copy-paste. This is just a sample for demonstration purposes.


Syslog-ng Sample File

@version: 3.30
@include "scl.conf"
# First, set some global options.
options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
dns_cache(no); owner("root"); group("adm"); perm(0640);
stats_freq(0); bad_hostname("^gconfd$");
};
########################
# Sources
########################
# This is the default behavior of sysklogd package
# Logs may come from unix stream, but not from another machine.
#
source s_src {
system();
internal();
};
########################
# Destinations
########################
# First some standard logfile
#
destination d_auth { file("/var/log/auth.log"); };
destination d_cron { file("/var/log/cron.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kern { file("/var/log/kern.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_syslog { file("/var/log/syslog"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };
destination d_mailinfo { file("/var/log/mail.info"); };
destination d_mailwarn { file("/var/log/mail.warn"); };
destination d_mailerr { file("/var/log/mail.err"); };
destination d_newscrit { file("/var/log/news/news.crit"); };
destination d_newserr { file("/var/log/news/news.err"); };
destination d_newsnotice { file("/var/log/news/news.notice"); };
destination d_debug { file("/var/log/debug"); };
destination d_error { file("/var/log/error"); };
destination d_messages { file("/var/log/messages"); };
destination d_console { usertty("root"); };
destination d_console_all { file("/dev/tty10"); };
destination d_xconsole { pipe("/dev/xconsole"); };
destination d_ppp { file("/var/log/ppp.log"); };
########################
# Filters
########################
# Here come the filter options. With these rules, we can set which
# messages go where.
 
filter f_dbg { level(debug); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_err { level(err); };
filter f_crit { level(crit .. emerg); };
filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); };
filter f_error { level(err .. emerg) ; };
filter f_messages { level(info,notice,warn) and
not facility(auth,authpriv,cron,daemon,mail,news); };
filter f_auth { facility(auth, authpriv) and not filter(f_debug); };
filter f_cron { facility(cron) and not filter(f_debug); };
filter f_daemon { facility(daemon) and not filter(f_debug); };
filter f_kern { facility(kern) and not filter(f_debug); };
filter f_lpr { facility(lpr) and not filter(f_debug); };
filter f_local { facility(local0, local1, local3, local4, local5,
local6, local7) and not filter(f_debug); };
filter f_mail { facility(mail) and not filter(f_debug); };
filter f_news { facility(news) and not filter(f_debug); };
filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); };
filter f_user { facility(user) and not filter(f_debug); };
filter f_uucp { facility(uucp) and not filter(f_debug); };
 
filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };
filter f_ppp { facility(local2) and not filter(f_debug); };
filter f_console { level(warn .. emerg); };
########################
# Log paths
########################
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
log { source(s_src); filter(f_lpr); destination(d_lpr); };
log { source(s_src); filter(f_syslog3); destination(d_syslog); };
log { source(s_src); filter(f_user); destination(d_user); };
log { source(s_src); filter(f_uucp); destination(d_uucp); };
log { source(s_src); filter(f_mail); destination(d_mail); };
log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); };
log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); };
log { source(s_src); filter(f_debug); destination(d_debug); };
log { source(s_src); filter(f_error); destination(d_error); };
log { source(s_src); filter(f_messages); destination(d_messages); };
log { source(s_src); filter(f_console); destination(d_console_all);
destination(d_xconsole); };
log { source(s_src); filter(f_crit); destination(d_console); };
@include "/etc/syslog-ng/conf.d/*.conf"
 
######## Zaib Section Starts here
# Accept connections on UDP/TCP port 514 (udp() defaults to port 514)
source s_net { udp(port(514)); tcp(port(514)); };
 
# Adding filter for our Mikrotik Routerboard, store logs in FILE as primary
# MIKROTIK ###########
 
# This entry will LOG all information coming from this IP, change this to match your mikrotik NAS
filter f_mikrotik_192.168.0.1 { host("192.168.0.1"); };
# add info in LOG (Part1)
destination df_mikrotik_192.168.0.1 {
file("/var/log/zlogs/${HOST}.${YEAR}.${MONTH}.${DAY}.log"
template-escape(no));
};
 
# Store Logs in MYSQL DB as secondary # add info in MYSQL (Part2)
destination d_mysql {
sql(type(mysql)
host("localhost")
# MAKE SURE TO CHANGE CREDENTIALS
username("root")
password("XXXXX")
database("syslog")
table("${R_YEAR}_${R_MONTH}_${R_DAY}")
columns( "id int(11) unsigned not null auto_increment primary key", "host varchar(40) not null", "date datetime", "message text not null")
values("0", "$FULLHOST", "$R_YEAR-$R_MONTH-$R_DAY $R_HOUR:$R_MIN:$R_SEC", "$MSG")
indexes("id"));
};
log {
source(s_net);
filter(f_mikrotik_192.168.0.1);
destination(df_mikrotik_192.168.0.1);
destination(d_mysql);
};
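Before restarting syslog-ng, it is worth checking the configuration for syntax errors; syslog-ng can parse the file without starting the daemon:

# Exits silently with status 0 when the configuration parses cleanly
syslog-ng --syntax-only && echo "config OK"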

IMPORTANT:

Create the 'zlogs' folder in /var/log so that the Mikrotik logs can also be saved to a separate file, if required:

mkdir /var/log/zlogs
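Since this file destination writes one log file per day, old files can pile up. A minimal housekeeping sketch, assuming a 30-day retention (adjust to taste):

# Delete per-day Mikrotik log files older than 30 days
find /var/log/zlogs -type f -name '*.log' -mtime +30 -delete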

Mikrotik rule to LOG Forward chain

Now we need to create a rule in the Mikrotik FILTER section so that it logs packets being forwarded to/from PPPoE users. Make sure that in the source address list you select your local PPPoE users' pool, to avoid unrelated excessive logging. In the example below we log NEW TCP connections only.

LOG SIZE example: at one ISP with around 1200+ online users, the log size for TCP connections was around 25 GB. To lower the size, I configured it to log only new TCP connections, which reduced the DB size by about 50%.
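To keep an eye on how much space the date-wise tables are taking, you can query MySQL's information_schema (standard MySQL; adjust the credentials to yours):

# Show per-table size (in MB) of the syslog database, largest first
mysql -uroot -pXXXXX -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema='syslog' ORDER BY size_mb DESC;"

The Mikrotik rule itself follows: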

/ip firewall filter
add action=log chain=forward connection-state=new protocol=tcp src-address-list=pppoe_allowed_users

Mikrotik rule to send LOG to SYSLOG-NG Server

/system logging action
add name=syslogng remote=192.168.101.1 target=remote
# Change IP address pointed towards syslog server
 
/system logging
set 0 topics=info,!firewall
add action=syslogng topics=firewall
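To verify that log packets are actually arriving at the server, a quick capture on the syslog port helps (assuming the server's LAN interface is eth0; adjust as needed):

# Watch for incoming syslog traffic from the Mikrotik
tcpdump -ni eth0 udp port 514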

Restart Syslog-ng server

Now restart the syslog-ng service:

service syslog-ng restart

and you will see the dynamic tables being created, as follows:

mysql -uroot -pXXXXX
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 411
Server version: 5.7.28-0ubuntu0.18.04.4-log (Ubuntu)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> use syslog;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> show tables;
+------------------+
| Tables_in_syslog |
+------------------+
| 2020_01_08       |
+------------------+
1 row in set (0.00 sec)
 
mysql> describe 2020_01_08;
+---------+------------------+------+-----+---------+----------------+
| Field   | Type             | Null | Key | Default | Extra          |
+---------+------------------+------+-----+---------+----------------+
| id      | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
| host    | varchar(40)      | NO   |     | NULL    |                |
| date    | datetime         | YES  |     | NULL    |                |
| message | text             | NO   |     | NULL    |                |
+---------+------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)
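If you will often query a day's table by time, note that the config above only indexes the id column; adding an index on the date column can speed up range lookups. A sketch against the 2020_01_08 table shown above:

# Add an index on the datetime column of an existing day table
mysql -uroot -pXXXXX -D syslog -e "ALTER TABLE \`2020_01_08\` ADD INDEX idx_date (\`date\`);"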

You can then see data being inserted into the table as soon as logs are received from the remote devices:

2020-01-08T07:49:43.020811Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:28', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK,PSH), 172.16.0.2:57193->172.217.19.174:443, NAT (172.16.0.2:57193->101.11.11.252:2244)->172.217.19.174:443, len 79')
2020-01-08T07:49:43.031281Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:28', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK,FIN), 172.16.0.2:57096->3.228.94.102:443, NAT (172.16.0.2:57096->101.11.11.252:2219)->3.228.94.102:443, len 40')
2020-01-08T07:49:43.041420Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:38', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49247->216.58.208.234:443, NAT (172.16.0.2:49247->101.11.11.252:2202)->216.58.208.234:443, len 1378')
2020-01-08T07:49:43.051112Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:38', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49247->216.58.208.234:443, NAT (172.16.0.2:49247->101.11.11.252:2202)->216.58.208.234:443, len 1378')
2020-01-08T07:49:43.061280Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:39', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49760->172.217.19.1:443, NAT (172.16.0.2:49760->101.11.11.252:2202)->172.217.19.1:443, len 1378')
2020-01-08T07:49:43.071449Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:39', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49760->172.217.19.1:443, NAT (172.16.0.2:49760->101.11.11.252:2202)->172.217.19.1:443, len 1378')
2020-01-08T07:49:44.828993Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:44', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:53503->216.58.208.234:443, NAT (172.16.0.2:53503->101.11.11.252:2203)->216.58.208.234:443, len 827')
2020-01-08T07:49:44.851034Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:44', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:53503->216.58.208.234:443, NAT (172.16.0.2:53503->101.11.11.252:2203)->216.58.208.234:443, len 827')
2020-01-08T07:51:37.518276Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:51:37', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK), 172.16.0.2:57202->91.195.240.126:80, NAT (172.16.0.2:57202->101.11.11.252:2260)->91.195.240.126:80, len 41')
2020-01-08T07:51:37.522015Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:51:37', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK), 172.16.0.2:57202->91.195.240.126:80, NAT (172.16.0.2:57202->101.11.11.252:2260)->91.195.240.126:80, len 41')
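Since each day gets its own table, CGNAT lookups stay fast and targeted. For the typical trace question (which private IP was behind a given public IP:port at a given time), you can search the day's table directly; a sketch reusing the sample data above:

# Find the subscriber NATed to public port 2244 on 8-Jan-2020
mysql -uroot -pXXXXX -D syslog -e "SELECT date, host, message FROM \`2020_01_08\` WHERE message LIKE '%101.11.11.252:2244%' LIMIT 10;"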

(Screenshot: syslog-ng dynamic table data viewed in phpMyAdmin)


TIPS

Deleting all tables inside a particular DB

#!/bin/bash
# Drop all tables in $db whose name matches $filter
force=1
u=root
p=SQLPASS
db=syslog
# Match the date-named tables, e.g. everything from 2020
filter=2020_
for t in $(mysql -u $u -p$p -D $db -Bse 'show tables' | grep "$filter"); do
  echo "Dropping $t"
  [[ $force -eq 1 ]] && mysql -u $u -p$p -D $db -Bse "drop table \`$t\`"
done
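A related sketch for automatic retention, assuming GNU date and the YYYY_MM_DD table naming used above; run it daily from cron to keep, say, the last 30 days:

#!/bin/bash
# Drop date-named syslog tables older than $keep_days days
u=root; p=SQLPASS; db=syslog; keep_days=30
cutoff=$(date -d "$keep_days days ago" +%s)
for t in $(mysql -u $u -p$p -D $db -Bse 'show tables'); do
  # Only touch tables named YYYY_MM_DD
  [[ $t =~ ^[0-9]{4}_[0-9]{2}_[0-9]{2}$ ]] || continue
  ts=$(date -d "${t//_/-}" +%s) || continue
  if (( ts < cutoff )); then
    echo "Dropping $t"
    mysql -u $u -p$p -D $db -Bse "drop table \`$t\`"
  fi
done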