
Demo installation overview

The Mentat project is still a work in progress. Although it is operated in a production environment, there is still a long way to go and there are quite a few rough edges. Also be aware that we are currently rewriting and redesigning the whole system and migrating from Perl to Python, so for now you have to install and familiarize yourself with both environments and many of their tools and libraries. This is only temporary and should improve over time. Update often; we try to introduce new features as fast as we can.

This document serves as a quick overview of the layout of the Mentat demonstration installation, so that newcomers can quickly start looking in the right directions.

News from 9 Aug 2017

Important facts

# Current firewall status
ufw status
 
# If you are sure, you may simply turn it off
ufw disable
# Make sure you are up to date
apt-get update
apt-get upgrade
 
# Build default batch of indices
mentat-dbmngr.py --command status --shell
mentat-dbmngr.py --command init --shell
mentat-dbmngr.py --command status --shell

Directory layout

For convenience, all custom packages related to the Mentat system installation are located inside this directory.

Configuration files for Mentat modules.

Working directory for all Mentat modules. PID files, run logs, log files and other runtime data are located inside this directory.

The Apache web server is configured to serve Mentat's web interface on port HTTPS/443. For convenience, there is also a redirect configured from HTTP/80.

Some Mentat modules are launched periodically via cron.

MongoDB database files.
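The exact locations depend on your installation; as a purely illustrative aid, the following commands list two of the locations that are referenced later in this document:

# Queue directories of the real-time processing modules
ls -l /var/mentat/spool

# Root directory of the Hawat web interface
ls -l /var/mentat/www/hawat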

System overview

The whole Mentat system is installed; however, only some of the modules are enabled by default. The demo installation is configured into the following topology/setup:

Mentat demo installation topology

As you can see, there are two real-time message processing chains. The primary entry point is the queue directory of the mentat-inspector.py module, followed by the mentat-enricher module and finally the mentat-storage module.

As an example of how this is done, another inspector instance, mentat-inspector-b.py, is prepared. You may configure the first inspector to dispatch/duplicate certain messages to the second inspector instance for different processing. The configuration file for an inspector is heavily commented and currently serves as a substitute for documentation.

Of the post-processing modules, mentat-statistician is enabled to provide statistical services, and mentat-cleanup.py is enabled to provide the data management and retention mechanism.

Communication between the real-time processing modules is really simple. Messages are passed between modules using filesystem-based queue directories. Every real-time processing module has its own queue directory. Consider, for example, the system entry point mentat-inspector.py:

root@mentat-demo:~# ll /var/mentat/spool/mentat-inspector.py/
total 25M
drwxrwxr-x  7 mentat mentat 4,0K Jan 13 13:19 .
drwxrwxr-x 15 mentat mentat 4,0K Mar 22 13:02 ..
drwxrwxr-x  2 mentat mentat  12M Mar 22 13:17 errors
drwxrwxr-x  2 mentat mentat  13M Mar 23 15:23 incoming
drwxrwxr-x  2 mentat mentat 4,0K Mar 23 15:23 pending
drwxrwxr-x  2 mentat mentat 1,3M Jan 13 13:27 tmp

As you can see, each queue directory contains the following subdirectories: incoming, pending, errors and tmp. New messages are picked up from the incoming subdirectory.

Conclusion: inserting a new message into Mentat's processing chain is simply a matter of atomically moving the message file into the appropriate directory; the module will pick it up and process it.
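As a minimal sketch of such an insertion (the file name, the source file and the use of the tmp subdirectory as a staging area are only illustrative), place the file somewhere on the same filesystem first and then move it into the incoming subdirectory, since a move within one filesystem is an atomic rename:

# Illustrative example: hand a message file over to mentat-inspector.py
QUEUE="/var/mentat/spool/mentat-inspector.py"
MSG="msg.$(date +%s).$$.idea"

# Stage the file on the same filesystem first (tmp is used here only as
# a convenient staging area), then atomically move it into incoming
cp /path/to/message.idea "$QUEUE/tmp/$MSG"
mv "$QUEUE/tmp/$MSG" "$QUEUE/incoming/$MSG"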

Quickstart

# The 'status' command is default, so this is a short form 
mentat-controller.py
 
# And the same, but more verbose
mentat-controller.py --command status
 
# Or even
mentat-controller.py --command=status
# The most interesting Mentat modules to begin with:
mentat-controller.py --help
mentat-inspector.py --help
mentat-enricher --help
mentat-storage --help
mentat-cleanup.py --help
mentat-ideagen.py --help
mentat-statistician --help
...
# Start all real-time processing modules enabled in the configuration
mentat-controller.py --command start
# Generate a batch of 1000 test IDEA messages
mentat-ideagen.py --count 1000
# Check the number of messages stored in the database
hawat-cli db view count

Receiving events from Warden

If you want to receive alerts from the Warden system and process them with Mentat, take the following steps:

...
"receiver": {
        "dir": "/var/mentat/spool/mentat-inspector.py",
        ...
}
...

After starting the receiving warden_filer daemon, you should see IDEA messages appearing in the /var/mentat/spool/mentat-inspector.py/incoming directory.
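A simple way to verify this (just a plain directory listing, nothing Mentat-specific) is to watch the incoming queue; new files should appear there and then disappear again as the inspector picks them up:

# Watch the incoming queue of the first inspector instance
watch -n 5 'ls /var/mentat/spool/mentat-inspector.py/incoming | wc -l'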

Real-world performance statistics

The following information may serve as a rough reference regarding the consumption of system resources and overall system performance. These statistics were gathered on our production installation of the Mentat system, which uses the following system configuration:

CPU: 2 x Intel(R) Xeon(R) E5-2650, 2.00GHz
RAM: 192GB
HDD: 1.1 TB, 15k, SAS
OS:  Debian GNU/Linux 8.7 (Jessie)
DB:  MongoDB 3.4.3, WiredTiger engine

We are observing the following performance:

Currently, we are able to achieve a maximal throughput of 10 million alerts per day (empirical testing). However, this value is limited by the performance of the currently used mentat-storage daemon, which is implemented using the legacy Perl-based framework. We are working on a replacement, which should provide a notable performance increase.

Source code

Troubleshooting

Images cannot be seen in the web interface

If you cannot see any images in the Statistics section of the web interface, please execute the following commands manually to make sure that the resources are correctly linked into the root directory of the web application:

mkdir -p /var/mentat/www/hawat/root/data
ln -s /var/mentat/reports/briefer /var/mentat/www/hawat/root/data/briefer
ln -s /var/mentat/charts /var/mentat/www/hawat/root/data/charts
ln -s /var/mentat/reports/dashboard /var/mentat/www/hawat/root/data/dashboard
ln -s /var/mentat/reports/reporter /var/mentat/www/hawat/root/data/reporter
ln -s /var/mentat/rrds /var/mentat/www/hawat/root/data/rrds
ln -s /var/mentat/reports/statistician /var/mentat/www/hawat/root/data/statistician

Web interface does not start

Please execute the following command as root:

root@mentat-demo:~# grep "mail_test" /var/log/apache2/error.log
Couldn't instantiate component "Hawat::Model::Reports", "Attribute (mail_test) is required at /usr/local/lib/x86_64-linux-gnu/perl/5.20.2/Moose/Object.pm line 24

If the output is similar to the previous snippet, the problem lies in a missing configuration value. This might have happened after an upgrade to a certain version of the Mentat system. In this case the solution is to provide any value for it in the main Hawat configuration file /var/mentat/www/hawat/hawat.json:

...
 
"Model::Reports": {
    ...
    # Add additional configuration value called 'mail_test'
    "mail_test" : "email@domain.org",
    ...
},
"Model::Briefs": {
    ...
    # Add additional configuration value called 'mail_test'
    "mail_test" : "email@domain.org",
    ...
},
 
...

If you do not want to go through the whole configuration file, another option is to apply the following patch:

--- /var/mentat/www/hawat/hawat.json.orig       2017-03-31 16:52:09.869829279 +0200
+++ /var/mentat/www/hawat/hawat.json    2017-03-30 14:33:58.959615672 +0200
@@ -142,6 +142,7 @@
         "mail_from" : "CESNET-CERTS Reporting <reporting@cesnet.cz>",
         "mail_devel" : "reporter-test@mentat-demo.cesnet.cz",
         "mail_admin" : "mentat-admin@mentat-demo.cesnet.cz",
+        "mail_test" : "reporter-test@mentat-demo.cesnet.cz",
         "mail_bcc" : "mentat-admin@mentat-demo.cesnet.cz",
         "reply_to" : "abuse@cesnet.cz",
         "test_mode": 0,
@@ -157,6 +158,7 @@
         "mail_from" : "CESNET-CERTS Reporting <reporting@cesnet.cz>",
         "mail_devel" : "reporter-test@mentat-demo.cesnet.cz",
         "mail_admin" : "mentat-admin@mentat-demo.cesnet.cz",
+        "mail_test" : "reporter-test@mentat-demo.cesnet.cz",
         "mail_bcc" : "mentat-admin@mentat-demo.cesnet.cz",
         "reply_to" : "abuse@cesnet.cz",
         "test_mode": 0,

Save it to a local file and then apply the patch using the following command:

patch /var/mentat/www/hawat/hawat.json /path/to/your/patch/file.patch
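After changing the configuration you will most likely also need to restart the Apache web server so that Hawat picks up the new value (assuming the standard Debian service name):

# Restart the web server to reload the Hawat configuration
service apache2 restart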

We are working on fixing this issue at the package level so that it does not happen again.