In an argument with someone over on Lennart Poettering's Google+ page (I got there via a link), I was given the following case for the Journal and binary logs:
If you have a problem and want to solve it with regexps, you have two problems. With every new release any distro (and the kernel) may start producing new logging messages and new hardware/software warnings/errors. Plain-text logs will only help you find out the reason for a failure after the failure has already happened. That's just too late.
Skilled sysadmins don't use sed and vim anymore; they use zabbix and puppet (just as an example), and they want software logs/output to be easily collected, aggregated and understood by the tools they use to automate their work.
Once more: for better (more reliable and more representative) monitoring, logs should be normalized. Every skilled developer knows this. Then you need to think about some compressed normalized-log format: on a typical high-load server you can get about 100G of logs per day, and before rotation that's a waste of space. OK, now think about monitoring. Periodically (once a second/minute/hour) you need to parse the logs, and with plain-text logs that means reading the whole file (imagine 2-3G of logs per hour), which a) takes too long and b) wipes the buffer cache on your server. So you need some custom binary format, or you put the data in mysql/sqlite/etc. It's now starting to look even more like journald.
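For illustration only, here is a minimal sketch of what "query structured fields instead of re-reading the whole text log" looks like with journald. It shells out to journalctl -o json; the unit name and time window are just made-up examples, not anything from the discussion:

    import json
    import subprocess

    # Ask journald for structured records instead of grepping a flat file.
    # -u, -p and --since are standard journalctl filters, so journald does
    # the selection itself and we never scan (or regex) the whole log.
    cmd = [
        "journalctl",
        "-u", "nginx.service",   # example unit, purely illustrative
        "-p", "err",             # only priority err and above
        "--since", "1 hour ago",
        "-o", "json",            # one JSON object per line
        "--no-pager",
    ]

    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)

    for line in proc.stdout.splitlines():
        record = json.loads(line)
        # Every record already carries normalized fields such as
        # _SYSTEMD_UNIT, PRIORITY and MESSAGE; no regex is needed.
        print(record.get("_SYSTEMD_UNIT"),
              record.get("PRIORITY"),
              record.get("MESSAGE"))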
In short: the claim is that binary logs are better for machine processing with zabbix and the like, and that sed/awk/perl is no longer fashionable. Supposedly admins used to be schoolkids with ten machines each, whereas now it's 100+ per person, and grep isn't what it used to be.
Do you agree with this? I can't really argue with it, because, after all, I'm not an admin running 100+ servers.