LINUX.ORG.RU

Messages by chemtech

 

Does anyone know of example buggy Maven projects for SonarQube?

Forum — Development

I'm trying to make GitLab CI jobs fail when SonarQube reports poor code quality.

I'm also planning to write an article/post on how to set all of this up.

But I can't find freely available buggy Maven projects.

So far I've found these:

https://github.com/uweplonus/spotbugs-examples

https://github.com/daggerok/findbugs-example

Does anyone know of example buggy Maven projects for SonarQube?
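For the GitLab CI side, failing the job on a red quality gate can be sketched like this (a minimal sketch, assuming SonarQube 8.1+ where the scanner supports `sonar.qualitygate.wait`; the project key and the `SONAR_*` CI variables are placeholders):

```yaml
sonarqube-check:
  stage: test
  image: maven:3-jdk-11        # any Maven-capable image
  script:
    # sonar.qualitygate.wait=true makes the scanner poll SonarQube and
    # exit non-zero when the quality gate fails, which fails the job
    - mvn verify sonar:sonar
        -Dsonar.projectKey=my-buggy-project
        -Dsonar.host.url=$SONAR_HOST_URL
        -Dsonar.login=$SONAR_TOKEN
        -Dsonar.qualitygate.wait=true
```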


chemtech
()

nginx-log-collector clickhouse DB::Exception: Replica name must be a string literal

Forum — Admin

I installed a clickhouse cluster using https://github.com/danwerspb/ansible-clickhouse-dp

I'm trying to create a table from this file: https://github.com/avito-tech/nginx-log-collector/blob/master/etc/examples/clickhouse/table_schema.sql

clickhouse-client 
ClickHouse client version 19.17.6.36 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 19.17.6 revision 54428.

nginx-log-collector-apatsev-2 :) CREATE TABLE nginx.error_log
(
    event_datetime DateTime,
    event_date Date,
    server_name LowCardinality(String),
    http_referer String,
    pid UInt32,
    sid UInt32,
    tid UInt64,
    host LowCardinality(String),
    client String,
    request String,
    message String,
    login String,
    upstream String,
    subrequest String,
    hostname LowCardinality(String)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/logs_replicator/nginx.error2_log', _SET_ME_, event_date, (server_name, request, event_date), 8192)

CREATE TABLE nginx.error_log
(
    `event_datetime` DateTime, 
    `event_date` Date, 
    `server_name` LowCardinality(String), 
    `http_referer` String, 
    `pid` UInt32, 
    `sid` UInt32, 
    `tid` UInt64, 
    `host` LowCardinality(String), 
    `client` String, 
    `request` String, 
    `message` String, 
    `login` String, 
    `upstream` String, 
    `subrequest` String, 
    `hostname` LowCardinality(String)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/logs_replicator/nginx.error2_log', _SET_ME_, event_date, (server_name, request, event_date), 8192)

Received exception from server (version 19.17.6):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Replica name must be a string literal

I tried changing logs_replicator to {shard}:

CREATE TABLE nginx.error_log
(
    event_datetime DateTime,
    event_date Date,
    server_name LowCardinality(String),
    http_referer String,
    pid UInt32,
    sid UInt32,
    tid UInt64,
    host LowCardinality(String),
    client String,
    request String,
    message String,
    login String,
    upstream String,
    subrequest String,
    hostname LowCardinality(String)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/nginx.error2_log', _SET_ME_, event_date, (server_name, request, event_date), 8192)

CREATE TABLE nginx.error_log
(
    `event_datetime` DateTime, 
    `event_date` Date, 
    `server_name` LowCardinality(String), 
    `http_referer` String, 
    `pid` UInt32, 
    `sid` UInt32, 
    `tid` UInt64, 
    `host` LowCardinality(String), 
    `client` String, 
    `request` String, 
    `message` String, 
    `login` String, 
    `upstream` String, 
    `subrequest` String, 
    `hostname` LowCardinality(String)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/nginx.error2_log', _SET_ME_, event_date, (server_name, request, event_date), 8192)

Received exception from server (version 19.17.6):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Replica name must be a string literal

How do I make the replica work?
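The error is about the second argument of ReplicatedMergeTree: `_SET_ME_` is a placeholder in the example schema, and ClickHouse requires the replica name there to be a quoted string literal. A sketch, assuming `{shard}` and `{replica}` macros are defined in each server's `<macros>` config section:

```sql
CREATE TABLE nginx.error_log
(
    -- column list exactly as in the schema above
    event_datetime DateTime,
    event_date Date
    -- ...
)
ENGINE = ReplicatedMergeTree(
    '/clickhouse/tables/{shard}/nginx.error_log', -- ZooKeeper path, unique per shard
    '{replica}',                                  -- replica name: a quoted string literal
    event_date,
    (server_name, request, event_date),
    8192)
```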

 


I can't install a clickhouse cluster with the AlexeySetevoi role

Forum — Admin

I can't install a clickhouse cluster with the AlexeySetevoi role.

1. Install the zookeeper cluster. Inventory:
[zookeeper]
172.26.9.211 zookeeper_myid=1
172.26.9.212 zookeeper_myid=2
172.26.9.213 zookeeper_myid=3

[zookeeper-quorum]
host[1:3]

playbook

- hosts: zookeeper
  roles:
    - andrewrothstein.zookeeper-cluster
2. Install the clickhouse cluster. Inventory:
[clickhouse_cluster]
172.26.9.211
172.26.9.212
172.26.9.213

playbook

- hosts: clickhouse_cluster
  become: yes
  vars:
    clickhouse_shards:
      your_shard_name:
        - { host: "172.26.9.211", port: 9000 }
        - { host: "172.26.9.212", port: 9000 }
        - { host: "172.26.9.213", port: 9000 }
    clickhouse_zookeeper_nodes:
      - { host: "172.26.9.211", port: 2181 }
      - { host: "172.26.9.212", port: 2181 }
      - { host: "172.26.9.213", port: 2181 }
  roles:
    - ansible-clickhouse

The resulting config:

[root@nginx-log-collector-apatsev-2 ~]# cat /etc/clickhouse-server/config.xml |grep -A 10 remote
    <remote_servers incl="clickhouse_remote_servers" />


    <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
         By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
         Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
      -->

    <!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
         Optional. If you don't use replicated tables, you could omit that.

Maybe the playbook should have been written differently?
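Note that the rendered config only references the substitution: `<remote_servers incl="clickhouse_remote_servers" />` stays empty unless the substitutions file (by default /etc/metrika.xml, per the comment in the config) actually defines `clickhouse_remote_servers`. A sketch of what that file would contain for this inventory (the exact layout is an assumption about the standard substitution format, not something the role is confirmed to generate):

```xml
<!-- /etc/metrika.xml: substitutions referenced via incl= attributes -->
<yandex>
  <clickhouse_remote_servers>
    <your_shard_name>
      <shard>
        <replica><host>172.26.9.211</host><port>9000</port></replica>
        <replica><host>172.26.9.212</host><port>9000</port></replica>
        <replica><host>172.26.9.213</host><port>9000</port></replica>
      </shard>
    </your_shard_name>
  </clickhouse_remote_servers>
  <zookeeper-servers>
    <node><host>172.26.9.211</host><port>2181</port></node>
    <node><host>172.26.9.212</host><port>2181</port></node>
    <node><host>172.26.9.213</host><port>2181</port></node>
  </zookeeper-servers>
</yandex>
```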

 


npm2rpm ERROR: the path for dependency already exists

Forum — Development

I'm using npm2rpm to build the verdaccio npm package into an rpm. My sources: https://github.com/patsevanton/verdaccio-rpm

The spec file: https://github.com/patsevanton/verdaccio-rpm/blob/master/verdaccio-rpm.spec

Create the needed directories:

mkdir -p ./{RPMS,SRPMS,BUILD,SOURCES,SPECS}

Download the sources:

spectool --directory SOURCES -g verdaccio-rpm.spec
Getting https://registry.npmjs.org/verdaccio/-/verdaccio-4.4.0.tgz to SOURCES/verdaccio-4.4.0.tgz
SOURCES/verdaccio-4.4.0.tgz already exists, skipping download

Run rpmbuild:

rpmbuild --clean --define "_topdir `pwd`" -bi verdaccio-rpm.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.n8iiLV
+ umask 022
+ cd /root/verdaccio-rpm/BUILD
+ ls
+ pwd
/root/verdaccio-rpm/BUILD
+ find . -name '*node_modules*'
+ find . -name '*ui-theme*'
+ cd /root/verdaccio-rpm/BUILD
+ rm -rf package
+ /usr/bin/gzip -dc /root/verdaccio-rpm/SOURCES/verdaccio-4.4.0.tgz
+ /usr/bin/tar -xf -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd package
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ find . -name '*node_modules*'
+ find . -name '*ui-theme*'
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.hF9JZw
+ umask 022
+ cd /root/verdaccio-rpm/BUILD
+ '[' /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64 '!=' / ']'
+ rm -rf /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64
++ dirname /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64
+ mkdir -p /root/verdaccio-rpm/BUILDROOT
+ mkdir /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64
+ cd package
+ mkdir -p /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr bin /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr build /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr conf /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr index.js /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr package.json /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr systemd /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ cp -pfr tsconfig.json /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio
+ mkdir -p /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/bin
+ chmod 0755 /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/lib/node_modules/verdaccio/bin/verdaccio
+ ln -sf /usr/lib/node_modules/verdaccio/bin/verdaccio /root/verdaccio-rpm/BUILDROOT/nodejs-verdaccio-4.4.0-1.el7.x86_64/usr/bin/verdaccio
+ /usr/lib/rpm/nodejs-symlink-deps /usr/lib/node_modules

ERROR: the path for dependency "@verdaccio/ui-theme" already exists

This could mean that bundled modules are being installed.  Bundled libraries are
forbidden in Fedora. For more information, see:
    <https://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries>
    
It is generally reccomended to remove the entire "node_modules" directory in
%prep when it exists. For more information, see:
    <https://fedoraproject.org/wiki/Packaging:Node.js#Removing_bundled_modules>
    
If you have obtained permission from the Fedora Packaging Committee to bundle
libraries, please use `%nodejs_fixdep -r` in %prep to remove the dependency on
the bundled module. This will prevent an unnecessary dependency on the system
version of the module and eliminate this error.
error: Bad exit status from /var/tmp/rpm-tmp.hF9JZw (%install)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.hF9JZw (%install)

How do I fix this error? %nodejs_fixdep -r doesn't help. Thanks.
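The nodejs-symlink-deps helper trips over a bundled copy of "@verdaccio/ui-theme" inside a node_modules tree in the unpacked tarball. One sketch, following the Fedora guideline the error message links to, is to delete all bundled modules in %prep (where exactly this lands in the spec is an assumption; adjust to the real %prep section):

```shell
# In %prep, after the sources are unpacked:
# remove every bundled node_modules tree so that nodejs-symlink-deps
# does not find "@verdaccio/ui-theme" both bundled and as a dependency.
find . -type d -name node_modules -prune -exec rm -rf '{}' +
```

After that, %nodejs_fixdep -r is only needed if a dependency on the removed module should also be dropped from package.json.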


Improving nginx log shipping to clickhouse via fluentd

Forum — Job

Improve sending nginx logs to clickhouse using fluentd. If nothing needs improving, write a how-to on shipping nginx logs straight to clickhouse using fluentd.

There are several fluentd plugins for shipping nginx logs to clickhouse: fluent-plugin-clickhouse and fluent-plugin-clickhouse-json.

There are several variants of them from different authors, but they are all similar.

The main thing is to ship the date - the date field is exactly where all the problems are. I tried to fix the date field, but nothing worked: Changing time_format in fluentd (td-agent) from time_iso8601 to %Y-%m-%d %H:%M:%S

Details: https://freelance.habr.com/tasks/281824


Improving nginx log shipping to clickhouse via fluent-bit

Forum — Job

Improve sending nginx logs to clickhouse using fluent-bit. If nothing needs improving, write a how-to on shipping nginx logs straight to clickhouse using fluent-bit.

There are several issues on the topic: https://github.com/fluent/fluent-bit/issues/745 https://github.com/fluent/fluent-bit/issues/848 https://github.com/fluent/fluent-bit/pull/1111

Details: https://freelance.habr.com/tasks/281822


Changing time_format in fluentd (td-agent) from time_iso8601 to %Y-%m-%d %H:%M:%S

Forum — Admin

There is the following nginx.conf:

nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '"$remote_addr" "$time_iso8601"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

The access.log entries:

"127.0.0.1" "2019-12-28T10:53:20+00:00"
"127.0.0.1" "2019-12-28T10:53:20+00:00"

There is fluentd (td-agent):

td-agent.conf
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /tmp/nginx-access-log.pos
  tag nginx
  format /"(?<remote_addr>[0-9,\.]*)" "(?<time_iso8601>[^ ]*)"/
  time_format %Y-%m-%dT%H:%M:%S.%NZ
</source>


<filter foo.bar>
  @type record_transformer
  enable_ruby
  <record>
    time_iso8601 ${Time.strptime(record['time_iso8601'], '%Y-%m-%d %H:%M:%S').iso8601}
  </record>
</filter>

<match nginx>
    @type clickhousejson
    host 127.0.0.1
    port 8123
    database fluent
    table fluent
    datetime_name time_iso8601
</match>

In clickhouse I created the database and table:

create database fluent;

CREATE TABLE fluent.fluent (Date Date MATERIALIZED toDate(DateTime), remoteip String, DateTime DateTime) ENGINE = MergeTree(Date, DateTime, 8192);

The logs reach clickhouse, but the fields come out empty:

│          │ 0000-00-00 00:00:00 │
│          │ 0000-00-00 00:00:00 │
│          │ 0000-00-00 00:00:00 │

How do I change time_format in fluentd (td-agent) from time_iso8601 to %Y-%m-%d %H:%M:%S and ship the logs to clickhouse?
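Two things stand out in the config above: the `<filter foo.bar>` block never runs, because the tail source tags events `nginx`, and the strptime pattern inside it does not match the ISO8601 value from the log. A sketch of a filter that rewrites the field into `%Y-%m-%d %H:%M:%S` before the clickhousejson match (assuming the plugin takes the field value as-is):

```
<filter nginx>
  @type record_transformer
  enable_ruby
  <record>
    # parse "2019-12-28T10:53:20+00:00" and re-emit it as "2019-12-28 10:53:20"
    time_iso8601 ${Time.strptime(record['time_iso8601'], '%Y-%m-%dT%H:%M:%S%z').strftime('%Y-%m-%d %H:%M:%S')}
  </record>
</filter>
```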


TOP remote_addr from nginx_request_exporter using promql

Forum — Admin

I started looking into https://github.com/markuslindenberg/nginx_request_exporter

Part of the nginx.conf config:

    log_format prometheus 'time:$request_time host="$host" remote_addr="$remote_addr" ';

    access_log syslog:server=127.0.0.1:9514 prometheus;

The resulting metrics:

nginx_request_time_bucket{host="vhost2",remote_addr="127.0.0.1",le="5"} 2621
nginx_request_time_bucket{host="vhost2",remote_addr="127.0.0.1",le="+Inf"} 2621
nginx_request_time_sum{host="vhost2",remote_addr="127.0.0.1"} 0.7450000000000006
nginx_request_time_count{host="vhost2",remote_addr="127.0.0.1"} 2621
nginx_request_time_bucket{host="vhost2",remote_addr="172.25.247.99",le="5"} 2
nginx_request_time_bucket{host="vhost2",remote_addr="172.25.247.99",le="+Inf"} 2
nginx_request_time_sum{host="vhost2",remote_addr="172.25.247.99"} 0.001
nginx_request_time_count{host="vhost2",remote_addr="172.25.247.99"} 2
nginx_request_time_bucket{host="vhost2",remote_addr="172.26.9.198",le="5"} 1462
nginx_request_time_bucket{host="vhost2",remote_addr="172.26.9.198",le="+Inf"} 1462
nginx_request_time_sum{host="vhost2",remote_addr="172.26.9.198"} 0.45500000000000035
nginx_request_time_count{host="vhost2",remote_addr="172.26.9.198"} 1462

How can I use PromQL to graph the most popular client IP addresses (remote_addr)?
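With these metrics, a "top client IPs" panel is usually built from `topk` over the per-IP request rate; a sketch (the 10 and the 5m window are arbitrary choices):

```promql
topk(10, sum by (remote_addr) (rate(nginx_request_time_count[5m])))
```

Note that remote_addr is a high-cardinality label, so on a busy public server this can blow up the number of time series.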


Are there examples of how to use buckets for the Prometheus histogram?

Forum — Admin

I started https://github.com/markuslindenberg/nginx_request_exporter

It gives me metrics like these:

# TYPE nginx_request_time histogram
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.005"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.01"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.025"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.05"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.1"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.25"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="0.5"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="1"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="2.5"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="5"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="10"} 38251
nginx_request_time_bucket{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001",le="+Inf"} 38251
nginx_request_time_sum{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001"} 8.229000000000879
nginx_request_time_count{host="vhost2",method="GET",status="200",upstream="127.0.0.1:8001"} 38251

In -help I see the hint:

  -histogram.buckets string
    	Buckets for the Prometheus histogram. (default ".005,.01,.025,.05,.1,.25,.5,1,2.5,5,10")

But how do I use them?

Are there examples of how to use buckets for the Prometheus histogram?

Google turns up a single article, but it is not really helping me.

How do you configure histogram.buckets?

Can -histogram.buckets be a single number, for example equal to the scrape_interval in prometheus?
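The buckets are cumulative upper bounds on request time, in seconds; they are not a time interval, so tying them to scrape_interval does not make sense, and a single number would collapse the histogram to one boundary. They are consumed on the PromQL side, e.g. for a 95th-percentile latency (a standard pattern; the 5m window is an arbitrary choice):

```promql
histogram_quantile(0.95, sum by (le) (rate(nginx_request_time_bucket[5m])))
```

In the sample above every bucket already contains all 38251 requests, i.e. everything finished in under 5 ms; finer buckets such as `-histogram.buckets ".001,.002,.005,.01"` would give the quantiles some resolution.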

 


prometheus-nginxlog-exporter does not show the expected metrics

Forum — Admin

Has anyone run into this with prometheus-nginxlog-exporter?

I filed an issue: https://github.com/martin-helmich/prometheus-nginxlog-exporter/issues/90

I'll duplicate it here:

Support request: cannot view metrics parsed from the log

How to reproduce it (as minimally and precisely as possible): Download https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases/download/v1.3.0/prometheus-nginxlog-exporter

Other important information:

Configuration file (remove section, if not applicable):

listen {
  port = 4040
}


namespace "app1" {
  format = "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""
  source {
    files = [
      "/var/log/nginx/access-main.log"
    ]
  }
  labels {
    app = "application-one"
    environment = "production"
    foo = "bar"
  }

  histogram_buckets = [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]
}

Run:

./prometheus-nginxlog-exporter -config-file config.hcl

Metrics output (remove section, if not applicable):

# HELP app1_parse_errors_total Total number of log file lines that could not be parsed
# TYPE app1_parse_errors_total counter
app1_parse_errors_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 12
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.603024e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.603024e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.442999e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 373
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.240512e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.603024e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.422528e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 2.4576e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 3581
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.668288e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 3954
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 3472
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 22464
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 32768
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 789569
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 425984
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 425984
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.1631096e+07
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 1975.627
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 1975.627
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 1975.627
http_request_duration_microseconds_sum{handler="prometheus"} 1975.627
http_request_duration_microseconds_count{handler="prometheus"} 1
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 444
http_request_size_bytes{handler="prometheus",quantile="0.9"} 444
http_request_size_bytes{handler="prometheus",quantile="0.99"} 444
http_request_size_bytes_sum{handler="prometheus"} 444
http_request_size_bytes_count{handler="prometheus"} 1
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 1
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1309
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1309
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1309
http_response_size_bytes_sum{handler="prometheus"} 1309
http_response_size_bytes_count{handler="prometheus"} 1
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.866048e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.57725037049e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.1288576e+08

Exporter output (remove section, if not applicable):

loading configuration file config.hcl
using configuration {Listen:{Port:4040 Address:0.0.0.0} Consul:{Enable:false Address: Datacenter: Scheme: Token: Service:{ID: Name: Tags:[]}} Namespaces:[{Name:app1 SourceFiles:[] Format:$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" Labels:map[app:application-one environment:production foo:bar] RelabelConfigs:[] HistogramBuckets:[0.005 0.01 0.025 0.05 0.1 0.25 0.5 1 2.5 5 10] OrderedLabelNames:[] OrderedLabelValues:[]}] EnableExperimentalFeatures:false EnableExperimentalFeaturesOld:false}
starting listener for namespace app1
running HTTP server on address 0.0.0.0:4040

Example log file (remove section, if not applicable):

10.rr.ii.yy - - [25/Dec/2019:08:06:42 +0300] "GET /offsets/topic/xxx-xxx/partition/0 HTTP/1.1" 200 96 "-" "Java/11.0.2" "-"
10.rr.ii.yy - - [25/Dec/2019:08:06:42 +0300] "GET /api/v2/topics/xxx-xxxx/partitions/0/offsets HTTP/1.1" 200 68 "-" "Java/11.0.2" "-"
10.rr.ii.yy - - [25/Dec/2019:08:06:42 +0300] "GET /api/v2/topics/xxxxxx/partitions/2/messages?offset=96&count=10 HTTP/1.1" 200 41 "-" "Java/11.0.2" "-"

Environment:

  • Exporter version: 1.3.0
  • OS (e.g. from /etc/os-release): cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)
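One detail from the startup log above: `Namespaces:[{Name:app1 SourceFiles:[] ...}]` shows the exporter parsed an empty source file list, so the `source { files = [...] }` block was apparently not picked up. As an experiment (this is an assumption about an older flat config syntax, not a confirmed fix for v1.3.0), the legacy key could be tried instead:

```hcl
namespace "app1" {
  format = "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""

  # legacy flat form instead of the nested source { files = [...] } block
  source_files = [
    "/var/log/nginx/access-main.log"
  ]
}
```

If SourceFiles is still empty in the startup log after the change, the parsing problem lies elsewhere.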


How to get access to maven.fabric.io/public through artifactory/nexus

Forum — Development

There is this build.gradle:


buildscript {
    ext.kotlin_version = '1.3.50'

    repositories {
        maven { url 'http://artifactory/artifactory/repo' }
        maven { url 'http://artifactory/artifactory/remote-repos' }
        maven { url 'http://artifactory/artifactory/libs-release' }
        maven { url 'http://artifactory/artifactory/libs-snapshot' }
        maven { url 'http://artifactory/artifactory/plugins-release' }
        maven { url 'http://artifactory/artifactory/plugins-snapshot' }
        
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:latest.release"
        classpath 'com.android.tools.build:gradle:3.5.1'
        classpath 'io.fabric.tools:gradle:1.29.0'
        classpath "io.realm:realm-gradle-plugin:5.10.0"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'net.researchgate:gradle-release:2.6.0'

    }
}

allprojects {
    repositories {
        maven { url 'http://artifactory/artifactory/repo' }
        maven { url 'http://artifactory/artifactory/remote-repos' }
        maven { url 'http://artifactory/artifactory/libs-release' }
        maven { url 'http://artifactory/artifactory/libs-snapshot' }
        maven { url 'http://artifactory/artifactory/plugins-release' }
        maven { url 'http://artifactory/artifactory/plugins-snapshot' }
    }
}


It fails with an error. Has anyone run into this error, and how can it be fixed?

FAILURE: Build failed with an exception.

* What went wrong:
> Could not resolve all artifacts for configuration ':classpath'.
   > Could not resolve io.fabric.tools:gradle:1.29.0.
     Required by:
         project :
      > Could not resolve io.fabric.tools:gradle:1.29.0.
         > Could not get resource 'http://artifactory/artifactory/repo/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'.
            > Could not GET 'http://artifactory/artifactory/repo/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'. Received status code 401 from server: Unauthorized
      > Could not resolve io.fabric.tools:gradle:1.29.0.
         > Could not get resource 'http://artifactory/artifactory/remote-repos/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'.
            > Could not GET 'http://artifactory/artifactory/remote-repos/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'. Received status code 401 from server: Unauthorized
      > Could not resolve io.fabric.tools:gradle:1.29.0.
         > Could not get resource 'http://artifactory/artifactory/plugins-release/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'.
            > Could not GET 'http://artifactory/artifactory/plugins-release/io/fabric/tools/gradle/1.29.0/gradle-1.29.0.pom'. Received status code 401 from server: Unauthorized

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

When I try to add https://maven.fabric.io/public to artifactory, it also throws an error.
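The 401 Unauthorized responses mean Artifactory is rejecting anonymous access to these repositories; either anonymous access has to be enabled on the Artifactory side, or the build has to authenticate. A sketch of passing credentials (the property names `artifactory_user`/`artifactory_password` are placeholders, expected e.g. in ~/.gradle/gradle.properties):

```groovy
repositories {
    maven {
        url 'http://artifactory/artifactory/repo'
        credentials {
            username = project.findProperty('artifactory_user')
            password = project.findProperty('artifactory_password')
        }
    }
    // ...same credentials block for the other artifactory URLs
}
```

Even with authentication working, a remote repository proxying https://maven.fabric.io/public still has to exist (or be part of the `repo` virtual repository) for io.fabric.tools:gradle to resolve.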


The google() repository in gradle via Artifactory or Nexus for building Android apps

Forum — Development

There is an Android project that downloads its dependencies directly from the internet.

Its build.gradle says:

    repositories {
        google()
        jcenter()
        maven { url 'https://maven.fabric.io/public' }
        maven { url 'https://plugins.gradle.org/m2/' }
    }

How do I make dependencies from the google() repository and the other repositories download through Artifactory or Nexus?
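google() is just shorthand for the Google Maven repository at https://maven.google.com, so one approach is to create remote (proxy) repositories in Artifactory/Nexus for each upstream and point build.gradle at them; the repository names below are hypothetical:

```groovy
repositories {
    // instead of google(): a remote repo proxying https://maven.google.com
    maven { url 'http://artifactory/artifactory/google-proxy' }
    // jcenter, fabric and the gradle plugin portal each get their own proxy
    maven { url 'http://artifactory/artifactory/jcenter-proxy' }
    maven { url 'http://artifactory/artifactory/fabric-proxy' }
    maven { url 'http://artifactory/artifactory/gradle-plugins-proxy' }
}
```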


How to disable the linter for gradle in GitLab CI without editing gradle.properties

Forum — Admin

I tried this gitlab-ci.yaml:

---
variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"

build:
  image: mreichelt/android:23
  stage: build
  script:
    - chmod +x ./gradlew
    - ./gradlew assemble -x lint

But I still see a lot of messages like this one:

 Parameter 'session' is never used

Thanks.
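"Parameter 'session' is never used" is a Kotlin compiler warning, not an Android Lint finding, so `-x lint` does not affect it. One way to silence it without touching the project's files is a Gradle init script passed on the command line (the file name and the whole approach are a sketch, not a verified recipe):

```groovy
// suppress-warnings.gradle (hypothetical name), used as:
//   ./gradlew assemble -x lint -I suppress-warnings.gradle
allprojects {
    // match compileDebugKotlin, compileReleaseKotlin, etc.
    tasks.matching { it.name.contains('compile') && it.name.contains('Kotlin') }
         .configureEach {
        // silence all Kotlin compiler warnings for these tasks
        it.kotlinOptions.suppressWarnings = true
    }
}
```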


Can't run LeoGateway on a node with leo_manager, leo_storage - Node is already running

Forum — Admin

I can't start LeoGateway on the same node (server) as leo_manager and leo_storage - it fails with 'Node is already running'.

I filed an issue: https://github.com/leo-project/leofs/issues/1199

Ansible Inventory:

# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.190

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.189

[leo_storage]
172.26.9.190 leofs_module_nodename=S0@172.26.9.190
172.26.9.189 leofs_module_nodename=S1@172.26.9.189

[leo_gateway]
172.26.9.190 leofs_module_nodename=G0@172.26.9.190
172.26.9.189 leofs_module_nodename=G1@172.26.9.189

[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage

The playbook fails on this task:

TASK [leo_gateway : Run LeoGateway] *************************************************************************************************************
fatal: [172.26.9.190]: FAILED! => {
    "changed": true, 
    "cmd": [
        "bin/leo_gateway", 
        "start"
    ], 
    "delta": "0:00:00.768302", 
    "end": "2019-12-15 11:30:37.554137", 
    "rc": 1, 
    "start": "2019-12-15 11:30:36.785835"
}

STDOUT:

Node is already running!


MSG:

non-zero return code

fatal: [172.26.9.189]: FAILED! => {
    "changed": true, 
    "cmd": [
        "bin/leo_gateway", 
        "start"
    ], 
    "delta": "0:00:00.820479", 
    "end": "2019-12-15 11:30:37.647186", 
    "rc": 1, 
    "start": "2019-12-15 11:30:36.826707"
}

STDOUT:

Node is already running!


MSG:

non-zero return code

Schema: leofs-cluster

How can I run LeoGateway together with leo_manager and leo_storage on the same node (server)?


Looking up a key in golang in a JSON array

Forum — Development

I fetch the list of releases in JSON format via the GitHub API:

curl https://api.github.com/repos/rabbitmq/erlang-rpm/releases

The result looks roughly like this:

The image does not load - link: https://habrastorage.org/webt/zv/yk/jb/zvykjbbxxvbntq-eyh3dpcdopvo.jpeg

I'm trying to extract the rpm links for the latest releases using Golang.

Here is the repo: https://github.com/patsevanton/few_latest_artefact_github_release

I've tried both the standard JSON unmarshal and gjson.

Right now gjson is active:

https://github.com/patsevanton/few_latest_artefact_github_release/blob/master/main.go#L108

At the moment, go run main.go prints an empty string.

As I understand it, what is being parsed is not a plain JSON object but a JSON array - it starts with [ and only then {.

How do I get access to at least assets_url? Thanks.

 , ,

chemtech
()

How do I pass the current directory to a script in systemd?

Forum — Admin

There is a repo: https://github.com/patsevanton/static-server-in-dir

There is a systemd unit: https://github.com/patsevanton/static-server-in-dir/blob/master/static-server-in-dir.service

There is an RPM: http://copr.fedorainfracloud.org/coprs/antonpatsev/static-server-in-dir/

Install it:

yum -y install yum-plugin-copr

yum copr enable antonpatsev/static-server-in-dir

yum -y install static-server-in-dir

systemctl start static-server-in-dir

How do I pass the current directory to a script in systemd?

I need to start a local web server temporarily and then stop it.

I could start the local server in the background and then kill it, but other processes might be affected.

I made a systemd unit that starts a local web server, but I cannot dynamically change the directory that this web server publishes/serves.

I tried changing it via WorkingDirectory; that does not work:

systemctl set-environment WorkingDirectory=/var
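A common systemd pattern for parameterizing the directory is a template unit: the instance name carries the escaped path, and `WorkingDirectory=%f` expands it back. A sketch (the unit name and `ExecStart` path are assumptions based on the package above):

```ini
# /etc/systemd/system/static-server-in-dir@.service (hypothetical template)
[Unit]
Description=Static server for %f

[Service]
# %f = the unescaped instance name, interpreted as an absolute path
WorkingDirectory=%f
ExecStart=/usr/bin/static-server-in-dir

[Install]
WantedBy=multi-user.target
```

It would then be started per directory with `systemctl start static-server-in-dir@"$(systemd-escape --path "$PWD")"`. Note that `systemctl set-environment` only affects environment variables of subsequently started units; it cannot change unit directives such as WorkingDirectory.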

 

chemtech
()

S3 error: 403 (AccessDenied) on the LeoFS clustered file system

Forum — Admin

I installed the LeoFS clustered file system following the instructions and comments in this issue: https://github.com/leo-project/leofs_ansible/issues/4

Status reports that the cluster is running:

leofs-adm status
 [System Confiuration]
-----------------------------------+----------
 Item                              | Value    
-----------------------------------+----------
 Basic/Consistency level
-----------------------------------+----------
                    system version | 1.2.22
                        cluster Id | leofs_1
                             DC Id | dc_1
                    Total replicas | 2
          number of successes of R | 1
          number of successes of W | 1
          number of successes of D | 1
 number of rack-awareness replicas | 0
                         ring size | 2^128
-----------------------------------+----------
 Multi DC replication settings
-----------------------------------+----------
        max number of joinable DCs | 2
           number of replicas a DC | 1
-----------------------------------+----------
 Manager RING hash
-----------------------------------+----------
                 current ring-hash | 5599d172
                previous ring-hash | 5599d172
-----------------------------------+----------

 [State of Node(s)]
-------+----------------------+--------------+----------------+----------------+----------------------------
 type  |         node         |    state     |  current ring  |   prev ring    |          updated at         
-------+----------------------+--------------+----------------+----------------+----------------------------
  S    | S0@172.26.9.179      | running      | 5599d172       | 5599d172       | 2019-12-02 10:40:05 +0000
  S    | S0@172.26.9.181      | running      | 5599d172       | 5599d172       | 2019-12-02 10:40:05 +0000
  G    | G0@172.26.9.180      | running      | 5599d172       | 5599d172       | 2019-12-02 10:40:07 +0000
-------+----------------------+--------------+----------------+----------------+----------------------------


Created a user:

leofs-adm create-user leofs leofs
  access-key-id: 9c2615f32e81e6a1caf5
  secret-access-key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb

User list:

leofs-adm get-users
user_id     | role_id | access_key_id          | created_at                
------------+---------+------------------------+---------------------------
_test_leofs | 9       | 05236                  | 2019-12-02 06:56:49 +0000
leofs       | 1       | 9c2615f32e81e6a1caf5   | 2019-12-02 10:43:29 +0000

Created a bucket:

leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK

Bucket list:

 leofs-adm get-buckets
cluster id   | bucket   | owner  | permissions      | created at                
-------------+----------+--------+------------------+---------------------------
leofs_1      | leofs    | leofs  | Me(full_control) | 2019-12-02 10:44:02 +0000

Configuring s3cmd:

s3cmd --configure 

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]: 
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]: 
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program [/usr/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]: 

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]: 
HTTP Proxy server port [8080]: 

New settings:
  Access Key: 9c2615f32e81e6a1caf5
  Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
  Default Region: US
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: leofs
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 172.26.9.180
  HTTP Proxy server port: 8080

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'

Uploading files:

s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py'  [1 of 1]
 382 of 382   100% in    0s     3.40 kB/s  done
ERROR: S3 error: 403 (AccessDenied): Access Denied

If the bucket permissions are changed, the file can be uploaded:

leofs-adm update-acl leofs 9c2615f32e81e6a1caf5 public-read-write
OK

Upload with the bucket set to public-read-write:

s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py'  [1 of 1]
 382 of 382   100% in    0s     2.92 kB/s  done
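One more thing worth ruling out besides ACLs: older LeoFS releases speak AWS signature v2, while newer s3cmd versions default to v4, which also surfaces as 403 AccessDenied on uploads. Whether that applies to this 1.2.22 setup is an assumption, but forcing v2 is cheap to try:

```ini
# ~/.s3cfg (fragment): force AWS signature version 2
# (equivalent to running s3cmd with --signature-v2)
signature_v2 = True
```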

 , ,

chemtech
()

Correct setCapability values for running Chrome on Android

Forum — Development

I am testing this project: https://github.com/aerokube/demo-tests

Running containers:

 docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
ffd64121947a        aerokube/selenoid:latest      "/usr/bin/selenoid -…"   3 days ago          Up 3 days           0.0.0.0:4444->4444/tcp   selenoid
c7c74a3b6589        aerokube/selenoid-ui:latest   "/selenoid-ui --sele…"   3 days ago          Up 3 days           0.0.0.0:8080->8080/tcp   selenoid-ui

Logs of ffd64121947a:

docker logs ffd64121947a
2019/12/02 12:16:03 [3258] [LOCATING_SERVICE] [chrome] [6.0]
2019/12/02 12:16:03 [3258] [ENVIRONMENT_NOT_AVAILABLE] [chrome] [6.0]

browsers.json:

{
    "android": {
        "default": "6.0",
        "versions": {
            "6.0": {
                "image": "nexus/selenoid/android:6.0",
                "port": "4444",
                "path": "/wd/hub"
            }
        }
    },
    "chrome": {
        "default": "78.0",
        "versions": {
            "78.0": {
                "image": "nexus/selenoid/chrome:78.0",
                "port": "4444",
                "path": "/"
            }
        }
    }
}

I adjusted the proxy:

package com.aerokube.selenoid;

import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.*;
import org.openqa.selenium.remote.Augmenter;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.io.File;
import java.net.URL;

import static com.aerokube.selenoid.Screenshot.takeScreenshot;

public class DemoTest {

    private RemoteWebDriver driver;

    @Before
    public void openDriver() throws Exception {
        Proxy proxy = new Proxy();
        proxy.setSslProxy("http://proxy:3128");
        final DesiredCapabilities browser = DesiredCapabilities.chrome();
        browser.setCapability("deviceName", "android");
        browser.setCapability("version", "6.0");
        browser.setCapability("appPackage", "com.android.chrome");
        browser.setCapability("appActivity", "com.google.android.apps.chrome.Main");
        browser.setCapability("proxy", proxy);
        browser.setCapability("enableVideo", true);
        browser.setCapability("enableLog", true);
        browser.setCapability("enableVNC", true);
        driver = new RemoteWebDriver(new URL(
                "http://localhost:4444/wd/hub" //Replace with correct host and port
        ), browser);
    }

    @Test
    public void browserTest() throws Exception {
        try {
            driver.get("https://duckduckgo.com/");
            WebElement input = driver.findElement(By.cssSelector("input#search_form_input_homepage"));
            input.sendKeys(Keys.chord("selenium", Keys.ENTER));
        } finally {
            takeScreenshot(driver);
        }

    }

    @After
    public void closeDriver(){
        if (driver != null){
            driver.quit();
            driver = null;
        }
    }
}


It prints:

Tests in error: 
  browserTest(com.aerokube.selenoid.DemoTest): Requested environment is not available (WARNING: The server did not provide any stacktrace information)

How can I fix this error?

 

chemtech
()

A global GitLab hook that rejects commits whose email addresses are not on a whitelist

Forum — Admin

Good evening.

I am trying to set up blocking of users by email domain in GitLab.

There is this script: https://github.com/github/platform-samples/blob/master/pre-receive-hooks/reject-external-email.sh

I put it in the directory /opt/gitlab/embedded/service/gitlab-shell/hooks/pre-receive.d/

The beginning of my script looks like this:

DOMAIN=[2.com]
COMPANY_NAME=[MyCompany]
CONTACT_EMAIL=help@company.com
SLACK=#help-git
HELP_URL=https://pages.github.company.com/org/repo

I make a commit:

git log
commit 700ad4a4d1f351dfdb994c8fef47561bf8144a8d (HEAD -> master)
Author: Anton Patsev <1@2.com>
Date:   Tue Nov 26 17:06:24 2019 +0600

    add space

On git push it gets rejected:

remote: WARNING:
remote: WARNING: At least one commit on 'master' does not have an '[2.com]' email address.
remote: WARNING:         commit: 700ad4a4d1f351dfdb994c8fef47561bf8144a8d
remote: WARNING:   author email: 1@2.com
remote: WARNING:
remote: WARNING: See https://pages.github.company.com/org/repo for instructions.
remote: WARNING:
remote: WARNING: Contact help@company.com or #help-git on Slack for assistance!

If I set the option like this:

DOMAIN=2.com

then the commit goes through.

But what should I do if several domains need to be allowed?

If I specify:

DOMAIN=2.com,2.ru

then the rejection looks like this:

remote: WARNING: At least one commit on 'master' does not have an '2.com,2.ru' email address.
remote: WARNING:         commit: 11a653f39c9237816ea2f45fbe390a23ae990edd
remote: WARNING:   author email: 1@2.ru
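The hook compares against `DOMAIN` as a single literal, so a comma-separated value never matches anything. A sketch of handling several domains with an extended-regexp alternation (the variable and function names here are illustrative, not the original hook's):

```shell
# Accept commits whose author email ends in any whitelisted domain.
DOMAINS='2\.com|2\.ru'   # alternation, dots escaped for the regexp

email_ok() {
  # exit 0 if "$1" ends with @<one of DOMAINS>
  printf '%s\n' "$1" | grep -Eq "@(${DOMAINS})$"
}
```

Usage inside the hook would then be along the lines of `email_ok "$author_email" || reject`, replacing the literal string comparison.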

 ,

chemtech
()

How to fix the error undefined: gonx.NewParserReader in the nginx-clickhouse project

Forum — Development

Can anyone suggest something?

I am trying to compile https://github.com/mintance/nginx-clickhouse

I fixed the broken import paths and added a go.mod, but this error appears: nginx/nginx.go:67:12: undefined: gonx.NewParserReader

Yet this function does exist: https://github.com/satyrius/gonx/blob/master/reader.go#L20

The full log of my steps after completely clearing GOPATH:

```
git clone https://github.com/mintance/nginx-clickhouse.git
cd nginx-clickhouse
go mod init github.com/mintance/nginx-clickhouse
find ./ -type f -exec sed -i 's/Sirupsen/sirupsen/g' {} \;
go get
go build
```
nginx/nginx.go:67:12: undefined: gonx.NewParserReader

Thanks.

 ,

chemtech
()
