FreeBSD Bugzilla – Attachment 156293 Details for
Bug 199893
[New Port] sysutils/graylog
graylog.shar (shar archive, text/plain), 25.60 KB, created by Thomas Bartelmess on 2015-05-03 17:03:29 UTC
># This is a shell archive. Save it in a file, remove anything before
># this line, and then unpack it by entering "sh file". Note, it may
># create directories; files and directories will be owned by you and
># have default permissions.
>#
># This archive contains:
>#
># graylog
># graylog/Makefile
># graylog/distinfo
># graylog/pkg-descr
># graylog/pkg-plist
># graylog/files
># graylog/files/graylog.in
># graylog/files/graylog_logging.xml
># graylog/files/graylog.conf.example
>#
>echo c - graylog
>mkdir -p graylog > /dev/null 2>&1
>echo x - graylog/Makefile
>sed 's/^X//' >graylog/Makefile << '0c1ce5e67eab6f0934d9f02e604a4e2c'
>X# $FreeBSD$
>X
>XPORTNAME=	graylog
>XPORTVERSION=	1.0.2
>XCATEGORIES=	sysutils java
>XMASTER_SITES=	https://packages.graylog2.org/releases/graylog2-server/ \
>X		http://packages.graylog2.org/releases/graylog2-server/
>X
>XMAINTAINER=	thomas@bartelmess.io
>XCOMMENT=	Tool for centralized log collection
>X
>XLICENSE=	GPLv3
>X
>XUSES=		tar:tgz
>XUSE_JAVA=	yes
>XJAVA_VERSION=	1.7+
>XJAVA_EXTRACT=	yes
>XJAVA_RUN=	yes
>XNO_BUILD=	yes
>X
>XGRAYLOG_DIR=	${PREFIX}/${PORTNAME}
>X
>XUSE_RC_SUBR=	graylog
>X
>XGRAYLOGUSER?=	graylog
>XGRAYLOGGROUP?=	${GRAYLOGUSER}
>XUSERS=		${GRAYLOGUSER}
>XGROUPS=		${GRAYLOGGROUP}
>X
>XSUB_LIST=	GRAYLOGUSER=${GRAYLOGUSER} \
>X		GRAYLOGGROUP=${GRAYLOGGROUP} \
>X		JAVA_HOME=${JAVA_HOME} \
>X		GRAYLOG_DIR=${GRAYLOG_DIR} \
>X		ETCDIR=${ETCDIR}
>X
>Xdo-install:
>X	${MKDIR} ${STAGEDIR}${GRAYLOG_DIR}
>X	${MKDIR} ${STAGEDIR}${ETCDIR}
>X	${INSTALL_DATA} ${WRKSRC}/graylog.jar ${STAGEDIR}${GRAYLOG_DIR}
>X	${INSTALL_DATA} ${WRKSRC}/graylog.conf.example ${STAGEDIR}${ETCDIR}
>X	${INSTALL_DATA} ${FILESDIR}/graylog_logging.xml ${STAGEDIR}${ETCDIR}
>X
>X.include <bsd.port.mk>
>0c1ce5e67eab6f0934d9f02e604a4e2c
>echo x - graylog/distinfo
>sed 's/^X//' >graylog/distinfo << '90fd99191d6592b0216730283cdcd2d3'
>XSHA256 (graylog-1.0.2.tgz) = 66b415a3d5512b2546cb925f43e90a04fa2840d127dd401252d7c5e077e43eae
>XSIZE (graylog-1.0.2.tgz) = 74565448
>90fd99191d6592b0216730283cdcd2d3
>echo x - graylog/pkg-descr
>sed 's/^X//' >graylog/pkg-descr << '2856cbddf1a228a5219892d4fd5f6cb1'
>XGraylog is a centralized log server that accepts various structured
>Xand unstructured log data. Logs are stored in Elasticsearch. Graylog
>Xlets you search and analyze logs using a REST HTTP API.
>X
>XWWW: http://www.graylog.org/
>2856cbddf1a228a5219892d4fd5f6cb1
>echo x - graylog/pkg-plist
>sed 's/^X//' >graylog/pkg-plist << 'b426cee03c9bcc143c2b8f2e7d864715'
>X%%ETCDIR%%/graylog.conf.example
>X%%ETCDIR%%/graylog_logging.xml
>Xgraylog/graylog.jar
>b426cee03c9bcc143c2b8f2e7d864715
>echo c - graylog/files
>mkdir -p graylog/files > /dev/null 2>&1
>echo x - graylog/files/graylog.in
>sed 's/^X//' >graylog/files/graylog.in << '575c31a763bd6d573e83a644d7d432f1'
>X#!/bin/sh
>X#
>X# PROVIDE: graylog
>X# REQUIRE: NETWORKING SERVERS
>X# BEFORE: DAEMON
>X# KEYWORD: shutdown
>X#
>X
>X# graylog_enable (bool):
>X#	Default value: "NO"
>X#	Flag that determines whether graylog is enabled
>X#
>X# graylog_user (username):
>X#	Default value: "graylog"
>X#	Name of the graylog user account
>X#
>X# graylog_group (group):
>X#	Default value: "graylog"
>X#	Name of the graylog group
>X#
>X# graylog_config (string):
>X#	Default value: %%ETCDIR%%/graylog.conf
>X#	Path to the graylog configuration file
>X#
>X# graylog_min_mem (string):
>X#	Default value: 256m
>X#	Minimum JVM heap size
>X#
>X# graylog_max_mem (string):
>X#	Default value: 1g
>X#	Maximum JVM heap size
>X#
>X# graylog_dir (string):
>X#	Default value: %%GRAYLOG_DIR%%
>X#	Path to the graylog installation
>X#
>X# graylog_run_dir (string):
>X#	Default value: /var/graylog
>X#	Path to the graylog run folder
>X#
>X# graylog_java_home (string):
>X#	Default value: %%JAVA_HOME%%
>X#	Root directory of the desired Java SDK
>X#
>X# graylog_log4j_config (string):
>X#	Default value: file://%%ETCDIR%%/graylog_logging.xml
>X#	Path to the log4j configuration file for graylog
>X
>X. /etc/rc.subr
>X
>Xname=graylog
>Xrcvar=graylog_enable
>Xload_rc_config $name
>X
>X: ${graylog_enable:="NO"}
>X: ${graylog_user:="%%GRAYLOGUSER%%"}
>X: ${graylog_group:="%%GRAYLOGGROUP%%"}
>X: ${graylog_config:="%%ETCDIR%%/${name}.conf"}
>X: ${graylog_min_mem:="256m"}
>X: ${graylog_max_mem:="1g"}
>X: ${graylog_dir:="%%GRAYLOG_DIR%%"}
>X: ${graylog_run_dir:="/var/graylog"}
>X: ${graylog_java_home:="%%JAVA_HOME%%"}
>X: ${graylog_log4j_config:="file://%%ETCDIR%%/graylog_logging.xml"}
>X
>Xpidfile="/var/run/${name}.pid"
>X
>Xjava_options=" -Xms${graylog_min_mem} \
>X	-Xmx${graylog_max_mem} \
>X	-XX:NewRatio=1 \
>X	-XX:+ResizeTLAB \
>X	-XX:+UseConcMarkSweepGC \
>X	-XX:+CMSConcurrentMTEnabled \
>X	-XX:+CMSClassUnloadingEnabled \
>X	-XX:MaxPermSize=512m \
>X	-XX:+UseParNewGC \
>X	-XX:-OmitStackTraceInFastThrow \
>X	-Djava.library.path=${graylog_dir}/lib/sigar \
>X	-Dlog4j.configuration=${graylog_log4j_config}"
>X
>Xstart_precmd="graylog_precmd"
>X
>Xcommand="/usr/sbin/daemon"
>Xcommand_args="-f -u ${graylog_user} -P ${pidfile} /usr/local/bin/java ${java_options} -jar ${graylog_dir}/graylog.jar server --configfile ${graylog_config}"
>X
>Xgraylog_precmd() {
>X	/usr/bin/install -d -o ${graylog_user} -g ${graylog_group} -m 755 /var/log/graylog
>X	/usr/bin/install -d -o ${graylog_user} -g ${graylog_group} -m 755 /var/log/graylog/server
>X	/usr/bin/install -d -o ${graylog_user} -g ${graylog_group} -m 755 ${graylog_run_dir}
>X	touch ${pidfile}
>X	chown ${graylog_user}:${graylog_group} ${pidfile}
>X	cd ${graylog_run_dir}
>X}
>X
>Xrun_rc_command "$1"
>575c31a763bd6d573e83a644d7d432f1
>echo x - graylog/files/graylog_logging.xml
>sed 's/^X//' >graylog/files/graylog_logging.xml << '96c78ed840bdebab8040eddb82c2f65e'
>X<?xml version="1.0" encoding="UTF-8"?>
>X<!DOCTYPE log4j:configuration PUBLIC "-//APACHE//DTD LOG4J 1.2//EN" "log4j.dtd">
>X<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
>X
>X    <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
>X        <rollingPolicy class="org.apache.log4j.rolling.FixedWindowRollingPolicy">
>X            <param name="activeFileName" value="/var/log/graylog/server.log" /> <!-- ADAPT -->
>X            <param name="fileNamePattern" value="/var/log/graylog/server.%i.log" /> <!-- ADAPT -->
>X            <param name="minIndex" value="1" /> <!-- ADAPT -->
>X            <param name="maxIndex" value="10" /> <!-- ADAPT -->
>X        </rollingPolicy>
>X        <triggeringPolicy class="org.apache.log4j.rolling.SizeBasedTriggeringPolicy">
>X            <param name="maxFileSize" value="5767168" /> <!-- ADAPT: For example 5.5MB in bytes -->
>X        </triggeringPolicy>
>X        <layout class="org.apache.log4j.PatternLayout">
>X            <param name="ConversionPattern" value="%d %-5p: %c - %m%n"/>
>X        </layout>
>X    </appender>
>X
>X    <!-- Application Loggers -->
>X    <logger name="org.graylog2">
>X        <level value="info"/>
>X    </logger>
>X    <!-- this emits a harmless warning for ActiveDirectory every time which we can't work around :( -->
>X    <logger name="org.apache.directory.api.ldap.model.message.BindRequestImpl">
>X        <level value="error"/>
>X    </logger>
>X    <!-- Root Logger -->
>X    <root>
>X        <priority value="info"/>
>X        <appender-ref ref="FILE"/>
>X    </root>
>X
>X</log4j:configuration>
>96c78ed840bdebab8040eddb82c2f65e
>echo x - graylog/files/graylog.conf.example
>sed 's/^X//' >graylog/files/graylog.conf.example << '71d4c9c36db76049e7700e92ac0ff627'
>X# If you are running more than one instance of graylog2-server you have to select one of these
>X# instances as master. The master will perform some periodical tasks that non-masters won't perform.
>Xis_master = true
>X
>X# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
>X# to use an absolute file path here if you are starting graylog2-server from init scripts or similar.
>Xnode_id_file = /var/graylog/server/node-id
>X
>X# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
>X# Generate one by using for example: pwgen -N 1 -s 96
>Xpassword_secret =
>X
>X# The default root user is named 'admin'
>X#root_username = admin
>X
>X# You MUST specify a hash password for the root user (which you only need to initially set up the
>X# system and in case you lose connectivity to your authentication backend)
>X# This password cannot be changed using the API or via the web interface. If you need to change it,
>X# modify it in this file.
>X# Create one by using for example: echo -n yourpassword | shasum -a 256
>X# and put the resulting hash value into the following line
>Xroot_password_sha2 =
>X
>X# The email address of the root user.
>X# Default is empty
>X#root_email = ""
>X
>X# The time zone setting of the root user.
>X# Default is UTC
>X#root_timezone = UTC
>X
>X# Set plugin directory here (relative or absolute)
>Xplugin_dir = plugin
>X
>X# REST API listen URI. Must be reachable by other graylog2-server nodes if you run a cluster.
>Xrest_listen_uri = http://127.0.0.1:12900/
>X
>X# REST API transport address. Defaults to the value of rest_listen_uri. Exception: If rest_listen_uri
>X# is set to a wildcard IP address (0.0.0.0) the first non-loopback IPv4 system address is used.
>X# If set, this will be promoted in the cluster discovery APIs, so other nodes may try to connect on
>X# this address and it is used to generate URLs addressing entities in the REST API. (see rest_listen_uri)
>X# You will need to define this if your Graylog server is running behind an HTTP proxy that is rewriting
>X# the scheme, host name or URI.
>X#rest_transport_uri = http://192.168.1.1:12900/
>X
>X# Enable CORS headers for REST API. This is necessary for JS-clients accessing the server directly.
>X# If these are disabled, modern browsers will not be able to retrieve resources from the server.
>X# This is disabled by default. Uncomment the next line to enable it.
>X#rest_enable_cors = true
>X
>X# Enable GZIP support for REST API. This compresses API responses and therefore helps to reduce
>X# overall round trip times. This is disabled by default. Uncomment the next line to enable it.
>X#rest_enable_gzip = true
>X
>X# Enable HTTPS support for the REST API. This secures the communication with the REST API with
>X# TLS to prevent request forgery and eavesdropping. This is disabled by default. Uncomment the
>X# next line to enable it.
>X#rest_enable_tls = true
>X
>X# The X.509 certificate file to use for securing the REST API.
>X#rest_tls_cert_file = /path/to/graylog2.crt
>X
>X# The private key to use for securing the REST API.
>X#rest_tls_key_file = /path/to/graylog2.key
>X
>X# The password to unlock the private key used for securing the REST API.
>X#rest_tls_key_password = secret
>X
>X# The maximum size of a single HTTP chunk in bytes.
>X#rest_max_chunk_size = 8192
>X
>X# The maximum size of the HTTP request headers in bytes.
>X#rest_max_header_size = 8192
>X
>X# The maximum length of the initial HTTP/1.1 line in bytes.
>X#rest_max_initial_line_length = 4096
>X
>X# The size of the execution handler thread pool used exclusively for serving the REST API.
>X#rest_thread_pool_size = 16
>X
>X# The size of the worker thread pool used exclusively for serving the REST API.
>X#rest_worker_threads_max_pool_size = 16
>X
>X# Embedded Elasticsearch configuration file
>X# pay attention to the working directory of the server, maybe use an absolute path here
>X#elasticsearch_config_file = /usr/local/etc/graylog/server/elasticsearch.yml
>X
>X# Graylog will use multiple indices to store documents in. You can configure the strategy it uses to determine
>X# when to rotate the currently active write index.
>X# It supports multiple rotation strategies:
>X# - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
>X# - "size" per index, use elasticsearch_max_size_per_index below to configure
>X# valid values are "count", "size" and "time", default is "count"
>Xrotation_strategy = count
>X
>X# (Approximate) maximum number of documents in an Elasticsearch index before a new index
>X# is being created, also see no_retention and elasticsearch_max_number_of_indices.
>X# Configure this if you used 'rotation_strategy = count' above.
>Xelasticsearch_max_docs_per_index = 20000000
>X
>X# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
>X# no_retention and elasticsearch_max_number_of_indices. Default is 1GB.
>X# Configure this if you used 'rotation_strategy = size' above.
>X#elasticsearch_max_size_per_index = 1073741824
>X
>X# (Approximate) maximum time before a new Elasticsearch index is being created, also see
>X# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
>X# Configure this if you used 'rotation_strategy = time' above.
>X# Please note that this rotation period does not look at the time specified in the received messages, but is
>X# using the real clock value to decide when to rotate the index!
>X# Specify the time using a duration and a suffix indicating which unit you want:
>X#  1w  = 1 week
>X#  1d  = 1 day
>X#  12h = 12 hours
>X# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
>X#elasticsearch_max_time_per_index = 1d
>X
>X# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
>X# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
>X#elasticsearch_disable_version_check = true
>X
>X# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
>X#no_retention = false
>X
>X# How many indices do you want to keep?
>Xelasticsearch_max_number_of_indices = 20
>X
>X# Decide what happens with the oldest indices when the maximum number of indices is reached.
>X# The following strategies are available:
>X#   - delete # Deletes the index completely (Default)
>X#   - close  # Closes the index and hides it from the system. Can be re-opened later.
>Xretention_strategy = delete
>X
>X# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
>Xelasticsearch_shards = 4
>Xelasticsearch_replicas = 0
>X
>X# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
>Xelasticsearch_index_prefix = graylog2
>X
>X# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
>X# be enabled with care. See also: https://www.graylog.org/documentation/general/queries/
>Xallow_leading_wildcard_searches = false
>X
>X# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
>X# should only be enabled after making sure your Elasticsearch cluster has enough memory.
>Xallow_highlighting = false
>X
>X# settings to be passed to elasticsearch's client (overriding those in the provided elasticsearch_config_file)
>X# all these
>X# this must be the same as for your Elasticsearch cluster
>X#elasticsearch_cluster_name = graylog2
>X
>X# you could also leave this out, but it makes it easier to identify the graylog2 client instance
>X#elasticsearch_node_name = graylog2-server
>X
>X# we don't want the graylog2 server to store any data, or be master node
>X#elasticsearch_node_master = false
>X#elasticsearch_node_data = false
>X
>X# use a different port if you run multiple Elasticsearch nodes on one machine
>X#elasticsearch_transport_tcp_port = 9350
>X
>X# we don't need to run the embedded HTTP server here
>X#elasticsearch_http_enabled = false
>X
>X#elasticsearch_discovery_zen_ping_multicast_enabled = false
>X#elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.1.203:9300
>X
>X# Change the following setting if you are running into problems with timeouts during Elasticsearch cluster discovery.
>X# The setting is specified in milliseconds, the default is 5000ms (5 seconds).
>X#elasticsearch_cluster_discovery_timeout = 5000
>X
>X# the following settings allow to change the bind addresses for the Elasticsearch client in graylog2
>X# these settings are empty by default, letting Elasticsearch choose automatically,
>X# override them here or in the 'elasticsearch_config_file' if you need to bind to a special address
>X# refer to http://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/modules-network.html
>X# for special values here
>X#elasticsearch_network_host =
>X#elasticsearch_network_bind_host =
>X#elasticsearch_network_publish_host =
>X
>X# The total amount of time discovery will look for other Elasticsearch nodes in the cluster
>X# before giving up and declaring the current node master.
>X#elasticsearch_discovery_initial_state_timeout = 3s
>X
>X# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
>X# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
>X# Elasticsearch documentation: http://www.elasticsearch.org/guide/reference/index-modules/analysis/
>X# Note that this setting only takes effect on newly created indices.
>Xelasticsearch_analyzer = standard
>X
>X# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
>X# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
>X# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
>X# that every outputbuffer processor manages its own batch and performs its own batch write calls.
>X# ("outputbuffer_processors" variable)
>Xoutput_batch_size = 500
>X
>X# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
>X# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
>X# for this time period is less than output_batch_size * outputbuffer_processors.
>Xoutput_flush_interval = 1
>X
>X# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
>X# over again. To prevent this, the following configuration options define after how many faults an output will
>X# not be tried again for an also configurable amount of seconds.
>Xoutput_fault_count_threshold = 5
>Xoutput_fault_penalty_seconds = 30
>X
>X# The number of parallel running processors.
>X# Raise this number if your buffers are filling up.
>Xprocessbuffer_processors = 5
>Xoutputbuffer_processors = 3
>X
>X#outputbuffer_processor_keep_alive_time = 5000
>X#outputbuffer_processor_threads_core_pool_size = 3
>X#outputbuffer_processor_threads_max_pool_size = 30
>X
>X# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
>X#udp_recvbuffer_sizes = 1048576
>X
>X# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
>X# Possible types:
>X#  - yielding
>X#     Compromise between performance and CPU usage.
>X#  - sleeping
>X#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
>X#  - blocking
>X#     High throughput, low latency, higher CPU usage.
>X#  - busy_spinning
>X#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
>Xprocessor_wait_strategy = blocking
>X
>X# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
>X# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
>X# Start server with --statistics flag to see buffer utilization.
>X# Must be a power of 2. (512, 1024, 2048, ...)
>Xring_size = 65536
>X
>Xinputbuffer_ring_size = 65536
>Xinputbuffer_processors = 2
>Xinputbuffer_wait_strategy = blocking
>X
>X# Enable the disk based message journal.
>Xmessage_journal_enabled = true
>X
>X# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
>X# must not contain any other files than the ones created by Graylog itself.
>Xmessage_journal_dir = data/journal
>X
>X# The journal holds messages before they could be written to Elasticsearch.
>X# For a maximum of 12 hours or 5 GB, whichever happens first.
>X# During normal operation the journal will be smaller.
>X#message_journal_max_age = 12h
>X#message_journal_max_size = 5gb
>X
>X#message_journal_flush_age = 1m
>X#message_journal_flush_interval = 1000000
>X#message_journal_segment_age = 1h
>X#message_journal_segment_size = 100mb
>X
>X# Number of threads used exclusively for dispatching internal events. Default is 2.
>X#async_eventbus_processors = 2
>X
>X# EXPERIMENTAL: Dead Letters
>X# Every failed indexing attempt is logged by default and made visible in the web-interface. You can enable
>X# the experimental dead letters feature to write every message that was not successfully indexed into the
>X# MongoDB "dead_letters" collection to make sure that you never lose a message. The actual writing of dead
>X# letters should work fine already but it is not heavily tested yet and will get more features in future
>X# releases.
>Xdead_letters_enabled = false
>X
>X# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
>X# shutdown process. Set to 0 if you have no status checking load balancers in front.
>Xlb_recognition_period_seconds = 3
>X
>X# Every message is matched against the configured streams and it can happen that a stream contains rules which
>X# take an unusual amount of time to run, for example if it is using regular expressions that perform excessive backtracking.
>X# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
>X# streams, Graylog limits the execution time for each stream.
>X# The default values are noted below, the timeout is in milliseconds.
>X# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times,
>X# that stream is disabled and a notification is shown in the web interface.
>X#stream_processing_timeout = 2000
>X#stream_processing_max_faults = 3
>X
>X# Length of the interval in seconds in which the alert conditions for all streams should be checked
>X# and alarms are being sent.
>X#alert_check_interval = 60
>X
>X# Since 0.21 the graylog2 server supports pluggable output modules. This means a single message can be written to multiple
>X# outputs. The next setting defines the timeout for a single output module, including the default output module where all
>X# messages end up.
>X#
>X# Time in milliseconds to wait for all message outputs to finish writing a single message.
>X#output_module_timeout = 10000
>X
>X# Time in milliseconds after which a detected stale master node is being rechecked on startup.
>X#stale_master_timeout = 2000
>X
>X# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
>X#shutdown_timeout = 30000
>X
>X# MongoDB Configuration
>Xmongodb_useauth = false
>X#mongodb_user = grayloguser
>X#mongodb_password = 123
>Xmongodb_host = 127.0.0.1
>X#mongodb_replica_set = localhost:27017,localhost:27018,localhost:27019
>Xmongodb_database = graylog2
>Xmongodb_port = 27017
>X
>X# Raise this according to the maximum connections your MongoDB server can handle if you encounter MongoDB connection problems.
>Xmongodb_max_connections = 100
>X
>X# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
>X# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5, then 500 threads can block. More than that and an exception will be thrown.
>X# http://api.mongodb.org/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
>Xmongodb_threads_allowed_to_block_multiplier = 5
>X
>X# Drools Rule File (Used to rewrite incoming log messages)
>X# See: https://www.graylog.org/documentation/general/rewriting/
>X#rules_file = /usr/local/etc/graylog/server/rules.drl
>X
>X# Email transport
>X#transport_email_enabled = false
>X#transport_email_hostname = mail.example.com
>X#transport_email_port = 587
>X#transport_email_use_auth = true
>X#transport_email_use_tls = true
>X#transport_email_use_ssl = true
>X#transport_email_auth_username = you@example.com
>X#transport_email_auth_password = secret
>X#transport_email_subject_prefix = [graylog2]
>X#transport_email_from_email = graylog2@example.com
>X
>X# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
>X# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
>X#transport_email_web_interface_url = https://graylog2.example.com
>X
>X# HTTP proxy for outgoing HTTP calls
>X#http_proxy_uri =
>X
>X# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
>X# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
>X# cycled indices.
>X#disable_index_optimization = true
>X
>X# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
>X# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
>X#index_optimization_max_num_segments = 1
>X
>X# Disable the index range calculation on all open/available indices and only calculate the range for the latest
>X# index. This may speed up index cycling on systems with large indices but it might lead to wrong search results
>X# in regard to the time range of the messages (i. e. messages within a certain range may not be found). The default
>X# is to calculate the time range on all open/available indices.
>X#disable_index_range_calculation = true
>X
>X# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
>X# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
>X#gc_warning_threshold = 1s
>X
>X# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
>X#ldap_connection_timeout = 2000
>X
>X# https://github.com/bazhenov/groovy-shell-server
>X#groovy_shell_enable = false
>X#groovy_shell_port = 6789
>X
>X# Enable collection of Graylog-related metrics into MongoDB
>X#enable_metrics_collection = false
>X
>X# Disable the use of SIGAR for collecting system stats
>X#disable_sigar = false
>X
>71d4c9c36db76049e7700e92ac0ff627
>exit
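For reviewers who want to try the port, graylog.conf.example requires two secrets (`password_secret` and `root_password_sha2`) before the server will start. A minimal sketch of unpacking the archive and generating them follows; the `openssl`/`sha256sum` invocations are assumptions chosen as widely available substitutes for the `pwgen` and `shasum` commands the config comments suggest, and `yourpassword` is a placeholder:

```shell
# Unpack the archive into the current directory (creates ./graylog),
# then build and stage from within a ports tree:
#   sh graylog.shar
#   cd graylog && make stage && make check-plist

# password_secret: the config asks for at least 64 random characters;
# 48 random bytes hex-encoded gives 96.
password_secret=$(openssl rand -hex 48)

# root_password_sha2: SHA-256 hash of the chosen admin password
# (placeholder value; sha256sum substitutes for shasum -a 256).
root_password_sha2=$(printf '%s' 'yourpassword' | sha256sum | awk '{print $1}')

echo "password_secret = ${password_secret}"
echo "root_password_sha2 = ${root_password_sha2}"
```

The two values then go into `%%ETCDIR%%/graylog.conf` (copied from the installed `graylog.conf.example`) before enabling the service with `graylog_enable="YES"` in rc.conf.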