Log monitoring and collection tool
Responsible Changed From-To: freebsd-ports-bugs->madpilot I'll take it.
Thanks a lot for your submission. Unfortunately the shar archive got truncated; it is missing the pkg-descr. Could you please send it again? Thanks. -- Guido Falsi <madpilot@FreeBSD.org>
Please find attached the full shar. Thanks Ari -- --------------------------> Aristedes Maniatis ish http://www.ish.com.au Level 1, 30 Wilson Street Newtown 2042 Australia phone +61 2 9550 5001 fax +61 2 9550 4001 GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
Thank you again. Looking at your submission I noticed a few problems:

- It includes bsd.port.pre.mk and bsd.port.post.mk separately when there is no need to do that. Separating these includes should be done only if you need to process options or other ports system variables defined by those includes. For parsing options, using bsd.port.options.mk is preferred anyway.
- Is there any reason the operations done in the pre-install target cannot be done in do-install?
- Commented-out tests and unnecessary Makefile parts should be removed before the final submission.
- The port is missing a plist, either as make variables or as a pkg-plist file. This is a very important part of making a port: the ports system needs this information to track installed files, allow package creation, and cleanly deinstall the port/package.

All of this is explained in detail in the Porter's Handbook: http://www.freebsd.org/doc/en_US.ISO8859-1/books/porters-handbook/ If you find any missing information or unclear parts, please report that so we can fix it.

I also see that your port installs example configuration files, so you should pay special attention to chapter 7.3 "Configuration Files" of the Porter's Handbook and set up the Makefile and plist accordingly (see the sketch below).

One last detail: at the start of the port Makefile, before the $FreeBSD$ line, there should be the following lines (I'm pasting from the example in the Porter's Handbook):

# New ports collection makefile for: oneko
# Date created: 5 December 1994
# Whom: asami
#

which your submission is missing. Please fix these issues, thanks! -- Guido Falsi <madpilot@FreeBSD.org>
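As an illustration of what chapter 7.3 asks for, here is a minimal sketch of the usual .sample-file handling; the logstash.conf name is only an example and the exact target names are up to you, not taken from your submission. In the Makefile, install the .sample file and copy it into place only when no real config exists yet:

    do-install:
            ${INSTALL_DATA} ${FILESDIR}/logstash.conf.sample ${ETCDIR}
            @if [ ! -f ${ETCDIR}/logstash.conf ]; then \
                    ${CP} -p ${ETCDIR}/logstash.conf.sample ${ETCDIR}/logstash.conf ; \
            fi

and in pkg-plist, list only the .sample file, recreating the live config on install and removing it on deinstall only if the user never modified it:

    @unexec if cmp -s %D/%%ETCDIR%%/logstash.conf.sample %D/%%ETCDIR%%/logstash.conf; then rm -f %D/%%ETCDIR%%/logstash.conf; fi
    %%ETCDIR%%/logstash.conf.sample
    @exec if [ ! -f %D/%%ETCDIR%%/logstash.conf ] ; then cp -p %D/%F %B/logstash.conf; fi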
State Changed From-To: open->feedback Asked submitter for a fix.
Thanks for your feedback. This is our first port, so we'll try to fix these things.
Hello, I was looking into porting logstash to FreeBSD myself and I was glad someone else had already done it. Is there any update on the plist? -- ~Anthony Garcia
Hi! I have fixed the submission and updated it to version 1.1.1. I'm attaching the shar archive for testing. Feedback is appreciated from all users. If you (the submitter) are still interested in maintaining the port I'll wait for your approval before committing it. If you need any clarifications about my changes to the port feel free to ask. Thanks! -- Guido Falsi <madpilot@FreeBSD.org>
Hi, have you seen my last followup to your submission? I have a fixed port which I could commit with your approval, since you are the maintainer. Could you please test it and give feedback? Thank you. -- Guido Falsi <madpilot@FreeBSD.org>
Responsible Changed From-To: madpilot->freebsd-ports-bugs Back to pool. Submitter vanished. If anyone is interested in maintaining this port ping me.
State Changed From-To: feedback->open Return to open status.
State Changed From-To: open->closed Submitter timeout.
Hey there, Really interested in bringing this one to life. Let me know if I can do something long-term for it. Regards, -- Regis A. Despres
Hi, a couple of weeks ago I went back to using logstash and finished the port. I'll try to send it today.

On 27/03/13 12:44 AM, Regis A. Despres wrote:
> Hey there,
>
> Really interested in bringing this one to life.
> Let me know if I can do something long-term for it.
>
> Regards,
>
Hey Daniel, You might have forgotten to push it live =) Anything I can do to help make this happen? -- Regis A. Despres
State Changed From-To: closed->open There still seems to be some interest in the port; maybe a change of maintainer?
Hello hello, Tested on a fresh 9.1 install. Works like a charm. A one-character difference in logstash.conf.sample was throwing an error. Regards, -- Regis A. Despres
Hello hello, In case of no response from Daniel, I'll maintain this one, as it is a really useful sysadmin tool. Attached shar updated accordingly. Regards, -- Regis A. Despres
Hi, Thanks for helping revive this PR. Daniel, are you still interested in maintaining this port? I'll wait a few more days to give the original submitter time to reply and state whether he is still interested in being the maintainer. If there is no followup from him I'll commit with you as maintainer. Is this OK with you? -- Guido Falsi <madpilot@FreeBSD.org>
Hello Guido, Daniel. Sure, I'm OK with that. Regards, -- Regis A. Despres

On 9 Jul 2013, at 22:37, Guido Falsi <madpilot@FreeBSD.org> wrote:
> Hi,
>
> Thanks for helping revive this PR.
>
> Daniel, are you still interested in maintaining this port?
>
> I'll wait a few more days to give the original submitter time to reply and state whether he is still interested in being the maintainer.
>
> If there is no followup from him I'll commit with you as maintainer. Is this OK with you?
>
> --
> Guido Falsi <madpilot@FreeBSD.org>
Unfortunately Daniel is no longer working in this job and has moved on to a role in the Linux universe. I'd hate to see his work on this port go to waste though. Regis, as Daniel's ex-employer, you have my blessing to update this port and resolve any remaining issues with it. I've cced Daniel's own email address in case he has an interest in continuing to assist with the port in the future. Regis, you might want to bump the logstash version up to 1.1.13. I think that will just be a change to the version and checksums. Cheers Ari Maniatis
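For what it's worth, bumping the version should indeed be a small change; a sketch, assuming the distfile naming stays the same and only the jar itself changes:

    # sysutils/logstash/Makefile
    PORTVERSION=    1.1.13

    # then refresh the checksums recorded in distinfo
    cd sysutils/logstash && make makesum

After that, a quick build and install test should confirm nothing else needs touching.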
Hello hello, Everyone seems to agree on 1.1.13 =) I've added some greetings, killed two portlint warnings, and updated the version. Regards, -- Regis A. Despres
Hi again. I fixed a few style issues in the rc script. Can you please test it and approve this last change? Thanks a lot! -- Guido Falsi <madpilot@FreeBSD.org>
Hello hello, Everything works perfectly once the space between "--" and "web" is added back on line 70 in the standalone args definition =) SHAR updated accordingly and attached. Regards, -- Regis A. Despres
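For reference, the line in question as it ends up in the committed files/logstash.in below; the space before "web" matters because "--" is what separates the agent invocation from the arguments handed to the embedded web interface:

    # standalone mode: agent config, then "--" and the web UI arguments
    logstash_args="agent -f ${logstash_config} -- web --port ${logstash_port} --backend elasticsearch:///?local ${logstash_log_options}"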
Author: madpilot Date: Wed Jul 17 23:20:21 2013 New Revision: 323192 URL: http://svnweb.freebsd.org/changeset/ports/323192 Log: Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs. WWW: http://logstash.net/ PR: ports/168266 Submitted by: Daniel Solsona <daniel@ish.com.au>, Regis A. Despres <regis.despres@gmail.com> Added: head/sysutils/logstash/ head/sysutils/logstash/Makefile (contents, props changed) head/sysutils/logstash/distinfo (contents, props changed) head/sysutils/logstash/files/ head/sysutils/logstash/files/elasticsearch.yml.sample (contents, props changed) head/sysutils/logstash/files/logstash.conf.sample (contents, props changed) head/sysutils/logstash/files/logstash.in (contents, props changed) head/sysutils/logstash/pkg-descr (contents, props changed) head/sysutils/logstash/pkg-plist (contents, props changed) Modified: head/sysutils/Makefile Modified: head/sysutils/Makefile ============================================================================== --- head/sysutils/Makefile Wed Jul 17 22:12:15 2013 (r323191) +++ head/sysutils/Makefile Wed Jul 17 23:20:21 2013 (r323192) @@ -498,6 +498,7 @@ SUBDIR += logmon SUBDIR += logrotate SUBDIR += logstalgia + SUBDIR += logstash SUBDIR += logtool SUBDIR += logwatch SUBDIR += lookat Added: head/sysutils/logstash/Makefile ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/Makefile Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,50 @@ +# Created by: Daniel Solsona <daniel@ish.com.au>, Guido Falsi <madpilot@FreeBSD.org> +# $FreeBSD$ + +PORTNAME= logstash +PORTVERSION= 1.1.13 +CATEGORIES= sysutils java +MASTER_SITES= https://logstash.objects.dreamhost.com/release/ \ + http://semicomplete.com/files/logstash/ +DISTNAME= ${PORTNAME}-${PORTVERSION}-flatjar +EXTRACT_SUFX= .jar +EXTRACT_ONLY= + +MAINTAINER= regis.despres@gmail.com +COMMENT= Tool for managing events and logs + +USE_JAVA= yes +JAVA_VERSION= 1.5+ + +NO_BUILD= yes + +USE_RC_SUBR= logstash + +LOGSTASH_HOME?= ${PREFIX}/${PORTNAME} +LOGSTASH_HOME_REL?= ${LOGSTASH_HOME:S,^${PREFIX}/,,} +LOGSTASH_JAR?= ${DISTNAME}${EXTRACT_SUFX} +LOGSTASH_RUN?= /var/run/${PORTNAME} +LOGSTASH_DATA_DIR?= /var/db/${PORTNAME} + +SUB_LIST= LOGSTASH_DATA_DIR=${LOGSTASH_DATA_DIR} JAVA_HOME=${JAVA_HOME} \ + LOGSTASH_HOME=${LOGSTASH_HOME} LOGSTASH_JAR=${LOGSTASH_JAR} +PLIST_SUB+= LOGSTASH_HOME=${LOGSTASH_HOME_REL} LOGSTASH_JAR=${LOGSTASH_JAR} \ + LOGSTASH_RUN=${LOGSTASH_RUN} \ + LOGSTASH_DATA_DIR=${LOGSTASH_DATA_DIR} + +do-install: + ${MKDIR} ${LOGSTASH_RUN} + ${MKDIR} ${ETCDIR} + ${MKDIR} ${LOGSTASH_HOME} + ${MKDIR} ${LOGSTASH_DATA_DIR} + ${INSTALL_DATA} ${DISTDIR}/${DIST_SUBDIR}/${LOGSTASH_JAR} ${LOGSTASH_HOME} + ${INSTALL_DATA} ${FILESDIR}/logstash.conf.sample ${ETCDIR} + @if [ ! -f ${ETCDIR}/logstash.conf ]; then \ + ${CP} -p ${ETCDIR}/logstash.conf.sample ${ETCDIR}/logstash.conf ; \ + fi + ${INSTALL_DATA} ${FILESDIR}/elasticsearch.yml.sample ${ETCDIR} + @if [ ! 
-f ${ETCDIR}/elasticsearch.yml ]; then \ + ${CP} -p ${ETCDIR}/elasticsearch.yml.sample ${ETCDIR}/elasticsearch.yml ; \ + fi + +.include <bsd.port.mk> Added: head/sysutils/logstash/distinfo ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/distinfo Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,2 @@ +SHA256 (logstash-1.1.13-flatjar.jar) = 5ba0639ff4da064c2a4f6a04bd7006b1997a6573859d3691e210b6855e1e47f1 +SIZE (logstash-1.1.13-flatjar.jar) = 69485313 Added: head/sysutils/logstash/files/elasticsearch.yml.sample ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/files/elasticsearch.yml.sample Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,337 @@ +##################### ElasticSearch Configuration Example ##################### + +# This file contains an overview of various configuration settings, +# targeted at operations staff. Application developers should +# consult the guide at <http://elasticsearch.org/guide>. +# +# The installation procedure is covered at +# <http://elasticsearch.org/guide/reference/setup/installation.html>. +# +# ElasticSearch comes with reasonable defaults for most settings, +# so you can try it out without bothering with configuration. +# +# Most of the time, these defaults are just fine for running a production +# cluster. If you're fine-tuning your cluster, or wondering about the +# effect of certain configuration option, please _do ask_ on the +# mailing list or IRC channel [http://elasticsearch.org/community]. + +# Any element in the configuration can be replaced with environment variables +# by placing them in ${...} notation. For example: +# +# node.rack: ${RACK_ENV_VAR} + +# See <http://elasticsearch.org/guide/reference/setup/configuration.html> +# for information on supported formats and syntax for the configuration file. + + +################################### Cluster ################################### + +# Cluster name identifies your cluster for auto-discovery. If you're running +# multiple clusters on the same network, make sure you're using unique names. +# +# cluster.name: elasticsearch + + +#################################### Node ##################################### + +# Node names are generated dynamically on startup, so you're relieved +# from configuring them manually. You can tie this node to a specific name: +# +# node.name: "Franz Kafka" + +# Every node can be configured to allow or deny being eligible as the master, +# and to allow or deny to store the data. +# +# Allow this node to be eligible as a master node (enabled by default): +# +# node.master: true +# +# Allow this node to store data (enabled by default): +# +# node.data: true + +# You can exploit these settings to design advanced cluster topologies. +# +# 1. You want this node to never become a master node, only to hold data. +# This will be the "workhorse" of your cluster. +# +# node.master: false +# node.data: true +# +# 2. You want this node to only serve as a master: to not store any data and +# to have free resources. This will be the "coordinator" of your cluster. +# +# node.master: true +# node.data: false +# +# 3. You want this node to be neither master nor data node, but +# to act as a "search load balancer" (fetching data from nodes, +# aggregating results, etc.) 
+# +# node.master: false +# node.data: false + +# Use the Cluster Health API [http://localhost:9200/_cluster/health], the +# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools +# such as <http://github.com/lukas-vlcek/bigdesk> and +# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state. + +# A node can have generic attributes associated with it, which can later be used +# for customized shard allocation filtering, or allocation awareness. An attribute +# is a simple key value pair, similar to node.key: value, here is an example: +# +# node.rack: rack314 + + +#################################### Index #################################### + +# You can set a number of options (such as shard/replica options, mapping +# or analyzer definitions, translog settings, ...) for indices globally, +# in this file. +# +# Note, that it makes more sense to configure index settings specifically for +# a certain index, either when creating it or by using the index templates API. +# +# See <http://elasticsearch.org/guide/reference/index-modules/> and +# <http://elasticsearch.org/guide/reference/api/admin-indices-create-index.html> +# for more information. + +# Set the number of shards (splits) of an index (5 by default): +# +# index.number_of_shards: 5 + +# Set the number of replicas (additional copies) of an index (1 by default): +# +# index.number_of_replicas: 1 + +# Note, that for development on a local machine, with small indices, it usually +# makes sense to "disable" the distributed features: +# +# index.number_of_shards: 1 +# index.number_of_replicas: 0 + +# These settings directly affect the performance of index and search operations +# in your cluster. Assuming you have enough machines to hold shards and +# replicas, the rule of thumb is: +# +# 1. Having more *shards* enhances the _indexing_ performance and allows to +# _distribute_ a big index across machines. +# 2. Having more *replicas* enhances the _search_ performance and improves the +# cluster _availability_. +# +# The "number_of_shards" is a one-time setting for an index. +# +# The "number_of_replicas" can be increased or decreased anytime, +# by using the Index Update Settings API. +# +# ElasticSearch takes care about load balancing, relocating, gathering the +# results from nodes, etc. Experiment with different settings to fine-tune +# your setup. + +# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect +# the index status. + + +#################################### Paths #################################### + +# Path to directory containing configuration (this file and logging.yml): +# +# path.conf: /path/to/conf + +# Path to directory where to store index data allocated for this node. +# +# path.data: /path/to/data +# +# Can optionally include more than one location, causing data to be striped across +# the locations on a file level, favouring locations with most free +# space on creation. For example: +# +# path.data: /path/to/data1,/path/to/data2 + +# Path to temporary files: +# +# path.work: /path/to/work + +# Path to log files: +# +# path.logs: /path/to/logs + +# Path to where plugins are installed: +# +# path.plugins: /path/to/plugins + + +################################### Memory #################################### + +# ElasticSearch performs poorly when JVM starts swapping: you should ensure that +# it _never_ swaps. 
+# +# Set this property to true to lock the memory: +# +# bootstrap.mlockall: true + +# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set +# to the same value, and that the machine has enough memory to allocate +# for ElasticSearch, leaving enough memory for the operating system itself. +# +# You should also make sure that the ElasticSearch process is allowed to lock +# the memory, eg. by using `ulimit -l unlimited`. + + +############################## Network And HTTP ############################### + +# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens +# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node +# communication. (the range means that if the port is busy, it will automatically +# try the next port). + +# Set the bind address specifically (IPv4 or IPv6): +# +# network.bind_host: 192.168.0.1 + +# Set the address other nodes will use to communicate with this node. If not +# set, it is automatically derived. It must point to an actual IP address. +# +# network.publish_host: 192.168.0.1 + +# Set both 'bind_host' and 'publish_host': +# +# network.host: 192.168.0.1 + +# Set a custom port for the node to node communication (9300 by default): +# +# transport.port: 9300 + +# Enable compression for all communication between nodes (disabled by default): +# +# transport.tcp.compress: true + +# Set a custom port to listen for HTTP traffic: +# +# http.port: 9200 + +# Set a custom allowed content length: +# +# http.max_content_length: 100mb + +# Disable HTTP completely: +# +# http.enabled: false + + +################################### Gateway ################################### + +# The gateway allows for persisting the cluster state between full cluster +# restarts. Every change to the state (such as adding an index) will be stored +# in the gateway, and when the cluster starts up for the first time, +# it will read its state from the gateway. + +# There are several types of gateway implementations. For more information, +# see <http://elasticsearch.org/guide/reference/modules/gateway>. + +# The default gateway type is the "local" gateway (recommended): +# +# gateway.type: local + +# Settings below control how and when to start the initial recovery process on +# a full cluster restart (to reuse as much local data as possible). + +# Allow recovery process after N nodes in a cluster are up: +# +# gateway.recover_after_nodes: 1 + +# Set the timeout to initiate the recovery process, once the N nodes +# from previous setting are up (accepts time value): +# +# gateway.recover_after_time: 5m + +# Set how many nodes are expected in this cluster. Once these N nodes +# are up, begin recovery process immediately: +# +# gateway.expected_nodes: 2 + + +############################# Recovery Throttling ############################# + +# These settings allow to control the process of shards allocation between +# nodes during initial recovery, replica allocation, rebalancing, +# or when adding and removing nodes. + +# Set the number of concurrent recoveries happening on a node: +# +# 1. During the initial recovery +# +# cluster.routing.allocation.node_initial_primaries_recoveries: 4 +# +# 2. During adding/removing nodes, rebalancing, etc +# +# cluster.routing.allocation.node_concurrent_recoveries: 2 + +# Set to throttle throughput when recovering (eg. 
100mb, by default unlimited): +# +# indices.recovery.max_size_per_sec: 0 + +# Set to limit the number of open concurrent streams when +# recovering a shard from a peer: +# +# indices.recovery.concurrent_streams: 5 + + +################################## Discovery ################################## + +# Discovery infrastructure ensures nodes can be found within a cluster +# and master node is elected. Multicast discovery is the default. + +# Set to ensure a node sees N other master eligible nodes to be considered +# operational within the cluster. Set this option to a higher value (2-4) +# for large clusters: +# +# discovery.zen.minimum_master_nodes: 1 + +# Set the time to wait for ping responses from other nodes when discovering. +# Set this option to a higher value on a slow or congested network +# to minimize discovery failures: +# +# discovery.zen.ping.timeout: 3s + +# See <http://elasticsearch.org/guide/reference/modules/discovery/zen.html> +# for more information. + +# Unicast discovery allows to explicitly control which nodes will be used +# to discover the cluster. It can be used when multicast is not present, +# or to restrict the cluster communication-wise. +# +# 1. Disable multicast discovery (enabled by default): +# +# discovery.zen.ping.multicast.enabled: false +# +# 2. Configure an initial list of master nodes in the cluster +# to perform discovery when new nodes (master or data) are started: +# +# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"] + +# EC2 discovery allows to use AWS EC2 API in order to perform discovery. +# +# You have to install the cloud-aws plugin for enabling the EC2 discovery. +# +# See <http://elasticsearch.org/guide/reference/modules/discovery/ec2.html> +# for more information. +# +# See <http://elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html> +# for a step-by-step tutorial. + + +################################## Slow Log ################################## + +# Shard level query and fetch threshold logging. + +#index.search.slowlog.level: TRACE +#index.search.slowlog.threshold.query.warn: 10s +#index.search.slowlog.threshold.query.info: 5s +#index.search.slowlog.threshold.query.debug: 2s +#index.search.slowlog.threshold.query.trace: 500ms + +#index.search.slowlog.threshold.fetch.warn: 1s +#index.search.slowlog.threshold.fetch.info: 800ms +#index.search.slowlog.threshold.fetch.debug: 500ms +#index.search.slowlog.threshold.fetch.trace: 200ms Added: head/sysutils/logstash/files/logstash.conf.sample ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/files/logstash.conf.sample Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,38 @@ +input { + file { + type => "system logs" + + # # Wildcards work, here :) + # path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ] + path => [ "/var/log/messages" ] + } + + #file { + # type => "Hudson-access" + # path => "/var/log/www/hudson.ish.com.au-access_log" + #} + + #file { + # type => "Syslog" + # path => "/var/log/messages" + #} +} + +output { + # Emit events to stdout for easy debugging of what is going through + # logstash. + #stdout { } + + # This will use elasticsearch to store your logs. + # The 'embedded' option will cause logstash to run the elasticsearch + # server in the same process, so you don't have to worry about + # how to download, configure, or run elasticsearch! 
+ elasticsearch { + embedded => true + #embedded_http_port => 9200 + #cluster => elasticsearch + #host => host + #port => port + + } +} Added: head/sysutils/logstash/files/logstash.in ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/files/logstash.in Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,81 @@ +#!/bin/sh + +# $FreeBSD$ +# +# PROVIDE: logstash +# REQUIRE: LOGIN +# KEYWORD: shutdown +# +# +# Configuration settings for logstash in /etc/rc.conf: +# +# logstash_enable (bool): +# Set to "NO" by default. +# Set it to "YES" to enable logstash +# +# logstash_mode : +# Set to "standalone" by default. +# Valid options: +# "standalone": agent, web & elasticsearch +# "web": Starts logstash as a web ui +# "agent": Justs works as a log shipper +# +# logstash_logging (bool): +# Set to "NO" by default. +# Set it to "YES" to enable logstash logging to file +# Default output to /var/log/logstash.log +# + +. /etc/rc.subr + +name=logstash +rcvar=logstash_enable + +load_rc_config ${name} + +: ${logstash_enable="NO"} +: ${logstash_home="%%LOGSTASH_HOME%%"} +: ${logstash_config="%%PREFIX%%/etc/${name}/${name}.conf"} +: ${logstash_jar="%%LOGSTASH_HOME%%/%%LOGSTASH_JAR%%"} +: ${logstash_java_home="%%JAVA_HOME%%"} +: ${logstash_log="NO"} +: ${logstash_mode="standalone"} +: ${logstash_port="9292"} +: ${logstash_elastic_backend=""} +: ${logstash_log_file="${logdir}/${name}.log"} +: ${logstash_elastic_datadir="%%LOGSTASH_DATA_DIR%%"} + +piddir=/var/run/${name} +pidfile=${piddir}/${name}.pid + +if [ -d $piddir ]; then + mkdir -p $piddir +fi + +logdir="/var/log" +command="/usr/sbin/daemon" + +java_cmd="${logstash_java_home}/bin/java" +procname="${java_cmd}" + +logstash_chdir=${logstash_home} +logstash_log_options="" +logstash_elastic_options="" + +if checkyesno logstash_log; then + logstash_log_options=" --log ${logstash_log_file}" +fi + +if [ ${logstash_mode} = "standalone" ]; then + logstash_args="agent -f ${logstash_config} -- web --port ${logstash_port} --backend elasticsearch:///?local ${logstash_log_options}" + logstash_elastic_options="-Des.path.data=${logstash_elastic_datadir}" +elif [ ${logstash_mode} = "agent" ]; then + logstash_args="agent -f ${logstash_config} ${logstash_log_options}" +elif [ ${logstash_mode} = "web" ]; then + logstash_args="web --port ${logstash_port} --backend elasticsearch://${logstash_elastic_backend}/ ${logstash_log_options}" +fi + +command_args="-f -p ${pidfile} ${java_cmd} ${logstash_elastic_options} -jar ${logstash_jar} ${logstash_args}" +required_files="${java_cmd} ${logstash_config}" + +run_rc_command "$1" Added: head/sysutils/logstash/pkg-descr ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/sysutils/logstash/pkg-descr Wed Jul 17 23:20:21 2013 (r323192) @@ -0,0 +1,6 @@ +Logstash is a tool for managing events and logs. You can use it to +collect logs, parse them, and store them for later use (like, for +searching). Speaking of searching, logstash comes with a web interface +for searching and drilling into all of your logs. 
+
+WWW: http://logstash.net/

Added: head/sysutils/logstash/pkg-plist
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/pkg-plist	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,13 @@
+%%LOGSTASH_HOME%%/%%LOGSTASH_JAR%%
+@exec mkdir -p %%LOGSTASH_RUN%%
+@exec mkdir -p %%LOGSTASH_DATA_DIR%%
+@unexec if cmp -s %D/%%ETCDIR%%/logstash.conf.sample %D/%%ETCDIR%%/logstash.conf; then rm -f %D/%%ETCDIR%%/logstash.conf; fi
+%%ETCDIR%%/logstash.conf.sample
+@exec if [ ! -f %D/%%ETCDIR%%/logstash.conf ] ; then cp -p %D/%F %B/logstash.conf; fi
+@unexec if cmp -s %D/%%ETCDIR%%/elasticsearch.yml.sample %D/%%ETCDIR%%/elasticsearch.yml; then rm -f %D/%%ETCDIR%%/elasticsearch.yml; fi
+%%ETCDIR%%/elasticsearch.yml.sample
+@exec if [ ! -f %D/%%ETCDIR%%/elasticsearch.yml ] ; then cp -p %D/%F %B/elasticsearch.yml; fi
+@dirrmtry %%LOGSTASH_DATA_DIR%%
+@dirrmtry %%LOGSTASH_HOME%%
+@dirrmtry %%ETCDIR%%
+@dirrmtry %%LOGSTASH_RUN%%
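For anyone picking the new port up: based on the knobs documented in files/logstash.in above, a minimal /etc/rc.conf setup would look roughly like this (logstash_mode accepts "standalone", "agent" or "web"):

    logstash_enable="YES"
    logstash_mode="standalone"
    logstash_log="YES"    # logstash_log is the variable the script actually checks; output goes to /var/log/logstash.log

Then `service logstash start` (or invoking the rc.d script directly) starts the daemon.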
State Changed From-To: open->closed New port added. Thanks!