Created attachment 150439 [details]
patch rc.d file

In the past, it was standard to set the JVM's minimum heap size below its maximum heap size. The current standard approach is to keep them the same, which avoids the cost of resizing the JVM heap at runtime.

I ran into this issue today when the JVM started spouting repeated errors (see below). Conversations in the elasticsearch IRC channel produced the above recommendation. I implemented it in my /etc/rc.conf file and the problem did not recur after a restart.

[2014-12-10 19:38:26,155][WARN ][index.translog ] [James Dr. Power] [logstash-2014.12.10][3] failed to flush shard on translog threshold
org.elasticsearch.index.engine.FlushFailedEngineException: [logstash-2014.12.10][3] Flush failed
    at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:901)
    at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:627)
    at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:201)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:698)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:712)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3063)
    at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:891)
    ... 5 more
Caused by: java.lang.OutOfMemoryError: Java heap space
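For reference, a minimal sketch of what the rc.conf change looks like. The variable names here are assumptions (the exact knobs depend on the version of the port's rc.d script; check the installed script under /usr/local/etc/rc.d/); the point is simply that min and max heap are pinned to the same value:

```shell
# /etc/rc.conf (sketch -- variable names assume the elasticsearch port's
# rc.d script exposes min/max heap knobs; verify against your rc.d script)
elasticsearch_enable="YES"
# Pin min and max heap to the same value so the JVM never resizes the heap:
elasticsearch_min_mem="4g"
elasticsearch_max_mem="4g"
```

The value itself (4g here) is just an example and should be sized to the host's RAM.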
Auto-assigned to maintainer tj@FreeBSD.org
I think the maintainer timeout has been hit many times. Could this change be committed?
The best practice is to set Xms to roughly 75% of Xmx, not 100%.
(In reply to loic.blot from comment #3)

You're wrong. According to the official Elasticsearch documentation, Xmx _must_ be equal to Xms. https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html

> Ensure that the min (Xms) and max (Xmx) sizes are the same to prevent the heap from resizing at runtime, a very costly process.
A commit references this bug:

Author: tj
Date: Tue Dec 15 17:06:12 UTC 2015
New revision: 403794
URL: https://svnweb.freebsd.org/changeset/ports/403794

Log:
  Update to 2.1.

  Changes: https://www.elastic.co/guide/en/elasticsearch/reference/2.1/release-notes-2.1.0.html

  - Fix path to allow service to start at boot
  - Misc cleanup from ohauer

  PR: 195861, 204821, 204902, 204910

Changes:
  head/textproc/elasticsearch2/Makefile
  head/textproc/elasticsearch2/distinfo
  head/textproc/elasticsearch2/files/patch-bin-elasticsearch.in.sh
  head/textproc/elasticsearch2/pkg-descr
  head/textproc/elasticsearch2/pkg-plist
Hello, Tom. Can you please explain why this PR was rejected? Do you really think that the official documentation (with its explanation) is not enough to justify this change?
1) textproc/elasticsearch is a legacy port for the 1.x line of elasticsearch; textproc/elasticsearch2 does not use the same rc script and uses the defaults provided by upstream elasticsearch.

2) The documentation you provided states that the defaults are inadequate for almost every deployment and should be tuned to the hardware used. Having defaults that 'work' is fine, but since everyone needs to change them anyway, there are no optimal defaults:

<quote>
The default installation of Elasticsearch is configured with a 1 GB heap. For just about every deployment, this number is far too small. If you are using the default heap values, your cluster is probably configured incorrectly.
</quote>

3) Everyone should be upgrading to the 2.x line of elasticsearch and, shortly, to the 5.x unified version numbering releases of elastic/logstash/kibana/etc.
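To illustrate tuning the upstream defaults described above: elasticsearch 1.x/2.x reads the ES_HEAP_SIZE environment variable and applies it to both -Xms and -Xmx, so the heap can be sized per host without patching the rc script. A sketch (the 8g value is an example, and how the environment reaches the rc.d-started daemon varies by setup):

```shell
# Sketch: size the elasticsearch heap via the upstream ES_HEAP_SIZE
# environment variable, which sets -Xms and -Xmx to the same value.
# 8g is an example; upstream recommends no more than ~half of RAM.
export ES_HEAP_SIZE=8g
service elasticsearch restart
```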