I am working on CentOS 5 and running Elasticsearch 1.0.0 with -Xms808m -Xmx808m -Xss256k. There are 17 indices with 30,200,583 documents in total; each index holds between 1,000,000 and 2,000,000 documents. My query is:
{
"query": {
"bool": {
"must": [
{
"range": {
"date": {
"to": "2014-06-01 14:14:00",
"from": "2014-04-01 00:00:00"
}
}
}
],
"should": [],
"must_not": [],
"minimum_number_should_match": 1
}
},
"from": 0,
"size": "50"
}
It returns this response:
{
took: 5903
timed_out: false
_shards: {
total: 17
successful: 17
failed: 0
},
hits: {
total: 30200583
...
...
...}
However, when I request the last 50 rows of the result set from the elasticsearch-head tool:
{
...
...
...
"from": 30200533,
"size": "50"
}
it does not respond and instead throws an exception:
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:247)
at org.apache.lucene.store.Directory.copy(Directory.java:186)
at org.elasticsearch.index.store.Store$StoreDirectory.copy(Store.java:348)
at org.apache.lucene.store.TrackingDirectoryWrapper.copy(TrackingDirectoryWrapper.java:50)
at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4596)
at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:535)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:506)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:616)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:370)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:285)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:260)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:250)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
at org.apache.lucene.search.XSearcherManager.refreshIfNeeded(XSearcherManager.java:123)
at org.apache.lucene.search.XSearcherManager.refreshIfNeeded(XSearcherManager.java:59)
at org.apache.lucene.search.XReferenceManager.doMaybeRefresh(XReferenceManager.java:180)
at org.apache.lucene.search.XReferenceManager.maybeRefresh(XReferenceManager.java:229)
at org.elasticsearch.index.engine.internal.InternalEngine.refresh(InternalEngine.java:730)
at org.elasticsearch.index.shard.service.InternalIndexShard.refresh(InternalIndexShard.java:477)
at org.elasticsearch.index.shard.service.InternalIndexShard$EngineRefresher$1.run(InternalIndexShard.java:924)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
What is the problem? Is the Java heap space simply too small, or is my query causing this heap space error?
The answer to both questions is "yes". You do not have enough heap space, which is why you see this error, and because you do not have enough heap space, this particular query is what triggered it.
The reason is that, because results must be sorted, deep pagination is very expensive. To retrieve the 20th element, you have to keep elements 1-20 in memory and sort them. To retrieve the 1,000,000th element, you have to keep elements 1-999,999 in memory and sort them.
This generally requires a great deal of memory.
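To get a feel for the scale, here is a rough back-of-the-envelope sketch in plain Python (not Elasticsearch internals; the 16 bytes per queue entry is an assumed illustrative figure):

```python
# Rough cost model for from + size deep paging: each shard collects a
# priority queue of from + size entries, and the coordinating node then
# merges shards * (from + size) entries before discarding all but `size`.
shards = 17                  # shard count reported in the _shards section
from_, size = 30200533, 50   # the deep page requested above

per_shard = from_ + size
merged = shards * per_shard
bytes_per_entry = 16         # assumed: roughly a doc id + score per entry

print(per_shard)             # 30200583 entries per shard
print(merged)                # 513409911 entries merged on the coordinator
print(round(merged * bytes_per_entry / 1024 ** 2))  # ~7834 MiB
```

At that assumed entry size, the merge alone dwarfs the 808 MB heap configured above, so the OutOfMemoryError is expected.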
There are a few options:
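One option commonly recommended for walking a large result set in Elasticsearch 1.x (my addition, not part of the original answer) is the scan/scroll API, which streams results in batches instead of sorting past a huge `from` offset:

```json
POST /_search?search_type=scan&scroll=1m
{
  "query": {
    "range": {
      "date": {
        "from": "2014-04-01 00:00:00",
        "to": "2014-06-01 14:14:00"
      }
    }
  },
  "size": 50
}
```

Each subsequent batch is fetched with `GET /_search/scroll?scroll=1m` plus the `_scroll_id` returned by the previous call, so no shard ever has to hold tens of millions of entries in memory. Note that in scan mode `size` applies per shard.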