Discussion:
[Neo4j] array size exceeds maximum allowed size
Clark Richey
2015-05-06 14:51:46 UTC
Permalink
Hello,
I’m running Neo4j 2.2.1 with 150G heap space on a box with 240G. I set neo4j.neostore.nodestore.dbms.pagecache.memory to 60G (slightly less than 75% of remaining system memory, as recommended). However, when I start up I get an error that the system can’t start because I’m trying to allocate an array whose size exceeds the maximum allowed size.
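For reference, here is roughly what I have configured, assuming the standard 2.2 file layout (heap values in conf/neo4j-wrapper.conf are in MB):

    # conf/neo4j-wrapper.conf -- 150G heap
    wrapper.java.initmemory=153600
    wrapper.java.maxmemory=153600

    # conf/neo4j.properties -- page cache, exactly as I entered it
    neo4j.neostore.nodestore.dbms.pagecache.memory=60g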



---
Clark Richey
***@gmail.com
Sumit Gupta
2015-05-07 00:24:45 UTC
Permalink
hi,

Please provide the exact exception along with the modified parameters.

Thanks,
Sumit
Clark Richey
2015-05-07 15:51:06 UTC
Permalink
There is no usable stack trace, just: Error creating bean with name 'neoService': Invocation of init method failed; nested exception is java.lang.OutOfMemoryError: Requested array size exceeds VM limit


Further testing indicates that either the node_cache_array_fraction or the relationship_cache_array_fraction setting is causing the problem. Each is supposed to default to 1%, which on a 150G heap should be 1.5G. However, the array size being generated is too large. Explicitly setting node_cache_size and relationship_cache_size seems to address this, although it is far from ideal.
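For anyone hitting the same thing, the workaround looks roughly like this in conf/neo4j.properties (the sizes below are illustrative, not tuned for any particular workload):

    # Size the HPC caches explicitly instead of relying on the
    # node_cache_array_fraction / relationship_cache_array_fraction defaults
    cache_type=hpc
    node_cache_size=8g
    relationship_cache_size=8g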




Clark Richey
Chris Vest
2015-05-10 20:47:53 UTC
Permalink
I think this might be caused by a miscalculation in the High Performance Cache settings heuristics. Does the problem go away if you change the cache_type setting away from “hpc” (which is the default in our enterprise edition), or use the 2.3-M1 milestone?

By the way, the “dbms.pagecache.memory” setting is on its own; it is not prefixed with “neo4j.neostore.nodestore.” or anything like that.
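Concretely, that would look something like this in conf/neo4j.properties (soft is just one example of a non-hpc value):

    dbms.pagecache.memory=60g   # note: no neo4j.neostore.nodestore. prefix
    cache_type=soft             # any non-hpc value sidesteps the HPC heuristics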

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]
Clark Richey
2015-05-11 13:47:36 UTC
Permalink
Yes, if I change the cache to soft then I can just set the page cache and everything works great, without having to manually configure the relationship or node cache sizes.



Clark Richey
Chris Vest
2015-05-12 13:42:16 UTC
Permalink
The HPC heuristics will be fixed in the next 2.2.x release.

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]
Clark Richey
2015-05-12 14:00:27 UTC
Permalink
Excellent. Related to this, I believe, I am seeing what looks like a memory leak in the memory-mapped files when using the HPC, starting with 2.2.0.

I have a system batch process that grabs Place nodes that need to be geocoded in chunks of 10k and holds on to their graph ids (releasing the nodes and closing the transaction). Then we grab one node at a time (by id), use an external service to get the lat/lon for the node’s address, and update the node. Only one node is updated per transaction here.

This worked great before 2.2.0. Since then, after running for several hours the Linux kernel ends up killing the entire process because it eats too much system memory. Note that I don’t run out of heap space but rather system memory, leading me to believe it is an issue with the memory mapped outside of the heap. Since I changed the system to the soft cache yesterday to verify the other issue with HPC and array size, the system hasn’t run out of memory.
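For clarity, the shape of that batch process is roughly the following (Neo4j 2.x embedded API; the Place label, the address/lat/lon property names, and the geocode() helper are stand-ins for our actual code):

    import org.neo4j.graphdb.*;
    import java.util.ArrayList;
    import java.util.List;

    void geocodePendingPlaces(GraphDatabaseService db) {
        // Phase 1: collect up to 10k ids in one read transaction, then
        // close it so the node objects themselves are released.
        List<Long> ids = new ArrayList<>();
        try (Transaction tx = db.beginTx()) {
            ResourceIterator<Node> places = db.findNodes(DynamicLabel.label("Place"));
            while (places.hasNext() && ids.size() < 10_000) {
                Node n = places.next();
                if (!n.hasProperty("lat")) {   // still needs geocoding
                    ids.add(n.getId());        // hold only the graph id
                }
            }
            places.close();
            tx.success();
        }

        // Phase 2: re-fetch each node by id and update it, one node
        // per write transaction.
        for (long id : ids) {
            try (Transaction tx = db.beginTx()) {
                Node place = db.getNodeById(id);
                // geocode() wraps the external lat/lon lookup (hypothetical helper)
                double[] latLon = geocode((String) place.getProperty("address"));
                place.setProperty("lat", latLon[0]);
                place.setProperty("lon", latLon[1]);
                tx.success();
            }
        }
    }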



Clark Richey