Comments for AryaNet (https://aryanet.com) on "Cassandra Garbage Collector Tuning, Find and Fix long GC Pauses"

Comment by Kumar Saras, Fri, 05 Jun 2015 09:36:03 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-35073

I want to use jstat to analyze GC for my Cassandra database, but I am not able to run it: when I try to run jstat, it says "command not found". Please tell me the way out.
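For readers hitting the same error: jstat ships with the JDK, not the JRE, so the usual fix is to install a JDK and put its bin directory on the PATH. A minimal sketch, assuming an OpenJDK install under /usr/lib/jvm (the path and the pgrep pattern are assumptions; adjust to your system):

    # jstat lives in $JAVA_HOME/bin; make sure that directory is on the PATH.
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # assumed install path
    export PATH="$JAVA_HOME/bin:$PATH"

    # Find the Cassandra JVM's pid, then sample GC utilization every 1000 ms.
    CASSANDRA_PID=$(pgrep -f CassandraDaemon)
    jstat -gcutil "$CASSANDRA_PID" 1000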

Comment by Nick, Wed, 13 May 2015 19:17:50 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-35014

Would you be willing to create/share your Cassandra template for Zabbix?

Comment by Arya, Thu, 09 Apr 2015 23:33:50 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-34925

In reply to chathuri.

Those are in cassandra-env.sh

https://github.com/apache/cassandra/blob/trunk/conf/cassandra-env.sh#L276
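For context, that file builds up the JVM_OPTS passed to the Cassandra JVM, and switching collectors means editing the GC flags there. A sketch of the classic CMS block (the exact flags and values vary by Cassandra version; the numbers here are illustrative, not recommendations):

    # Sketch of the CMS section of cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"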

Comment by chathuri, Fri, 13 Mar 2015 08:16:14 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-34662

Can you please tell me how I can change the garbage collector (let's say I need to use Concurrent Mark Sweep) in Cassandra?

Comment by Ran Rubinstein, Tue, 16 Dec 2014 12:30:35 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-33962

Thanks man, great stuff. Helped me a lot with my GC-stressed C*'s.

Comment by Wayne Schroeder, Fri, 25 Jul 2014 18:29:45 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-32287

This article was extremely helpful for scaling Cassandra under extreme read/write load in a real-time bidding environment. We analyzed GC logs and determined we were having significant premature tenuring leading to promotion failures. For us, a combination of increasing the max tenuring threshold and increasing the young generation size produced excellent results at full load. We were concerned about increasing the size of the young generation, but our minor GC events suffered very little increase in time and now happen FAR less often, bringing sanity to tenuring.
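The two knobs Wayne describes map directly to JVM flags in cassandra-env.sh. A minimal sketch; the 2G and 8 below are illustrative assumptions, and the right values should come from your own GC logs:

    # Larger young generation and a higher tenuring threshold (values assumed)
    JVM_OPTS="$JVM_OPTS -Xmn2G"
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=8"
    # Log the tenuring distribution to confirm objects die before promotion
    JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"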

Comment by Arya, Wed, 07 May 2014 03:14:23 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-31322

Sorry for the late reply. If you have not figured things out yet, I recommend reducing the memtable size and the commit log, provided you don't have disk IO issues. Your memtable size is 5G, which is larger than the 4G YoungGen you chose, so you will definitely get lots of memtable data pushed to OldGen. Also, memtable_flush_writers=16 seems a bit too much. If your machine doesn't have enough CPU and disk IO, you are telling it to saturate itself by holding at least 16 memtables in memory and maintaining 16 flush writers, which will be blocked by IO (in the case of slow disks), so things can accumulate in memory. The blocked flush threads suggest that you ran out of IO bandwidth, which proves the point.
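In cassandra.yaml terms, that advice looks roughly like the sketch below (option names from the Cassandra 1.x/2.0 line; the values are illustrative assumptions, not recommendations):

    # cassandra.yaml sketch: keep memtable space well under the young gen size
    memtable_total_space_in_mb: 2048
    commitlog_total_space_in_mb: 1024
    # Roughly one flush writer per data disk is a common starting point
    memtable_flush_writers: 4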

Comment by Ruchir, Thu, 10 Apr 2014 18:59:04 +0000
https://aryanet.com/blog/cassandra-garbage-collector-tuning#comment-30849

We are a write-intensive system and are running into some GC issues. Our old gen utilization chart for one of our 12 Cassandra nodes shows we keep see-sawing between 20% and 75% utilization. The 75% ceiling comes from the default value of flush_largest_memtables_at = 0.75; if we set it lower, we see an immediate impact.
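For reference, that emergency valve is a cassandra.yaml option in the 1.x line; a sketch with an assumed lower value:

    # cassandra.yaml (1.x): flush the largest memtables when heap usage
    # crosses this fraction; 0.60 here is an illustrative assumption
    flush_largest_memtables_at: 0.60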

My theory is that we are not able to flush the memtables fast enough and therefore build up objects in the old gen until we hit the 75% mark, at which point Cassandra flushes the largest memtables. The next thing I did was look at nodetool tpstats to see if the flush writers were blocked, and saw that the "All Time Blocked" to "Completed" ratio is about 0.45. I tried raising the number of flush writers and that ratio went down, but it still did not have any impact on our old gen utilization pattern.
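For anyone reproducing this check, the ratio comes straight out of nodetool's thread pool stats. A sketch of what to look at (the numbers below are invented for illustration, not from this cluster):

    $ nodetool tpstats
    Pool Name     Active  Pending  Completed  Blocked  All time blocked
    FlushWriter        2        5      12000        1              5400
    ...
    # 5400 / 12000 = 0.45: flushes regularly wait on a blocked pool, which
    # usually points at saturated disk IO rather than too few writers.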

Our settings:

- Each node is 16G, with YoungGen = 4G and OldGen = 8G.
- Memtable size = 5G (around 1/3 of the heap size).
- Commit log = 1G.
- memtable_flush_writers = 16.

What am I missing?
