
TombstoneOverwhelmingException in Cassandra

So I'm getting this exception when querying data from a table. I've read a lot about it online, and from what I understand it happens because I have a lot of null (tombstoned) rows. But what is the solution to this? Can I easily get rid of all these nulls?

Update: I ran nodetool compact and also tried nodetool scrub. In both cases I get

Exception in thread "main" java.lang.AssertionError: [SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-538-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-710-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-627-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-437-Data.db')] 
at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2132) 
at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2129) 
at org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2111) 
at org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2142) 
at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getMaximalTask(SizeTieredCompactionStrategy.java:254) 
at org.apache.cassandra.db.compaction.CompactionManager.submitMaximal(CompactionManager.java:290) 
at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:282) 
at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1941) 
at org.apache.cassandra.service.StorageService.forceKeyspaceCompaction(StorageService.java:2182) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112) 
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46) 
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819) 
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487) 
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97) 
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328) 
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420) 
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
at sun.rmi.transport.Transport$1.run(Transport.java:177) 
at sun.rmi.transport.Transport$1.run(Transport.java:174) 
at java.security.AccessController.doPrivileged(Native Method) 
at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811) 
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745) 

And these are the last few lines from system.log:

INFO [CompactionExecutor:1888] 2015-01-03 07:22:54,272 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:1972-05 (225021398 bytes) incrementally 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:07,528 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:1972-06 (217772702 bytes) incrementally 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:20,508 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-05 (121911398 bytes) incrementally 
INFO [ScheduledTasks:1] 2015-01-03 07:23:30,941 GCInspector.java (line 116) GC for ParNew: 223 ms for 1 collections, 5642103584 used; max is 8375238656 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:33,436 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-07 (106408526 bytes) incrementally 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:38,787 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-02 (112031822 bytes) incrementally 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:46,055 ColumnFamilyStore.java (line 794) Enqueuing flush of [email protected](0/0 serialized/live bytes, 1 ops) 
INFO [FlushWriter:62] 2015-01-03 07:23:46,055 Memtable.java (line 355) Writing [email protected](0/0 serialized/live bytes, 1 ops) 
INFO [FlushWriter:62] 2015-01-03 07:23:46,268 Memtable.java (line 395) Completed flushing /var/lib/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-jb-22-Data.db (42 bytes) for commitlog position ReplayPosition(segmentId=1420135510457, position=14938165) 
INFO [CompactionExecutor:1888] 2015-01-03 07:23:46,354 CompactionTask.java (line 287) Compacted 2 sstables to [/var/lib/cassandra/data/bitcoin/okcoin_trade_btc_cny/bitcoin-okcoin_trade_btc_cny-jb-554,]. 881,267,752 bytes to 881,266,793 (~99% of original) in 162,878ms = 5.159945MB/s. 24 total partitions merged to 23. Partition merge counts were {1:22, 2:1, } 
WARN [RMI TCP Connection(39)-128.31.5.27] 2015-01-03 07:24:46,452 ColumnFamilyStore.java (line 2103) Unable to cancel in-progress compactions for okcoin_order_book_btc_usd. Probably there is an unusually large row in progress somewhere. It is also possible that buggy code left some sstables compacting after it was done with them 

I'm not sure what that last line means. There don't seem to be any unusually large rows (though I don't know how to check whether there are). As a side note, there is still a compaction sitting at 60.33% and stuck on okcoin_order_book_btc_usd. I'm running Cassandra 2.0.11.


An 'AssertionError' indicates a bug. If that is from a Cassandra log file, it indicates a bug in Cassandra. – Raedwald


Good point about the assertion error. Just curious, is that from the nodetool compact output, or from the system.log file? I would expect system.log to have a stack trace; if you can capture it and put it in a paste somewhere, that would help, since this may be a Cassandra bug. Also, which 2.0 version are you on? Are you on 2.0.11? –


I updated the question. Thanks! –

Answer


Tombstones are created when rows are deleted from Cassandra or when they expire (TTL). They are removed when SSTables are compacted, once gc_grace_seconds has elapsed for that row.
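
For context, here is a minimal CQL sketch of the two ways tombstones typically get written. The table and column names below are made up for illustration and are not from the question's schema.

-- Hypothetical table, for illustration only
CREATE TABLE bitcoin.order_book_example (
    symbol text,
    ts timestamp,
    price double,
    PRIMARY KEY (symbol, ts)
);

-- An explicit delete writes a tombstone for that row
DELETE FROM bitcoin.order_book_example
WHERE symbol = 'btc_usd' AND ts = '2015-01-01 00:00:00';

-- A write with a TTL also leaves a tombstone behind once the value expires
INSERT INTO bitcoin.order_book_example (symbol, ts, price)
VALUES ('btc_usd', '2015-01-02 00:00:00', 315.2)
USING TTL 86400;

-- Either way, the tombstone is only purged when the SSTables containing it are
-- compacted and gc_grace_seconds (default 864000, i.e. 10 days) has passed.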

There are a few things I can think of to help reduce the number of tombstones:

  1. Set a lower gc_grace_seconds on the tables that have a lot of tombstones - gc_grace_seconds should generally be somewhat longer than how often you run repairs. If you repair more often than that, you can consider lowering gc_grace_seconds (see the CQL sketch after this list).
  2. Look at how your compactions are keeping up. Do you have a lot of pending compactions? (nodetool -h localhost compactionstats on each node will show this.) It's possible you are falling behind on compaction and data is not being cleaned up as quickly as it could be. If appropriate, also consider changing your compaction strategy; for example, if you are using SizeTieredCompactionStrategy you may want to look at LeveledCompactionStrategy, which typically generates more compaction activity (so make sure you have SSDs) and can therefore clean up your tombstones sooner.
  3. Look at your data model and the queries you are making. Are you frequently deleting or expiring data in the partitions you are reading? Consider changing your partitioning (primary key) strategy so that deleted or expired rows are less likely to sit alongside "live" data. A good example of this is adding a time/date bucket to your primary key.
  4. Raise tombstone_failure_threshold in cassandra.yaml - but probably don't, because hitting it is a good sign that you need to look at your data instead.
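
To make points 1-3 concrete, here is a hedged CQL sketch. The ALTER statements target the table from the question; the bucketed table and its columns are hypothetical placeholders. Point 4 is a cassandra.yaml setting (tombstone_failure_threshold), not something you change through CQL.

-- 1. Lower gc_grace_seconds on a tombstone-heavy table (here: 1 day). Keep it
--    comfortably longer than your repair interval so deletes still reach every
--    replica before tombstones are purged.
ALTER TABLE bitcoin.okcoin_order_book_btc_usd WITH gc_grace_seconds = 86400;

-- 2. Switch from SizeTieredCompactionStrategy to LeveledCompactionStrategy so
--    tombstones get compacted away sooner (expect more compaction I/O).
ALTER TABLE bitcoin.okcoin_order_book_btc_usd
WITH compaction = { 'class' : 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };

-- 3. Bucket the partition key by day so reads of current data don't have to scan
--    partitions full of deleted or expired rows (columns are hypothetical).
CREATE TABLE bitcoin.order_book_by_day (
    symbol text,
    day text,                 -- e.g. '2015-01-03'
    ts timestamp,
    price double,
    PRIMARY KEY ((symbol, day), ts)
);

Note that either ALTER only takes effect at compaction time, so existing tombstones still disappear only as the affected SSTables get compacted.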

Thank you very much! When I look at the compaction stats, there is a compaction that now seems to have been sitting at 60.33% for about an hour. Is that normal? –


That could indicate a very wide partition, which I suspect may be the case for you since you have a lot of tombstones showing up on reads. Do you think it's possible you have some very wide partitions (partition keys with a very large number of rows)? Also, are there any pending tasks, or is that sitting at 0? If you have monitoring in place, I would set up monitoring of pending compactions and total bytes compacted; you can find these statistics over JMX. [A good talk on Cassandra monitoring](https://www.youtube.com/watch?v=RhUoQPHNA1Y). –


Oh, and I should add, since you asked: it's not that unusual, but 1 hour is a long time, so I would keep a close eye on it. –