
Writing a 100 MB column with cassandra-stress

I am trying to write a 100 MB partition using the stress tool that ships with Cassandra 2.1.17. To keep it simple, I am first just trying to write a single partition with one column. My stress YAML looks like this:

keyspace: stresscql 
keyspace_definition: | 
    CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}; 

table: insanitytest 
table_definition: | 
    CREATE TABLE insanitytest (
        name text,
        value blob,
        PRIMARY KEY(name)
    );

columnspec:
  - name: value
    size: FIXED(100000000)


insert: 
    partitions: fixed(1)    # number of unique partitions to update in a single operation 
            # if batchcount > 1, multiple batches will be used but all partitions will 
            # occur in all batches (unless they finish early); only the row counts will vary 
    batchtype: LOGGED    # type of batch to use 
    select: fixed(1)/1    # uniform chance any single generated CQL row will be visited in a partition; 
            # generated for each partition independently, each time we visit it 

queries: 
    simple1: 
     cql: select * from insanitytest where name = ? LIMIT 100 
     fields: samerow    # samerow or multirow (select arguments from the same row, or randomly from all rows in the partition) 

I run it with:

./tools/bin/cassandra-stress user profile=~/Software/cassandra/tools/cqlstress-insanity-example.yaml n=1 "ops(insert=1,simple1=0)" 

Looking at the output, I get:

Connected to cluster: Test Cluster 
Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1 
Created schema. Sleeping 1s for propagation. 
Sleeping 2s... 
Running with 4 threadCount 
Running [insert, simple1] with 4 threads for 1 iteration 
type,  total ops, op/s, pk/s, row/s, mean,  med,  .95,  .99, .999,  max, time, stderr, errors, gc: #, max ms, sum ms, sdv ms,  mb 
Generating batches with [1..1] partitions and [1..1] rows (of [1..1] total rows in the partitions) 
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write) 
insert,   1,  0,  0,  0, 3985.0, 3985.0, 3985.0, 3985.0, 3985.0, 3985.0, 4.0, -0.00000,  0,  1,  34,  34,  0,  219 
simple1,   0,  NaN,  NaN,  NaN,  NaN,  0.0,  0.0,  0.0,  0.0,  0.0, 0.0, -0.00000,  0,  1,  34,  34,  0,  219 
total,    1,  0,  0,  0, 3985.0, 3985.0, 3985.0, 3985.0, 3985.0, 3985.0, 4.0, -0.00000,  0,  1,  34,  34,  0,  219 


Results: 
op rate     : 0 [insert:0, simple1:NaN] 
partition rate   : 0 [insert:0, simple1:NaN] 
row rate     : 0 [insert:0, simple1:NaN] 
latency mean    : 3985.0 [insert:3985.0, simple1:NaN] 
latency median   : 3985.0 [insert:3985.0, simple1:0.0] 
latency 95th percentile : 3985.0 [insert:3985.0, simple1:0.0] 
latency 99th percentile : 3985.0 [insert:3985.0, simple1:0.0] 
latency 99.9th percentile : 3985.0 [insert:3985.0, simple1:0.0] 
latency max    : 3985.0 [insert:3985.0, simple1:0.0] 
Total partitions   : 1 [insert:1, simple1:0] 
Total errors    : 0 [insert:0, simple1:0] 
total gc count   : 1 
total gc mb    : 219 
total gc time (s)   : 0 
avg gc time(ms)   : 34 
stdev gc time(ms)   : 0 
Total operation time  : 00:00:03 

However, looking at 'nodetool tpstats', I see one completed mutation (so even though I got a timeout, the mutation appears to have succeeded):

Pool Name     Active Pending  Completed Blocked All time blocked 
MutationStage      0   0    1   0     0 
ReadStage       0   0    33   0     0 
RequestResponseStage    0   0    0   0     0 
ReadRepairStage     0   0    0   0     0 
CounterMutationStage    0   0    0   0     0 
MiscStage       0   0    0   0     0 
HintedHandoff      0   0    0   0     0 
GossipStage      0   0    0   0     0 
CacheCleanupExecutor    0   0    0   0     0 
InternalResponseStage    0   0    0   0     0 
CommitLogArchiver     0   0    0   0     0 
CompactionExecutor    0   0    30   0     0 
ValidationExecutor    0   0    0   0     0 
MigrationStage     0   0    3   0     0 
AntiEntropyStage     0   0    0   0     0 
PendingRangeCalculator   0   0    1   0     0 
Sampler       0   0    0   0     0 
MemtableFlushWriter    0   0    13   0     0 
MemtablePostFlush     0   0    24   0     0 
MemtableReclaimMemory    0   0    13   0     0 
Native-Transport-Requests   0   0   170   0     0 

Message type   Dropped 
READ       0 
RANGE_SLICE     0 
_TRACE      0 
MUTATION      0 
COUNTER_MUTATION    0 
BINARY      0 
REQUEST_RESPONSE    0 
PAGED_RANGE     0 
READ_REPAIR     0 

But if I do a 'nodetool flush' and then 'nodetool status stresscql', this is what I get:

Datacenter: datacenter1 
======================= 
Status=Up/Down 
|/ State=Normal/Leaving/Joining/Moving 
-- Address Load  Tokens Owns (effective) Host ID        Rack 
UN 127.0.0.1 131.99 KB 256  100.0%   285b13ec-0b9b-4325-9095-c5f5c0f51079 rack1 

Since no mutations were dropped, where did the data go? Given my column spec, I should see roughly 100 MB in the Load column, right?

Answer


The problem was not in the stress profile or the data definition, but in commit_log_segment_size_in_mb: it has to be at least 50% larger than the chunk of data being written. More information in this answer.
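
A minimal sketch of that fix, assuming a single-node test setup like the one above: the setting lives in cassandra.yaml (where the key is spelled commitlog_segment_size_in_mb), and the value below is only an illustration sized to leave plenty of headroom above the 100 MB blob.

# cassandra.yaml (illustrative value, not a tuning recommendation)
# The default of 32 MB is far too small for a ~100 MB mutation; in many
# versions a single mutation may also not exceed half the segment size,
# so generous headroom is safer than the bare minimum.
commitlog_segment_size_in_mb: 256

After changing the value and restarting the node, rerunning the same cassandra-stress command followed by a flush should make the write show up on disk, e.g.:

nodetool flush stresscql
nodetool status stresscql    # Load should now be on the order of the blob size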