Adding single-token nodes to an existing DataStax Cassandra cluster: data streaming not working

Problem description:

We are adding new single-token nodes to an existing DataStax cluster, and data is not streaming to them. The procedure we followed is described below.

We have 3 single-token-range DataStax nodes in an AWS EC2 datacenter, all with Search and Graph enabled. We plan to add 3 more nodes to this datacenter. Our keyspaces use DseSimpleSnitch with SimpleStrategy, and our current replication factor is 2.

节点1:10.10.1.36
节点2:10.10.1.46
节点3:10.10.1.56

cat /etc/default/dse | grep -E 'GRAPH_ENABLED=|SOLR_ENABLED=' 
    GRAPH_ENABLED=1 
    SOLR_ENABLED=1 

Datacenter: SearchGraph

Address  Rack   Status State Load  Owns Token    
10.10.1.46 rack1  Up  Normal 760.14 MiB ? -9223372036854775808     
10.10.1.36 rack1  Up  Normal 737.69 MiB ? -3074457345618258603     
10.10.1.56 rack1  Up  Normal 752.25 MiB ? 3074457345618258602     

Step (1): To add the 3 new nodes to our datacenter, we first changed the snitch to be network-aware and updated our keyspace topology.

1) Changed the snitch.

cat /etc/dse/cassandra/cassandra.yaml | grep endpoint_snitch: 
    endpoint_snitch: GossipingPropertyFileSnitch 

cat /etc/dse/cassandra/cassandra-rackdc.properties |grep -E 'dc=|rack=' 
    dc=SearchGraph 
    rack=rack1 

2) (a) Shut down all nodes, then restarted them.

(b) Ran a sequential repair and nodetool cleanup on each node.

3) Changed the keyspace topology.

ALTER KEYSPACE tech_app1 WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2}; 
ALTER KEYSPACE tech_app2 WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2}; 
ALTER KEYSPACE tech_chat WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2}; 

Reference:
http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsChangeKSStrategy.html
http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html

Step (2): To update the token ranges and bring up the new Cassandra nodes, we followed the procedure below.

1) Recalculated the token ranges:

[email protected]:~# token-generator 

DC #1: New nodes

Node #1: -9223372036854775808 
Node #2: -6148914691236517206 
Node #3: -3074457345618258604 
Node #4: -2 
Node #5: 3074457345618258600 
Node #6: 6148914691236517202 
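The output above matches the standard balanced-ring formula for the Murmur3 partitioner. A minimal Python sketch (not the actual token-generator tool) that reproduces it:

```python
# Evenly spaced Murmur3 tokens for a ring of n single-token nodes:
# token_i = i * (2**64 // n) - 2**63, for i in 0..n-1.
# With n=3 this also reproduces the cluster's original three tokens.
def balanced_tokens(n):
    step = 2**64 // n  # size of each node's primary range
    return [i * step - 2**63 for i in range(n)]

for i, t in enumerate(balanced_tokens(6), start=1):
    print(f"Node #{i}: {t}")
```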

2) Installed the same DataStax Enterprise version on the new nodes.

3) Stopped the node services and cleared the data. (a) Assigned token ranges to the new nodes as follows:

Node 4: 10.10.2.96  Range: -2 
Node 5: 10.10.2.97  Range: 3074457345618258600 
Node 6: 10.10.2.86  Range: 6148914691236517202 

4) (b) Configured cassandra.yaml on each new node:

Node 4:

cluster_name: 'SearchGraph' 
num_tokens: 1 
initial_token: -2 
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.96 
rpc_address: 10.10.2.96 
endpoint_snitch: GossipingPropertyFileSnitch 

Node 5:

cluster_name: 'SearchGraph' 
num_tokens: 1 
initial_token: 3074457345618258600 
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.97 
rpc_address: 10.10.2.97 
endpoint_snitch: GossipingPropertyFileSnitch 

Node 6:

cluster_name: 'SearchGraph' 
num_tokens: 1 
initial_token: 6148914691236517202 
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.86 
rpc_address: 10.10.2.86 
endpoint_snitch: GossipingPropertyFileSnitch 

5) Changed the snitch.

cat /etc/dse/cassandra/cassandra.yaml | grep endpoint_snitch: 
endpoint_snitch: GossipingPropertyFileSnitch 

cat /etc/dse/cassandra/cassandra-rackdc.properties |grep -E 'dc=|rack=' 
dc=SearchGraph 
rack=rack1 

6) Started DataStax Enterprise on each new node with consistent range movement turned off, waiting 2 minutes between nodes:

JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false" 

7) After the new nodes had fully bootstrapped, used nodetool move to assign the existing nodes new initial_tokens according to the recalculation done in step 4 (a), completing the process on one node at a time.

On Node 1(10.10.1.36) : nodetool move -3074457345618258603 
On Node 2(10.10.1.46) : nodetool move -9223372036854775808 
On Node 3(10.10.1.56) : nodetool move 3074457345618258602 

Datacenter: SearchGraph

Address  Rack  Status State Load   Owns    Token 

10.10.1.46 rack1  Up  Normal 852.93 MiB ? -9223372036854775808 
10.10.1.36 rack1  Up  Moving 900.12 MiB ? -3074457345618258603 
10.10.2.96 rack1  Up  Normal 465.02 KiB ? -2 
10.10.2.97 rack1  Up  Normal 109.16 MiB ? 3074457345618258600 
10.10.1.56 rack1  Up  Moving 594.49 MiB ? 3074457345618258602 
10.10.2.86 rack1  Up  Normal 663.94 MiB ? 6148914691236517202 
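The ring above is skewed by the token choice in step 3 (a): no new token bisects the range ending at -3074457345618258603, and the new token 3074457345618258600 sits only 2 values below the existing 3074457345618258602. A small Python sketch (primary-range ownership only, ignoring replication) makes the imbalance visible:

```python
# Each node's primary range is (previous_token, its_token], wrapping
# around the Murmur3 ring of size 2**64. Compute each node's share.
RING_SIZE = 2**64

ring = {
    "10.10.1.46": -9223372036854775808,
    "10.10.1.36": -3074457345618258603,
    "10.10.2.96": -2,
    "10.10.2.97": 3074457345618258600,
    "10.10.1.56": 3074457345618258602,  # only 2 values above its neighbour!
    "10.10.2.86": 6148914691236517202,
}

nodes = sorted(ring, key=ring.get)
prev_nodes = [nodes[-1]] + nodes[:-1]
for prev, node in zip(prev_nodes, nodes):
    span = (ring[node] - ring[prev]) % RING_SIZE
    print(f"{node}: {100 * span / RING_SIZE:.4f}%")
```

One existing node ends up owning roughly a third of the ring while another owns an essentially empty primary range, which is why almost no data streams to it.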

Update:

We are getting the following errors while the nodes are joining:

AbstractSolrSecondaryIndex.java:1884 - Cannot find core chat.chat_history 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core chat.history 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.business_units 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.feeds 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.feeds_2 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.knowledegmodule 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.userdetails 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.userdetails_2 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.vault_details 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.workgroup 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.feeds 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.knowledgemodule 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.organizations 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.userdetails 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.vaults 
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.workgroup 

The node join then failed with the following errors:

ERROR [main] 2017-08-10 04:22:08,449 DseDaemon.java:488 - Unable to start DSE server. 
com.datastax.bdp.plugin.PluginManager$PluginActivationException: Unable to activate plugin com.datastax.bdp.plugin.SolrContainerPlugin 


Caused by: java.lang.IllegalStateException: Cannot find secondary index for core ekamsearch.userdetails_2, did you create it? 
If yes, please consider increasing the value of the dse.yaml option load_max_time_per_core, current value in minutes is: 10 

ERROR [main] 2017-08-10 04:22:08,450 CassandraDaemon.java:705 - Exception encountered during startup 
java.lang.RuntimeException: com.datastax.bdp.plugin.PluginManager$PluginActivationException: Unable to activate plugin 

Has anyone encountered these errors or warnings before?


Is there any particular reason you are assigning tokens manually? You could just set num_tokens = 1 in cassandra.yaml and let Cassandra handle it for you. – dilsingi


I have configured num_tokens: 1 and the initial_token ranges recalculated as described in step 2 (1) above. We want to assign the initial_token ranges manually rather than let Cassandra handle it, because I think Solr on the current cluster will not work if we change that and rebalance with OpsCenter; please clarify if I am wrong about this. Also, are the steps we followed above for adding nodes correct? –


I believe manually managing tokens when scaling Cassandra nodes is very tedious. num_tokens: 1 on its own lets Cassandra manage data placement, and as data rebalances onto the new nodes, Solr will index it there. As data moves to the new nodes, the corresponding records are removed from the old nodes once you run nodetool cleanup, and as records disappear from the old nodes, so do their index entries in Solr. Each Solr core shows how many records it has indexed, so you can verify this after adding the nodes. I would avoid manual token assignment. – dilsingi

Token assignment issue:

1) I had wrongly assigned the token ranges in step 4 (a). Assign tokens
    that bisect or trisect the existing ranges, using the values
    generated by "token-generator":

     Node 4: 10.10.2.96  Range: -6148914691236517206 
     Node 5: 10.10.2.97  Range: -2 
     Node 6: 10.10.2.86  Range: 6148914691236517202 

Note: We don't need to change the token ranges of the existing nodes in
     the datacenter, so there is no need to follow the procedure in
     step 7 above.
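With the corrected assignment, each new token falls strictly between two consecutive existing tokens, so the merged ring simply alternates old and new nodes and each new node takes half of an old primary range. A quick Python check of that property:

```python
# Verify the corrected tokens bisect the existing primary ranges:
# in the merged, sorted ring, old and new tokens must alternate.
old = [-9223372036854775808, -3074457345618258603, 3074457345618258602]
new = [-6148914691236517206, -2, 6148914691236517202]  # corrected tokens

merged = sorted((t, owner) for owner, toks in (("old", old), ("new", new))
                for t in toks)
owners = [owner for _, owner in merged]
print(owners)
```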

Solved the Solr "Cannot find core" issue:

Increasing the load_max_time_per_core value in the dse.yaml
configuration file did not help; I was still receiving the error.
I finally solved the issue with the following method:

    1) Started the new nodes with Solr disabled and waited for all
     Cassandra data to migrate to the joining nodes.
    2) Added the auto_bootstrap: false directive to the
     cassandra.yaml file.
    3) Restarted the same nodes with Solr enabled, i.e. with
     SOLR_ENABLED=1 in /etc/default/dse.
    4) Re-indexed on all newly joined nodes: I reloaded every required
     core with the reindex=true and distributed=false parameters on the
     new nodes.
     Ref : http://docs.datastax.com/en/archived/datastax_enterprise/4.0/datastax_enterprise/srch/srchReldCore.html
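For the re-index step, the linked DSE 4.x doc reloads cores through the Solr HTTP admin API. A hypothetical Python sketch that just builds that request URL (the host, port 8983, and core name here are illustrative placeholders, not values confirmed by the post):

```python
from urllib.parse import urlencode

# Build a DSE Search core-reload URL as described in the linked docs.
# reindex=true rebuilds the index; distributed=false keeps the
# operation local to the node being re-indexed.
def reload_core_url(host, core, reindex=True, distributed=False):
    params = urlencode({
        "action": "RELOAD",
        "name": core,  # DSE Search cores are named keyspace.table
        "reindex": str(reindex).lower(),
        "distributed": str(distributed).lower(),
    })
    return f"http://{host}:8983/solr/admin/cores?{params}"

print(reload_core_url("10.10.2.96", "search.userdetails_2"))
```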