
Example 1 with ObjectIntHashMap

Use of com.carrotsearch.hppc.ObjectIntHashMap in project elasticsearch by elastic.

In class AwarenessAllocationIT, method testAwarenessZones:

public void testAwarenessZones() throws Exception {
    Settings commonSettings = Settings.builder()
        .put(AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_FORCE_GROUP_SETTING.getKey() + "zone.values", "a,b")
        .put(AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey(), "zone")
        .put(ZenDiscovery.JOIN_TIMEOUT_SETTING.getKey(), "10s")
        .build();
    logger.info("--> starting 4 nodes on different zones");
    List<String> nodes = internalCluster().startNodes(
        Settings.builder().put(commonSettings).put("node.attr.zone", "a").build(),
        Settings.builder().put(commonSettings).put("node.attr.zone", "b").build(),
        Settings.builder().put(commonSettings).put("node.attr.zone", "b").build(),
        Settings.builder().put(commonSettings).put("node.attr.zone", "a").build());
    String A_0 = nodes.get(0);
    String B_0 = nodes.get(1);
    String B_1 = nodes.get(2);
    String A_1 = nodes.get(3);
    logger.info("--> waiting for nodes to form a cluster");
    ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes("4").execute().actionGet();
    assertThat(health.isTimedOut(), equalTo(false));
    client().admin().indices().prepareCreate("test")
        .setSettings(Settings.builder().put("index.number_of_shards", 5).put("index.number_of_replicas", 1))
        .execute().actionGet();
    logger.info("--> waiting for shards to be allocated");
    health = client().admin().cluster().prepareHealth()
        .setWaitForEvents(Priority.LANGUID)
        .setWaitForGreenStatus()
        .setWaitForNoRelocatingShards(true)
        .execute().actionGet();
    assertThat(health.isTimedOut(), equalTo(false));
    ClusterState clusterState = client().admin().cluster().prepareState().execute().actionGet().getState();
    ObjectIntHashMap<String> counts = new ObjectIntHashMap<>();
    for (IndexRoutingTable indexRoutingTable : clusterState.routingTable()) {
        for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {
            for (ShardRouting shardRouting : indexShardRoutingTable) {
                counts.addTo(clusterState.nodes().get(shardRouting.currentNodeId()).getName(), 1);
            }
        }
    }
    assertThat(counts.get(A_1), anyOf(equalTo(2), equalTo(3)));
    assertThat(counts.get(B_1), anyOf(equalTo(2), equalTo(3)));
    assertThat(counts.get(A_0), anyOf(equalTo(2), equalTo(3)));
    assertThat(counts.get(B_0), anyOf(equalTo(2), equalTo(3)));
}
Also used: ClusterState (org.elasticsearch.cluster.ClusterState), IndexRoutingTable (org.elasticsearch.cluster.routing.IndexRoutingTable), IndexShardRoutingTable (org.elasticsearch.cluster.routing.IndexShardRoutingTable), ClusterHealthResponse (org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse), ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap), ShardRouting (org.elasticsearch.cluster.routing.ShardRouting), Settings (org.elasticsearch.common.settings.Settings)
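
The test relies on ObjectIntHashMap treating absent keys as holding the default value 0, so addTo works as a counter with no presence checks. A minimal, self-contained sketch of that idiom (the class and node names below are illustrative, not part of the Elasticsearch test):

import com.carrotsearch.hppc.ObjectIntHashMap;
import com.carrotsearch.hppc.cursors.ObjectIntCursor;

public class ShardCountSketch {
    public static void main(String[] args) {
        // Missing keys implicitly count as 0, so addTo can increment without checking for presence.
        ObjectIntHashMap<String> counts = new ObjectIntHashMap<>();
        for (String nodeName : new String[] { "node-a", "node-b", "node-a", "node-c" }) {
            counts.addTo(nodeName, 1);
        }
        System.out.println(counts.get("node-a")); // 2
        System.out.println(counts.get("node-d")); // 0: absent keys return the default value
        // Iteration uses primitive-friendly cursors rather than boxed map entries.
        for (ObjectIntCursor<String> cursor : counts) {
            System.out.println(cursor.key + " = " + cursor.value);
        }
    }
}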

Example 2 with ObjectIntHashMap

Use of com.carrotsearch.hppc.ObjectIntHashMap in project cassandra by apache.

In class RowIteratorMergeListener, method close:

public void close() {
    boolean hasRepairs = false;
    for (int i = 0; !hasRepairs && i < repairs.length; ++i) hasRepairs = repairs[i] != null;
    if (!hasRepairs)
        return;
    PartitionUpdate fullDiffRepair = null;
    if (buildFullDiff && repairs[repairs.length - 1] != null)
        fullDiffRepair = repairs[repairs.length - 1].build();
    Map<Replica, Mutation> mutations = Maps.newHashMapWithExpectedSize(writePlan.contacts().size());
    ObjectIntHashMap<InetAddressAndPort> sourceIds = new ObjectIntHashMap<>(((repairs.length + 1) * 4) / 3);
    for (int i = 0; i < readPlan.contacts().size(); ++i) sourceIds.put(readPlan.contacts().get(i).endpoint(), 1 + i);
    for (Replica replica : writePlan.contacts()) {
        PartitionUpdate update = null;
        int i = -1 + sourceIds.get(replica.endpoint());
        if (i < 0)
            update = fullDiffRepair;
        else if (repairs[i] != null)
            update = repairs[i].build();
        Mutation mutation = BlockingReadRepairs.createRepairMutation(update, readPlan.consistencyLevel(), replica.endpoint(), false);
        if (mutation == null)
            continue;
        mutations.put(replica, mutation);
    }
    readRepair.repairPartition(partitionKey, mutations, writePlan);
}
Also used: InetAddressAndPort (org.apache.cassandra.locator.InetAddressAndPort), ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap), Mutation (org.apache.cassandra.db.Mutation), Replica (org.apache.cassandra.locator.Replica), PartitionUpdate (org.apache.cassandra.db.partitions.PartitionUpdate)
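
Here the zero default is used as a sentinel: read-plan endpoints are stored with 1-based positions, so get() on a replica that was never a read source returns 0 and the derived index becomes -1. A standalone sketch of that pattern (the class name and addresses are made up for illustration):

import com.carrotsearch.hppc.ObjectIntHashMap;

public class ZeroDefaultSentinelSketch {
    public static void main(String[] args) {
        // Store positions 1-based so the implicit default of 0 (returned for absent keys)
        // can mean "this endpoint was not a read source".
        String[] readEndpoints = { "10.0.0.1", "10.0.0.2" };
        ObjectIntHashMap<String> sourceIds = new ObjectIntHashMap<>(readEndpoints.length);
        for (int i = 0; i < readEndpoints.length; i++) {
            sourceIds.put(readEndpoints[i], 1 + i);
        }
        // A write-only replica yields 0 from get(), so the index becomes -1.
        int i = -1 + sourceIds.get("10.0.0.9");
        System.out.println(i < 0 ? "not a read source" : "read source at index " + i);
    }
}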

Example 3 with ObjectIntHashMap

Use of com.carrotsearch.hppc.ObjectIntHashMap in project crate by crate.

In class AwarenessAllocationDecider, method underCapacity:

private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, boolean moveToNode) {
    if (awarenessAttributes.isEmpty()) {
        return allocation.decision(Decision.YES, NAME, "allocation awareness is not enabled, set cluster setting [%s] to enable it", CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey());
    }
    IndexMetadata indexMetadata = allocation.metadata().getIndexSafe(shardRouting.index());
    // 1 for primary
    int shardCount = indexMetadata.getNumberOfReplicas() + 1;
    for (String awarenessAttribute : awarenessAttributes) {
        // the node the shard exists on must be associated with an awareness attribute
        if (node.node().getAttributes().containsKey(awarenessAttribute) == false) {
            return allocation.decision(Decision.NO, NAME, "node does not contain the awareness attribute [%s]; required attributes cluster setting [%s=%s]", awarenessAttribute, CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey(), allocation.debugDecision() ? Strings.collectionToCommaDelimitedString(awarenessAttributes) : null);
        }
        // build attr_value -> nodes map
        ObjectIntHashMap<String> nodesPerAttribute = allocation.routingNodes().nodesPerAttributesCounts(awarenessAttribute);
        // build the count of shards per attribute value
        ObjectIntHashMap<String> shardPerAttribute = new ObjectIntHashMap<>();
        for (ShardRouting assignedShard : allocation.routingNodes().assignedShards(shardRouting.shardId())) {
            if (assignedShard.started() || assignedShard.initializing()) {
                // Note: this also counts relocation targets as that will be the new location of the shard.
                // Relocation sources should not be counted as the shard is moving away
                RoutingNode routingNode = allocation.routingNodes().node(assignedShard.currentNodeId());
                shardPerAttribute.addTo(routingNode.node().getAttributes().get(awarenessAttribute), 1);
            }
        }
        if (moveToNode) {
            if (shardRouting.assignedToNode()) {
                String nodeId = shardRouting.relocating() ? shardRouting.relocatingNodeId() : shardRouting.currentNodeId();
                if (node.nodeId().equals(nodeId) == false) {
                    // we work on different nodes, move counts around
                    shardPerAttribute.putOrAdd(allocation.routingNodes().node(nodeId).node().getAttributes().get(awarenessAttribute), 0, -1);
                    shardPerAttribute.addTo(node.node().getAttributes().get(awarenessAttribute), 1);
                }
            } else {
                shardPerAttribute.addTo(node.node().getAttributes().get(awarenessAttribute), 1);
            }
        }
        int numberOfAttributes = nodesPerAttribute.size();
        List<String> fullValues = forcedAwarenessAttributes.get(awarenessAttribute);
        if (fullValues != null) {
            for (String fullValue : fullValues) {
                if (shardPerAttribute.containsKey(fullValue) == false) {
                    numberOfAttributes++;
                }
            }
        }
        // TODO should we remove ones that are not part of full list?
        final int currentNodeCount = shardPerAttribute.get(node.node().getAttributes().get(awarenessAttribute));
        // ceil(shardCount/numberOfAttributes)
        final int maximumNodeCount = (shardCount + numberOfAttributes - 1) / numberOfAttributes;
        if (currentNodeCount > maximumNodeCount) {
            return allocation.decision(Decision.NO, NAME, "there are too many copies of the shard allocated to nodes with attribute [%s], there are [%d] total configured " + "shard copies for this shard id and [%d] total attribute values, expected the allocated shard count per " + "attribute [%d] to be less than or equal to the upper bound of the required number of shards per attribute [%d]", awarenessAttribute, shardCount, numberOfAttributes, currentNodeCount, maximumNodeCount);
        }
    }
    return allocation.decision(Decision.YES, NAME, "node meets all awareness attribute requirements");
}
Also used: ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap), RoutingNode (org.elasticsearch.cluster.routing.RoutingNode), IndexMetadata (org.elasticsearch.cluster.metadata.IndexMetadata), ShardRouting (org.elasticsearch.cluster.routing.ShardRouting)
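
The decider combines addTo with putOrAdd(key, putValue, incrementValue), which inserts putValue for a missing key and otherwise adds incrementValue, so decrementing a counter that does not exist yet stores 0 rather than -1. A small illustrative sketch (the zone names and class name are invented, not from CrateDB):

import com.carrotsearch.hppc.ObjectIntHashMap;

public class MoveCountSketch {
    public static void main(String[] args) {
        ObjectIntHashMap<String> shardPerZone = new ObjectIntHashMap<>();
        shardPerZone.addTo("zone-a", 3);
        shardPerZone.addTo("zone-b", 2);
        // Simulate relocating one shard copy from zone-a to zone-b:
        // the existing zone-a entry is decremented, the target zone incremented.
        shardPerZone.putOrAdd("zone-a", 0, -1);
        shardPerZone.addTo("zone-b", 1);
        System.out.println("zone-a = " + shardPerZone.get("zone-a")); // 2
        System.out.println("zone-b = " + shardPerZone.get("zone-b")); // 3
        // For a zone with no counted shards yet, putOrAdd stores the putValue (0), not -1.
        shardPerZone.putOrAdd("zone-c", 0, -1);
        System.out.println("zone-c = " + shardPerZone.get("zone-c")); // 0
    }
}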

Example 4 with ObjectIntHashMap

Use of com.carrotsearch.hppc.ObjectIntHashMap in project elasticsearch by elastic.

In class AwarenessAllocationDecider, method underCapacity:

private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, boolean moveToNode) {
    if (awarenessAttributes.length == 0) {
        return allocation.decision(Decision.YES, NAME, "allocation awareness is not enabled, set cluster setting [%s] to enable it", CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey());
    }
    IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index());
    // 1 for primary
    int shardCount = indexMetaData.getNumberOfReplicas() + 1;
    for (String awarenessAttribute : awarenessAttributes) {
        // the node the shard exists on must be associated with an awareness attribute
        if (!node.node().getAttributes().containsKey(awarenessAttribute)) {
            return allocation.decision(Decision.NO, NAME, "node does not contain the awareness attribute [%s]; required attributes cluster setting [%s=%s]", awarenessAttribute, CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey(), allocation.debugDecision() ? Strings.arrayToCommaDelimitedString(awarenessAttributes) : null);
        }
        // build attr_value -> nodes map
        ObjectIntHashMap<String> nodesPerAttribute = allocation.routingNodes().nodesPerAttributesCounts(awarenessAttribute);
        // build the count of shards per attribute value
        ObjectIntHashMap<String> shardPerAttribute = new ObjectIntHashMap<>();
        for (ShardRouting assignedShard : allocation.routingNodes().assignedShards(shardRouting.shardId())) {
            if (assignedShard.started() || assignedShard.initializing()) {
                // Note: this also counts relocation targets as that will be the new location of the shard.
                // Relocation sources should not be counted as the shard is moving away
                RoutingNode routingNode = allocation.routingNodes().node(assignedShard.currentNodeId());
                shardPerAttribute.addTo(routingNode.node().getAttributes().get(awarenessAttribute), 1);
            }
        }
        if (moveToNode) {
            if (shardRouting.assignedToNode()) {
                String nodeId = shardRouting.relocating() ? shardRouting.relocatingNodeId() : shardRouting.currentNodeId();
                if (!node.nodeId().equals(nodeId)) {
                    // we work on different nodes, move counts around
                    shardPerAttribute.putOrAdd(allocation.routingNodes().node(nodeId).node().getAttributes().get(awarenessAttribute), 0, -1);
                    shardPerAttribute.addTo(node.node().getAttributes().get(awarenessAttribute), 1);
                }
            } else {
                shardPerAttribute.addTo(node.node().getAttributes().get(awarenessAttribute), 1);
            }
        }
        int numberOfAttributes = nodesPerAttribute.size();
        String[] fullValues = forcedAwarenessAttributes.get(awarenessAttribute);
        if (fullValues != null) {
            for (String fullValue : fullValues) {
                if (!shardPerAttribute.containsKey(fullValue)) {
                    numberOfAttributes++;
                }
            }
        }
        // TODO should we remove ones that are not part of full list?
        int averagePerAttribute = shardCount / numberOfAttributes;
        int totalLeftover = shardCount % numberOfAttributes;
        int requiredCountPerAttribute;
        if (averagePerAttribute == 0) {
            // if we have more attributes values than shard count, no leftover
            totalLeftover = 0;
            requiredCountPerAttribute = 1;
        } else {
            requiredCountPerAttribute = averagePerAttribute;
        }
        int leftoverPerAttribute = totalLeftover == 0 ? 0 : 1;
        int currentNodeCount = shardPerAttribute.get(node.node().getAttributes().get(awarenessAttribute));
        // if we are above with leftover, then we know we are not good, even with mod
        if (currentNodeCount > (requiredCountPerAttribute + leftoverPerAttribute)) {
            return allocation.decision(Decision.NO, NAME, "there are too many copies of the shard allocated to nodes with attribute [%s], there are [%d] total configured " + "shard copies for this shard id and [%d] total attribute values, expected the allocated shard count per " + "attribute [%d] to be less than or equal to the upper bound of the required number of shards per attribute [%d]", awarenessAttribute, shardCount, numberOfAttributes, currentNodeCount, requiredCountPerAttribute + leftoverPerAttribute);
        }
        // all is well, we are below or same as average
        if (currentNodeCount <= requiredCountPerAttribute) {
            continue;
        }
    }
    return allocation.decision(Decision.YES, NAME, "node meets all awareness attribute requirements");
}
Also used: ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap), RoutingNode (org.elasticsearch.cluster.routing.RoutingNode), ShardRouting (org.elasticsearch.cluster.routing.ShardRouting), IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData)
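
This is an older revision of the decider shown in Example 3: the average-plus-leftover arithmetic here and the integer-ceiling expression used there both compute ceil(shardCount / numberOfAttributes) as the upper bound. A quick, self-contained check of that equivalence (illustrative only, not part of either project):

public class UpperBoundCheck {
    public static void main(String[] args) {
        for (int shardCount = 1; shardCount <= 12; shardCount++) {
            for (int numberOfAttributes = 1; numberOfAttributes <= 6; numberOfAttributes++) {
                // Newer formulation (Example 3): integer ceiling.
                int ceilBound = (shardCount + numberOfAttributes - 1) / numberOfAttributes;
                // Older formulation (this example): average plus one leftover slot.
                int average = shardCount / numberOfAttributes;
                int leftover = shardCount % numberOfAttributes;
                int required = (average == 0) ? 1 : average;
                int leftoverPer = (average == 0 || leftover == 0) ? 0 : 1;
                int legacyBound = required + leftoverPer;
                if (ceilBound != legacyBound) {
                    throw new AssertionError(shardCount + "/" + numberOfAttributes);
                }
            }
        }
        System.out.println("both formulations agree: ceil(shardCount / numberOfAttributes)");
    }
}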

Example 5 with ObjectIntHashMap

Use of com.carrotsearch.hppc.ObjectIntHashMap in project elasticsearch by elastic.

In class GetTermVectorsIT, method createString:

private String createString(String[] tokens, Map<String, List<BytesRef>> payloads, int encoding, char delimiter) {
    String resultString = "";
    ObjectIntHashMap<String> payloadCounter = new ObjectIntHashMap<>();
    for (String token : tokens) {
        if (!payloadCounter.containsKey(token)) {
            payloadCounter.putIfAbsent(token, 0);
        } else {
            payloadCounter.put(token, payloadCounter.get(token) + 1);
        }
        resultString = resultString + token;
        BytesRef payload = payloads.get(token).get(payloadCounter.get(token));
        if (payload.length > 0) {
            resultString = resultString + delimiter;
            switch(encoding) {
                case 0:
                    {
                        resultString = resultString + Float.toString(PayloadHelper.decodeFloat(payload.bytes, payload.offset));
                        break;
                    }
                case 1:
                    {
                        resultString = resultString + Integer.toString(PayloadHelper.decodeInt(payload.bytes, payload.offset));
                        break;
                    }
                case 2:
                    {
                        resultString = resultString + payload.utf8ToString();
                        break;
                    }
                default:
                    {
                        throw new ElasticsearchException("unsupported encoding type");
                    }
            }
        }
        resultString = resultString + " ";
    }
    return resultString;
}
Also used: ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap), ElasticsearchException (org.elasticsearch.ElasticsearchException), BytesRef (org.apache.lucene.util.BytesRef)
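
The containsKey/putIfAbsent/put sequence above maintains a 0-based occurrence index per token. Assuming HPPC's putOrAdd returns the value associated with the key after the call, the same bookkeeping can be written more compactly; a hypothetical sketch (class name and tokens are illustrative):

import com.carrotsearch.hppc.ObjectIntHashMap;

public class OccurrenceIndexSketch {
    public static void main(String[] args) {
        ObjectIntHashMap<String> payloadCounter = new ObjectIntHashMap<>();
        String[] tokens = { "the", "quick", "the", "the" };
        for (String token : tokens) {
            // putOrAdd stores 0 for a first occurrence and otherwise adds 1,
            // returning the value now held for the key: a 0-based occurrence index,
            // matching the containsKey/putIfAbsent/put sequence in the test above.
            int occurrence = payloadCounter.putOrAdd(token, 0, 1);
            System.out.println(token + " #" + occurrence);
        }
    }
}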

Aggregations

ObjectIntHashMap (com.carrotsearch.hppc.ObjectIntHashMap): 8
ShardRouting (org.elasticsearch.cluster.routing.ShardRouting): 5
ClusterHealthResponse (org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse): 3
ClusterState (org.elasticsearch.cluster.ClusterState): 3
IndexRoutingTable (org.elasticsearch.cluster.routing.IndexRoutingTable): 3
IndexShardRoutingTable (org.elasticsearch.cluster.routing.IndexShardRoutingTable): 3
Settings (org.elasticsearch.common.settings.Settings): 3
RoutingNode (org.elasticsearch.cluster.routing.RoutingNode): 2
Mutation (org.apache.cassandra.db.Mutation): 1
PartitionUpdate (org.apache.cassandra.db.partitions.PartitionUpdate): 1
InetAddressAndPort (org.apache.cassandra.locator.InetAddressAndPort): 1
Replica (org.apache.cassandra.locator.Replica): 1
BytesRef (org.apache.lucene.util.BytesRef): 1
ElasticsearchException (org.elasticsearch.ElasticsearchException): 1
IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData): 1
IndexMetadata (org.elasticsearch.cluster.metadata.IndexMetadata): 1
AbstractStage (teetime.framework.AbstractStage): 1
Traverser (teetime.framework.Traverser): 1