Example 1 with CounterMutation

Use of org.apache.cassandra.db.CounterMutation in project cassandra by apache.

From the class StorageProxy, the method counterWriteTask:

private static Runnable counterWriteTask(final IMutation mutation, final ReplicaPlan.ForTokenWrite replicaPlan, final AbstractWriteResponseHandler<IMutation> responseHandler, final String localDataCenter) {
    return new DroppableRunnable(Verb.COUNTER_MUTATION_REQ) {

        @Override
        public void runMayThrow() throws OverloadedException, WriteTimeoutException {
            assert mutation instanceof CounterMutation;
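            // Counters are read-modify-write: apply the counter mutation locally on this replica,
            // count that local apply as one response, and replicate the resulting plain Mutation
            // to the remaining replicas (hinting those that are unavailable).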
            Mutation result = ((CounterMutation) mutation).applyCounterMutation();
            responseHandler.onResponse(null);
            sendToHintedReplicas(result, replicaPlan, responseHandler, localDataCenter, Stage.COUNTER_MUTATION);
        }
    };
}
Also used: CounterMutation (org.apache.cassandra.db.CounterMutation), Mutation (org.apache.cassandra.db.Mutation), IMutation (org.apache.cassandra.db.IMutation)
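
As a quick reference for what the task body does, here is a minimal standalone sketch, assuming a prepared counter Mutation (named update here) and a fixed ConsistencyLevel.ONE; both names are illustrative choices, not taken from the source:

// Hypothetical helper, not part of StorageProxy: wrap a regular Mutation carrying counter
// updates in a CounterMutation and apply it locally, returning the resolved Mutation that
// would then be shipped to the other replicas (as sendToHintedReplicas does above).
static Mutation applyCounterLocally(Mutation update)
{
    CounterMutation cm = new CounterMutation(update, ConsistencyLevel.ONE);
    return cm.applyCounterMutation();
}

The sketch uses only the two CounterMutation calls visible elsewhere on this page: the (Mutation, ConsistencyLevel) constructor and applyCounterMutation().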

Example 2 with CounterMutation

Use of org.apache.cassandra.db.CounterMutation in project cassandra by apache.

From the class StorageProxy, the method mutate:

/**
 * Use this method to have these Mutations applied
 * across all replicas. This method will take care
 * of the possibility of a replica being down and hint
 * the data across to some other replica.
 *
 * @param mutations the mutations to be applied across the replicas
 * @param consistencyLevel the consistency level for the operation
 * @param queryStartNanoTime the value of nanoTime() when the query started to be processed
 */
public static void mutate(List<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, long queryStartNanoTime) throws UnavailableException, OverloadedException, WriteTimeoutException, WriteFailureException {
    Tracing.trace("Determining replicas for mutation");
    final String localDataCenter = DatabaseDescriptor.getEndpointSnitch().getLocalDatacenter();
    long startTime = nanoTime();
    List<AbstractWriteResponseHandler<IMutation>> responseHandlers = new ArrayList<>(mutations.size());
    WriteType plainWriteType = mutations.size() <= 1 ? WriteType.SIMPLE : WriteType.UNLOGGED_BATCH;
    try {
        for (IMutation mutation : mutations) {
            if (hasLocalMutation(mutation))
                writeMetrics.localRequests.mark();
            else
                writeMetrics.remoteRequests.mark();
            if (mutation instanceof CounterMutation)
                responseHandlers.add(mutateCounter((CounterMutation) mutation, localDataCenter, queryStartNanoTime));
            else
                responseHandlers.add(performWrite(mutation, consistencyLevel, localDataCenter, standardWritePerformer, null, plainWriteType, queryStartNanoTime));
        }
        // upgrade to full quorum any failed cheap quorums
        for (int i = 0; i < mutations.size(); ++i) {
            // at the moment, only non-counter writes support cheap quorums
            if (!(mutations.get(i) instanceof CounterMutation))
                responseHandlers.get(i).maybeTryAdditionalReplicas(mutations.get(i), standardWritePerformer, localDataCenter);
        }
        // wait for writes; throws TimeoutException if necessary
        for (AbstractWriteResponseHandler<IMutation> responseHandler : responseHandlers)
            responseHandler.get();
    } catch (WriteTimeoutException | WriteFailureException ex) {
        if (consistencyLevel == ConsistencyLevel.ANY) {
            hintMutations(mutations);
        } else {
            if (ex instanceof WriteFailureException) {
                writeMetrics.failures.mark();
                writeMetricsForLevel(consistencyLevel).failures.mark();
                WriteFailureException fe = (WriteFailureException) ex;
                Tracing.trace("Write failure; received {} of {} required replies, failed {} requests", fe.received, fe.blockFor, fe.failureReasonByEndpoint.size());
            } else {
                writeMetrics.timeouts.mark();
                writeMetricsForLevel(consistencyLevel).timeouts.mark();
                WriteTimeoutException te = (WriteTimeoutException) ex;
                Tracing.trace("Write timeout; received {} of {} required replies", te.received, te.blockFor);
            }
            throw ex;
        }
    } catch (UnavailableException e) {
        writeMetrics.unavailables.mark();
        writeMetricsForLevel(consistencyLevel).unavailables.mark();
        Tracing.trace("Unavailable");
        throw e;
    } catch (OverloadedException e) {
        writeMetrics.unavailables.mark();
        writeMetricsForLevel(consistencyLevel).unavailables.mark();
        Tracing.trace("Overloaded");
        throw e;
    } finally {
        long latency = nanoTime() - startTime;
        writeMetrics.addNano(latency);
        writeMetricsForLevel(consistencyLevel).addNano(latency);
        updateCoordinatorWriteLatencyTableMetric(mutations, latency);
    }
}
Also used: IMutation (org.apache.cassandra.db.IMutation), WriteType (org.apache.cassandra.db.WriteType), ArrayList (java.util.ArrayList), UnavailableException (org.apache.cassandra.exceptions.UnavailableException), OverloadedException (org.apache.cassandra.exceptions.OverloadedException), Hint (org.apache.cassandra.hints.Hint), CounterMutation (org.apache.cassandra.db.CounterMutation), CasWriteTimeoutException (org.apache.cassandra.exceptions.CasWriteTimeoutException), WriteTimeoutException (org.apache.cassandra.exceptions.WriteTimeoutException), WriteFailureException (org.apache.cassandra.exceptions.WriteFailureException)
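
A minimal caller sketch for mutate(), assuming a single already-built Mutation named m; the QUORUM consistency level and the use of System.nanoTime() for queryStartNanoTime are assumptions for illustration:

// Hypothetical caller, not from the source. Any CounterMutation in the list is routed
// through mutateCounter() by the loop above; plain mutations go through performWrite().
List<IMutation> mutations = Collections.singletonList(m);
StorageProxy.mutate(mutations, ConsistencyLevel.QUORUM, System.nanoTime());

(java.util.Collections and java.util.List are assumed to be imported; the exceptions mutate() declares are unchecked, so no try block is required here.)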

Example 3 with CounterMutation

Use of org.apache.cassandra.db.CounterMutation in project cassandra by apache.

From the class SingleTableUpdatesCollector, the method toMutations:

/**
 * Returns a collection containing all the mutations.
 * @return a collection containing all the mutations.
 */
public List<IMutation> toMutations() {
    List<IMutation> ms = new ArrayList<>(puBuilders.size());
    for (PartitionUpdate.Builder builder : puBuilders.values()) {
        IMutation mutation;
        if (metadata.isVirtual())
            mutation = new VirtualMutation(builder.build());
        else if (metadata.isCounter())
            mutation = new CounterMutation(new Mutation(builder.build()), counterConsistencyLevel);
        else
            mutation = new Mutation(builder.build());
        mutation.validateIndexedColumns();
        ms.add(mutation);
    }
    return ms;
}
Also used: CounterMutation (org.apache.cassandra.db.CounterMutation), IMutation (org.apache.cassandra.db.IMutation), VirtualMutation (org.apache.cassandra.db.virtual.VirtualMutation), Mutation (org.apache.cassandra.db.Mutation), ArrayList (java.util.ArrayList), PartitionUpdate (org.apache.cassandra.db.partitions.PartitionUpdate)
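
One plausible way to consume the returned list is to hand it to the coordinator write path shown in Example 2; the collector variable and the LOCAL_QUORUM level are assumptions in this sketch:

// Hypothetical follow-up, not from the source: apply everything the collector gathered.
// CounterMutation entries are picked out by StorageProxy.mutate() and sent via mutateCounter().
List<IMutation> mutations = collector.toMutations();
StorageProxy.mutate(mutations, ConsistencyLevel.LOCAL_QUORUM, System.nanoTime());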

Example 4 with CounterMutation

Use of org.apache.cassandra.db.CounterMutation in project eiger by wlloyd.

From the class BatchMutateTransactionUtil, the method convertToInternalMutations:

public static List<IMutation> convertToInternalMutations(String keyspace, Map<ByteBuffer, Map<String, List<Mutation>>> mutation_map, ByteBuffer coordinatorKey) throws InvalidRequestException {
    // the timestamp and localCommitTime are set when we apply the transaction, so we'll set them to invalid values here
    long timestamp = Long.MIN_VALUE;
    long localCommitTime = Long.MIN_VALUE;
    List<IMutation> rowMutations = new ArrayList<IMutation>();
    // Note, permission was checked when the thrift interface received the transaction.
    for (Map.Entry<ByteBuffer, Map<String, List<Mutation>>> mutationEntry : mutation_map.entrySet()) {
        ByteBuffer key = mutationEntry.getKey();
        // We need to separate row mutation for standard cf and counter cf (that will be encapsulated in a
        // CounterMutation) because it doesn't follow the same code path
        RowMutation rmStandard = null;
        RowMutation rmCounter = null;
        Map<String, List<Mutation>> columnFamilyToMutations = mutationEntry.getValue();
        for (Map.Entry<String, List<Mutation>> columnFamilyMutations : columnFamilyToMutations.entrySet()) {
            String cfName = columnFamilyMutations.getKey();
            CFMetaData metadata = ThriftValidation.validateColumnFamily(keyspace, cfName);
            ThriftValidation.validateKey(metadata, key);
            RowMutation rm;
            if (metadata.getDefaultValidator().isCommutative()) {
                ThriftValidation.validateCommutativeForWrite(metadata, ConsistencyLevel.ONE);
                rmCounter = rmCounter == null ? new RowMutation(keyspace, key) : rmCounter;
                rm = rmCounter;
            } else {
                rmStandard = rmStandard == null ? new RowMutation(keyspace, key) : rmStandard;
                rm = rmStandard;
            }
            for (Mutation mutation : columnFamilyMutations.getValue()) {
                ThriftValidation.validateMutation(metadata, mutation);
                if (mutation.deletion != null) {
                    rm.deleteColumnOrSuperColumn(cfName, mutation.deletion, timestamp, localCommitTime, coordinatorKey);
                }
                if (mutation.column_or_supercolumn != null) {
                    rm.addColumnOrSuperColumn(cfName, mutation.column_or_supercolumn, timestamp, localCommitTime, coordinatorKey);
                }
            }
        }
        if (rmStandard != null && !rmStandard.isEmpty())
            rowMutations.add(rmStandard);
        if (rmCounter != null && !rmCounter.isEmpty())
            rowMutations.add(new org.apache.cassandra.db.CounterMutation(rmCounter, ConsistencyLevel.ONE));
    }
    logger.debug("Mutations are {}", rowMutations);
    return rowMutations;
}
Also used: IMutation (org.apache.cassandra.db.IMutation), CounterMutation (org.apache.cassandra.db.CounterMutation), RowMutation (org.apache.cassandra.db.RowMutation), Mutation (org.apache.cassandra.thrift.Mutation), CFMetaData (org.apache.cassandra.config.CFMetaData), ByteBuffer (java.nio.ByteBuffer), ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap)
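
A hedged caller sketch for the Eiger path above; keyspace, mutation_map, and coordinatorKey are assumed to come from the incoming thrift transaction, and InvalidRequestException is checked, so the enclosing method must declare or handle it:

// Hypothetical caller, not from the source: counter column families come back wrapped in
// org.apache.cassandra.db.CounterMutation, standard ones as plain RowMutation.
List<IMutation> internal =
    BatchMutateTransactionUtil.convertToInternalMutations(keyspace, mutation_map, coordinatorKey);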

Example 5 with CounterMutation

Use of org.apache.cassandra.db.CounterMutation in project eiger by wlloyd.

From the class UpdateStatement, the method mutationForKey:

/**
 * Compute a row mutation for a single key
 *
 * @param keyspace working keyspace
 * @param key key to change
 * @param metadata information about CF
 * @param timestamp global timestamp to use for every key mutation
 *
 * @param clientState
 * @return row mutation
 *
 * @throws InvalidRequestException on the wrong request
 */
private IMutation mutationForKey(String keyspace, ByteBuffer key, CFMetaData metadata, Long timestamp, ClientState clientState, List<String> variables) throws InvalidRequestException {
    AbstractType<?> comparator = getComparator(keyspace);
    // if true we need to wrap RowMutation into CounterMutation
    boolean hasCounterColumn = false;
    RowMutation rm = new RowMutation(keyspace, key);
    for (Map.Entry<Term, Operation> column : getColumns().entrySet()) {
        ByteBuffer colName = column.getKey().getByteBuffer(comparator, variables);
        Operation op = column.getValue();
        if (op.isUnary()) {
            if (hasCounterColumn)
                throw new InvalidRequestException("Mix of commutative and non-commutative operations is not allowed.");
            ByteBuffer colValue = op.a.getByteBuffer(getValueValidator(keyspace, colName), variables);
            validateColumn(metadata, colName, colValue);
            rm.add(new QueryPath(columnFamily, null, colName), colValue, (timestamp == null) ? getTimestamp(clientState) : timestamp, getTimeToLive());
        } else {
            hasCounterColumn = true;
            if (!column.getKey().getText().equals(op.a.getText()))
                throw new InvalidRequestException("Only expressions like X = X + <long> are supported.");
            long value;
            try {
                value = Long.parseLong(op.b.getText());
            } catch (NumberFormatException e) {
                throw new InvalidRequestException(String.format("'%s' is an invalid value, should be a long.", op.b.getText()));
            }
            rm.addCounter(new QueryPath(columnFamily, null, colName), value, timestamp, timestamp, null);
        }
    }
    return (hasCounterColumn) ? new CounterMutation(rm, getConsistencyLevel()) : rm;
}
Also used: ByteBuffer (java.nio.ByteBuffer), QueryPath (org.apache.cassandra.db.filter.QueryPath), CounterMutation (org.apache.cassandra.db.CounterMutation), RowMutation (org.apache.cassandra.db.RowMutation), InvalidRequestException (org.apache.cassandra.thrift.InvalidRequestException)
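
A minimal sketch of the wrapping rule this method implements, using only the Eiger-era API visible above (including RowMutation.addCounter with its additional arguments); keyspace, key, columnFamily, colName, and timestamp are placeholders, and ConsistencyLevel.ONE stands in for getConsistencyLevel():

// Hypothetical sketch, not from the source: a commutative (counter) update must travel as a
// CounterMutation, whereas a plain RowMutation would be returned unwrapped.
RowMutation rm = new RowMutation(keyspace, key);
rm.addCounter(new QueryPath(columnFamily, null, colName), 1L, timestamp, timestamp, null);
IMutation toApply = new CounterMutation(rm, ConsistencyLevel.ONE);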

Aggregations

CounterMutation (org.apache.cassandra.db.CounterMutation): 5
IMutation (org.apache.cassandra.db.IMutation): 4
ByteBuffer (java.nio.ByteBuffer): 2
ArrayList (java.util.ArrayList): 2
Mutation (org.apache.cassandra.db.Mutation): 2
RowMutation (org.apache.cassandra.db.RowMutation): 2
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 1
CFMetaData (org.apache.cassandra.config.CFMetaData): 1
WriteType (org.apache.cassandra.db.WriteType): 1
QueryPath (org.apache.cassandra.db.filter.QueryPath): 1
PartitionUpdate (org.apache.cassandra.db.partitions.PartitionUpdate): 1
VirtualMutation (org.apache.cassandra.db.virtual.VirtualMutation): 1
CasWriteTimeoutException (org.apache.cassandra.exceptions.CasWriteTimeoutException): 1
OverloadedException (org.apache.cassandra.exceptions.OverloadedException): 1
UnavailableException (org.apache.cassandra.exceptions.UnavailableException): 1
WriteFailureException (org.apache.cassandra.exceptions.WriteFailureException): 1
WriteTimeoutException (org.apache.cassandra.exceptions.WriteTimeoutException): 1
Hint (org.apache.cassandra.hints.Hint): 1
InvalidRequestException (org.apache.cassandra.thrift.InvalidRequestException): 1
Mutation (org.apache.cassandra.thrift.Mutation): 1