
Example 1 with CheckAndMutate

Use of org.apache.hadoop.hbase.client.CheckAndMutate in project hbase by apache.

From the class RSRpcServices, the method checkAndMutate:

private CheckAndMutateResult checkAndMutate(HRegion region, OperationQuota quota, MutationProto mutation, CellScanner cellScanner, Condition condition, long nonceGroup, ActivePolicyEnforcement spaceQuota) throws IOException {
    long before = EnvironmentEdgeManager.currentTime();
    CheckAndMutate checkAndMutate = ProtobufUtil.toCheckAndMutate(condition, mutation, cellScanner);
    long nonce = mutation.hasNonce() ? mutation.getNonce() : HConstants.NO_NONCE;
    checkCellSizeLimit(region, (Mutation) checkAndMutate.getAction());
    spaceQuota.getPolicyEnforcement(region).check((Mutation) checkAndMutate.getAction());
    quota.addMutation((Mutation) checkAndMutate.getAction());
    CheckAndMutateResult result = null;
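    // A coprocessor may short-circuit the operation: if preCheckAndMutate returns a non-null
    // result, the region-level checkAndMutate below is skipped.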
    if (region.getCoprocessorHost() != null) {
        result = region.getCoprocessorHost().preCheckAndMutate(checkAndMutate);
    }
    if (result == null) {
        result = region.checkAndMutate(checkAndMutate, nonceGroup, nonce);
        if (region.getCoprocessorHost() != null) {
            result = region.getCoprocessorHost().postCheckAndMutate(checkAndMutate, result);
        }
    }
    MetricsRegionServer metricsRegionServer = server.getMetrics();
    if (metricsRegionServer != null) {
        long after = EnvironmentEdgeManager.currentTime();
        metricsRegionServer.updateCheckAndMutate(region.getRegionInfo().getTable(), after - before);
        MutationType type = mutation.getMutateType();
        switch(type) {
            case PUT:
                metricsRegionServer.updateCheckAndPut(region.getRegionInfo().getTable(), after - before);
                break;
            case DELETE:
                metricsRegionServer.updateCheckAndDelete(region.getRegionInfo().getTable(), after - before);
                break;
            default:
                break;
        }
    }
    return result;
}
Also used: MutationType (org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto.MutationType), CheckAndMutateResult (org.apache.hadoop.hbase.client.CheckAndMutateResult), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate)
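
For context, the server-side path above is what handles a client call such as Table.checkAndMutate. The following is a minimal client-side sketch, not part of the excerpt; the table name, column family, and values are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.CheckAndMutateResult;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndMutateClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) { // hypothetical table
            byte[] row = Bytes.toBytes("row1");
            byte[] family = Bytes.toBytes("cf");
            // Only apply the Put if cf:flag currently equals "old".
            CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
                .ifEquals(family, Bytes.toBytes("flag"), Bytes.toBytes("old"))
                .build(new Put(row).addColumn(family, Bytes.toBytes("flag"), Bytes.toBytes("new")));
            CheckAndMutateResult result = table.checkAndMutate(checkAndMutate);
            System.out.println("condition matched and mutation applied: " + result.isSuccess());
        }
    }
}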

Example 2 with CheckAndMutate

Use of org.apache.hadoop.hbase.client.CheckAndMutate in project hbase by apache.

From the class ProtobufUtil, the method toCheckAndMutate:

public static CheckAndMutate toCheckAndMutate(ClientProtos.Condition condition, List<Mutation> mutations) throws IOException {
    assert mutations.size() > 0;
    byte[] row = condition.getRow().toByteArray();
    CheckAndMutate.Builder builder = CheckAndMutate.newBuilder(row);
    Filter filter = condition.hasFilter() ? ProtobufUtil.toFilter(condition.getFilter()) : null;
    if (filter != null) {
        builder.ifMatches(filter);
    } else {
        builder.ifMatches(condition.getFamily().toByteArray(), condition.getQualifier().toByteArray(), CompareOperator.valueOf(condition.getCompareType().name()), ProtobufUtil.toComparator(condition.getComparator()).getValue());
    }
    TimeRange timeRange = condition.hasTimeRange() ? ProtobufUtil.toTimeRange(condition.getTimeRange()) : TimeRange.allTime();
    builder.timeRange(timeRange);
    try {
        if (mutations.size() == 1) {
            Mutation m = mutations.get(0);
            if (m instanceof Put) {
                return builder.build((Put) m);
            } else if (m instanceof Delete) {
                return builder.build((Delete) m);
            } else if (m instanceof Increment) {
                return builder.build((Increment) m);
            } else if (m instanceof Append) {
                return builder.build((Append) m);
            } else {
                throw new DoNotRetryIOException("Unsupported mutate type: " + m.getClass().getSimpleName().toUpperCase());
            }
        } else {
            return builder.build(new RowMutations(mutations.get(0).getRow()).add(mutations));
        }
    } catch (IllegalArgumentException e) {
        throw new DoNotRetryIOException(e.getMessage());
    }
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), Put (org.apache.hadoop.hbase.client.Put), RowMutations (org.apache.hadoop.hbase.client.RowMutations), TimeRange (org.apache.hadoop.hbase.io.TimeRange), Append (org.apache.hadoop.hbase.client.Append), Filter (org.apache.hadoop.hbase.filter.Filter), Increment (org.apache.hadoop.hbase.client.Increment), Mutation (org.apache.hadoop.hbase.client.Mutation)
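
On the client side, the two branches of toCheckAndMutate correspond to the filter form and the family/qualifier form of the CheckAndMutate builder. A small sketch of both forms (row, family, qualifier, and values are illustrative, not taken from the code above):

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.io.TimeRange;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndMutateBuilderSketch {
    public static void main(String[] args) {
        byte[] row = Bytes.toBytes("row1");
        byte[] family = Bytes.toBytes("cf");
        byte[] qualifier = Bytes.toBytes("q");

        // Family/qualifier form: apply the Put only if cf:q equals "expected".
        CheckAndMutate byColumn = CheckAndMutate.newBuilder(row)
            .ifMatches(family, qualifier, CompareOperator.EQUAL, Bytes.toBytes("expected"))
            .timeRange(TimeRange.allTime())
            .build(new Put(row).addColumn(family, qualifier, Bytes.toBytes("updated")));

        // Filter form: apply the Delete only if the filter matches the row.
        SingleColumnValueFilter filter = new SingleColumnValueFilter(
            family, qualifier, CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("expected")));
        CheckAndMutate byFilter = CheckAndMutate.newBuilder(row)
            .ifMatches(filter)
            .build(new Delete(row).addColumns(family, qualifier));
    }
}

Either form may also carry a time range, which is what the optional TimeRange in the protobuf Condition maps onto.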

Example 3 with CheckAndMutate

Use of org.apache.hadoop.hbase.client.CheckAndMutate in project hbase by apache.

From the class RequestConverter, the method buildNoDataRegionActions:

/**
 * Create a protocol buffer multi-request with NO data for a list of actions (the data is
 * carried outside of protobuf). The request only records attributes, whether to write the
 * WAL, etc.; its presence in protobuf serves as a placeholder for the data that travels
 * separately. Note that Get is different: it does not contain 'data' and is always carried
 * by protobuf. We return references to the data by adding them to the passed-in
 * <code>cells</code> param.
 * <p> Propagates each Action's original index.
 * <p> The passed in multiRequestBuilder will be populated with region actions.
 * @param regionName The region name of the actions.
 * @param actions The actions that are grouped by the same region name.
 * @param cells Place to stuff references to actual data.
 * @param multiRequestBuilder The multiRequestBuilder to be populated with region actions.
 * @param regionActionBuilder regionActionBuilder to be used to build region action.
 * @param actionBuilder actionBuilder to be used to build action.
 * @param mutationBuilder mutationBuilder to be used to build mutation.
 * @param nonceGroup nonceGroup to be applied.
 * @param indexMap Map of created RegionAction to the original index for a
 *   RowMutations/CheckAndMutate within the original list of actions
 * @throws IOException
 */
public static void buildNoDataRegionActions(final byte[] regionName, final Iterable<Action> actions, final List<CellScannable> cells, final MultiRequest.Builder multiRequestBuilder, final RegionAction.Builder regionActionBuilder, final ClientProtos.Action.Builder actionBuilder, final MutationProto.Builder mutationBuilder, long nonceGroup, final Map<Integer, Integer> indexMap) throws IOException {
    regionActionBuilder.clear();
    RegionAction.Builder builder = getRegionActionBuilderWithRegion(regionActionBuilder, regionName);
    ClientProtos.CoprocessorServiceCall.Builder cpBuilder = null;
    boolean hasNonce = false;
    List<Action> rowMutationsList = new ArrayList<>();
    List<Action> checkAndMutates = new ArrayList<>();
    for (Action action : actions) {
        Row row = action.getAction();
        actionBuilder.clear();
        actionBuilder.setIndex(action.getOriginalIndex());
        mutationBuilder.clear();
        if (row instanceof Get) {
            Get g = (Get) row;
            builder.addAction(actionBuilder.setGet(ProtobufUtil.toGet(g)));
        } else if (row instanceof Put) {
            buildNoDataRegionAction((Put) row, cells, builder, actionBuilder, mutationBuilder);
        } else if (row instanceof Delete) {
            buildNoDataRegionAction((Delete) row, cells, builder, actionBuilder, mutationBuilder);
        } else if (row instanceof Append) {
            buildNoDataRegionAction((Append) row, cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (row instanceof Increment) {
            buildNoDataRegionAction((Increment) row, cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (row instanceof RegionCoprocessorServiceExec) {
            RegionCoprocessorServiceExec exec = (RegionCoprocessorServiceExec) row;
            // DUMB COPY!!! FIX!!! Done to copy from c.g.p.ByteString to shaded ByteString.
            org.apache.hbase.thirdparty.com.google.protobuf.ByteString value = org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations.unsafeWrap(exec.getRequest().toByteArray());
            if (cpBuilder == null) {
                cpBuilder = ClientProtos.CoprocessorServiceCall.newBuilder();
            } else {
                cpBuilder.clear();
            }
            builder.addAction(actionBuilder.setServiceCall(cpBuilder.setRow(UnsafeByteOperations.unsafeWrap(exec.getRow())).setServiceName(exec.getMethod().getService().getFullName()).setMethodName(exec.getMethod().getName()).setRequest(value)));
        } else if (row instanceof RowMutations) {
            rowMutationsList.add(action);
        } else if (row instanceof CheckAndMutate) {
            checkAndMutates.add(action);
        } else {
            throw new DoNotRetryIOException("Multi doesn't support " + row.getClass().getName());
        }
    }
    if (builder.getActionCount() > 0) {
        multiRequestBuilder.addRegionAction(builder.build());
    }
    // We maintain a map to keep track of this RegionAction and the original Action index.
    for (Action action : rowMutationsList) {
        builder.clear();
        getRegionActionBuilderWithRegion(builder, regionName);
        boolean hasIncrementOrAppend = buildNoDataRegionAction((RowMutations) action.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
        if (hasIncrementOrAppend) {
            hasNonce = true;
        }
        builder.setAtomic(true);
        multiRequestBuilder.addRegionAction(builder.build());
        // This rowMutations region action is at (multiRequestBuilder.getRegionActionCount() - 1)
        // in the overall multiRequest.
        indexMap.put(multiRequestBuilder.getRegionActionCount() - 1, action.getOriginalIndex());
    }
    // As above, build one RegionAction per CheckAndMutate and record its position against the
    // original Action index.
    for (Action action : checkAndMutates) {
        builder.clear();
        getRegionActionBuilderWithRegion(builder, regionName);
        CheckAndMutate cam = (CheckAndMutate) action.getAction();
        builder.setCondition(ProtobufUtil.toCondition(cam.getRow(), cam.getFamily(), cam.getQualifier(), cam.getCompareOp(), cam.getValue(), cam.getFilter(), cam.getTimeRange()));
        if (cam.getAction() instanceof Put) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Put) cam.getAction(), cells, builder, actionBuilder, mutationBuilder);
        } else if (cam.getAction() instanceof Delete) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Delete) cam.getAction(), cells, builder, actionBuilder, mutationBuilder);
        } else if (cam.getAction() instanceof Increment) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Increment) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (cam.getAction() instanceof Append) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Append) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (cam.getAction() instanceof RowMutations) {
            boolean hasIncrementOrAppend = buildNoDataRegionAction((RowMutations) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            if (hasIncrementOrAppend) {
                hasNonce = true;
            }
            builder.setAtomic(true);
        } else {
            throw new DoNotRetryIOException("CheckAndMutate doesn't support " + cam.getAction().getClass().getName());
        }
        multiRequestBuilder.addRegionAction(builder.build());
        // This CheckAndMutate region action is at (multiRequestBuilder.getRegionActionCount() - 1)
        // in the overall multiRequest.
        indexMap.put(multiRequestBuilder.getRegionActionCount() - 1, action.getOriginalIndex());
    }
    if (!multiRequestBuilder.hasNonceGroup() && hasNonce) {
        multiRequestBuilder.setNonceGroup(nonceGroup);
    }
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), Action (org.apache.hadoop.hbase.client.Action), RegionAction (org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.RegionAction), DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException), ByteString (org.apache.hbase.thirdparty.com.google.protobuf.ByteString), ArrayList (java.util.ArrayList), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), Put (org.apache.hadoop.hbase.client.Put), RegionCoprocessorServiceExec (org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec), RowMutations (org.apache.hadoop.hbase.client.RowMutations), Append (org.apache.hadoop.hbase.client.Append), Get (org.apache.hadoop.hbase.client.Get), Increment (org.apache.hadoop.hbase.client.Increment), Row (org.apache.hadoop.hbase.client.Row)
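
The converter above is exercised when a client submits a mixed batch containing plain Gets and Puts together with a CheckAndMutate (and, similarly, RowMutations). A hedged sketch of such a batch follows; the connection setup and table name are hypothetical, and it assumes a client version (roughly 2.4 and later) in which Table.batch accepts CheckAndMutate actions:

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MixedBatchSketch {
    public static void main(String[] args) throws Exception {
        byte[] family = Bytes.toBytes("cf");
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("test_table"))) { // hypothetical table
            List<Row> actions = Arrays.asList(
                new Get(Bytes.toBytes("row1")),
                new Put(Bytes.toBytes("row2")).addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v")),
                // The CheckAndMutate ends up in its own RegionAction carrying a Condition,
                // which is what the checkAndMutates loop above builds.
                CheckAndMutate.newBuilder(Bytes.toBytes("row3"))
                    .ifNotExists(family, Bytes.toBytes("q"))
                    .build(new Put(Bytes.toBytes("row3")).addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v"))));
            Object[] results = new Object[actions.size()];
            table.batch(actions, results);
        }
    }
}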

Example 4 with CheckAndMutate

Use of org.apache.hadoop.hbase.client.CheckAndMutate in project hbase by apache.

From the class TestHRegion, the test testCheckAndRowMutations:

@Test
public void testCheckAndRowMutations() throws Throwable {
    final byte[] row = Bytes.toBytes("row");
    final byte[] q1 = Bytes.toBytes("q1");
    final byte[] q2 = Bytes.toBytes("q2");
    final byte[] q3 = Bytes.toBytes("q3");
    final byte[] q4 = Bytes.toBytes("q4");
    final String v1 = "v1";
    region = initHRegion(tableName, method, CONF, fam1);
    // Initial values
    region.batchMutate(new Mutation[] { new Put(row).addColumn(fam1, q2, Bytes.toBytes("toBeDeleted")), new Put(row).addColumn(fam1, q3, Bytes.toBytes(5L)), new Put(row).addColumn(fam1, q4, Bytes.toBytes("a")) });
    // Do CheckAndRowMutations
    CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row).ifNotExists(fam1, q1).build(new RowMutations(row).add(Arrays.asList(new Put(row).addColumn(fam1, q1, Bytes.toBytes(v1)), new Delete(row).addColumns(fam1, q2), new Increment(row).addColumn(fam1, q3, 1), new Append(row).addColumn(fam1, q4, Bytes.toBytes("b")))));
    CheckAndMutateResult result = region.checkAndMutate(checkAndMutate);
    assertTrue(result.isSuccess());
    assertEquals(6L, Bytes.toLong(result.getResult().getValue(fam1, q3)));
    assertEquals("ab", Bytes.toString(result.getResult().getValue(fam1, q4)));
    // Verify the value
    Result r = region.get(new Get(row));
    assertEquals(v1, Bytes.toString(r.getValue(fam1, q1)));
    assertNull(r.getValue(fam1, q2));
    assertEquals(6L, Bytes.toLong(r.getValue(fam1, q3)));
    assertEquals("ab", Bytes.toString(r.getValue(fam1, q4)));
    // Do CheckAndRowMutations again
    checkAndMutate = CheckAndMutate.newBuilder(row).ifNotExists(fam1, q1).build(new RowMutations(row).add(Arrays.asList(new Delete(row).addColumns(fam1, q1), new Put(row).addColumn(fam1, q2, Bytes.toBytes(v1)), new Increment(row).addColumn(fam1, q3, 1), new Append(row).addColumn(fam1, q4, Bytes.toBytes("b")))));
    result = region.checkAndMutate(checkAndMutate);
    assertFalse(result.isSuccess());
    assertNull(result.getResult());
    // Verify the value
    r = region.get(new Get(row));
    assertEquals(v1, Bytes.toString(r.getValue(fam1, q1)));
    assertNull(r.getValue(fam1, q2));
    assertEquals(6L, Bytes.toLong(r.getValue(fam1, q3)));
    assertEquals("ab", Bytes.toString(r.getValue(fam1, q4)));
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), Append (org.apache.hadoop.hbase.client.Append), CheckAndMutateResult (org.apache.hadoop.hbase.client.CheckAndMutateResult), Increment (org.apache.hadoop.hbase.client.Increment), Get (org.apache.hadoop.hbase.client.Get), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), ArgumentMatchers.anyString (org.mockito.ArgumentMatchers.anyString), ByteString (org.apache.hbase.thirdparty.com.google.protobuf.ByteString), Put (org.apache.hadoop.hbase.client.Put), RowMutations (org.apache.hadoop.hbase.client.RowMutations), Result (org.apache.hadoop.hbase.client.Result), Test (org.junit.Test)
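
The same scenario can be driven through the client API instead of calling HRegion directly. A minimal sketch, assuming a running cluster and a hypothetical table name:

import java.util.Arrays;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.CheckAndMutateResult;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndRowMutationsSketch {
    public static void main(String[] args) throws Exception {
        byte[] row = Bytes.toBytes("row");
        byte[] fam = Bytes.toBytes("cf");
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("test_table"))) { // hypothetical table
            // Apply the whole RowMutations atomically, but only if cf:q1 does not exist yet.
            CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
                .ifNotExists(fam, Bytes.toBytes("q1"))
                .build(new RowMutations(row).add(Arrays.asList(
                    new Put(row).addColumn(fam, Bytes.toBytes("q1"), Bytes.toBytes("v1")),
                    new Delete(row).addColumns(fam, Bytes.toBytes("q2")),
                    new Increment(row).addColumn(fam, Bytes.toBytes("q3"), 1),
                    new Append(row).addColumn(fam, Bytes.toBytes("q4"), Bytes.toBytes("b")))));
            CheckAndMutateResult result = table.checkAndMutate(checkAndMutate);
            System.out.println("applied atomically: " + result.isSuccess());
        }
    }
}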

Example 5 with CheckAndMutate

Use of org.apache.hadoop.hbase.client.CheckAndMutate in project hbase by apache.

From the class TableOperationSpanBuilder, the method unpackRowOperations:

private static Set<Operation> unpackRowOperations(final Row row) {
    final Set<Operation> ops = new HashSet<>();
    if (row instanceof CheckAndMutate) {
        final CheckAndMutate cam = (CheckAndMutate) row;
        ops.addAll(unpackRowOperations(cam));
    }
    if (row instanceof RowMutations) {
        final RowMutations mutations = (RowMutations) row;
        final List<Operation> operations = mutations.getMutations().stream().map(TableOperationSpanBuilder::valueFrom).collect(Collectors.toList());
        ops.addAll(operations);
    }
    return ops;
}
Also used: CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), Operation (org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation), HashSet (java.util.HashSet), RowMutations (org.apache.hadoop.hbase.client.RowMutations)
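
To make the unpacking concrete, here is a small, self-contained helper that applies the same idea; the class and method names are illustrative and not part of HBase. A CheckAndMutate is unwrapped to its inner action, and a RowMutations contributes one entry per mutation:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.client.RowMutations;

public class RowOperationNames {
    // Hypothetical helper mirroring the unpacking approach above: recurse into composite
    // actions and collect a simple name for each concrete mutation type.
    static Set<String> operationNames(Row row) {
        Set<String> names = new HashSet<>();
        if (row instanceof CheckAndMutate) {
            names.addAll(operationNames(((CheckAndMutate) row).getAction()));
        } else if (row instanceof RowMutations) {
            for (Mutation m : ((RowMutations) row).getMutations()) {
                names.addAll(operationNames(m));
            }
        } else if (row instanceof Put) {
            names.add("PUT");
        } else if (row instanceof Delete) {
            names.add("DELETE");
        } else if (row instanceof Increment) {
            names.add("INCREMENT");
        } else if (row instanceof Append) {
            names.add("APPEND");
        }
        return names;
    }
}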

Aggregations

CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate) - 8 uses
Append (org.apache.hadoop.hbase.client.Append) - 5 uses
Increment (org.apache.hadoop.hbase.client.Increment) - 5 uses
RowMutations (org.apache.hadoop.hbase.client.RowMutations) - 5 uses
DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException) - 4 uses
CheckAndMutateResult (org.apache.hadoop.hbase.client.CheckAndMutateResult) - 4 uses
Delete (org.apache.hadoop.hbase.client.Delete) - 4 uses
Put (org.apache.hadoop.hbase.client.Put) - 4 uses
Get (org.apache.hadoop.hbase.client.Get) - 3 uses
Mutation (org.apache.hadoop.hbase.client.Mutation) - 3 uses
MutationType (org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto.MutationType) - 3 uses
ArrayList (java.util.ArrayList) - 2 uses
Result (org.apache.hadoop.hbase.client.Result) - 2 uses
Filter (org.apache.hadoop.hbase.filter.Filter) - 2 uses
TimeRange (org.apache.hadoop.hbase.io.TimeRange) - 2 uses
ByteString (org.apache.hbase.thirdparty.com.google.protobuf.ByteString) - 2 uses
Test (org.junit.Test) - 2 uses
HashSet (java.util.HashSet) - 1 use
Action (org.apache.hadoop.hbase.client.Action) - 1 use
RegionCoprocessorServiceExec (org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec) - 1 use