
Example 11 with Increment

Use of org.apache.hadoop.hbase.client.Increment in the Apache hbase project.

From the class ProtobufUtil, method toCheckAndMutate:

public static CheckAndMutate toCheckAndMutate(ClientProtos.Condition condition, List<Mutation> mutations) throws IOException {
    assert mutations.size() > 0;
    byte[] row = condition.getRow().toByteArray();
    CheckAndMutate.Builder builder = CheckAndMutate.newBuilder(row);
    Filter filter = condition.hasFilter() ? ProtobufUtil.toFilter(condition.getFilter()) : null;
    if (filter != null) {
        builder.ifMatches(filter);
    } else {
        builder.ifMatches(condition.getFamily().toByteArray(), condition.getQualifier().toByteArray(), CompareOperator.valueOf(condition.getCompareType().name()), ProtobufUtil.toComparator(condition.getComparator()).getValue());
    }
    TimeRange timeRange = condition.hasTimeRange() ? ProtobufUtil.toTimeRange(condition.getTimeRange()) : TimeRange.allTime();
    builder.timeRange(timeRange);
    try {
        if (mutations.size() == 1) {
            Mutation m = mutations.get(0);
            if (m instanceof Put) {
                return builder.build((Put) m);
            } else if (m instanceof Delete) {
                return builder.build((Delete) m);
            } else if (m instanceof Increment) {
                return builder.build((Increment) m);
            } else if (m instanceof Append) {
                return builder.build((Append) m);
            } else {
                throw new DoNotRetryIOException("Unsupported mutate type: " + m.getClass().getSimpleName().toUpperCase());
            }
        } else {
            return builder.build(new RowMutations(mutations.get(0).getRow()).add(mutations));
        }
    } catch (IllegalArgumentException e) {
        throw new DoNotRetryIOException(e.getMessage());
    }
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), Put (org.apache.hadoop.hbase.client.Put), RowMutations (org.apache.hadoop.hbase.client.RowMutations), TimeRange (org.apache.hadoop.hbase.io.TimeRange), Append (org.apache.hadoop.hbase.client.Append), Filter (org.apache.hadoop.hbase.filter.Filter), Increment (org.apache.hadoop.hbase.client.Increment), Mutation (org.apache.hadoop.hbase.client.Mutation)
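The method above picks a `build(...)` overload by the concrete type of the single mutation, and only falls back to wrapping everything in a `RowMutations` when more than one mutation is present. A minimal self-contained sketch of that dispatch pattern (the `Put`/`Delete`/`Increment`/`Append` classes here are hypothetical stand-ins, not the real HBase client types):

```java
import java.util.List;

public class DispatchSketch {
    static class Mutation {}
    static class Put extends Mutation {}
    static class Delete extends Mutation {}
    static class Increment extends Mutation {}
    static class Append extends Mutation {}

    // Returns a label for which build(...) overload would be chosen.
    static String dispatch(List<Mutation> mutations) {
        if (mutations.size() == 1) {
            Mutation m = mutations.get(0);
            if (m instanceof Put) return "build(Put)";
            if (m instanceof Delete) return "build(Delete)";
            if (m instanceof Increment) return "build(Increment)";
            if (m instanceof Append) return "build(Append)";
            throw new IllegalArgumentException(
                "Unsupported mutate type: " + m.getClass().getSimpleName().toUpperCase());
        }
        // More than one mutation: batch them under a single row.
        return "build(RowMutations)";
    }

    public static void main(String[] args) {
        System.out.println(dispatch(List.of(new Increment())));          // build(Increment)
        System.out.println(dispatch(List.of(new Put(), new Delete()))); // build(RowMutations)
    }
}
```

Note that, as in the real method, the unsupported-type error is thrown only on the single-mutation path; the multi-mutation path defers validation to the `RowMutations` batch.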

Example 12 with Increment

Use of org.apache.hadoop.hbase.client.Increment in the Apache hbase project.

From the class ProtobufUtil, method toIncrement:

/**
 * Convert a protocol buffer Mutate to an Increment
 *
 * @param proto the protocol buffer Mutate to convert
 * @return the converted client Increment
 * @throws IOException
 */
public static Increment toIncrement(final MutationProto proto, final CellScanner cellScanner) throws IOException {
    MutationType type = proto.getMutateType();
    assert type == MutationType.INCREMENT : type.name();
    Increment increment = toDelta((Bytes row) -> new Increment(row.get(), row.getOffset(), row.getLength()), Increment::add, proto, cellScanner);
    if (proto.hasTimeRange()) {
        TimeRange timeRange = toTimeRange(proto.getTimeRange());
        increment.setTimeRange(timeRange.getMin(), timeRange.getMax());
    }
    return increment;
}
Also used : Bytes(org.apache.hadoop.hbase.util.Bytes) TimeRange(org.apache.hadoop.hbase.io.TimeRange) MutationType(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto.MutationType) Increment(org.apache.hadoop.hbase.client.Increment)
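The `toDelta(...)` call above is parameterized by a factory lambda (how to construct the target mutation from the row bytes) and a method reference (how to add each decoded cell), so one generic conversion loop can serve both `Increment` and `Append`. A self-contained sketch of that shape, with `DeltaCell`/`IncrementSketch` as made-up stand-ins for the HBase types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Function;

public class ToDeltaSketch {
    record DeltaCell(String qualifier, long amount) {}

    static class IncrementSketch {
        final byte[] row;
        final List<DeltaCell> cells = new ArrayList<>();
        IncrementSketch(byte[] row) { this.row = row; }
        void add(DeltaCell c) { cells.add(c); }
    }

    // Same shape as ProtobufUtil.toDelta: build the mutation via the factory,
    // then funnel every decoded cell through the supplied adder.
    static <T> T toDelta(Function<byte[], T> factory, BiConsumer<T, DeltaCell> adder,
                         byte[] row, List<DeltaCell> decodedCells) {
        T mutation = factory.apply(row);
        for (DeltaCell c : decodedCells) {
            adder.accept(mutation, c);
        }
        return mutation;
    }

    public static void main(String[] args) {
        // Mirrors: toDelta((Bytes row) -> new Increment(...), Increment::add, ...)
        IncrementSketch inc = toDelta(IncrementSketch::new, IncrementSketch::add,
            "row1".getBytes(), List.of(new DeltaCell("cf:counter", 1L)));
        System.out.println(inc.cells.size()); // 1
    }
}
```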

Example 13 with Increment

Use of org.apache.hadoop.hbase.client.Increment in the Apache hbase project.

From the class RequestConverter, method buildNoDataRegionAction:

/**
 * @return whether the rowMutations contains an Increment or an Append
 */
private static boolean buildNoDataRegionAction(final RowMutations rowMutations, final List<CellScannable> cells, long nonce, final RegionAction.Builder regionActionBuilder, final ClientProtos.Action.Builder actionBuilder, final MutationProto.Builder mutationBuilder) throws IOException {
    boolean ret = false;
    for (Mutation mutation : rowMutations.getMutations()) {
        mutationBuilder.clear();
        MutationProto mp;
        if (mutation instanceof Increment || mutation instanceof Append) {
            mp = ProtobufUtil.toMutationNoData(getMutationType(mutation), mutation, mutationBuilder, nonce);
            ret = true;
        } else {
            mp = ProtobufUtil.toMutationNoData(getMutationType(mutation), mutation, mutationBuilder);
        }
        cells.add(mutation);
        actionBuilder.clear();
        regionActionBuilder.addAction(actionBuilder.setMutation(mp).build());
    }
    return ret;
}
Also used: Append (org.apache.hadoop.hbase.client.Append), Increment (org.apache.hadoop.hbase.client.Increment), Mutation (org.apache.hadoop.hbase.client.Mutation), MutationProto (org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.MutationProto)
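Notice that the loop above cannot short-circuit with something like `anyMatch`: every mutation must still be serialized into the region action, so the "saw an Increment or Append" fact is accumulated in a flag while the loop runs to completion. A stripped-down sketch of that pattern (stand-in classes, not the HBase API):

```java
import java.util.ArrayList;
import java.util.List;

public class NonceFlagSketch {
    static class Mutation {}
    static class Put extends Mutation {}
    static class Increment extends Mutation {}
    static class Append extends Mutation {}

    // Serializes every mutation (into 'serialized' here) and returns whether
    // any of them was nonce-bearing, i.e. an Increment or an Append.
    static boolean processAll(List<Mutation> mutations, List<String> serialized) {
        boolean needsNonce = false;
        for (Mutation m : mutations) {
            if (m instanceof Increment || m instanceof Append) {
                needsNonce = true; // record it, but keep processing the rest
                serialized.add(m.getClass().getSimpleName() + "+nonce");
            } else {
                serialized.add(m.getClass().getSimpleName());
            }
        }
        return needsNonce;
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        boolean nonce = processAll(List.of(new Put(), new Increment()), out);
        System.out.println(nonce + " " + out.size()); // true 2
    }
}
```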

Example 14 with Increment

Use of org.apache.hadoop.hbase.client.Increment in the Apache hbase project.

From the class RequestConverter, method buildNoDataRegionActions:

/**
 * Create a protocol buffer multirequest with NO data for a list of actions (data is carried
 * otherwise than via protobuf).  This means it just notes attributes, whether to write the
 * WAL, etc., and the presence in protobuf serves as place holder for the data which is
 * coming along otherwise.  Note that Get is different.  It does not contain 'data' and is always
 * carried by protobuf.  We return references to the data by adding them to the passed in
 * <code>data</code> param.
 * <p> Propagates Actions original index.
 * <p> The passed in multiRequestBuilder will be populated with region actions.
 * @param regionName The region name of the actions.
 * @param actions The actions that are grouped by the same region name.
 * @param cells Place to stuff references to actual data.
 * @param multiRequestBuilder The multiRequestBuilder to be populated with region actions.
 * @param regionActionBuilder regionActionBuilder to be used to build region action.
 * @param actionBuilder actionBuilder to be used to build action.
 * @param mutationBuilder mutationBuilder to be used to build mutation.
 * @param nonceGroup nonceGroup to be applied.
 * @param indexMap Map of created RegionAction to the original index for a
 *   RowMutations/CheckAndMutate within the original list of actions
 * @throws IOException
 */
public static void buildNoDataRegionActions(final byte[] regionName, final Iterable<Action> actions, final List<CellScannable> cells, final MultiRequest.Builder multiRequestBuilder, final RegionAction.Builder regionActionBuilder, final ClientProtos.Action.Builder actionBuilder, final MutationProto.Builder mutationBuilder, long nonceGroup, final Map<Integer, Integer> indexMap) throws IOException {
    regionActionBuilder.clear();
    RegionAction.Builder builder = getRegionActionBuilderWithRegion(regionActionBuilder, regionName);
    ClientProtos.CoprocessorServiceCall.Builder cpBuilder = null;
    boolean hasNonce = false;
    List<Action> rowMutationsList = new ArrayList<>();
    List<Action> checkAndMutates = new ArrayList<>();
    for (Action action : actions) {
        Row row = action.getAction();
        actionBuilder.clear();
        actionBuilder.setIndex(action.getOriginalIndex());
        mutationBuilder.clear();
        if (row instanceof Get) {
            Get g = (Get) row;
            builder.addAction(actionBuilder.setGet(ProtobufUtil.toGet(g)));
        } else if (row instanceof Put) {
            buildNoDataRegionAction((Put) row, cells, builder, actionBuilder, mutationBuilder);
        } else if (row instanceof Delete) {
            buildNoDataRegionAction((Delete) row, cells, builder, actionBuilder, mutationBuilder);
        } else if (row instanceof Append) {
            buildNoDataRegionAction((Append) row, cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (row instanceof Increment) {
            buildNoDataRegionAction((Increment) row, cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (row instanceof RegionCoprocessorServiceExec) {
            RegionCoprocessorServiceExec exec = (RegionCoprocessorServiceExec) row;
            // DUMB COPY!!! FIX!!! Done to copy from c.g.p.ByteString to shaded ByteString.
            org.apache.hbase.thirdparty.com.google.protobuf.ByteString value = org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations.unsafeWrap(exec.getRequest().toByteArray());
            if (cpBuilder == null) {
                cpBuilder = ClientProtos.CoprocessorServiceCall.newBuilder();
            } else {
                cpBuilder.clear();
            }
            builder.addAction(actionBuilder.setServiceCall(cpBuilder.setRow(UnsafeByteOperations.unsafeWrap(exec.getRow())).setServiceName(exec.getMethod().getService().getFullName()).setMethodName(exec.getMethod().getName()).setRequest(value)));
        } else if (row instanceof RowMutations) {
            rowMutationsList.add(action);
        } else if (row instanceof CheckAndMutate) {
            checkAndMutates.add(action);
        } else {
            throw new DoNotRetryIOException("Multi doesn't support " + row.getClass().getName());
        }
    }
    if (builder.getActionCount() > 0) {
        multiRequestBuilder.addRegionAction(builder.build());
    }
    // We maintain a map to keep track of this RegionAction and the original Action index.
    for (Action action : rowMutationsList) {
        builder.clear();
        getRegionActionBuilderWithRegion(builder, regionName);
        boolean hasIncrementOrAppend = buildNoDataRegionAction((RowMutations) action.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
        if (hasIncrementOrAppend) {
            hasNonce = true;
        }
        builder.setAtomic(true);
        multiRequestBuilder.addRegionAction(builder.build());
        // This rowMutations region action is at (multiRequestBuilder.getRegionActionCount() - 1)
        // in the overall multiRequest.
        indexMap.put(multiRequestBuilder.getRegionActionCount() - 1, action.getOriginalIndex());
    }
    // Action index.
    for (Action action : checkAndMutates) {
        builder.clear();
        getRegionActionBuilderWithRegion(builder, regionName);
        CheckAndMutate cam = (CheckAndMutate) action.getAction();
        builder.setCondition(ProtobufUtil.toCondition(cam.getRow(), cam.getFamily(), cam.getQualifier(), cam.getCompareOp(), cam.getValue(), cam.getFilter(), cam.getTimeRange()));
        if (cam.getAction() instanceof Put) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Put) cam.getAction(), cells, builder, actionBuilder, mutationBuilder);
        } else if (cam.getAction() instanceof Delete) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Delete) cam.getAction(), cells, builder, actionBuilder, mutationBuilder);
        } else if (cam.getAction() instanceof Increment) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Increment) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (cam.getAction() instanceof Append) {
            actionBuilder.clear();
            mutationBuilder.clear();
            buildNoDataRegionAction((Append) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            hasNonce = true;
        } else if (cam.getAction() instanceof RowMutations) {
            boolean hasIncrementOrAppend = buildNoDataRegionAction((RowMutations) cam.getAction(), cells, action.getNonce(), builder, actionBuilder, mutationBuilder);
            if (hasIncrementOrAppend) {
                hasNonce = true;
            }
            builder.setAtomic(true);
        } else {
            throw new DoNotRetryIOException("CheckAndMutate doesn't support " + cam.getAction().getClass().getName());
        }
        multiRequestBuilder.addRegionAction(builder.build());
        // This CheckAndMutate region action is at (multiRequestBuilder.getRegionActionCount() - 1)
        // in the overall multiRequest.
        indexMap.put(multiRequestBuilder.getRegionActionCount() - 1, action.getOriginalIndex());
    }
    if (!multiRequestBuilder.hasNonceGroup() && hasNonce) {
        multiRequestBuilder.setNonceGroup(nonceGroup);
    }
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), Action (org.apache.hadoop.hbase.client.Action), RegionAction (org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.RegionAction), DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException), ByteString (org.apache.hbase.thirdparty.com.google.protobuf.ByteString), ArrayList (java.util.ArrayList), CheckAndMutate (org.apache.hadoop.hbase.client.CheckAndMutate), Put (org.apache.hadoop.hbase.client.Put), RegionCoprocessorServiceExec (org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec), RowMutations (org.apache.hadoop.hbase.client.RowMutations), Append (org.apache.hadoop.hbase.client.Append), Get (org.apache.hadoop.hbase.client.Get), Increment (org.apache.hadoop.hbase.client.Increment), Row (org.apache.hadoop.hbase.client.Row)
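A key mechanism in the method above is the `indexMap`: each `RowMutations`/`CheckAndMutate` gets its own `RegionAction` appended to the multi-request, and the map records the position it landed at, `(getRegionActionCount() - 1)`, against the caller's original action index, so responses can later be matched back to the right action. A tiny self-contained sketch of just that bookkeeping (a `List<String>` stands in for the multi-request builder):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexMapSketch {
    // For each atomic action (given here only by its original index), append a
    // region action and record: position-in-multi-request -> original index.
    static Map<Integer, Integer> buildIndexMap(List<Integer> originalIndexes) {
        List<String> multiRequest = new ArrayList<>(); // stands in for the builder
        Map<Integer, Integer> indexMap = new HashMap<>();
        for (int originalIndex : originalIndexes) {
            multiRequest.add("RegionAction");          // builder.build() stand-in
            // The action just added sits at (size - 1) in the overall request.
            indexMap.put(multiRequest.size() - 1, originalIndex);
        }
        return indexMap;
    }

    public static void main(String[] args) {
        // Two atomic actions that were originally at positions 7 and 3.
        System.out.println(buildIndexMap(List.of(7, 3))); // {0=7, 1=3}
    }
}
```

The same `- 1` arithmetic appears verbatim in the real method because `addRegionAction` has already been called when the map entry is written.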

Example 15 with Increment

Use of org.apache.hadoop.hbase.client.Increment in the Apache beam project.

From the class HBaseMutationCoderTest, method testMutationEncoding:

@Test
public void testMutationEncoding() throws Exception {
    Mutation put = new Put("1".getBytes(StandardCharsets.UTF_8));
    CoderProperties.structuralValueDecodeEncodeEqual(CODER, put);
    Mutation delete = new Delete("1".getBytes(StandardCharsets.UTF_8));
    CoderProperties.structuralValueDecodeEncodeEqual(CODER, delete);
    Mutation increment = new Increment("1".getBytes(StandardCharsets.UTF_8));
    thrown.expect(IllegalArgumentException.class);
    thrown.expectMessage("Only Put and Delete are supported");
    CoderProperties.coderDecodeEncodeEqual(CODER, increment);
}
Also used: Delete (org.apache.hadoop.hbase.client.Delete), Increment (org.apache.hadoop.hbase.client.Increment), Mutation (org.apache.hadoop.hbase.client.Mutation), Put (org.apache.hadoop.hbase.client.Put), Test (org.junit.Test)
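The test asserts that the Beam coder round-trips `Put` and `Delete` but throws for `Increment`, since a non-idempotent mutation could be double-applied if a bundle is retried. A self-contained sketch of that restriction (stand-in classes, not Beam's actual `HBaseMutationCoder`):

```java
public class MutationCoderSketch {
    static class Mutation {}
    static class Put extends Mutation {}
    static class Delete extends Mutation {}
    static class Increment extends Mutation {}

    // Encodes only idempotent mutations; rejects everything else up front.
    static byte[] encode(Mutation m) {
        if (m instanceof Put) return new byte[] {'P'};
        if (m instanceof Delete) return new byte[] {'D'};
        // Increment/Append are non-idempotent: replaying the encoded value on
        // a retry would apply the delta twice, so the coder refuses them.
        throw new IllegalArgumentException("Only Put and Delete are supported");
    }

    public static void main(String[] args) {
        System.out.println((char) encode(new Put())[0]);    // P
        System.out.println((char) encode(new Delete())[0]); // D
        try {
            encode(new Increment());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Only Put and Delete are supported
        }
    }
}
```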

Aggregations

Increment (org.apache.hadoop.hbase.client.Increment): 81, Test (org.junit.Test): 42, Put (org.apache.hadoop.hbase.client.Put): 31, Append (org.apache.hadoop.hbase.client.Append): 25, Result (org.apache.hadoop.hbase.client.Result): 25, Delete (org.apache.hadoop.hbase.client.Delete): 21, Get (org.apache.hadoop.hbase.client.Get): 19, IOException (java.io.IOException): 16, TableName (org.apache.hadoop.hbase.TableName): 15, Table (org.apache.hadoop.hbase.client.Table): 15, ArrayList (java.util.ArrayList): 14, Cell (org.apache.hadoop.hbase.Cell): 11, DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException): 11, CheckAndMutateResult (org.apache.hadoop.hbase.client.CheckAndMutateResult): 9, Mutation (org.apache.hadoop.hbase.client.Mutation): 9, RowMutations (org.apache.hadoop.hbase.client.RowMutations): 9, List (java.util.List): 8, Map (java.util.Map): 8, Scan (org.apache.hadoop.hbase.client.Scan): 7, KeyValue (org.apache.hadoop.hbase.KeyValue): 5