
Example 6 with CompactionDescriptor

Use of org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor in project hbase by apache.

From the class ProtobufUtil, the method toCompactionDescriptor:

public static CompactionDescriptor toCompactionDescriptor(HRegionInfo info, byte[] regionName,
        byte[] family, List<Path> inputPaths, List<Path> outputPaths, Path storeDir) {
    // compaction descriptor contains relative paths.
    // input / output paths are relative to the store dir
    // store dir is relative to region dir
    CompactionDescriptor.Builder builder = CompactionDescriptor.newBuilder()
        .setTableName(UnsafeByteOperations.unsafeWrap(info.getTable().toBytes()))
        .setEncodedRegionName(UnsafeByteOperations.unsafeWrap(
            regionName == null ? info.getEncodedNameAsBytes() : regionName))
        .setFamilyName(UnsafeByteOperations.unsafeWrap(family))
        // make relative
        .setStoreHomeDir(storeDir.getName());
    for (Path inputPath : inputPaths) {
        //relative path
        builder.addCompactionInput(inputPath.getName());
    }
    for (Path outputPath : outputPaths) {
        builder.addCompactionOutput(outputPath.getName());
    }
    builder.setRegionName(UnsafeByteOperations.unsafeWrap(info.getRegionName()));
    return builder.build();
}
Also used: Path (org.apache.hadoop.fs.Path), CompactionDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor)
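
For orientation, here is a minimal, hedged sketch of calling this helper directly. The table name, file names, and store path below are hypothetical values invented for illustration; only the six-argument signature shown above is taken from the source. It demonstrates that the descriptor records bare file names (Path.getName()), not absolute paths.

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class CompactionDescriptorSketch {
    public static void main(String[] args) {
        // Hypothetical region and store layout, for illustration only.
        HRegionInfo info = new HRegionInfo(TableName.valueOf("t1"));
        Path storeDir = new Path("/hbase/data/default/t1/region0/cf");
        List<Path> inputs = Arrays.asList(
            new Path(storeDir, "hfile-a"), new Path(storeDir, "hfile-b"));
        List<Path> outputs = Arrays.asList(new Path(storeDir, "hfile-c"));

        // Passing null for regionName falls back to the encoded region name,
        // per the ternary in the builder above.
        CompactionDescriptor cd = ProtobufUtil.toCompactionDescriptor(
            info, null, Bytes.toBytes("cf"), inputs, outputs, storeDir);

        System.out.println(cd.getCompactionInputList()); // [hfile-a, hfile-b]
        System.out.println(cd.getStoreHomeDir());        // cf
    }
}

Keeping only relative names follows from the comments in the method itself: inputs and outputs are resolved against the store dir, and the store dir against the region dir, so the marker stays valid wherever the region directory lives.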

Example 7 with CompactionDescriptor

Use of org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor in project hbase by apache.

From the class HStore, the method writeCompactionWalRecord:

/**
   * Writes the compaction WAL record.
   * @param filesCompacted Files compacted (input).
   * @param newFiles Files produced by the compaction (output).
   */
private void writeCompactionWalRecord(Collection<StoreFile> filesCompacted,
        Collection<StoreFile> newFiles) throws IOException {
    // Nothing to record if the region has no WAL (e.g., the WAL is disabled).
    if (region.getWAL() == null) {
        return;
    }
    List<Path> inputPaths = new ArrayList<>(filesCompacted.size());
    for (StoreFile f : filesCompacted) {
        inputPaths.add(f.getPath());
    }
    List<Path> outputPaths = new ArrayList<>(newFiles.size());
    for (StoreFile f : newFiles) {
        outputPaths.add(f.getPath());
    }
    HRegionInfo info = this.region.getRegionInfo();
    // This overload takes no explicit region name; it delegates to the builder
    // shown in Example 6 with regionName == null.
    CompactionDescriptor compactionDescriptor = ProtobufUtil.toCompactionDescriptor(info,
        family.getName(), inputPaths, outputPaths, fs.getStoreDir(getFamily().getNameAsString()));
    // Fix reaching into Region to get the maxWaitForSeqId.
    // Does this method belong in Region altogether given it is making so many references up there?
    // Could be Region#writeCompactionMarker(compactionDescriptor);
    WALUtil.writeCompactionMarker(this.region.getWAL(), this.region.getReplicationScope(),
        this.region.getRegionInfo(), compactionDescriptor, this.region.getMVCC());
}
Also used: Path (org.apache.hadoop.fs.Path), ArrayList (java.util.ArrayList), CompactionDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor)
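
The marker written above travels through the WAL as serialized protobuf bytes. Below is a hedged sketch of that round trip: the field values are hypothetical, the real write path goes through WALUtil.writeCompactionMarker as shown, and the import path for UnsafeByteOperations is assumed to be HBase's shaded protobuf package. It only exercises the standard generated-protobuf API (toByteArray/parseFrom) to show that what a replaying region server parses back is exactly what was built.

import org.apache.hadoop.hbase.shaded.com.google.protobuf.UnsafeByteOperations;
import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class CompactionMarkerRoundTrip {
    public static void main(String[] args) throws Exception {
        // Hypothetical descriptor; real ones come from ProtobufUtil.toCompactionDescriptor.
        CompactionDescriptor written = CompactionDescriptor.newBuilder()
            .setTableName(UnsafeByteOperations.unsafeWrap(Bytes.toBytes("t1")))
            .setEncodedRegionName(UnsafeByteOperations.unsafeWrap(Bytes.toBytes("region0")))
            .setFamilyName(UnsafeByteOperations.unsafeWrap(Bytes.toBytes("cf")))
            .addCompactionInput("hfile-a")
            .addCompactionInput("hfile-b")
            .addCompactionOutput("hfile-c")
            .setStoreHomeDir("cf")
            .build();

        // Serialize as the WAL would persist it, then parse it back, as a
        // replaying region server would when it encounters the marker.
        byte[] bytes = written.toByteArray();
        CompactionDescriptor read = CompactionDescriptor.parseFrom(bytes);
        System.out.println(read.getCompactionInputList());  // [hfile-a, hfile-b]
        System.out.println(read.getCompactionOutputList()); // [hfile-c]
    }
}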

Aggregations

CompactionDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor): 7
Path (org.apache.hadoop.fs.Path): 5
ArrayList (java.util.ArrayList): 3
FileSystem (org.apache.hadoop.fs.FileSystem): 3
WAL (org.apache.hadoop.hbase.wal.WAL): 3
IOException (java.io.IOException): 2
Cell (org.apache.hadoop.hbase.Cell): 2
FlushDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor): 2
WALKey (org.apache.hadoop.hbase.wal.WALKey): 2
EOFException (java.io.EOFException): 1
InterruptedIOException (java.io.InterruptedIOException): 1
ParseException (java.text.ParseException): 1
List (java.util.List): 1
Configuration (org.apache.hadoop.conf.Configuration): 1
FileStatus (org.apache.hadoop.fs.FileStatus): 1
ByteBufferCell (org.apache.hadoop.hbase.ByteBufferCell): 1
DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException): 1
Admin (org.apache.hadoop.hbase.client.Admin): 1
Get (org.apache.hadoop.hbase.client.Get): 1
Put (org.apache.hadoop.hbase.client.Put): 1