Example 46 with DataInputViewStreamWrapper

Use of org.apache.flink.core.memory.DataInputViewStreamWrapper in project flink by apache.

From the class RocksDBReducingState, the method mergeNamespaces:

@Override
public void mergeNamespaces(N target, Collection<N> sources) throws Exception {
    if (sources == null || sources.isEmpty()) {
        return;
    }
    // cache key and namespace
    final K key = backend.getCurrentKey();
    final int keyGroup = backend.getCurrentKeyGroupIndex();
    try {
        V current = null;
        // merge the sources to the target
        for (N source : sources) {
            if (source != null) {
                writeKeyWithGroupAndNamespace(keyGroup, key, source, keySerializationStream, keySerializationDataOutputView);
                final byte[] sourceKey = keySerializationStream.toByteArray();
                final byte[] valueBytes = backend.db.get(columnFamily, sourceKey);
                if (valueBytes != null) {
                    V value = valueSerializer.deserialize(new DataInputViewStreamWrapper(new ByteArrayInputStreamWithPos(valueBytes)));
                    if (current != null) {
                        current = reduceFunction.reduce(current, value);
                    } else {
                        current = value;
                    }
                }
            }
        }
        // if something came out of merging the sources, merge it or write it to the target
        if (current != null) {
            // create the target full-binary-key 
            writeKeyWithGroupAndNamespace(keyGroup, key, target, keySerializationStream, keySerializationDataOutputView);
            final byte[] targetKey = keySerializationStream.toByteArray();
            final byte[] targetValueBytes = backend.db.get(columnFamily, targetKey);
            if (targetValueBytes != null) {
                // target also had a value, merge
                V value = valueSerializer.deserialize(new DataInputViewStreamWrapper(new ByteArrayInputStreamWithPos(targetValueBytes)));
                current = reduceFunction.reduce(current, value);
            }
            // serialize the resulting value
            keySerializationStream.reset();
            valueSerializer.serialize(current, keySerializationDataOutputView);
            // write the resulting value
            backend.db.put(columnFamily, writeOptions, targetKey, keySerializationStream.toByteArray());
        }
    } catch (Exception e) {
        throw new Exception("Error while merging state in RocksDB", e);
    }
}
Also used: ByteArrayInputStreamWithPos (org.apache.flink.core.memory.ByteArrayInputStreamWithPos), DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper), IOException (java.io.IOException), RocksDBException (org.rocksdb.RocksDBException)
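The merge above follows one pattern: read each source's serialized bytes, deserialize, fold with the reduce function, then fold in the target's existing value and write the result back. A minimal JDK-only sketch of that pattern is below; the `HashMap` stands in for RocksDB, `DataInputStream`/`DataOutputStream` stand in for Flink's `DataInputViewStreamWrapper`/`DataOutputViewStreamWrapper`, and all class and method names here are illustrative, not Flink API.

```java
import java.io.*;
import java.util.*;
import java.util.function.BinaryOperator;

public class MergeSketch {
    // Deserialize an int value, as a TypeSerializer would read it from a DataInputView.
    static int readInt(byte[] bytes) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(bytes)).readInt();
    }

    // Serialize an int value to bytes, the write-side counterpart of readInt.
    static byte[] writeInt(int v) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(v);
        return bos.toByteArray();
    }

    // Fold the values stored under the source keys into the target key.
    static void mergeNamespaces(Map<String, byte[]> db, String target,
                                Collection<String> sources,
                                BinaryOperator<Integer> reduce) throws IOException {
        Integer current = null;
        for (String source : sources) {
            byte[] valueBytes = db.get(source);
            if (valueBytes != null) {
                int value = readInt(valueBytes);
                current = (current == null) ? value : reduce.apply(current, value);
            }
        }
        if (current != null) {
            // The target may already hold a value; fold it in before writing back.
            byte[] targetBytes = db.get(target);
            if (targetBytes != null) {
                current = reduce.apply(current, readInt(targetBytes));
            }
            db.put(target, writeInt(current));
        }
    }
}
```

With a sum as the reduce function, merging sources holding 3 and 4 into a target holding 5 leaves 12 under the target key.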

Example 47 with DataInputViewStreamWrapper

Use of org.apache.flink.core.memory.DataInputViewStreamWrapper in project flink by apache.

From the class RocksDBValueState, the method value:

@Override
public V value() {
    try {
        writeCurrentKeyWithGroupAndNamespace();
        byte[] key = keySerializationStream.toByteArray();
        byte[] valueBytes = backend.db.get(columnFamily, key);
        if (valueBytes == null) {
            return stateDesc.getDefaultValue();
        }
        return valueSerializer.deserialize(new DataInputViewStreamWrapper(new ByteArrayInputStream(valueBytes)));
    } catch (IOException | RocksDBException e) {
        throw new RuntimeException("Error while retrieving data from RocksDB.", e);
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), ByteArrayInputStream (java.io.ByteArrayInputStream), IOException (java.io.IOException), DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper)
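The lookup above is a get-or-default over serialized bytes: a missing key yields the descriptor's default, otherwise the stored bytes are deserialized through the stream wrapper. A self-contained sketch using only the JDK, with `DataInputStream` in the role of `DataInputViewStreamWrapper` and a `HashMap` in place of the RocksDB column family (names are illustrative):

```java
import java.io.*;
import java.util.*;

public class ValueStateSketch {
    private final Map<String, byte[]> db = new HashMap<>();
    private final int defaultValue;

    ValueStateSketch(int defaultValue) {
        this.defaultValue = defaultValue;
    }

    // Serialize the value and store it under the key, like RocksDBValueState.update().
    void update(String key, int value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(value);
        db.put(key, bos.toByteArray());
    }

    // Mirrors value(): a missing key returns the default, otherwise the bytes
    // are deserialized through a stream wrapper over the stored array.
    int value(String key) throws IOException {
        byte[] valueBytes = db.get(key);
        if (valueBytes == null) {
            return defaultValue;
        }
        return new DataInputStream(new ByteArrayInputStream(valueBytes)).readInt();
    }
}
```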

Example 48 with DataInputViewStreamWrapper

Use of org.apache.flink.core.memory.DataInputViewStreamWrapper in project flink by apache.

From the class StateDescriptor, the method readObject:

private void readObject(final ObjectInputStream in) throws IOException, ClassNotFoundException {
    // read the non-transient fields
    in.defaultReadObject();
    // read the default value field
    boolean hasDefaultValue = in.readBoolean();
    if (hasDefaultValue) {
        int size = in.readInt();
        byte[] buffer = new byte[size];
        in.readFully(buffer);
        try (ByteArrayInputStream bais = new ByteArrayInputStream(buffer);
            DataInputViewStreamWrapper inView = new DataInputViewStreamWrapper(bais)) {
            defaultValue = serializer.deserialize(inView);
        } catch (Exception e) {
            throw new IOException("Unable to deserialize default value.", e);
        }
    } else {
        defaultValue = null;
    }
}
Also used: ByteArrayInputStream (java.io.ByteArrayInputStream), IOException (java.io.IOException), DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper)
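The readObject above implies a wire layout for the default value: a presence flag, then a length, then the serialized bytes. The sketch below round-trips that layout with plain JDK streams; the writing side is an assumed counterpart inferred from the read logic (the actual StateDescriptor.writeObject is not shown in this example), and the int codec stands in for the descriptor's TypeSerializer.

```java
import java.io.*;

public class DefaultValueCodec {
    // Assumed write side: presence flag, then length-prefixed serialized bytes.
    static byte[] encode(Integer defaultValue) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        if (defaultValue == null) {
            out.writeBoolean(false);
        } else {
            out.writeBoolean(true);
            // Serialize the value into its own buffer so we know its length up front.
            ByteArrayOutputStream valueBuffer = new ByteArrayOutputStream();
            new DataOutputStream(valueBuffer).writeInt(defaultValue);
            byte[] buffer = valueBuffer.toByteArray();
            out.writeInt(buffer.length);
            out.write(buffer);
        }
        return bos.toByteArray();
    }

    // Read side, mirroring the readObject logic: flag, length, readFully, deserialize.
    static Integer decode(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        if (!in.readBoolean()) {
            return null;
        }
        int size = in.readInt();
        byte[] buffer = new byte[size];
        in.readFully(buffer);
        return new DataInputStream(new ByteArrayInputStream(buffer)).readInt();
    }
}
```

Length-prefixing lets the reader allocate the buffer and `readFully` it before handing the bytes to the serializer, so a deserialization failure cannot desynchronize the enclosing object stream.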

Example 49 with DataInputViewStreamWrapper

Use of org.apache.flink.core.memory.DataInputViewStreamWrapper in project flink by apache.

From the class BinaryInputFormat, the method createAndReadBlockInfo:

private BlockInfo createAndReadBlockInfo() throws IOException {
    BlockInfo blockInfo = new BlockInfo();
    if (this.splitLength > blockInfo.getInfoSize()) {
        // First read the block info containing the recordCount, the accumulatedRecordCount,
        // and the firstRecordStart offset of the current block. It is written at the end of
        // the block and has a fixed size, currently 3 * Long.SIZE.
        // TODO: seek not supported by compressed streams. Will throw exception
        this.stream.seek(this.splitStart + this.splitLength - blockInfo.getInfoSize());
        blockInfo.read(new DataInputViewStreamWrapper(this.stream));
    }
    return blockInfo;
}
Also used: DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper)
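The trick in createAndReadBlockInfo is that the block metadata is a fixed-size trailer: seek to end-of-split minus the trailer size, then read the three longs. A JDK-only sketch of that trailer layout over a byte array (the offset-based `ByteArrayInputStream` plays the role of the seek; `INFO_SIZE` and all names here are illustrative, not the Flink API):

```java
import java.io.*;

public class BlockTrailerSketch {
    // recordCount, accumulatedRecordCount, firstRecordStart: three longs at the block's end.
    static final int INFO_SIZE = 3 * Long.BYTES;

    // Append the payload, then the fixed-size metadata trailer.
    static byte[] writeBlock(byte[] payload, long recordCount,
                             long accumulatedRecordCount, long firstRecordStart)
            throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.write(payload);
        out.writeLong(recordCount);
        out.writeLong(accumulatedRecordCount);
        out.writeLong(firstRecordStart);
        return bos.toByteArray();
    }

    // "Seek" to blockLength - INFO_SIZE and read the three longs back.
    static long[] readBlockInfo(byte[] block) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(
                block, block.length - INFO_SIZE, INFO_SIZE));
        return new long[] { in.readLong(), in.readLong(), in.readLong() };
    }
}
```

Because the trailer has a fixed size, the reader needs no index to find it, which is exactly why the real code guards on `splitLength > blockInfo.getInfoSize()` and why seeking fails on compressed streams.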

Example 50 with DataInputViewStreamWrapper

Use of org.apache.flink.core.memory.DataInputViewStreamWrapper in project flink by apache.

From the class BinaryInputFormat, the method reopen:

@PublicEvolving
@Override
public void reopen(FileInputSplit split, Tuple2<Long, Long> state) throws IOException {
    Preconditions.checkNotNull(split, "reopen() cannot be called on a null split.");
    Preconditions.checkNotNull(state, "reopen() cannot be called with a null initial state.");
    try {
        this.open(split);
    } finally {
        this.blockInfo = this.createAndReadBlockInfo();
        long blockPos = state.f0;
        this.readRecords = state.f1;
        this.stream.seek(this.splitStart + blockPos);
        this.blockBasedInput = new BlockBasedInput(this.stream, (int) blockPos, this.splitLength);
        this.dataInputStream = new DataInputViewStreamWrapper(blockBasedInput);
    }
}
Also used: DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper), PublicEvolving (org.apache.flink.annotation.PublicEvolving)

Aggregations

DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper): 74
DataOutputViewStreamWrapper (org.apache.flink.core.memory.DataOutputViewStreamWrapper): 25
ByteArrayInputStream (java.io.ByteArrayInputStream): 23
IOException (java.io.IOException): 21
ByteArrayInputStreamWithPos (org.apache.flink.core.memory.ByteArrayInputStreamWithPos): 19
Test (org.junit.Test): 19
FSDataInputStream (org.apache.flink.core.fs.FSDataInputStream): 13
ByteArrayOutputStreamWithPos (org.apache.flink.core.memory.ByteArrayOutputStreamWithPos): 11
DataInputView (org.apache.flink.core.memory.DataInputView): 11
RocksDBException (org.rocksdb.RocksDBException): 10
KeyGroupStatePartitionStreamProvider (org.apache.flink.runtime.state.KeyGroupStatePartitionStreamProvider): 7
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 6
ArrayList (java.util.ArrayList): 6
InputStream (java.io.InputStream): 4
PipedInputStream (java.io.PipedInputStream): 4
PipedOutputStream (java.io.PipedOutputStream): 4
EOFException (java.io.EOFException): 3
ObjectInputStream (java.io.ObjectInputStream): 3
HashMap (java.util.HashMap): 3
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 3