Search in sources:

Example 1 with OrcInputStream

use of com.facebook.presto.orc.stream.OrcInputStream in project presto by prestodb.

In the class StripeReader, the method createValueStreams:

private Map<StreamId, ValueStream<?>> createValueStreams(Map<StreamId, Stream> streams, Map<StreamId, OrcInputStream> streamsData, List<ColumnEncoding> columnEncodings) {
    ImmutableMap.Builder<StreamId, ValueStream<?>> valueStreams = ImmutableMap.builder();
    for (Entry<StreamId, Stream> entry : streams.entrySet()) {
        StreamId streamId = entry.getKey();
        Stream stream = entry.getValue();
        ColumnEncodingKind columnEncoding = columnEncodings.get(stream.getColumn()).getColumnEncodingKind();
        // skip index and empty streams
        if (isIndexStream(stream) || stream.getLength() == 0) {
            continue;
        }
        OrcInputStream inputStream = streamsData.get(streamId);
        OrcTypeKind columnType = types.get(stream.getColumn()).getOrcTypeKind();
        valueStreams.put(streamId, ValueStreams.createValueStreams(streamId, inputStream, columnType, columnEncoding, stream.isUseVInts()));
    }
    return valueStreams.build();
}
Also used: OrcInputStream(com.facebook.presto.orc.stream.OrcInputStream) ValueStream(com.facebook.presto.orc.stream.ValueStream) Stream(com.facebook.presto.orc.metadata.Stream) InputStream(java.io.InputStream) OrcTypeKind(com.facebook.presto.orc.metadata.OrcType.OrcTypeKind) ImmutableMap(com.google.common.collect.ImmutableMap) ColumnEncodingKind(com.facebook.presto.orc.metadata.ColumnEncoding.ColumnEncodingKind)
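
The loop above relies on isIndexStream, which is not included on this page. As a rough sketch only (the exact set of stream kinds checked by the real StripeReader is an assumption, not copied from the Presto source), the check boils down to testing the stream kind, because index-oriented streams such as ROW_INDEX carry per-row-group metadata rather than column data:

private static boolean isIndexStream(Stream stream) {
    // Hedged sketch: assumes statically imported StreamKind constants; the real method
    // may check additional index kinds (for example a UTF-8 bloom filter variant).
    return stream.getStreamKind() == ROW_INDEX
            || stream.getStreamKind() == DICTIONARY_COUNT
            || stream.getStreamKind() == BLOOM_FILTER;
}

The ROW_INDEX streams skipped here are exactly the ones consumed by readColumnIndexes in Example 3.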

Example 2 with OrcInputStream

use of com.facebook.presto.orc.stream.OrcInputStream in project presto by prestodb.

In the class StripeReader, the method readDiskRanges:

public Map<StreamId, OrcInputStream> readDiskRanges(long stripeOffset, Map<StreamId, DiskRange> diskRanges, AbstractAggregatedMemoryContext systemMemoryUsage) throws IOException {
    //
    // Note: this code does not use the Java 8 stream APIs to avoid any extra object allocation
    //
    // transform ranges to have an absolute offset in file
    ImmutableMap.Builder<StreamId, DiskRange> diskRangesBuilder = ImmutableMap.builder();
    for (Entry<StreamId, DiskRange> entry : diskRanges.entrySet()) {
        DiskRange diskRange = entry.getValue();
        diskRangesBuilder.put(entry.getKey(), new DiskRange(stripeOffset + diskRange.getOffset(), diskRange.getLength()));
    }
    diskRanges = diskRangesBuilder.build();
    // read ranges
    Map<StreamId, FixedLengthSliceInput> streamsData = orcDataSource.readFully(diskRanges);
    // transform streams to OrcInputStream
    String sourceName = orcDataSource.toString();
    ImmutableMap.Builder<StreamId, OrcInputStream> streamsBuilder = ImmutableMap.builder();
    for (Entry<StreamId, FixedLengthSliceInput> entry : streamsData.entrySet()) {
        streamsBuilder.put(entry.getKey(), new OrcInputStream(sourceName, entry.getValue(), compressionKind, bufferSize, systemMemoryUsage));
    }
    return streamsBuilder.build();
}
Also used: OrcInputStream(com.facebook.presto.orc.stream.OrcInputStream) FixedLengthSliceInput(io.airlift.slice.FixedLengthSliceInput) ImmutableMap(com.google.common.collect.ImmutableMap)
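
The comment at the top of readDiskRanges explains why the method sticks to explicit builder loops. For contrast, the first loop could also be written with the Java 8 stream API and Guava's toImmutableMap collector; this equivalent form is shown only to illustrate the allocation trade-off and is not code from the Presto repository:

// Illustration only: same result as the first loop above, assuming a static import of
// com.google.common.collect.ImmutableMap.toImmutableMap. Each entry now passes through a
// lambda and the stream pipeline allocates intermediate objects, which is exactly what the
// original comment is avoiding on this hot path.
Map<StreamId, DiskRange> absoluteRanges = diskRanges.entrySet().stream()
        .collect(toImmutableMap(
                Map.Entry::getKey,
                entry -> new DiskRange(stripeOffset + entry.getValue().getOffset(), entry.getValue().getLength())));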

Example 3 with OrcInputStream

use of com.facebook.presto.orc.stream.OrcInputStream in project presto by prestodb.

In the class StripeReader, the method readColumnIndexes:

private Map<StreamId, List<RowGroupIndex>> readColumnIndexes(Map<StreamId, Stream> streams, Map<StreamId, OrcInputStream> streamsData, StripeId stripeId) throws IOException {
    // read the bloom filter for each column
    Map<Integer, List<HiveBloomFilter>> bloomFilterIndexes = readBloomFilterIndexes(streams, streamsData);
    ImmutableMap.Builder<StreamId, List<RowGroupIndex>> columnIndexes = ImmutableMap.builder();
    for (Entry<StreamId, Stream> entry : streams.entrySet()) {
        StreamId streamId = entry.getKey();
        Stream stream = entry.getValue();
        if (stream.getStreamKind() == ROW_INDEX) {
            OrcInputStream inputStream = streamsData.get(streamId);
            List<HiveBloomFilter> bloomFilters = bloomFilterIndexes.get(streamId.getColumn());
            List<RowGroupIndex> rowGroupIndexes = stripeMetadataSource.getRowIndexes(metadataReader, hiveWriterVersion, stripeId, streamId, inputStream, bloomFilters, runtimeStats);
            columnIndexes.put(entry.getKey(), rowGroupIndexes);
        }
    }
    return columnIndexes.build();
}
Also used: OrcInputStream(com.facebook.presto.orc.stream.OrcInputStream) ImmutableMap(com.google.common.collect.ImmutableMap) HiveBloomFilter(com.facebook.presto.orc.metadata.statistics.HiveBloomFilter) RowGroupIndex(com.facebook.presto.orc.metadata.RowGroupIndex) List(java.util.List) ArrayList(java.util.ArrayList) ImmutableList(com.google.common.collect.ImmutableList) ValueInputStream(com.facebook.presto.orc.stream.ValueInputStream) Stream(com.facebook.presto.orc.metadata.Stream) InputStream(java.io.InputStream)

Example 4 with OrcInputStream

use of com.facebook.presto.orc.stream.OrcInputStream in project presto by prestodb.

In the class StripeReader, the method readStripe:

public Stripe readStripe(StripeInformation stripe, OrcAggregatedMemoryContext systemMemoryUsage, Optional<DwrfEncryptionInfo> decryptors, SharedBuffer sharedDecompressionBuffer) throws IOException {
    StripeId stripeId = new StripeId(orcDataSource.getId(), stripe.getOffset());
    // read the stripe footer
    StripeFooter stripeFooter = readStripeFooter(stripeId, stripe, systemMemoryUsage);
    // get streams for selected columns
    List<List<Stream>> allStreams = new ArrayList<>();
    allStreams.add(stripeFooter.getStreams());
    Map<StreamId, Stream> includedStreams = new HashMap<>();
    boolean hasRowGroupDictionary = addIncludedStreams(stripeFooter.getColumnEncodings(), stripeFooter.getStreams(), includedStreams);
    Map<Integer, ColumnEncoding> columnEncodings = new HashMap<>();
    Map<Integer, ColumnEncoding> stripeFooterEncodings = stripeFooter.getColumnEncodings();
    columnEncodings.putAll(stripeFooterEncodings);
    // included columns may be encrypted
    if (decryptors.isPresent()) {
        List<Slice> encryptedEncryptionGroups = stripeFooter.getStripeEncryptionGroups();
        for (Integer groupId : decryptors.get().getEncryptorGroupIds()) {
            StripeEncryptionGroup stripeEncryptionGroup = getStripeEncryptionGroup(decryptors.get().getEncryptorByGroupId(groupId), encryptedEncryptionGroups.get(groupId), dwrfEncryptionGroupColumns.get(groupId), systemMemoryUsage);
            allStreams.add(stripeEncryptionGroup.getStreams());
            columnEncodings.putAll(stripeEncryptionGroup.getColumnEncodings());
            boolean encryptedHasRowGroupDictionary = addIncludedStreams(stripeEncryptionGroup.getColumnEncodings(), stripeEncryptionGroup.getStreams(), includedStreams);
            hasRowGroupDictionary = encryptedHasRowGroupDictionary || hasRowGroupDictionary;
        }
    }
    // handle stripes with more than one row group or a dictionary
    boolean invalidCheckPoint = false;
    if ((stripe.getNumberOfRows() > rowsInRowGroup) || hasRowGroupDictionary) {
        // determine ranges of the stripe to read
        Map<StreamId, DiskRange> diskRanges = getDiskRanges(allStreams);
        diskRanges = Maps.filterKeys(diskRanges, Predicates.in(includedStreams.keySet()));
        // read the file regions
        Map<StreamId, OrcInputStream> streamsData = readDiskRanges(stripeId, diskRanges, systemMemoryUsage, decryptors, sharedDecompressionBuffer);
        // read the row index for each column
        Map<StreamId, List<RowGroupIndex>> columnIndexes = readColumnIndexes(includedStreams, streamsData, stripeId);
        if (writeValidation.isPresent()) {
            writeValidation.get().validateRowGroupStatistics(orcDataSource.getId(), stripe.getOffset(), columnIndexes);
        }
        // select the row groups matching the tuple domain
        Set<Integer> selectedRowGroups = selectRowGroups(stripe, columnIndexes);
        // if all row groups are skipped, return null
        if (selectedRowGroups.isEmpty()) {
            // set accounted memory usage to zero
            systemMemoryUsage.close();
            return null;
        }
        // value streams
        Map<StreamId, ValueInputStream<?>> valueStreams = createValueStreams(includedStreams, streamsData, columnEncodings);
        // build the dictionary streams
        InputStreamSources dictionaryStreamSources = createDictionaryStreamSources(includedStreams, valueStreams, columnEncodings);
        // build the row groups
        try {
            List<RowGroup> rowGroups = createRowGroups(stripe.getNumberOfRows(), includedStreams, valueStreams, columnIndexes, selectedRowGroups, columnEncodings);
            return new Stripe(stripe.getNumberOfRows(), columnEncodings, rowGroups, dictionaryStreamSources);
        } catch (InvalidCheckpointException e) {
            // we must fail because the length of the row group dictionary is contained in the checkpoint stream.
            if (hasRowGroupDictionary) {
                throw new OrcCorruptionException(e, orcDataSource.getId(), "Checkpoints are corrupt");
            }
            invalidCheckPoint = true;
        }
    }
    // stripe only has one row group and no dictionary
    ImmutableMap.Builder<StreamId, DiskRange> diskRangesBuilder = ImmutableMap.builder();
    for (Entry<StreamId, DiskRange> entry : getDiskRanges(allStreams).entrySet()) {
        StreamId streamId = entry.getKey();
        if (includedStreams.keySet().contains(streamId)) {
            diskRangesBuilder.put(entry);
        }
    }
    ImmutableMap<StreamId, DiskRange> diskRanges = diskRangesBuilder.build();
    // read the file regions
    Map<StreamId, OrcInputStream> streamsData = readDiskRanges(stripeId, diskRanges, systemMemoryUsage, decryptors, sharedDecompressionBuffer);
    long totalBytes = 0;
    for (Entry<StreamId, Stream> entry : includedStreams.entrySet()) {
        if (entry.getKey().getStreamKind() == ROW_INDEX) {
            List<RowGroupIndex> rowGroupIndexes = metadataReader.readRowIndexes(hiveWriterVersion, streamsData.get(entry.getKey()), null);
            checkState(rowGroupIndexes.size() == 1 || invalidCheckPoint, "expect a single row group or an invalid check point");
            for (RowGroupIndex rowGroupIndex : rowGroupIndexes) {
                ColumnStatistics columnStatistics = rowGroupIndex.getColumnStatistics();
                if (columnStatistics.hasMinAverageValueSizeInBytes()) {
                    totalBytes += columnStatistics.getTotalValueSizeInBytes();
                }
            }
        }
    }
    // value streams
    Map<StreamId, ValueInputStream<?>> valueStreams = createValueStreams(includedStreams, streamsData, columnEncodings);
    // build the dictionary streams
    InputStreamSources dictionaryStreamSources = createDictionaryStreamSources(includedStreams, valueStreams, columnEncodings);
    // build the row group
    ImmutableMap.Builder<StreamId, InputStreamSource<?>> builder = ImmutableMap.builder();
    for (Entry<StreamId, ValueInputStream<?>> entry : valueStreams.entrySet()) {
        builder.put(entry.getKey(), new ValueInputStreamSource<>(entry.getValue()));
    }
    RowGroup rowGroup = new RowGroup(0, 0, stripe.getNumberOfRows(), totalBytes, new InputStreamSources(builder.build()));
    return new Stripe(stripe.getNumberOfRows(), columnEncodings, ImmutableList.of(rowGroup), dictionaryStreamSources);
}
Also used: ValueInputStream(com.facebook.presto.orc.stream.ValueInputStream) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) InvalidCheckpointException(com.facebook.presto.orc.checkpoint.InvalidCheckpointException) InputStreamSource(com.facebook.presto.orc.stream.InputStreamSource) ValueInputStreamSource(com.facebook.presto.orc.stream.ValueInputStreamSource) List(java.util.List) ImmutableList(com.google.common.collect.ImmutableList) OrcInputStream(com.facebook.presto.orc.stream.OrcInputStream) Stream(com.facebook.presto.orc.metadata.Stream) InputStream(java.io.InputStream) ColumnStatistics(com.facebook.presto.orc.metadata.statistics.ColumnStatistics) ColumnStatistics.mergeColumnStatistics(com.facebook.presto.orc.metadata.statistics.ColumnStatistics.mergeColumnStatistics) ImmutableMap(com.google.common.collect.ImmutableMap) ColumnEncoding(com.facebook.presto.orc.metadata.ColumnEncoding) InputStreamSources(com.facebook.presto.orc.stream.InputStreamSources) StripeFooter(com.facebook.presto.orc.metadata.StripeFooter) RowGroupIndex(com.facebook.presto.orc.metadata.RowGroupIndex) Slice(io.airlift.slice.Slice) StripeEncryptionGroup(com.facebook.presto.orc.metadata.StripeEncryptionGroup) DwrfMetadataReader.toStripeEncryptionGroup(com.facebook.presto.orc.metadata.DwrfMetadataReader.toStripeEncryptionGroup)
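
readStripe starts from getDiskRanges(allStreams), which is not among the snippets on this page. As a hedged sketch (the StreamId(Stream) constructor and the handling of encryption-group stream lists are assumptions, not copied from the Presto source), the method can be pictured as a running sum over the footer's stream list, because ORC lays the stream bodies out back to back inside the stripe:

private static Map<StreamId, DiskRange> getDiskRanges(List<List<Stream>> streams) {
    // Sketch only: offsets are relative to the stripe start; readDiskRanges (Example 2)
    // later adds the stripe offset to make them absolute in the file.
    // Assumes a static import of Math.toIntExact and a StreamId(Stream) constructor.
    ImmutableMap.Builder<StreamId, DiskRange> streamDiskRanges = ImmutableMap.builder();
    for (List<Stream> streamList : streams) {
        long streamOffset = 0;
        for (Stream stream : streamList) {
            int streamLength = toIntExact(stream.getLength());
            if (streamLength > 0) {
                streamDiskRanges.put(new StreamId(stream), new DiskRange(streamOffset, streamLength));
            }
            streamOffset += streamLength;
        }
    }
    return streamDiskRanges.build();
}

With that in mind, the two halves of readStripe differ only in how much of the stripe they need: the first path reads the row indexes so it can drop unselected row groups and build checkpointed value streams, while the fallback path treats the whole stripe as a single row group, which is why an InvalidCheckpointException is tolerated there unless a row group dictionary makes the checkpoints mandatory.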

Example 5 with OrcInputStream

use of com.facebook.presto.orc.stream.OrcInputStream in project presto by prestodb.

In the class StripeReader, the method createValueStreams:

private Map<StreamId, ValueInputStream<?>> createValueStreams(Map<StreamId, Stream> streams, Map<StreamId, OrcInputStream> streamsData, Map<Integer, ColumnEncoding> columnEncodings) {
    ImmutableMap.Builder<StreamId, ValueInputStream<?>> valueStreams = ImmutableMap.builder();
    for (Entry<StreamId, Stream> entry : streams.entrySet()) {
        StreamId streamId = entry.getKey();
        Stream stream = entry.getValue();
        ColumnEncodingKind columnEncoding = columnEncodings.get(stream.getColumn()).getColumnEncoding(stream.getSequence()).getColumnEncodingKind();
        // skip index and empty streams
        if (isIndexStream(stream) || stream.getLength() == 0) {
            continue;
        }
        OrcInputStream inputStream = streamsData.get(streamId);
        OrcTypeKind columnType = types.get(stream.getColumn()).getOrcTypeKind();
        valueStreams.put(streamId, ValueStreams.createValueStreams(streamId, inputStream, columnType, columnEncoding, stream.isUseVInts()));
    }
    return valueStreams.build();
}
Also used: ValueInputStream(com.facebook.presto.orc.stream.ValueInputStream) OrcInputStream(com.facebook.presto.orc.stream.OrcInputStream) Stream(com.facebook.presto.orc.metadata.Stream) InputStream(java.io.InputStream) OrcTypeKind(com.facebook.presto.orc.metadata.OrcType.OrcTypeKind) ImmutableMap(com.google.common.collect.ImmutableMap) ColumnEncodingKind(com.facebook.presto.orc.metadata.ColumnEncoding.ColumnEncodingKind)
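
Compared with Example 1, this overload resolves the encoding per stream sequence, via getColumnEncoding(stream.getSequence()), instead of reading it straight from a per-column list. That matches the newer reader, where one column can carry several sequences, each with its own encoding (used, for instance, by DWRF features such as encrypted columns and flat maps); the rest of the method is unchanged.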

Aggregations

OrcInputStream (com.facebook.presto.orc.stream.OrcInputStream): 14
InputStream (java.io.InputStream): 11
ImmutableMap (com.google.common.collect.ImmutableMap): 9
Stream (com.facebook.presto.orc.metadata.Stream): 7
ImmutableList (com.google.common.collect.ImmutableList): 6
ValueInputStream (com.facebook.presto.orc.stream.ValueInputStream): 5
List (java.util.List): 5
SharedBuffer (com.facebook.presto.orc.stream.SharedBuffer): 4
Checkpoints.getDictionaryStreamCheckpoint (com.facebook.presto.orc.checkpoint.Checkpoints.getDictionaryStreamCheckpoint): 3
StreamCheckpoint (com.facebook.presto.orc.checkpoint.StreamCheckpoint): 3
ColumnEncodingKind (com.facebook.presto.orc.metadata.ColumnEncoding.ColumnEncodingKind): 3
RowGroupIndex (com.facebook.presto.orc.metadata.RowGroupIndex): 3
StripeFooter (com.facebook.presto.orc.metadata.StripeFooter): 3
ValueStream (com.facebook.presto.orc.stream.ValueStream): 3
Slice (io.airlift.slice.Slice): 3
ArrayList (java.util.ArrayList): 3
InvalidCheckpointException (com.facebook.presto.orc.checkpoint.InvalidCheckpointException): 2
ColumnEncoding (com.facebook.presto.orc.metadata.ColumnEncoding): 2
OrcTypeKind (com.facebook.presto.orc.metadata.OrcType.OrcTypeKind): 2
ColumnStatistics (com.facebook.presto.orc.metadata.statistics.ColumnStatistics): 2