Example 11 with CloseableIterator

Use of org.apache.druid.java.util.common.parsers.CloseableIterator in project druid by druid-io.

The class TimedShutoffInputSourceReader, method decorateShutdownTimeout.

private <T> CloseableIterator<T> decorateShutdownTimeout(ScheduledExecutorService exec, CloseableIterator<T> delegateIterator) {
    final Closer closer = Closer.create();
    closer.register(delegateIterator);
    closer.register(exec::shutdownNow);
    final CloseableIterator<T> wrappingIterator = new CloseableIterator<T>() {

        /**
         * Indicates whether this iterator has been closed.
         * Volatile to establish a happens-before relationship between {@link #close()} and {@link #hasNext()},
         * which may be called from different threads.
         */
        volatile boolean closed;

        /**
         * Caches the next item. The item returned from the underlying iterator is either a non-null {@link InputRow}
         * or {@link InputRowListPlusRawValues}.
         * Not volatile since {@link #hasNext()} and {@link #next()} are supposed to be called by the same thread.
         */
        T next = null;

        @Override
        public boolean hasNext() {
            if (next != null) {
                return true;
            }
            if (!closed && delegateIterator.hasNext()) {
                next = delegateIterator.next();
                return true;
            } else {
                return false;
            }
        }

        @Override
        public T next() {
            if (next != null) {
                final T returnValue = next;
                next = null;
                return returnValue;
            } else {
                throw new NoSuchElementException();
            }
        }

        @Override
        public void close() throws IOException {
            closed = true;
            closer.close();
        }
    };
    exec.schedule(() -> {
        LOG.info("Closing delegate inputSource.");
        try {
            wrappingIterator.close();
        } catch (IOException e) {
            LOG.warn(e, "Failed to close delegate inputSource, ignoring.");
        }
    }, shutoffTime.getMillis() - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    return wrappingIterator;
}
Also used : Closer(org.apache.druid.java.util.common.io.Closer) CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) IOException(java.io.IOException) NoSuchElementException(java.util.NoSuchElementException)
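
For context, the pattern above can be reproduced with the JDK alone. Below is a minimal, self-contained sketch of the same timed-shutoff decoration; the class name, the helper method, and its parameters are illustrative assumptions, not Druid code:

import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimedShutoffSketch {

    // Stand-in for Druid's CloseableIterator: an Iterator that is also Closeable.
    interface CloseableIterator<T> extends Iterator<T>, Closeable {}

    static <T> CloseableIterator<T> withShutoff(CloseableIterator<T> delegate, long shutoffMillis) {
        final ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        final CloseableIterator<T> wrapper = new CloseableIterator<T>() {
            // Volatile: close() runs on the scheduler thread, hasNext() on the reading thread.
            volatile boolean closed;
            T next;

            @Override public boolean hasNext() {
                if (next != null) {
                    return true;
                }
                if (!closed && delegate.hasNext()) {
                    next = delegate.next();
                    return true;
                }
                return false;
            }

            @Override public T next() {
                if (next == null) {
                    throw new NoSuchElementException();
                }
                final T value = next;
                next = null;
                return value;
            }

            @Override public void close() throws IOException {
                closed = true;
                delegate.close();
                exec.shutdownNow();
            }
        };
        // Schedule the forced close, mirroring the shutoffTime logic above.
        exec.schedule(() -> {
            try {
                wrapper.close();
            } catch (IOException ignored) {
                // Best effort, as in the original.
            }
        }, shutoffMillis, TimeUnit.MILLISECONDS);
        return wrapper;
    }
}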

Example 12 with CloseableIterator

Use of org.apache.druid.java.util.common.parsers.CloseableIterator in project druid by druid-io.

The class IntermediateRowParsingReader, method read.

@Override
public CloseableIterator<InputRow> read() throws IOException {
    final CloseableIteratorWithMetadata<T> intermediateRowIteratorWithMetadata = intermediateRowIteratorWithMetadata();
    return new CloseableIterator<InputRow>() {

        // Since parseInputRows() returns a list, the code below always iterates over that list,
        // calling Iterator.hasNext() and Iterator.next() at least once per row. This is wasted work
        // when a row does not explode into multiple inputRows. If this ever proves to be a
        // performance bottleneck, the parseInputRows() interface may need rethinking; subclasses
        // could instead implement read() directly, at the cost of some duplicated code, to avoid
        // the unnecessary iteration over a singleton list.
        Iterator<InputRow> rows = null;

        long currentRecordNumber = 1;

        @Override
        public boolean hasNext() {
            if (rows == null || !rows.hasNext()) {
                if (!intermediateRowIteratorWithMetadata.hasNext()) {
                    return false;
                }
                final T row = intermediateRowIteratorWithMetadata.next();
                try {
                    rows = parseInputRows(row).iterator();
                    ++currentRecordNumber;
                } catch (IOException e) {
                    final Map<String, Object> metadata = intermediateRowIteratorWithMetadata.currentMetadata();
                    rows = new ExceptionThrowingIterator(new ParseException(String.valueOf(row), e, buildParseExceptionMessage(StringUtils.format("Unable to parse row [%s]", row), source(), currentRecordNumber, metadata)));
                } catch (ParseException e) {
                    final Map<String, Object> metadata = intermediateRowIteratorWithMetadata.currentMetadata();
                    // Replace the message of the ParseException e
                    rows = new ExceptionThrowingIterator(new ParseException(e.getInput(), e.isFromPartiallyValidRow(), buildParseExceptionMessage(e.getMessage(), source(), currentRecordNumber, metadata)));
                }
            }
            return true;
        }

        @Override
        public InputRow next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            return rows.next();
        }

        @Override
        public void close() throws IOException {
            intermediateRowIteratorWithMetadata.close();
        }
    };
}
Also used : CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) Iterator(java.util.Iterator) CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) IOException(java.io.IOException) ParseException(org.apache.druid.java.util.common.parsers.ParseException) Map(java.util.Map) NoSuchElementException(java.util.NoSuchElementException)
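
The core of read() is a flattening iterator: each intermediate row may parse into several InputRows, and the outer iterator drains each parsed batch before pulling the next intermediate row. Here is a generic, dependency-free sketch of that pattern (all names are illustrative; this variant's while-loop also skips batches that parse to an empty list):

import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Function;

class FlatteningIterator<T, R> implements Iterator<R> {
    private final Iterator<T> intermediate;
    private final Function<T, List<R>> parse;
    private Iterator<R> rows = Collections.emptyIterator();

    FlatteningIterator(Iterator<T> intermediate, Function<T, List<R>> parse) {
        this.intermediate = intermediate;
        this.parse = parse;
    }

    @Override
    public boolean hasNext() {
        // Advance past exhausted (or empty) batches until a row is available.
        while (!rows.hasNext() && intermediate.hasNext()) {
            rows = parse.apply(intermediate.next()).iterator();
        }
        return rows.hasNext();
    }

    @Override
    public R next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return rows.next();
    }

    public static void main(String[] args) {
        Iterator<String> lines = Arrays.asList("a,b", "", "c").iterator();
        FlatteningIterator<String, String> it = new FlatteningIterator<>(
            lines,
            line -> line.isEmpty() ? Collections.<String>emptyList() : Arrays.asList(line.split(",")));
        it.forEachRemaining(System.out::println); // prints a, b, c
    }
}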

Example 13 with CloseableIterator

Use of org.apache.druid.java.util.common.parsers.CloseableIterator in project druid by druid-io.

The class IndexTask, method collectIntervalsAndShardSpecs.

private Map<Interval, Optional<HyperLogLogCollector>> collectIntervalsAndShardSpecs(ObjectMapper jsonMapper, IndexIngestionSpec ingestionSchema, InputSource inputSource, File tmpDir, GranularitySpec granularitySpec, @Nonnull PartitionsSpec partitionsSpec, boolean determineIntervals) throws IOException {
    final Map<Interval, Optional<HyperLogLogCollector>> hllCollectors = new TreeMap<>(Comparators.intervalsByStartThenEnd());
    final Granularity queryGranularity = granularitySpec.getQueryGranularity();
    final Predicate<InputRow> rowFilter = inputRow -> {
        if (inputRow == null) {
            return false;
        }
        if (determineIntervals) {
            return true;
        }
        final Optional<Interval> optInterval = granularitySpec.bucketInterval(inputRow.getTimestamp());
        return optInterval.isPresent();
    };
    try (final CloseableIterator<InputRow> inputRowIterator = AbstractBatchIndexTask.inputSourceReader(tmpDir, ingestionSchema.getDataSchema(), inputSource, inputSource.needsFormat() ? getInputFormat(ingestionSchema) : null, rowFilter, determinePartitionsMeters, determinePartitionsParseExceptionHandler)) {
        while (inputRowIterator.hasNext()) {
            final InputRow inputRow = inputRowIterator.next();
            final Interval interval;
            if (determineIntervals) {
                interval = granularitySpec.getSegmentGranularity().bucket(inputRow.getTimestamp());
            } else {
                final Optional<Interval> optInterval = granularitySpec.bucketInterval(inputRow.getTimestamp());
                // this interval must exist since it passed the rowFilter
                assert optInterval.isPresent();
                interval = optInterval.get();
            }
            if (partitionsSpec.needsDeterminePartitions(false)) {
                hllCollectors.computeIfAbsent(interval, intv -> Optional.of(HyperLogLogCollector.makeLatestCollector()));
                List<Object> groupKey = Rows.toGroupKey(queryGranularity.bucketStart(inputRow.getTimestampFromEpoch()), inputRow);
                hllCollectors.get(interval).get().add(HASH_FUNCTION.hashBytes(jsonMapper.writeValueAsBytes(groupKey)).asBytes());
            } else {
                // We don't need to determine partitions, but we still need to determine intervals, so add an
                // Optional.absent() for the interval and don't instantiate an HLL collector.
                hllCollectors.putIfAbsent(interval, Optional.absent());
            }
            determinePartitionsMeters.incrementProcessed();
        }
    }
    // These metrics are reported in generateAndPublishSegments()
    if (determinePartitionsMeters.getThrownAway() > 0) {
        log.warn("Unable to find a matching interval for [%,d] events", determinePartitionsMeters.getThrownAway());
    }
    if (determinePartitionsMeters.getUnparseable() > 0) {
        log.warn("Unable to parse [%,d] events", determinePartitionsMeters.getUnparseable());
    }
    return hllCollectors;
}
Also used : TaskReport(org.apache.druid.indexing.common.TaskReport) TaskToolbox(org.apache.druid.indexing.common.TaskToolbox) BatchAppenderatorDriver(org.apache.druid.segment.realtime.appenderator.BatchAppenderatorDriver) JsonProperty(com.fasterxml.jackson.annotation.JsonProperty) Comparators(org.apache.druid.java.util.common.guava.Comparators) Produces(javax.ws.rs.Produces) IndexSpec(org.apache.druid.segment.IndexSpec) FireDepartmentMetrics(org.apache.druid.segment.realtime.FireDepartmentMetrics) IngestionState(org.apache.druid.indexer.IngestionState) CompletePartitionAnalysis(org.apache.druid.indexing.common.task.batch.partition.CompletePartitionAnalysis) MediaType(javax.ws.rs.core.MediaType) JodaUtils(org.apache.druid.java.util.common.JodaUtils) TaskActionClient(org.apache.druid.indexing.common.actions.TaskActionClient) Optional(com.google.common.base.Optional) SegmentTransactionalInsertAction(org.apache.druid.indexing.common.actions.SegmentTransactionalInsertAction) FiniteFirehoseFactory(org.apache.druid.data.input.FiniteFirehoseFactory) Map(java.util.Map) IAE(org.apache.druid.java.util.common.IAE) LinearPartitionAnalysis(org.apache.druid.indexing.common.task.batch.partition.LinearPartitionAnalysis) Property(org.apache.druid.indexer.Property) InputSourceSampler(org.apache.druid.indexing.overlord.sampler.InputSourceSampler) InputFormat(org.apache.druid.data.input.InputFormat) IngestionStatsAndErrorsTaskReportData(org.apache.druid.indexing.common.IngestionStatsAndErrorsTaskReportData) Set(java.util.Set) ISE(org.apache.druid.java.util.common.ISE) TaskRealtimeMetricsMonitorBuilder(org.apache.druid.indexing.common.TaskRealtimeMetricsMonitorBuilder) InputRow(org.apache.druid.data.input.InputRow) BaseAppenderatorDriver(org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver) FirehoseFactoryToInputSourceAdaptor(org.apache.druid.data.input.FirehoseFactoryToInputSourceAdaptor) Granularity(org.apache.druid.java.util.common.granularity.Granularity) ParseExceptionHandler(org.apache.druid.segment.incremental.ParseExceptionHandler) AppenderatorConfig(org.apache.druid.segment.realtime.appenderator.AppenderatorConfig) GET(javax.ws.rs.GET) HashBasedNumberedShardSpec(org.apache.druid.timeline.partition.HashBasedNumberedShardSpec) Rows(org.apache.druid.data.input.Rows) SegmentWriteOutMediumFactory(org.apache.druid.segment.writeout.SegmentWriteOutMediumFactory) TaskStatus(org.apache.druid.indexer.TaskStatus) ArrayList(java.util.ArrayList) Interval(org.joda.time.Interval) HttpServletRequest(javax.servlet.http.HttpServletRequest) UOE(org.apache.druid.java.util.common.UOE) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec) Nullable(javax.annotation.Nullable) FirehoseFactory(org.apache.druid.data.input.FirehoseFactory) IndexMerger(org.apache.druid.segment.IndexMerger) GranularitySpec(org.apache.druid.segment.indexing.granularity.GranularitySpec) Throwables(com.google.common.base.Throwables) Include(com.fasterxml.jackson.annotation.JsonInclude.Include) PartialHashSegmentGenerateTask(org.apache.druid.indexing.common.task.batch.parallel.PartialHashSegmentGenerateTask) IOException(java.io.IOException) FireDepartment(org.apache.druid.segment.realtime.FireDepartment) File(java.io.File) ExecutionException(java.util.concurrent.ExecutionException) TreeMap(java.util.TreeMap) AppendableIndexSpec(org.apache.druid.segment.incremental.AppendableIndexSpec) Preconditions(com.google.common.base.Preconditions) DataSchema(org.apache.druid.segment.indexing.DataSchema) 
ArbitraryGranularitySpec(org.apache.druid.segment.indexing.granularity.ArbitraryGranularitySpec) AuthorizerMapper(org.apache.druid.server.security.AuthorizerMapper) Path(javax.ws.rs.Path) HashPartitionAnalysis(org.apache.druid.indexing.common.task.batch.partition.HashPartitionAnalysis) PartitionAnalysis(org.apache.druid.indexing.common.task.batch.partition.PartitionAnalysis) TimeoutException(java.util.concurrent.TimeoutException) MonotonicNonNull(org.checkerframework.checker.nullness.qual.MonotonicNonNull) ChatHandler(org.apache.druid.segment.realtime.firehose.ChatHandler) QueryParam(javax.ws.rs.QueryParam) DefaultIndexTaskInputRowIteratorBuilder(org.apache.druid.indexing.common.task.batch.parallel.iterator.DefaultIndexTaskInputRowIteratorBuilder) DynamicPartitionsSpec(org.apache.druid.indexer.partitions.DynamicPartitionsSpec) CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) Context(javax.ws.rs.core.Context) Predicate(java.util.function.Predicate) NumberedShardSpec(org.apache.druid.timeline.partition.NumberedShardSpec) StringUtils(org.apache.druid.java.util.common.StringUtils) HashedPartitionsSpec(org.apache.druid.indexer.partitions.HashedPartitionsSpec) InputRowParser(org.apache.druid.data.input.impl.InputRowParser) RealtimeIOConfig(org.apache.druid.segment.indexing.RealtimeIOConfig) Action(org.apache.druid.server.security.Action) IngestionSpec(org.apache.druid.segment.indexing.IngestionSpec) Objects(java.util.Objects) List(java.util.List) Response(javax.ws.rs.core.Response) DataSegment(org.apache.druid.timeline.DataSegment) HashFunction(com.google.common.hash.HashFunction) Logger(org.apache.druid.java.util.common.logger.Logger) ListenableFuture(com.google.common.util.concurrent.ListenableFuture) Hashing(com.google.common.hash.Hashing) HashMap(java.util.HashMap) RowIngestionMeters(org.apache.druid.segment.incremental.RowIngestionMeters) Function(java.util.function.Function) TuningConfig(org.apache.druid.segment.indexing.TuningConfig) TaskRealtimeMetricsMonitor(org.apache.druid.indexing.common.stats.TaskRealtimeMetricsMonitor) JsonTypeName(com.fasterxml.jackson.annotation.JsonTypeName) InputSource(org.apache.druid.data.input.InputSource) ImmutableList(com.google.common.collect.ImmutableList) SegmentsAndCommitMetadata(org.apache.druid.segment.realtime.appenderator.SegmentsAndCommitMetadata) Appenderator(org.apache.druid.segment.realtime.appenderator.Appenderator) Nonnull(javax.annotation.Nonnull) ParseExceptionReport(org.apache.druid.segment.incremental.ParseExceptionReport) BatchIOConfig(org.apache.druid.segment.indexing.BatchIOConfig) SecondaryPartitionType(org.apache.druid.indexer.partitions.SecondaryPartitionType) Period(org.joda.time.Period) HyperLogLogCollector(org.apache.druid.hll.HyperLogLogCollector) TransactionalSegmentPublisher(org.apache.druid.segment.realtime.appenderator.TransactionalSegmentPublisher) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper) CircularBuffer(org.apache.druid.utils.CircularBuffer) TimeUnit(java.util.concurrent.TimeUnit) Checks(org.apache.druid.indexer.Checks) IngestionStatsAndErrorsTaskReport(org.apache.druid.indexing.common.IngestionStatsAndErrorsTaskReport) JsonCreator(com.fasterxml.jackson.annotation.JsonCreator) JsonInclude(com.fasterxml.jackson.annotation.JsonInclude) Collections(java.util.Collections) Optional(com.google.common.base.Optional) InputRow(org.apache.druid.data.input.InputRow) TreeMap(java.util.TreeMap) Granularity(org.apache.druid.java.util.common.granularity.Granularity) 
Interval(org.joda.time.Interval)
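
To make the cardinality-estimation step concrete: for each row, a group key (the query-granularity-bucketed timestamp plus the dimension values, via Rows.toGroupKey) is serialized, hashed, and fed to that interval's HyperLogLog collector, so the number of distinct keys per interval can later drive partitioning. Below is a toy sketch of the same bookkeeping with a HashSet standing in for the HLL collector; the murmur3 hash choice and the sample rows are assumptions for illustration, and Guava is required on the classpath:

import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IntervalCardinalityDemo {
    private static final HashFunction HASH = Hashing.murmur3_128();

    public static void main(String[] args) {
        // Interval -> set of hashed group keys (a HashSet stands in for the HLL collector).
        Map<String, Set<Long>> collectors = new HashMap<>();
        String[][] rows = {
            {"2024-01-01", "user=a"}, {"2024-01-01", "user=b"}, {"2024-01-02", "user=a"}
        };
        for (String[] row : rows) {
            String interval = row[0]; // a bucketed timestamp stands in for the Interval
            byte[] groupKey = row[1].getBytes(StandardCharsets.UTF_8);
            collectors.computeIfAbsent(interval, intv -> new HashSet<>())
                      .add(HASH.hashBytes(groupKey).asLong());
        }
        collectors.forEach((intv, keys) ->
            System.out.println(intv + " -> " + keys.size() + " distinct group keys"));
    }
}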

Example 14 with CloseableIterator

Use of org.apache.druid.java.util.common.parsers.CloseableIterator in project druid by druid-io.

The class OrcReader, method intermediateRowIterator.

@Override
protected CloseableIterator<OrcStruct> intermediateRowIterator() throws IOException {
    final Closer closer = Closer.create();
    // We fetch the file here to cache a copy locally. However, this might need to change if we
    // want to split an ORC file into several InputSplits in the future.
    final byte[] buffer = new byte[InputEntity.DEFAULT_FETCH_BUFFER_SIZE];
    final CleanableFile file = closer.register(source.fetch(temporaryDirectory, buffer));
    final Path path = new Path(file.file().toURI());
    final ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
    final Reader reader;
    try {
        Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
        reader = closer.register(OrcFile.createReader(path, OrcFile.readerOptions(conf)));
    } finally {
        Thread.currentThread().setContextClassLoader(currentClassLoader);
    }
    // The line below reads the schema for all columns.
    // This could be improved in the future by projecting only the columns users actually want.
    final TypeDescription schema = reader.getSchema();
    final RecordReader batchReader = reader.rows(reader.options());
    final OrcMapredRecordReader<OrcStruct> recordReader = new OrcMapredRecordReader<>(batchReader, schema);
    closer.register(recordReader::close);
    return new CloseableIterator<OrcStruct>() {

        final NullWritable key = recordReader.createKey();

        OrcStruct value = null;

        @Override
        public boolean hasNext() {
            if (value == null) {
                try {
                    // The OrcStruct returned from next() can be kept in memory for a while.
                    // Here, we create a new instance of OrcStruct before calling RecordReader.next(),
                    // so that we avoid sharing the same reference to "value" across rows.
                    value = recordReader.createValue();
                    if (!recordReader.next(key, value)) {
                        value = null;
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
            return value != null;
        }

        @Override
        public OrcStruct next() {
            if (value == null) {
                throw new NoSuchElementException();
            }
            final OrcStruct currentValue = value;
            value = null;
            return currentValue;
        }

        @Override
        public void close() throws IOException {
            closer.close();
        }
    };
}
Also used : Closer(org.apache.druid.java.util.common.io.Closer) Path(org.apache.hadoop.fs.Path) CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) RecordReader(org.apache.orc.RecordReader) OrcMapredRecordReader(org.apache.orc.mapred.OrcMapredRecordReader) Reader(org.apache.orc.Reader) RecordReader(org.apache.orc.RecordReader) IntermediateRowParsingReader(org.apache.druid.data.input.IntermediateRowParsingReader) OrcMapredRecordReader(org.apache.orc.mapred.OrcMapredRecordReader) IOException(java.io.IOException) NullWritable(org.apache.hadoop.io.NullWritable) OrcStruct(org.apache.orc.mapred.OrcStruct) TypeDescription(org.apache.orc.TypeDescription) OrcMapredRecordReader(org.apache.orc.mapred.OrcMapredRecordReader) CleanableFile(org.apache.druid.data.input.InputEntity.CleanableFile) NoSuchElementException(java.util.NoSuchElementException)
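
The try/finally block around OrcFile.createReader is a standard context-classloader swap: Hadoop and ORC resolve classes and service providers through the thread context classloader, so it is pointed at the reader's own loader for the duration of the call and restored afterwards, even on failure. A general-purpose sketch of the idiom (the helper name is illustrative):

import java.util.concurrent.Callable;

public final class ContextClassLoaderSwap {

    // Runs the task with this class's own loader as the thread context classloader,
    // restoring the previous loader afterwards even if the task throws.
    public static <T> T withOwnClassLoader(Callable<T> task) throws Exception {
        final Thread thread = Thread.currentThread();
        final ClassLoader previous = thread.getContextClassLoader();
        try {
            thread.setContextClassLoader(ContextClassLoaderSwap.class.getClassLoader());
            return task.call();
        } finally {
            thread.setContextClassLoader(previous);
        }
    }
}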

Example 15 with CloseableIterator

Use of org.apache.druid.java.util.common.parsers.CloseableIterator in project druid by druid-io.

The class DruidSegmentReader, method intermediateRowIterator.

@Override
protected CloseableIterator<Map<String, Object>> intermediateRowIterator() throws IOException {
    final CleanableFile segmentFile = source.fetch(temporaryDirectory, null);
    final WindowedStorageAdapter storageAdapter = new WindowedStorageAdapter(new QueryableIndexStorageAdapter(indexIO.loadIndex(segmentFile.file())), source.getIntervalFilter());
    final Sequence<Cursor> cursors = storageAdapter.getAdapter().makeCursors(Filters.toFilter(dimFilter), storageAdapter.getInterval(), VirtualColumns.EMPTY, Granularities.ALL, false, null);
    // Retain order of columns from the original segments. Useful for preserving dimension order if we're in
    // schemaless mode.
    final Set<String> columnsToRead = Sets.newLinkedHashSet(Iterables.filter(storageAdapter.getAdapter().getRowSignature().getColumnNames(), columnsFilter::apply));
    final Sequence<Map<String, Object>> sequence = Sequences.concat(Sequences.map(cursors, cursor -> cursorToSequence(cursor, columnsToRead)));
    return makeCloseableIteratorFromSequenceAndSegmentFile(sequence, segmentFile);
}
Also used : TimestampSpec(org.apache.druid.data.input.impl.TimestampSpec) IndexedInts(org.apache.druid.segment.data.IndexedInts) ColumnProcessors(org.apache.druid.segment.ColumnProcessors) BaseFloatColumnValueSelector(org.apache.druid.segment.BaseFloatColumnValueSelector) Map(java.util.Map) CloseableIterator(org.apache.druid.java.util.common.parsers.CloseableIterator) BaseObjectColumnValueSelector(org.apache.druid.segment.BaseObjectColumnValueSelector) Sequence(org.apache.druid.java.util.common.guava.Sequence) ColumnsFilter(org.apache.druid.data.input.ColumnsFilter) Set(java.util.Set) Sets(com.google.common.collect.Sets) InputRow(org.apache.druid.data.input.InputRow) List(java.util.List) IntermediateRowParsingReader(org.apache.druid.data.input.IntermediateRowParsingReader) DimFilter(org.apache.druid.query.filter.DimFilter) Entry(java.util.Map.Entry) BaseDoubleColumnValueSelector(org.apache.druid.segment.BaseDoubleColumnValueSelector) Iterables(com.google.common.collect.Iterables) ParseException(org.apache.druid.java.util.common.parsers.ParseException) Supplier(com.google.common.base.Supplier) CollectionUtils(org.apache.druid.utils.CollectionUtils) InputRowSchema(org.apache.druid.data.input.InputRowSchema) ArrayList(java.util.ArrayList) Yielders(org.apache.druid.java.util.common.guava.Yielders) CleanableFile(org.apache.druid.data.input.InputEntity.CleanableFile) DimensionSelector(org.apache.druid.segment.DimensionSelector) Yielder(org.apache.druid.java.util.common.guava.Yielder) NoSuchElementException(java.util.NoSuchElementException) Sequences(org.apache.druid.java.util.common.guava.Sequences) QueryableIndexStorageAdapter(org.apache.druid.segment.QueryableIndexStorageAdapter) VirtualColumns(org.apache.druid.segment.VirtualColumns) Iterator(java.util.Iterator) MapInputRowParser(org.apache.druid.data.input.impl.MapInputRowParser) WindowedStorageAdapter(org.apache.druid.segment.realtime.firehose.WindowedStorageAdapter) DimensionsSpec(org.apache.druid.data.input.impl.DimensionsSpec) IOException(java.io.IOException) ColumnProcessorFactory(org.apache.druid.segment.ColumnProcessorFactory) File(java.io.File) Granularities(org.apache.druid.java.util.common.granularity.Granularities) BaseLongColumnValueSelector(org.apache.druid.segment.BaseLongColumnValueSelector) Cursor(org.apache.druid.segment.Cursor) ColumnType(org.apache.druid.segment.column.ColumnType) Preconditions(com.google.common.base.Preconditions) VisibleForTesting(com.google.common.annotations.VisibleForTesting) InputEntity(org.apache.druid.data.input.InputEntity) IndexIO(org.apache.druid.segment.IndexIO) Filters(org.apache.druid.segment.filter.Filters) CloseableUtils(org.apache.druid.utils.CloseableUtils) Collections(java.util.Collections) QueryableIndexStorageAdapter(org.apache.druid.segment.QueryableIndexStorageAdapter) CleanableFile(org.apache.druid.data.input.InputEntity.CleanableFile) Cursor(org.apache.druid.segment.Cursor) WindowedStorageAdapter(org.apache.druid.segment.realtime.firehose.WindowedStorageAdapter) Map(java.util.Map)
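
makeCloseableIteratorFromSequenceAndSegmentFile is not shown in this snippet. One plausible shape, sketched against the Yielder API that appears in the imports above (an assumption, not the verbatim Druid implementation), pulls rows from the Sequence one at a time and ties the segment file's lifetime to close():

import java.io.Closeable;
import java.io.IOException;
import java.util.NoSuchElementException;
import org.apache.druid.java.util.common.guava.Sequence;
import org.apache.druid.java.util.common.guava.Yielder;
import org.apache.druid.java.util.common.guava.Yielders;
import org.apache.druid.java.util.common.parsers.CloseableIterator;

class SequenceIterators {

    // Sketch: adapt a Sequence to a CloseableIterator, closing an extra resource
    // (e.g. the fetched segment file) together with the yielder.
    static <T> CloseableIterator<T> fromSequence(Sequence<T> sequence, Closeable alsoClose) {
        return new CloseableIterator<T>() {
            Yielder<T> yielder = Yielders.each(sequence);

            @Override
            public boolean hasNext() {
                return !yielder.isDone();
            }

            @Override
            public T next() {
                if (yielder.isDone()) {
                    throw new NoSuchElementException();
                }
                final T row = yielder.get();
                yielder = yielder.next(null);
                return row;
            }

            @Override
            public void close() throws IOException {
                try {
                    yielder.close();
                } finally {
                    alsoClose.close();
                }
            }
        };
    }
}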

Aggregations

CloseableIterator (org.apache.druid.java.util.common.parsers.CloseableIterator)16 IOException (java.io.IOException)9 Map (java.util.Map)6 NoSuchElementException (java.util.NoSuchElementException)6 ArrayList (java.util.ArrayList)5 List (java.util.List)5 Collections (java.util.Collections)4 Iterator (java.util.Iterator)4 Set (java.util.Set)4 CleanableFile (org.apache.druid.data.input.InputEntity.CleanableFile)4 Closer (org.apache.druid.java.util.common.io.Closer)4 ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper)3 Preconditions (com.google.common.base.Preconditions)3 ImmutableList (com.google.common.collect.ImmutableList)3 ListenableFuture (com.google.common.util.concurrent.ListenableFuture)3 File (java.io.File)3 Collectors (java.util.stream.Collectors)3 ParseException (org.apache.druid.java.util.common.parsers.ParseException)3 Supplier (com.google.common.base.Supplier)2 ByteBuffer (java.nio.ByteBuffer)2