
Example 1 with DrillFileSystem

Use of org.apache.drill.exec.store.dfs.DrillFileSystem in project drill by apache.

From the class OperatorContextImpl, method newFileSystem.

@Override
public DrillFileSystem newFileSystem(Configuration conf) throws IOException {
    Preconditions.checkState(fs == null, "Tried to create a second FileSystem. Can only be called once per OperatorContext");
    fs = new DrillFileSystem(conf, getStats());
    return fs;
}
Also used : DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem)
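A minimal usage sketch (hypothetical: operatorContext and fsConf are assumed to be in scope). The Preconditions.checkState guard means the context hands out at most one file system, so a second call fails fast with an IllegalStateException.

DrillFileSystem fs = operatorContext.newFileSystem(fsConf);
// calling newFileSystem on the same context again now throws:
// operatorContext.newFileSystem(fsConf); -> IllegalStateException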

Example 2 with DrillFileSystem

Use of org.apache.drill.exec.store.dfs.DrillFileSystem in project drill by apache.

From the class UserSession, method addTemporaryLocation.

/**
   * Session temporary tables are stored under the temporary workspace location,
   * in a session folder named by the unique session id. These session temporary
   * locations are deleted on session close. If the default temporary workspace
   * file system or location is changed at runtime, a new session temporary
   * location with the corresponding file system is added to the list of session
   * temporary locations. If the location does not exist, it is created and
   * {@link StorageStrategy#TEMPORARY} storage rules are applied to it.
   *
   * @param temporaryWorkspace temporary workspace
   * @throws IOException in case of error during temporary location creation
   */
private void addTemporaryLocation(WorkspaceSchemaFactory.WorkspaceSchema temporaryWorkspace) throws IOException {
    DrillFileSystem fs = temporaryWorkspace.getFS();
    Path temporaryLocation = new Path(fs.getUri().toString(), new Path(temporaryWorkspace.getDefaultLocation(), sessionId));
    FileSystem fileSystem = temporaryLocations.putIfAbsent(temporaryLocation, fs);
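    // putIfAbsent returns null only if this location was not yet registered,
    // so the path is created and TEMPORARY rules are applied at most once per location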
    if (fileSystem == null) {
        StorageStrategy.TEMPORARY.createPathAndApply(fs, temporaryLocation);
        Preconditions.checkArgument(fs.exists(temporaryLocation), String.format("Temporary location should exist [%s]", temporaryLocation.toUri().getPath()));
    }
}
Also used : Path (org.apache.hadoop.fs.Path), FileSystem (org.apache.hadoop.fs.FileSystem), DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem)
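To make the path composition concrete, here is an illustrative sketch with made-up values for the file system URI, default location, and session id; the relative location (default location plus session id) is qualified with the file system's URI:

Path temporaryLocation = new Path("hdfs://namenode:8020",
    new Path("/tmp", "5c3f6f0e-8d1a-4a6f-9d2b-3a1e2b4c5d6e"));
// temporaryLocation -> hdfs://namenode:8020/tmp/5c3f6f0e-8d1a-4a6f-9d2b-3a1e2b4c5d6e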

Example 3 with DrillFileSystem

Use of org.apache.drill.exec.store.dfs.DrillFileSystem in project drill by apache.

From the class EasyGroupScan, method initFromSelection.

private void initFromSelection(FileSelection selection, EasyFormatPlugin<?> formatPlugin) throws IOException {
    @SuppressWarnings("resource") final DrillFileSystem dfs = ImpersonationUtil.createFileSystem(getUserName(), formatPlugin.getFsConf());
    this.selection = selection;
    BlockMapBuilder b = new BlockMapBuilder(dfs, formatPlugin.getContext().getBits());
    this.chunks = b.generateFileWork(selection.getStatuses(dfs), formatPlugin.isBlockSplittable());
    this.maxWidth = chunks.size();
    this.endpointAffinities = AffinityCreator.getAffinityMap(chunks);
}
Also used : DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem), BlockMapBuilder (org.apache.drill.exec.store.schedule.BlockMapBuilder)
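The same flow with the intermediate steps spelled out; a sketch, assuming userName, fsConf, drillbitEndpoints, and blockSplittable are in scope, and assuming CompleteFileWork and EndpointAffinity as the element types of chunks and endpointAffinities:

// file system that performs I/O as the query user, not the Drillbit process user
DrillFileSystem dfs = ImpersonationUtil.createFileSystem(userName, fsConf);
// maps each file's blocks to the Drillbits that host them
BlockMapBuilder builder = new BlockMapBuilder(dfs, drillbitEndpoints);
List<CompleteFileWork> work = builder.generateFileWork(selection.getStatuses(dfs), blockSplittable);
// scores each endpoint by how much of the selected data it holds locally
List<EndpointAffinity> affinities = AffinityCreator.getAffinityMap(work);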

Example 4 with DrillFileSystem

Use of org.apache.drill.exec.store.dfs.DrillFileSystem in project drill by apache.

From the class OperatorContextImpl, method newNonTrackingFileSystem.

/**
   * Creates a DrillFileSystem that does not automatically track operator stats.
   */
@Override
public DrillFileSystem newNonTrackingFileSystem(Configuration conf) throws IOException {
    Preconditions.checkState(fs == null, "Tried to create a second FileSystem. Can only be called once per OperatorContext");
    fs = new DrillFileSystem(conf, null);
    return fs;
}
Also used : DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem)
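Choosing between the two factory methods is exactly what Example 5 below does for the async Parquet page reader. A sketch, assuming useAsync and fsConf are in scope: reads that happen on background threads should not be charged to the operator's wait-time stats, so the async path takes the non-tracking variant.

DrillFileSystem fs = useAsync
    ? operatorContext.newNonTrackingFileSystem(fsConf) // background-thread reads, no stats tracking
    : operatorContext.newFileSystem(fsConf);           // synchronous reads, I/O time recorded in stats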

Example 5 with DrillFileSystem

Use of org.apache.drill.exec.store.dfs.DrillFileSystem in project drill by apache.

From the class ParquetScanBatchCreator, method getBatch.

@Override
public ScanBatch getBatch(FragmentContext context, ParquetRowGroupScan rowGroupScan, List<RecordBatch> children) throws ExecutionSetupException {
    Preconditions.checkArgument(children.isEmpty());
    OperatorContext oContext = context.newOperatorContext(rowGroupScan);
    final ImplicitColumnExplorer columnExplorer = new ImplicitColumnExplorer(context, rowGroupScan.getColumns());
    if (!columnExplorer.isStarQuery()) {
        // capture the operator id before replacing the scan: calling getOperatorId()
        // on the newly assigned scan would turn setOperatorId into a self-assignment
        int operatorId = rowGroupScan.getOperatorId();
        rowGroupScan = new ParquetRowGroupScan(rowGroupScan.getUserName(), rowGroupScan.getStorageEngine(), rowGroupScan.getRowGroupReadEntries(), columnExplorer.getTableColumns(), rowGroupScan.getSelectionRoot(), rowGroupScan.getFilter());
        rowGroupScan.setOperatorId(operatorId);
    }
    DrillFileSystem fs;
    try {
        boolean useAsyncPageReader = context.getOptions().getOption(ExecConstants.PARQUET_PAGEREADER_ASYNC).bool_val;
        if (useAsyncPageReader) {
            fs = oContext.newNonTrackingFileSystem(rowGroupScan.getStorageEngine().getFsConf());
        } else {
            fs = oContext.newFileSystem(rowGroupScan.getStorageEngine().getFsConf());
        }
    } catch (IOException e) {
        throw new ExecutionSetupException(String.format("Failed to create DrillFileSystem: %s", e.getMessage()), e);
    }
    Configuration conf = new Configuration(fs.getConf());
    conf.setBoolean(ENABLE_BYTES_READ_COUNTER, false);
    conf.setBoolean(ENABLE_BYTES_TOTAL_COUNTER, false);
    conf.setBoolean(ENABLE_TIME_READ_COUNTER, false);
    // keep footers in a map to avoid re-reading them
    Map<String, ParquetMetadata> footers = Maps.newHashMap();
    List<RecordReader> readers = Lists.newArrayList();
    List<Map<String, String>> implicitColumns = Lists.newArrayList();
    Map<String, String> mapWithMaxColumns = Maps.newLinkedHashMap();
    for (RowGroupReadEntry e : rowGroupScan.getRowGroupReadEntries()) {
        /*
      We keep a map from file names to footers so each footer is read only once,
      rather than once per row group in the file.
      TODO - to avoid reading the footer yet again in the Parquet record reader (it is
      read earlier in the ParquetStorageEngine), we should add more information to the
      RowGroupInfo, populated on the first read, so the reader gets all of the file
      metadata it needs. Those fields would then be passed to the constructors below.
      */
        try {
            Stopwatch timer = Stopwatch.createUnstarted();
            if (!footers.containsKey(e.getPath())) {
                timer.start();
                ParquetMetadata footer = ParquetFileReader.readFooter(conf, new Path(e.getPath()));
                long timeToRead = timer.elapsed(TimeUnit.MICROSECONDS);
                logger.trace("ParquetTrace,Read Footer,{},{},{},{},{},{},{}", "", e.getPath(), "", 0, 0, 0, timeToRead);
                footers.put(e.getPath(), footer);
            }
            boolean autoCorrectCorruptDates = rowGroupScan.formatConfig.autoCorrectCorruptDates;
            ParquetReaderUtility.DateCorruptionStatus containsCorruptDates = ParquetReaderUtility.detectCorruptDates(footers.get(e.getPath()), rowGroupScan.getColumns(), autoCorrectCorruptDates);
            if (logger.isDebugEnabled()) {
                logger.debug(containsCorruptDates.toString());
            }
            if (!context.getOptions().getOption(ExecConstants.PARQUET_NEW_RECORD_READER).bool_val && !isComplex(footers.get(e.getPath()))) {
                readers.add(new ParquetRecordReader(context, e.getPath(), e.getRowGroupIndex(), e.getNumRecordsToRead(), fs, CodecFactory.createDirectCodecFactory(fs.getConf(), new ParquetDirectByteBufferAllocator(oContext.getAllocator()), 0), footers.get(e.getPath()), rowGroupScan.getColumns(), containsCorruptDates));
            } else {
                ParquetMetadata footer = footers.get(e.getPath());
                readers.add(new DrillParquetReader(context, footer, e, columnExplorer.getTableColumns(), fs, containsCorruptDates));
            }
            Map<String, String> implicitValues = columnExplorer.populateImplicitColumns(e, rowGroupScan.getSelectionRoot());
            implicitColumns.add(implicitValues);
            if (implicitValues.size() > mapWithMaxColumns.size()) {
                mapWithMaxColumns = implicitValues;
            }
        } catch (IOException e1) {
            throw new ExecutionSetupException(e1);
        }
    }
    // all readers should have the same number of implicit columns, add missing ones with value null
    Map<String, String> diff = Maps.transformValues(mapWithMaxColumns, Functions.constant((String) null));
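    // diff maps every key of the largest map to null; entriesOnlyOnRight() of the
    // difference below yields exactly the keys a reader's map lacks, added as nulls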
    for (Map<String, String> map : implicitColumns) {
        map.putAll(Maps.difference(map, diff).entriesOnlyOnRight());
    }
    return new ScanBatch(rowGroupScan, context, oContext, readers.iterator(), implicitColumns);
}
Also used : ImplicitColumnExplorer (org.apache.drill.exec.store.ImplicitColumnExplorer), ExecutionSetupException (org.apache.drill.common.exceptions.ExecutionSetupException), Configuration (org.apache.hadoop.conf.Configuration), ParquetMetadata (org.apache.parquet.hadoop.metadata.ParquetMetadata), ParquetRecordReader (org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader), RecordReader (org.apache.drill.exec.store.RecordReader), Stopwatch (com.google.common.base.Stopwatch), DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem), OperatorContext (org.apache.drill.exec.ops.OperatorContext), ScanBatch (org.apache.drill.exec.physical.impl.ScanBatch), Path (org.apache.hadoop.fs.Path), DrillParquetReader (org.apache.drill.exec.store.parquet2.DrillParquetReader), IOException (java.io.IOException), Map (java.util.Map)
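The footer cache is the part worth isolating: each Parquet footer is read once per file and then reused for every row group in that file. A stripped-down sketch of just that pattern, using the same names as the method above (error handling elided; readFooter throws IOException):

Map<String, ParquetMetadata> footers = Maps.newHashMap();
for (RowGroupReadEntry entry : rowGroupScan.getRowGroupReadEntries()) {
    if (!footers.containsKey(entry.getPath())) {
        // first row group seen for this file: pay the footer-read cost once
        footers.put(entry.getPath(), ParquetFileReader.readFooter(conf, new Path(entry.getPath())));
    }
    ParquetMetadata footer = footers.get(entry.getPath());
    // ... one reader per row group is then built from the cached footer
}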

Aggregations

DrillFileSystem (org.apache.drill.exec.store.dfs.DrillFileSystem): 10 usages
Path (org.apache.hadoop.fs.Path): 5 usages
IOException (java.io.IOException): 3 usages
Configuration (org.apache.hadoop.conf.Configuration): 3 usages
Map (java.util.Map): 2 usages
SchemaPlus (org.apache.calcite.schema.SchemaPlus): 2 usages
ExecutionSetupException (org.apache.drill.common.exceptions.ExecutionSetupException): 2 usages
OperatorContext (org.apache.drill.exec.ops.OperatorContext): 2 usages
ScanBatch (org.apache.drill.exec.physical.impl.ScanBatch): 2 usages
ImplicitColumnExplorer (org.apache.drill.exec.store.ImplicitColumnExplorer): 2 usages
RecordReader (org.apache.drill.exec.store.RecordReader): 2 usages
Stopwatch (com.google.common.base.Stopwatch): 1 usage
ArrayList (java.util.ArrayList): 1 usage
Table (org.apache.calcite.schema.Table): 1 usage
SqlIdentifier (org.apache.calcite.sql.SqlIdentifier): 1 usage
RelConversionException (org.apache.calcite.tools.RelConversionException): 1 usage
ValidationException (org.apache.calcite.tools.ValidationException): 1 usage
FormatPluginConfig (org.apache.drill.common.logical.FormatPluginConfig): 1 usage
DrillTable (org.apache.drill.exec.planner.logical.DrillTable): 1 usage
SqlRefreshMetadata (org.apache.drill.exec.planner.sql.parser.SqlRefreshMetadata): 1 usage