Example 11 with TrinoException

use of io.trino.spi.TrinoException in project trino by trinodb.

The class ChoicesScalarFunctionImplementation, method getScalarFunctionInvoker.

@Override
public FunctionInvoker getScalarFunctionInvoker(InvocationConvention invocationConvention) {
    List<ScalarImplementationChoice> choices = new ArrayList<>();
    for (ScalarImplementationChoice choice : this.choices) {
        InvocationConvention callingConvention = choice.getInvocationConvention();
        if (functionAdapter.canAdapt(callingConvention, invocationConvention)) {
            choices.add(choice);
        }
    }
    if (choices.isEmpty()) {
        throw new TrinoException(FUNCTION_NOT_FOUND, format("Function implementation for (%s) cannot be adapted to convention (%s)", boundSignature, invocationConvention));
    }
    ScalarImplementationChoice bestChoice = Collections.max(choices, comparingInt(ScalarImplementationChoice::getScore));
    MethodHandle methodHandle = functionAdapter.adapt(bestChoice.getMethodHandle(), boundSignature.getArgumentTypes(), bestChoice.getInvocationConvention(), invocationConvention);
    return new FunctionInvoker(methodHandle, bestChoice.getInstanceFactory(), bestChoice.getLambdaInterfaces());
}
Also used : ArrayList(java.util.ArrayList) InvocationConvention(io.trino.spi.function.InvocationConvention) TrinoException(io.trino.spi.TrinoException) FunctionInvoker(io.trino.metadata.FunctionInvoker) MethodHandle(java.lang.invoke.MethodHandle)
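The method above throws a TrinoException with FUNCTION_NOT_FOUND when no implementation choice can be adapted to the requested invocation convention. A minimal, hypothetical caller-side sketch (InvokerResolution and isMissingImplementation are illustrative names, not Trino API) showing how such a failure can be told apart from other errors by its structured error code:

import io.trino.spi.ErrorCode;
import io.trino.spi.TrinoException;

import static io.trino.spi.StandardErrorCode.FUNCTION_NOT_FOUND;

// Illustrative helper, not part of Trino: distinguish "no adaptable
// implementation" failures from other TrinoExceptions by error code.
final class InvokerResolution {
    private InvokerResolution() {}

    static boolean isMissingImplementation(TrinoException e) {
        ErrorCode errorCode = e.getErrorCode();
        return FUNCTION_NOT_FOUND.toErrorCode().equals(errorCode);
    }
}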

Example 12 with TrinoException

use of io.trino.spi.TrinoException in project trino by trinodb.

The class RequestErrorTracker, method requestFailed.

public void requestFailed(Throwable reason) throws TrinoException {
    // cancellation is not a failure
    if (reason instanceof CancellationException) {
        return;
    }
    if (reason instanceof RejectedExecutionException) {
        throw new TrinoException(REMOTE_TASK_ERROR, reason);
    }
    // log failure message
    if (isExpectedError(reason)) {
        // don't print a stack trace for known errors
        log.warn("Error %s %s: %s: %s", jobDescription, taskId, reason.getMessage(), taskUri);
    } else {
        log.warn(reason, "Error %s %s: %s", jobDescription, taskId, taskUri);
    }
    // remember the first 10 errors
    if (errorsSinceLastSuccess.size() < 10) {
        errorsSinceLastSuccess.add(reason);
    }
    // fail the task, if we have more than X failures in a row and more than Y seconds have passed since the last request
    if (backoff.failure()) {
        // it is weird to mark the task failed locally and then cancel the remote task, but there is no way to tell a remote task that it is failed
        TrinoException exception = new TrinoTransportException(TOO_MANY_REQUESTS_FAILED, fromUri(taskUri), format("%s (%s %s - %s failures, failure duration %s, total failed request time %s)", WORKER_NODE_ERROR, jobDescription, taskUri, backoff.getFailureCount(), backoff.getFailureDuration().convertTo(SECONDS), backoff.getFailureRequestTimeTotal().convertTo(SECONDS)));
        errorsSinceLastSuccess.forEach(exception::addSuppressed);
        throw exception;
    }
}
Also used : TrinoTransportException(io.trino.spi.TrinoTransportException) CancellationException(java.util.concurrent.CancellationException) TrinoException(io.trino.spi.TrinoException) RejectedExecutionException(java.util.concurrent.RejectedExecutionException)
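requestFailed keeps at most the first ten failures since the last success and attaches them as suppressed exceptions when the backoff finally gives up. A standalone sketch of that pattern, with an illustrative class name and GENERIC_INTERNAL_ERROR standing in for the transport-specific error code:

import io.trino.spi.TrinoException;

import java.util.ArrayList;
import java.util.List;

import static io.trino.spi.StandardErrorCode.GENERIC_INTERNAL_ERROR;

// Illustrative only: record a bounded history of failures and attach it to the
// final TrinoException as suppressed throwables, as the tracker above does.
final class BoundedErrorHistory {
    private static final int MAX_ERRORS = 10;
    private final List<Throwable> errors = new ArrayList<>();

    void record(Throwable error) {
        if (errors.size() < MAX_ERRORS) {
            errors.add(error);
        }
    }

    TrinoException fail(String message) {
        TrinoException exception = new TrinoException(GENERIC_INTERNAL_ERROR, message);
        errors.forEach(exception::addSuppressed);
        return exception;
    }
}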

Example 13 with TrinoException

use of io.trino.spi.TrinoException in project trino by trinodb.

The class ExtractSpatialJoins, method loadKdbTree.

private static KdbTree loadKdbTree(String tableName, Session session, Metadata metadata, SplitManager splitManager, PageSourceManager pageSourceManager) {
    QualifiedObjectName name = toQualifiedObjectName(tableName, session.getCatalog().get(), session.getSchema().get());
    TableHandle tableHandle = metadata.getTableHandle(session, name).orElseThrow(() -> new TrinoException(INVALID_SPATIAL_PARTITIONING, format("Table not found: %s", name)));
    Map<String, ColumnHandle> columnHandles = metadata.getColumnHandles(session, tableHandle);
    List<ColumnHandle> visibleColumnHandles = columnHandles.values().stream().filter(handle -> !metadata.getColumnMetadata(session, tableHandle, handle).isHidden()).collect(toImmutableList());
    checkSpatialPartitioningTable(visibleColumnHandles.size() == 1, "Expected single column for table %s, but found %s columns", name, columnHandles.size());
    ColumnHandle kdbTreeColumn = Iterables.getOnlyElement(visibleColumnHandles);
    Optional<KdbTree> kdbTree = Optional.empty();
    try (SplitSource splitSource = splitManager.getSplits(session, tableHandle, UNGROUPED_SCHEDULING, EMPTY, alwaysTrue())) {
        while (!Thread.currentThread().isInterrupted()) {
            SplitBatch splitBatch = getFutureValue(splitSource.getNextBatch(NOT_PARTITIONED, Lifespan.taskWide(), 1000));
            List<Split> splits = splitBatch.getSplits();
            for (Split split : splits) {
                try (ConnectorPageSource pageSource = pageSourceManager.createPageSource(session, split, tableHandle, ImmutableList.of(kdbTreeColumn), DynamicFilter.EMPTY)) {
                    do {
                        getFutureValue(pageSource.isBlocked());
                        Page page = pageSource.getNextPage();
                        if (page != null && page.getPositionCount() > 0) {
                            checkSpatialPartitioningTable(kdbTree.isEmpty(), "Expected exactly one row for table %s, but found more", name);
                            checkSpatialPartitioningTable(page.getPositionCount() == 1, "Expected exactly one row for table %s, but found %s rows", name, page.getPositionCount());
                            String kdbTreeJson = VARCHAR.getSlice(page.getBlock(0), 0).toStringUtf8();
                            try {
                                kdbTree = Optional.of(KdbTreeUtils.fromJson(kdbTreeJson));
                            } catch (IllegalArgumentException e) {
                                checkSpatialPartitioningTable(false, "Invalid JSON string for KDB tree: %s", e.getMessage());
                            }
                        }
                    } while (!pageSource.isFinished());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
            if (splitBatch.isLastBatch()) {
                break;
            }
        }
    }
    checkSpatialPartitioningTable(kdbTree.isPresent(), "Expected exactly one row for table %s, but got none", name);
    return kdbTree.get();
}
Also used : ColumnHandle(io.trino.spi.connector.ColumnHandle) KdbTree(io.trino.geospatial.KdbTree) Page(io.trino.spi.Page) UncheckedIOException(java.io.UncheckedIOException) IOException(java.io.IOException) ConnectorPageSource(io.trino.spi.connector.ConnectorPageSource) QualifiedObjectName(io.trino.metadata.QualifiedObjectName) SplitBatch(io.trino.split.SplitSource.SplitBatch) TrinoException(io.trino.spi.TrinoException) TableHandle(io.trino.metadata.TableHandle) SplitSource(io.trino.split.SplitSource) Split(io.trino.metadata.Split)
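loadKdbTree relies on a checkSpatialPartitioningTable helper that is not shown in the snippet. A plausible sketch of such a helper, assuming it simply wraps a failed condition in a TrinoException with INVALID_SPATIAL_PARTITIONING (static imports of StandardErrorCode.INVALID_SPATIAL_PARTITIONING and String.format assumed):

// Sketch of the validation style used above: when the condition does not hold,
// report a spatial partitioning problem with a formatted message.
private static void checkSpatialPartitioningTable(boolean condition, String message, Object... arguments) {
    if (!condition) {
        throw new TrinoException(INVALID_SPATIAL_PARTITIONING, format(message, arguments));
    }
}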

Example 14 with TrinoException

use of io.trino.spi.TrinoException in project trino by trinodb.

The class SqlQueryExecution, method analyze.

private static Analysis analyze(PreparedQuery preparedQuery, QueryStateMachine stateMachine, WarningCollector warningCollector, AnalyzerFactory analyzerFactory) {
    stateMachine.beginAnalysis();
    requireNonNull(preparedQuery, "preparedQuery is null");
    Analyzer analyzer = analyzerFactory.createAnalyzer(stateMachine.getSession(), preparedQuery.getParameters(), parameterExtractor(preparedQuery.getStatement(), preparedQuery.getParameters()), warningCollector);
    Analysis analysis;
    try {
        analysis = analyzer.analyze(preparedQuery.getStatement());
    } catch (StackOverflowError e) {
        throw new TrinoException(STACK_OVERFLOW, "statement is too large (stack overflow during analysis)", e);
    }
    stateMachine.setUpdateType(analysis.getUpdateType());
    stateMachine.setReferencedTables(analysis.getReferencedTables());
    stateMachine.setRoutines(analysis.getRoutines());
    stateMachine.endAnalysis();
    return analysis;
}
Also used : Analysis(io.trino.sql.analyzer.Analysis) TrinoException(io.trino.spi.TrinoException) Analyzer(io.trino.sql.analyzer.Analyzer) TypeAnalyzer(io.trino.sql.planner.TypeAnalyzer)
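analyze translates a StackOverflowError caused by an extremely deep statement into a TrinoException with the STACK_OVERFLOW error code, so the failure surfaces as a regular query error. A generic sketch of that guard; SafeAnalysis and runGuarded are illustrative names, not Trino API:

import io.trino.spi.TrinoException;

import java.util.function.Supplier;

import static io.trino.spi.StandardErrorCode.STACK_OVERFLOW;

// Illustrative guard: run a potentially deeply recursive task and convert a
// StackOverflowError into a TrinoException, mirroring the pattern above.
final class SafeAnalysis {
    private SafeAnalysis() {}

    static <T> T runGuarded(Supplier<T> analysisTask) {
        try {
            return analysisTask.get();
        }
        catch (StackOverflowError e) {
            throw new TrinoException(STACK_OVERFLOW, "statement is too large (stack overflow during analysis)", e);
        }
    }
}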

Example 15 with TrinoException

use of io.trino.spi.TrinoException in project trino by trinodb.

The class SqlTaskManager, method failAbandonedTasks.

private void failAbandonedTasks() {
    DateTime now = DateTime.now();
    DateTime oldestAllowedHeartbeat = now.minus(clientTimeout.toMillis());
    for (SqlTask sqlTask : tasks.asMap().values()) {
        try {
            TaskInfo taskInfo = sqlTask.getTaskInfo();
            TaskStatus taskStatus = taskInfo.getTaskStatus();
            if (taskStatus.getState().isDone()) {
                continue;
            }
            DateTime lastHeartbeat = taskInfo.getLastHeartbeat();
            if (lastHeartbeat != null && lastHeartbeat.isBefore(oldestAllowedHeartbeat)) {
                log.info("Failing abandoned task %s", taskStatus.getTaskId());
                sqlTask.failed(new TrinoException(ABANDONED_TASK, format("Task %s has not been accessed since %s: currentTime %s", taskStatus.getTaskId(), lastHeartbeat, now)));
            }
        } catch (RuntimeException e) {
            log.warn(e, "Error while inspecting age of task %s", sqlTask.getTaskId());
        }
    }
}
Also used : SqlTask.createSqlTask(io.trino.execution.SqlTask.createSqlTask) TrinoException(io.trino.spi.TrinoException) DateTime(org.joda.time.DateTime)
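failAbandonedTasks treats a task as abandoned when its last heartbeat is older than now minus the client timeout. The staleness test pulled out on its own, as an illustrative helper (isAbandoned is not a Trino method; an org.joda.time.DateTime import is assumed):

// Illustrative only: a task counts as abandoned when its last heartbeat
// predates the oldest allowed heartbeat (now minus the client timeout).
static boolean isAbandoned(DateTime lastHeartbeat, DateTime now, long clientTimeoutMillis) {
    DateTime oldestAllowedHeartbeat = now.minus(clientTimeoutMillis);
    return lastHeartbeat != null && lastHeartbeat.isBefore(oldestAllowedHeartbeat);
}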

Aggregations

Usage counts across the indexed sources (TrinoException itself, followed by the classes most often used with it):

TrinoException (io.trino.spi.TrinoException): 623
IOException (java.io.IOException): 151
ImmutableList (com.google.common.collect.ImmutableList): 105
List (java.util.List): 100
Type (io.trino.spi.type.Type): 93
ImmutableList.toImmutableList (com.google.common.collect.ImmutableList.toImmutableList): 90
SchemaTableName (io.trino.spi.connector.SchemaTableName): 83
Path (org.apache.hadoop.fs.Path): 83
Optional (java.util.Optional): 79
ArrayList (java.util.ArrayList): 77
Map (java.util.Map): 76
ImmutableMap (com.google.common.collect.ImmutableMap): 70
Objects.requireNonNull (java.util.Objects.requireNonNull): 69
ConnectorSession (io.trino.spi.connector.ConnectorSession): 63
TableNotFoundException (io.trino.spi.connector.TableNotFoundException): 62
ImmutableSet (com.google.common.collect.ImmutableSet): 56
VarcharType (io.trino.spi.type.VarcharType): 54
Set (java.util.Set): 54
Slice (io.airlift.slice.Slice): 53
Table (io.trino.plugin.hive.metastore.Table): 52
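Across these examples the recurring pattern is constructing a TrinoException from a StandardErrorCode plus a message and, optionally, a cause, then letting callers inspect it through getErrorCode(). A minimal standalone sketch; the NOT_SUPPORTED error code and the message text are illustrative:

import io.trino.spi.ErrorCode;
import io.trino.spi.TrinoException;

import static io.trino.spi.StandardErrorCode.NOT_SUPPORTED;

public final class TrinoExceptionDemo {
    private TrinoExceptionDemo() {}

    public static void main(String[] args) {
        try {
            throw new TrinoException(NOT_SUPPORTED, "This connector does not support truncating tables");
        }
        catch (TrinoException e) {
            // Every TrinoException carries a structured error code alongside its message
            ErrorCode errorCode = e.getErrorCode();
            System.out.printf("%s (%d, %s): %s%n", errorCode.getName(), errorCode.getCode(), errorCode.getType(), e.getMessage());
        }
    }
}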