Example 6 with Schemas

Use of io.crate.metadata.Schemas in project crate by crate.

The class RestoreSnapshotPlan, method bind:

@VisibleForTesting
public static BoundRestoreSnapshot bind(AnalyzedRestoreSnapshot restoreSnapshot, CoordinatorTxnCtx txnCtx, NodeContext nodeCtx, Row parameters, SubQueryResults subQueryResults, Schemas schemas) {
    Function<? super Symbol, Object> eval = x -> SymbolEvaluator.evaluate(txnCtx, nodeCtx, x, parameters, subQueryResults);
    Settings settings = GenericPropertiesConverter.genericPropertiesToSettings(restoreSnapshot.properties().map(eval), SnapshotSettings.SETTINGS);
    HashSet<BoundRestoreSnapshot.RestoreTableInfo> restoreTables = new HashSet<>(restoreSnapshot.tables().size());
    for (Table<Symbol> table : restoreSnapshot.tables()) {
        var relationName = RelationName.of(table.getName(), txnCtx.sessionContext().searchPath().currentSchema());
        try {
            DocTableInfo docTableInfo = schemas.getTableInfo(relationName, Operation.RESTORE_SNAPSHOT);
            if (table.partitionProperties().isEmpty()) {
                throw new RelationAlreadyExists(relationName);
            }
            var partitionName = toPartitionName(docTableInfo, Lists2.map(table.partitionProperties(), x -> x.map(eval)));
            if (docTableInfo.partitions().contains(partitionName)) {
                throw new PartitionAlreadyExistsException(partitionName);
            }
            restoreTables.add(new BoundRestoreSnapshot.RestoreTableInfo(relationName, partitionName));
        } catch (RelationUnknown | SchemaUnknownException e) {
            if (table.partitionProperties().isEmpty()) {
                restoreTables.add(new BoundRestoreSnapshot.RestoreTableInfo(relationName, null));
            } else {
                var partitionName = toPartitionName(relationName, Lists2.map(table.partitionProperties(), x -> x.map(eval)));
                restoreTables.add(new BoundRestoreSnapshot.RestoreTableInfo(relationName, partitionName));
            }
        }
    }
    return new BoundRestoreSnapshot(
        restoreSnapshot.repository(),
        restoreSnapshot.snapshot(),
        restoreTables,
        restoreSnapshot.includeTables(),
        restoreSnapshot.includeCustomMetadata(),
        restoreSnapshot.customMetadataTypes(),
        restoreSnapshot.includeGlobalSettings(),
        restoreSnapshot.globalSettings(),
        settings);
}
Also used : PartitionAlreadyExistsException(io.crate.exceptions.PartitionAlreadyExistsException) IndexParts(io.crate.metadata.IndexParts) RelationName(io.crate.metadata.RelationName) CompletableFuture(java.util.concurrent.CompletableFuture) Operation(io.crate.metadata.table.Operation) GetSnapshotsRequest(org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest) SnapshotInfo(org.elasticsearch.snapshots.SnapshotInfo) Function(java.util.function.Function) PartitionName(io.crate.metadata.PartitionName) ArrayList(java.util.ArrayList) DependencyCarrier(io.crate.planner.DependencyCarrier) HashSet(java.util.HashSet) WAIT_FOR_COMPLETION(io.crate.analyze.SnapshotSettings.WAIT_FOR_COMPLETION) SymbolEvaluator(io.crate.analyze.SymbolEvaluator) Settings(org.elasticsearch.common.settings.Settings) RestoreSnapshotRequest(org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest) SchemaUnknownException(io.crate.exceptions.SchemaUnknownException) BoundRestoreSnapshot(io.crate.analyze.BoundRestoreSnapshot) IndicesOptions(org.elasticsearch.action.support.IndicesOptions) GenericPropertiesConverter(io.crate.analyze.GenericPropertiesConverter) OneRowActionListener(io.crate.execution.support.OneRowActionListener) FutureActionListener(io.crate.action.FutureActionListener) PartitionPropertiesAnalyzer.toPartitionName(io.crate.analyze.PartitionPropertiesAnalyzer.toPartitionName) GetSnapshotsResponse(org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse) DocTableInfo(io.crate.metadata.doc.DocTableInfo) AnalyzedRestoreSnapshot(io.crate.analyze.AnalyzedRestoreSnapshot) NodeContext(io.crate.metadata.NodeContext) Table(io.crate.sql.tree.Table) Set(java.util.Set) Lists2(io.crate.common.collections.Lists2) SnapshotSettings(io.crate.analyze.SnapshotSettings) RowConsumer(io.crate.data.RowConsumer) List(java.util.List) Row(io.crate.data.Row) RelationAlreadyExists(io.crate.exceptions.RelationAlreadyExists) Symbol(io.crate.expression.symbol.Symbol) PlannerContext(io.crate.planner.PlannerContext) IGNORE_UNAVAILABLE(io.crate.analyze.SnapshotSettings.IGNORE_UNAVAILABLE) Plan(io.crate.planner.Plan) SubQueryResults(io.crate.planner.operators.SubQueryResults) Schemas(io.crate.metadata.Schemas) VisibleForTesting(io.crate.common.annotations.VisibleForTesting) RelationUnknown(io.crate.exceptions.RelationUnknown) TransportGetSnapshotsAction(org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction) Row1(io.crate.data.Row1) CoordinatorTxnCtx(io.crate.metadata.CoordinatorTxnCtx)
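
The Schemas interaction that matters here is the lookup-then-fallback: getTableInfo either returns the existing DocTableInfo or signals absence by throwing RelationUnknown or SchemaUnknownException, and bind() branches on that. Below is a minimal sketch of the same decision in isolation, assuming the Schemas#getTableInfo(RelationName, Operation) overload and the exception types already listed above; the ExistingTableLookup helper is illustrative and not part of CrateDB.

import io.crate.exceptions.RelationUnknown;
import io.crate.exceptions.SchemaUnknownException;
import io.crate.metadata.RelationName;
import io.crate.metadata.Schemas;
import io.crate.metadata.doc.DocTableInfo;
import io.crate.metadata.table.Operation;

// Illustrative helper, not CrateDB source: isolates the existence check used in bind().
final class ExistingTableLookup {

    static DocTableInfo existingTableOrNull(Schemas schemas, RelationName relationName) {
        try {
            // Succeeds only if the relation already exists and permits RESTORE_SNAPSHOT.
            return schemas.getTableInfo(relationName, Operation.RESTORE_SNAPSHOT);
        } catch (RelationUnknown | SchemaUnknownException e) {
            // The relation (or its whole schema) is missing, so the snapshot table
            // can be restored as a new relation, mirroring the catch branch above.
            return null;
        }
    }
}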

Example 7 with Schemas

Use of io.crate.metadata.Schemas in project crate by crate.

The class Node, method close:

// During concurrent close() calls we want to make sure that all of them return after the node has completed its shutdown cycle.
// If not, the hook that is added in Bootstrap#setup() will be useless:
// close() might not be executed, in case another (for example API) call to close() has already set some lifecycles to stopped.
// In this case the process will be terminated even if the first call to close() has not finished yet.
@Override
public synchronized void close() throws IOException {
    synchronized (lifecycle) {
        if (lifecycle.started()) {
            stop();
        }
        if (!lifecycle.moveToClosed()) {
            return;
        }
    }
    logger.info("closing ...");
    List<Closeable> toClose = new ArrayList<>();
    StopWatch stopWatch = new StopWatch("node_close");
    toClose.add(() -> stopWatch.start("node_service"));
    toClose.add(nodeService);
    toClose.add(() -> stopWatch.stop().start("http"));
    toClose.add(injector.getInstance(HttpServerTransport.class));
    toClose.add(() -> stopWatch.stop().start("snapshot_service"));
    toClose.add(injector.getInstance(SnapshotsService.class));
    toClose.add(injector.getInstance(SnapshotShardsService.class));
    toClose.add(() -> stopWatch.stop().start("client"));
    Releasables.close(injector.getInstance(Client.class));
    toClose.add(() -> stopWatch.stop().start("indices_cluster"));
    toClose.add(injector.getInstance(IndicesClusterStateService.class));
    toClose.add(() -> stopWatch.stop().start("indices"));
    toClose.add(injector.getInstance(IndicesService.class));
    // close filter/fielddata caches after indices
    toClose.add(injector.getInstance(IndicesStore.class));
    toClose.add(injector.getInstance(PeerRecoverySourceService.class));
    toClose.add(() -> stopWatch.stop().start("routing"));
    toClose.add(() -> stopWatch.stop().start("cluster"));
    toClose.add(injector.getInstance(ClusterService.class));
    toClose.add(() -> stopWatch.stop().start("node_connections_service"));
    toClose.add(injector.getInstance(NodeConnectionsService.class));
    toClose.add(() -> stopWatch.stop().start("discovery"));
    toClose.add(injector.getInstance(Discovery.class));
    toClose.add(() -> stopWatch.stop().start("monitor"));
    toClose.add(nodeService.getMonitorService());
    toClose.add(() -> stopWatch.stop().start("gateway"));
    toClose.add(injector.getInstance(GatewayService.class));
    toClose.add(() -> stopWatch.stop().start("transport"));
    toClose.add(injector.getInstance(TransportService.class));
    toClose.add(() -> stopWatch.stop().start("gateway_meta_state"));
    toClose.add(injector.getInstance(GatewayMetaState.class));
    toClose.add(() -> stopWatch.stop().start("node_environment"));
    toClose.add(injector.getInstance(NodeEnvironment.class));
    toClose.add(() -> stopWatch.stop().start("decommission_service"));
    toClose.add(injector.getInstance(DecommissioningService.class));
    toClose.add(() -> stopWatch.stop().start("node_disconnect_job_monitor_service"));
    toClose.add(injector.getInstance(NodeDisconnectJobMonitorService.class));
    toClose.add(() -> stopWatch.stop().start("jobs_log_service"));
    toClose.add(injector.getInstance(JobsLogService.class));
    toClose.add(() -> stopWatch.stop().start("postgres_netty"));
    toClose.add(injector.getInstance(PostgresNetty.class));
    toClose.add(() -> stopWatch.stop().start("tasks_service"));
    toClose.add(injector.getInstance(TasksService.class));
    toClose.add(() -> stopWatch.stop().start("schemas"));
    toClose.add(injector.getInstance(Schemas.class));
    toClose.add(() -> stopWatch.stop().start("array_mapper_service"));
    toClose.add(injector.getInstance(ArrayMapperService.class));
    toClose.add(() -> stopWatch.stop().start("dangling_artifacts_service"));
    toClose.add(injector.getInstance(DanglingArtifactsService.class));
    toClose.add(() -> stopWatch.stop().start("ssl_context_provider_service"));
    toClose.add(injector.getInstance(SslContextProviderService.class));
    toClose.add(() -> stopWatch.stop().start("blob_service"));
    toClose.add(injector.getInstance(BlobService.class));
    for (LifecycleComponent plugin : pluginLifecycleComponents) {
        toClose.add(() -> stopWatch.stop().start("plugin(" + plugin.getClass().getName() + ")"));
        toClose.add(plugin);
    }
    toClose.addAll(pluginsService.filterPlugins(Plugin.class));
    toClose.add(() -> stopWatch.stop().start("thread_pool"));
    toClose.add(() -> injector.getInstance(ThreadPool.class).shutdown());
    // Don't call shutdownNow here, it might break ongoing operations on Lucene indices.
    // See https://issues.apache.org/jira/browse/LUCENE-7248. We call shutdownNow in
    // awaitClose if the node doesn't finish closing within the specified time.
    toClose.add(() -> stopWatch.stop());
    if (logger.isTraceEnabled()) {
        logger.trace("Close times for each service:\n{}", stopWatch.prettyPrint());
    }
    IOUtils.close(toClose);
    logger.info("closed");
}
Also used : SnapshotsService(org.elasticsearch.snapshots.SnapshotsService) SnapshotShardsService(org.elasticsearch.snapshots.SnapshotShardsService) NodeConnectionsService(org.elasticsearch.cluster.NodeConnectionsService) NodeEnvironment(org.elasticsearch.env.NodeEnvironment) Closeable(java.io.Closeable) IndicesStore(org.elasticsearch.indices.store.IndicesStore) SslContextProviderService(io.crate.protocols.ssl.SslContextProviderService) ArrayList(java.util.ArrayList) TasksService(io.crate.execution.jobs.TasksService) HttpServerTransport(org.elasticsearch.http.HttpServerTransport) DecommissioningService(io.crate.cluster.gracefulstop.DecommissioningService) GatewayMetaState(org.elasticsearch.gateway.GatewayMetaState) PostgresNetty(io.crate.protocols.postgres.PostgresNetty) IndicesClusterStateService(org.elasticsearch.indices.cluster.IndicesClusterStateService) LifecycleComponent(org.elasticsearch.common.component.LifecycleComponent) PeerRecoverySourceService(org.elasticsearch.indices.recovery.PeerRecoverySourceService) Client(org.elasticsearch.client.Client) NodeClient(org.elasticsearch.client.node.NodeClient) DanglingArtifactsService(io.crate.metadata.DanglingArtifactsService) JobsLogService(io.crate.execution.engine.collect.stats.JobsLogService) Discovery(org.elasticsearch.discovery.Discovery) IndicesService(org.elasticsearch.indices.IndicesService) Schemas(io.crate.metadata.Schemas) StopWatch(org.elasticsearch.common.StopWatch) GatewayService(org.elasticsearch.gateway.GatewayService) ClusterService(org.elasticsearch.cluster.service.ClusterService) TransportService(org.elasticsearch.transport.TransportService) NodeDisconnectJobMonitorService(io.crate.execution.jobs.transport.NodeDisconnectJobMonitorService) BlobService(io.crate.blob.BlobService) ArrayMapperService(io.crate.lucene.ArrayMapperService) ClusterPlugin(org.elasticsearch.plugins.ClusterPlugin) IndexStorePlugin(org.elasticsearch.plugins.IndexStorePlugin) RepositoryPlugin(org.elasticsearch.plugins.RepositoryPlugin) NetworkPlugin(org.elasticsearch.plugins.NetworkPlugin) Plugin(org.elasticsearch.plugins.Plugin) AnalysisPlugin(org.elasticsearch.plugins.AnalysisPlugin) EnginePlugin(org.elasticsearch.plugins.EnginePlugin) CopyPlugin(io.crate.plugin.CopyPlugin) DiscoveryPlugin(org.elasticsearch.plugins.DiscoveryPlugin) MapperPlugin(org.elasticsearch.plugins.MapperPlugin) ActionPlugin(org.elasticsearch.plugins.ActionPlugin)
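
A detail worth noting in close() is that the StopWatch markers are themselves added to toClose as Closeable lambdas, so the per-service timing runs in list order when IOUtils.close walks the list. Below is a self-contained sketch of that pattern using only JDK types; the TimedCloseExample class and its nanoTime bookkeeping are stand-ins for Elasticsearch's StopWatch and IOUtils, not the real implementations.

import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the Node#close() pattern: timing markers and real
// resources share one List<Closeable> and are closed strictly in insertion order.
final class TimedCloseExample {

    public static void main(String[] args) throws IOException {
        List<Closeable> toClose = new ArrayList<>();
        long[] lastMark = {System.nanoTime()};

        toClose.add(() -> System.out.println("closing service A"));   // stand-in service
        toClose.add(() -> mark("service_a", lastMark));               // timing marker
        toClose.add(() -> System.out.println("closing service B"));
        toClose.add(() -> mark("service_b", lastMark));

        closeAll(toClose);  // stand-in for IOUtils.close(toClose)
    }

    private static void mark(String name, long[] lastMark) {
        long now = System.nanoTime();
        System.out.printf("%s closed after %d us%n", name, (now - lastMark[0]) / 1_000);
        lastMark[0] = now;
    }

    // Close everything, collecting failures so one broken Closeable does not
    // prevent the remaining ones from being closed.
    private static void closeAll(List<Closeable> toClose) throws IOException {
        IOException failure = null;
        for (Closeable c : toClose) {
            try {
                c.close();
            } catch (IOException e) {
                if (failure == null) {
                    failure = e;
                } else {
                    failure.addSuppressed(e);
                }
            }
        }
        if (failure != null) {
            throw failure;
        }
    }
}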

Example 8 with Schemas

Use of io.crate.metadata.Schemas in project crate by crate.

The class InternalCountOperationTest, method testCount:

@Test
public void testCount() throws Exception {
    execute("create table t (name string) clustered into 1 shards with (number_of_replicas = 0)");
    ensureYellow();
    execute("insert into t (name) values ('Marvin'), ('Arthur'), ('Trillian')");
    execute("refresh table t");
    CountOperation countOperation = internalCluster().getDataNodeInstance(CountOperation.class);
    ClusterService clusterService = internalCluster().getDataNodeInstance(ClusterService.class);
    CoordinatorTxnCtx txnCtx = CoordinatorTxnCtx.systemTransactionContext();
    Metadata metadata = clusterService.state().getMetadata();
    Index index = metadata.index(getFqn("t")).getIndex();
    IntArrayList shards = new IntArrayList(1);
    shards.add(0);
    Map<String, IntIndexedContainer> indexShards = Map.of(index.getName(), shards);
    {
        CompletableFuture<Long> count = countOperation.count(txnCtx, indexShards, Literal.BOOLEAN_TRUE);
        assertThat(count.get(5, TimeUnit.SECONDS), is(3L));
    }
    Schemas schemas = internalCluster().getInstance(Schemas.class);
    TableInfo tableInfo = schemas.getTableInfo(new RelationName(sqlExecutor.getCurrentSchema(), "t"));
    TableRelation tableRelation = new TableRelation(tableInfo);
    Map<RelationName, AnalyzedRelation> tableSources = Map.of(tableInfo.ident(), tableRelation);
    SqlExpressions sqlExpressions = new SqlExpressions(tableSources, tableRelation);
    Symbol filter = sqlExpressions.normalize(sqlExpressions.asSymbol("name = 'Marvin'"));
    {
        CompletableFuture<Long> count = countOperation.count(txnCtx, indexShards, filter);
        assertThat(count.get(5, TimeUnit.SECONDS), is(1L));
    }
}
Also used : CoordinatorTxnCtx(io.crate.metadata.CoordinatorTxnCtx) Symbol(io.crate.expression.symbol.Symbol) Metadata(org.elasticsearch.cluster.metadata.Metadata) Index(org.elasticsearch.index.Index) IntIndexedContainer(com.carrotsearch.hppc.IntIndexedContainer) Schemas(io.crate.metadata.Schemas) AnalyzedRelation(io.crate.analyze.relations.AnalyzedRelation) TableRelation(io.crate.analyze.relations.TableRelation) CompletableFuture(java.util.concurrent.CompletableFuture) ClusterService(org.elasticsearch.cluster.service.ClusterService) RelationName(io.crate.metadata.RelationName) TableInfo(io.crate.metadata.table.TableInfo) IntArrayList(com.carrotsearch.hppc.IntArrayList) SqlExpressions(io.crate.testing.SqlExpressions) Test(org.junit.Test)

Example 9 with Schemas

Use of io.crate.metadata.Schemas in project crate by crate.

The class HandlerSideLevelCollectTest, method testClusterLevel:

@Test
public void testClusterLevel() throws Exception {
    Schemas schemas = internalCluster().getInstance(Schemas.class);
    TableInfo tableInfo = schemas.getTableInfo(new RelationName("sys", "cluster"));
    Routing routing = tableInfo.getRouting(clusterService().state(), routingProvider, WhereClause.MATCH_ALL, RoutingProvider.ShardSelection.ANY, SessionContext.systemSessionContext());
    Reference clusterNameRef = new Reference(new ReferenceIdent(SysClusterTableInfo.IDENT, new ColumnIdent("name")), RowGranularity.CLUSTER, DataTypes.STRING, 1, null);
    RoutedCollectPhase collectNode = collectNode(routing, List.of(clusterNameRef), RowGranularity.CLUSTER);
    Bucket result = collect(collectNode);
    assertThat(result.size(), is(1));
    assertThat(((String) result.iterator().next().get(0)), Matchers.startsWith("SUITE-"));
}
Also used : ColumnIdent(io.crate.metadata.ColumnIdent) Bucket(io.crate.data.Bucket) CollectionBucket(io.crate.data.CollectionBucket) Reference(io.crate.metadata.Reference) RelationName(io.crate.metadata.RelationName) Routing(io.crate.metadata.Routing) TableInfo(io.crate.metadata.table.TableInfo) SysClusterTableInfo(io.crate.metadata.sys.SysClusterTableInfo) Schemas(io.crate.metadata.Schemas) ReferenceIdent(io.crate.metadata.ReferenceIdent) RoutedCollectPhase(io.crate.execution.dsl.phases.RoutedCollectPhase) Test(org.junit.Test)
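
Both integration tests above resolve table metadata the same way: fetch the node's Schemas instance from the test cluster and look the relation up by RelationName; the only difference is whether the name targets a user table in the current schema or a virtual sys table. Below is a condensed sketch of the two lookups, assuming the single-argument Schemas#getTableInfo(RelationName) overload used in both tests; the wrapper class is illustrative.

import io.crate.metadata.RelationName;
import io.crate.metadata.Schemas;
import io.crate.metadata.table.TableInfo;

// Illustrative wrapper, not CrateDB source: the two lookup shapes used in the tests above.
final class TableInfoLookups {

    // User table: the schema comes from the session's current schema, e.g. doc.t.
    static TableInfo userTable(Schemas schemas, String currentSchema) {
        return schemas.getTableInfo(new RelationName(currentSchema, "t"));
    }

    // Virtual system table: fixed "sys" schema, e.g. sys.cluster.
    static TableInfo sysClusterTable(Schemas schemas) {
        return schemas.getTableInfo(new RelationName("sys", "cluster"));
    }
}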

Example 10 with Schemas

Use of io.crate.metadata.Schemas in project crate by crate.

The class CreateSnapshotPlan, method createRequest:

@VisibleForTesting
public static CreateSnapshotRequest createRequest(AnalyzedCreateSnapshot createSnapshot, CoordinatorTxnCtx txnCtx, NodeContext nodeCtx, Row parameters, SubQueryResults subQueryResults, Schemas schemas) {
    Function<? super Symbol, Object> eval = x -> SymbolEvaluator.evaluate(txnCtx, nodeCtx, x, parameters, subQueryResults);
    Settings settings = GenericPropertiesConverter.genericPropertiesToSettings(createSnapshot.properties().map(eval), SnapshotSettings.SETTINGS);
    boolean ignoreUnavailable = IGNORE_UNAVAILABLE.get(settings);
    final HashSet<String> snapshotIndices;
    final HashSet<String> templates = new HashSet<>();
    if (createSnapshot.tables().isEmpty()) {
        for (SchemaInfo schemaInfo : schemas) {
            for (TableInfo tableInfo : schemaInfo.getTables()) {
                // only check for user generated tables
                if (tableInfo instanceof DocTableInfo) {
                    Operation.blockedRaiseException(tableInfo, Operation.READ);
                }
            }
        }
        snapshotIndices = new HashSet<>(AnalyzedCreateSnapshot.ALL_INDICES);
    } else {
        snapshotIndices = new HashSet<>(createSnapshot.tables().size());
        for (Table<Symbol> table : createSnapshot.tables()) {
            DocTableInfo docTableInfo;
            try {
                docTableInfo = (DocTableInfo) schemas.resolveTableInfo(table.getName(), Operation.CREATE_SNAPSHOT, txnCtx.sessionContext().sessionUser(), txnCtx.sessionContext().searchPath());
            } catch (Exception e) {
                if (ignoreUnavailable && e instanceof ResourceUnknownException) {
                    LOGGER.info("Ignore unknown relation '{}' for the '{}' snapshot'", table.getName(), createSnapshot.snapshot());
                    continue;
                } else {
                    throw e;
                }
            }
            if (docTableInfo.isPartitioned()) {
                templates.add(PartitionName.templateName(docTableInfo.ident().schema(), docTableInfo.ident().name()));
            }
            if (table.partitionProperties().isEmpty()) {
                snapshotIndices.addAll(Arrays.asList(docTableInfo.concreteIndices()));
            } else {
                var partitionName = toPartitionName(docTableInfo, Lists2.map(table.partitionProperties(), x -> x.map(eval)));
                if (!docTableInfo.partitions().contains(partitionName)) {
                    if (!ignoreUnavailable) {
                        throw new PartitionUnknownException(partitionName);
                    } else {
                        LOGGER.info("ignoring unknown partition of table '{}' with ident '{}'", partitionName.relationName(), partitionName.ident());
                    }
                } else {
                    snapshotIndices.add(partitionName.asIndexName());
                }
            }
        }
    }
    return new CreateSnapshotRequest(
            createSnapshot.snapshot().getRepository(),
            createSnapshot.snapshot().getSnapshotId().getName())
        .includeGlobalState(createSnapshot.tables().isEmpty())
        .waitForCompletion(WAIT_FOR_COMPLETION.get(settings))
        .indices(snapshotIndices.toArray(new String[0]))
        .indicesOptions(IndicesOptions.fromOptions(
            ignoreUnavailable, true, true, false, IndicesOptions.lenientExpandOpen()))
        .templates(templates.stream().toList())
        .settings(settings);
}
Also used : Arrays(java.util.Arrays) Operation(io.crate.metadata.table.Operation) SnapshotInfo(org.elasticsearch.snapshots.SnapshotInfo) Function(java.util.function.Function) PartitionName(io.crate.metadata.PartitionName) DependencyCarrier(io.crate.planner.DependencyCarrier) HashSet(java.util.HashSet) WAIT_FOR_COMPLETION(io.crate.analyze.SnapshotSettings.WAIT_FOR_COMPLETION) SymbolEvaluator(io.crate.analyze.SymbolEvaluator) Settings(org.elasticsearch.common.settings.Settings) AnalyzedCreateSnapshot(io.crate.analyze.AnalyzedCreateSnapshot) IndicesOptions(org.elasticsearch.action.support.IndicesOptions) GenericPropertiesConverter(io.crate.analyze.GenericPropertiesConverter) SnapshotState(org.elasticsearch.snapshots.SnapshotState) OneRowActionListener(io.crate.execution.support.OneRowActionListener) PartitionUnknownException(io.crate.exceptions.PartitionUnknownException) PartitionPropertiesAnalyzer.toPartitionName(io.crate.analyze.PartitionPropertiesAnalyzer.toPartitionName) SchemaInfo(io.crate.metadata.table.SchemaInfo) DocTableInfo(io.crate.metadata.doc.DocTableInfo) TableInfo(io.crate.metadata.table.TableInfo) NodeContext(io.crate.metadata.NodeContext) CreateSnapshotException(io.crate.exceptions.CreateSnapshotException) Table(io.crate.sql.tree.Table) ResourceUnknownException(io.crate.exceptions.ResourceUnknownException) CreateSnapshotRequest(org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest) Lists2(io.crate.common.collections.Lists2) SnapshotSettings(io.crate.analyze.SnapshotSettings) RowConsumer(io.crate.data.RowConsumer) Logger(org.apache.logging.log4j.Logger) Row(io.crate.data.Row) Symbol(io.crate.expression.symbol.Symbol) PlannerContext(io.crate.planner.PlannerContext) IGNORE_UNAVAILABLE(io.crate.analyze.SnapshotSettings.IGNORE_UNAVAILABLE) Plan(io.crate.planner.Plan) SubQueryResults(io.crate.planner.operators.SubQueryResults) Schemas(io.crate.metadata.Schemas) VisibleForTesting(io.crate.common.annotations.VisibleForTesting) LogManager(org.apache.logging.log4j.LogManager) Row1(io.crate.data.Row1) CoordinatorTxnCtx(io.crate.metadata.CoordinatorTxnCtx)
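
Unlike Example 6, this plan resolves tables with Schemas#resolveTableInfo, which takes the possibly schema-less table name plus the session user and search path, and an unknown relation is only tolerated when the ignore_unavailable snapshot setting is set. Below is a minimal sketch of that tolerant resolution, assuming the same resolveTableInfo call shape used above; the SnapshotTableResolution helper, its Optional return, and the QualifiedName parameter type (taken from table.getName()) are illustrative assumptions, not CrateDB API guarantees.

import io.crate.exceptions.ResourceUnknownException;
import io.crate.metadata.CoordinatorTxnCtx;
import io.crate.metadata.Schemas;
import io.crate.metadata.doc.DocTableInfo;
import io.crate.metadata.table.Operation;
import io.crate.sql.tree.QualifiedName;

import java.util.Optional;

// Illustrative helper, not CrateDB source: the tolerant table resolution used in createRequest().
final class SnapshotTableResolution {

    static Optional<DocTableInfo> resolveOrSkip(Schemas schemas,
                                                CoordinatorTxnCtx txnCtx,
                                                QualifiedName tableName,
                                                boolean ignoreUnavailable) {
        try {
            return Optional.of((DocTableInfo) schemas.resolveTableInfo(
                tableName,
                Operation.CREATE_SNAPSHOT,
                txnCtx.sessionContext().sessionUser(),
                txnCtx.sessionContext().searchPath()));
        } catch (Exception e) {
            if (ignoreUnavailable && e instanceof ResourceUnknownException) {
                // Unknown relation is skipped instead of failing the statement,
                // mirroring the LOGGER.info(...) + continue branch above.
                return Optional.empty();
            }
            throw e;
        }
    }
}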

Aggregations

Schemas (io.crate.metadata.Schemas)14 RelationName (io.crate.metadata.RelationName)9 Row (io.crate.data.Row)6 ColumnIdent (io.crate.metadata.ColumnIdent)6 CoordinatorTxnCtx (io.crate.metadata.CoordinatorTxnCtx)6 NodeContext (io.crate.metadata.NodeContext)6 TableInfo (io.crate.metadata.table.TableInfo)6 VisibleForTesting (io.crate.common.annotations.VisibleForTesting)5 Symbol (io.crate.expression.symbol.Symbol)5 SymbolEvaluator (io.crate.analyze.SymbolEvaluator)4 Row1 (io.crate.data.Row1)4 RowConsumer (io.crate.data.RowConsumer)4 OneRowActionListener (io.crate.execution.support.OneRowActionListener)4 SubQueryResults (io.crate.planner.operators.SubQueryResults)4 Settings (org.elasticsearch.common.settings.Settings)4 Test (org.junit.Test)4 AnalyzedCreateTable (io.crate.analyze.AnalyzedCreateTable)3 BoundCreateTable (io.crate.analyze.BoundCreateTable)3 NumberOfShards (io.crate.analyze.NumberOfShards)3 AnalyzedRelation (io.crate.analyze.relations.AnalyzedRelation)3