Search in sources:

Example 1 with Database

Use of io.prestosql.plugin.hive.metastore.Database in project hetu-core by openlookeng.

From the class CarbondataMetadata, method updateEmptyCarbondataTableStorePath.

private void updateEmptyCarbondataTableStorePath(ConnectorSession session, String schemaName) throws IOException {
    FileSystem fileSystem;
    String targetLocation;
    if (StringUtils.isEmpty(carbondataTableStore)) {
        Database database = metastore.getDatabase(defaultDBName).orElseThrow(() -> new SchemaNotFoundException(defaultDBName));
        String tableStore = database.getLocation().get();
        /* If the path has no filesystem-scheme prefix (e.g. hdfs:, file:), resolve the scheme from core-site.xml via the calls below. */
        fileSystem = hdfsEnvironment.getFileSystem(new HdfsEnvironment.HdfsContext(session, schemaName), new Path(tableStore));
        targetLocation = fileSystem.getFileStatus(new Path(tableStore)).getPath().toString();
        carbondataTableStore = targetLocation.endsWith(File.separator) ? (targetLocation + carbondataStorageFolderName) : (targetLocation + File.separator + carbondataStorageFolderName);
    } else {
        // A store path is already configured; just normalize it to a fully qualified path.
        fileSystem = hdfsEnvironment.getFileSystem(new HdfsEnvironment.HdfsContext(session, schemaName), new Path(carbondataTableStore));
        carbondataTableStore = fileSystem.getFileStatus(new Path(carbondataTableStore)).getPath().toString();
    }
}
Also used : CarbonTablePath(org.apache.carbondata.core.util.path.CarbonTablePath) Path(org.apache.hadoop.fs.Path) FileSystem(org.apache.hadoop.fs.FileSystem) Database(io.prestosql.plugin.hive.metastore.Database) SchemaNotFoundException(io.prestosql.spi.connector.SchemaNotFoundException)
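
The prefix handling above can be reproduced in isolation. Below is a minimal, hypothetical sketch, not connector code: the class name, the bare path, and the use of FileSystem.makeQualified are my assumptions. makeQualified yields the same fully qualified form without the metadata round trip that getFileStatus() performs.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QualifyPathSketch {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS (e.g. hdfs:// or file:) from core-site.xml on the classpath.
        Configuration conf = new Configuration();
        Path raw = new Path("/user/hive/warehouse"); // hypothetical bare path without a scheme
        FileSystem fs = raw.getFileSystem(conf);
        // Adds the default scheme and authority to the bare path.
        System.out.println(fs.makeQualified(raw));
    }
}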

Example 2 with Database

Use of io.prestosql.plugin.hive.metastore.Database in project hetu-core by openlookeng.

From the class GlueHiveMetastore, method renameDatabase.

@Override
public void renameDatabase(HiveIdentity identity, String databaseName, String newDatabaseName) {
    try {
        Database database = getDatabase(databaseName).orElseThrow(() -> new SchemaNotFoundException(databaseName));
        // Glue has no rename operation: update the entry keyed by the old name with an input carrying the new name.
        DatabaseInput renamedDatabase = GlueInputConverter.convertDatabase(database).withName(newDatabaseName);
        glueClient.updateDatabase(new UpdateDatabaseRequest().withCatalogId(catalogId).withName(databaseName).withDatabaseInput(renamedDatabase));
    } catch (AmazonServiceException e) {
        throw new PrestoException(HiveErrorCode.HIVE_METASTORE_ERROR, e);
    }
}
Also used : UpdateDatabaseRequest(com.amazonaws.services.glue.model.UpdateDatabaseRequest) Database(io.prestosql.plugin.hive.metastore.Database) AmazonServiceException(com.amazonaws.AmazonServiceException) DatabaseInput(com.amazonaws.services.glue.model.DatabaseInput) PrestoException(io.prestosql.spi.PrestoException) SchemaNotFoundException(io.prestosql.spi.connector.SchemaNotFoundException)
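
Because Glue exposes no dedicated rename call, the pattern is an update keyed by the old name whose DatabaseInput carries the new one. A standalone sketch against the AWS SDK follows; the database names ("sales", "sales_v2") are hypothetical and default client configuration is assumed.

import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClientBuilder;
import com.amazonaws.services.glue.model.DatabaseInput;
import com.amazonaws.services.glue.model.UpdateDatabaseRequest;

public class GlueRenameSketch {
    public static void main(String[] args) {
        AWSGlue glue = AWSGlueClientBuilder.defaultClient();
        // The input carries the new name; the request's withName() carries the current name.
        DatabaseInput renamed = new DatabaseInput().withName("sales_v2"); // hypothetical names
        glue.updateDatabase(new UpdateDatabaseRequest()
                .withName("sales")
                .withDatabaseInput(renamed));
    }
}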

Example 3 with Database

Use of io.prestosql.plugin.hive.metastore.Database in project boostkit-bigdata by kunpengcompute.

From the class GlueHiveMetastore, method renameDatabase.

@Override
public void renameDatabase(HiveIdentity identity, String databaseName, String newDatabaseName) {
    try {
        Database database = getDatabase(databaseName).orElseThrow(() -> new SchemaNotFoundException(databaseName));
        // Glue has no rename operation: update the entry keyed by the old name with an input carrying the new name.
        DatabaseInput renamedDatabase = GlueInputConverter.convertDatabase(database).withName(newDatabaseName);
        glueClient.updateDatabase(new UpdateDatabaseRequest().withCatalogId(catalogId).withName(databaseName).withDatabaseInput(renamedDatabase));
    } catch (AmazonServiceException e) {
        throw new PrestoException(HiveErrorCode.HIVE_METASTORE_ERROR, e);
    }
}
Also used : UpdateDatabaseRequest(com.amazonaws.services.glue.model.UpdateDatabaseRequest) Database(io.prestosql.plugin.hive.metastore.Database) AmazonServiceException(com.amazonaws.AmazonServiceException) DatabaseInput(com.amazonaws.services.glue.model.DatabaseInput) PrestoException(io.prestosql.spi.PrestoException) SchemaNotFoundException(io.prestosql.spi.connector.SchemaNotFoundException)

Example 4 with Database

Use of io.prestosql.plugin.hive.metastore.Database in project boostkit-bigdata by kunpengcompute.

From the class GlueHiveMetastore, method createDatabase.

@Override
public void createDatabase(HiveIdentity identity, Database inputDatabase) {
    Database database = inputDatabase;
    // Default the location under the warehouse directory when none was supplied.
    if (!database.getLocation().isPresent() && defaultDir.isPresent()) {
        String databaseLocation = new Path(defaultDir.get(), database.getDatabaseName()).toString();
        database = Database.builder(database).setLocation(Optional.of(databaseLocation)).build();
    }
    try {
        DatabaseInput databaseInput = GlueInputConverter.convertDatabase(database);
        glueClient.createDatabase(new CreateDatabaseRequest().withCatalogId(catalogId).withDatabaseInput(databaseInput));
    } catch (AlreadyExistsException e) {
        throw new SchemaAlreadyExistsException(database.getDatabaseName());
    } catch (AmazonServiceException e) {
        throw new PrestoException(HiveErrorCode.HIVE_METASTORE_ERROR, e);
    }
    // Create the backing directory only after the metastore entry has been created.
    if (database.getLocation().isPresent()) {
        HiveWriteUtils.createDirectory(hdfsContext, hdfsEnvironment, new Path(database.getLocation().get()));
    }
}
Also used : Path(org.apache.hadoop.fs.Path) CreateDatabaseRequest(com.amazonaws.services.glue.model.CreateDatabaseRequest) TableAlreadyExistsException(io.prestosql.spi.connector.TableAlreadyExistsException) SchemaAlreadyExistsException(io.prestosql.spi.connector.SchemaAlreadyExistsException) AlreadyExistsException(com.amazonaws.services.glue.model.AlreadyExistsException) SchemaAlreadyExistsException(io.prestosql.spi.connector.SchemaAlreadyExistsException) Database(io.prestosql.plugin.hive.metastore.Database) AmazonServiceException(com.amazonaws.AmazonServiceException) DatabaseInput(com.amazonaws.services.glue.model.DatabaseInput) PrestoException(io.prestosql.spi.PrestoException)
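
The same SDK-level pattern for creation, as a hedged standalone sketch: the database name and location URI are hypothetical, and the catch mirrors the AlreadyExistsException mapping in the example above.

import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClientBuilder;
import com.amazonaws.services.glue.model.AlreadyExistsException;
import com.amazonaws.services.glue.model.CreateDatabaseRequest;
import com.amazonaws.services.glue.model.DatabaseInput;

public class GlueCreateSketch {
    public static void main(String[] args) {
        AWSGlue glue = AWSGlueClientBuilder.defaultClient();
        // Hypothetical name and location; mirrors the defaulting step in createDatabase above.
        DatabaseInput input = new DatabaseInput()
                .withName("reports")
                .withLocationUri("hdfs://warehouse/reports.db");
        try {
            glue.createDatabase(new CreateDatabaseRequest().withDatabaseInput(input));
        } catch (AlreadyExistsException e) {
            // The connector maps this case to SchemaAlreadyExistsException.
            System.err.println("database already exists: " + e.getMessage());
        }
    }
}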

Example 5 with Database

Use of io.prestosql.plugin.hive.metastore.Database in project boostkit-bigdata by kunpengcompute.

From the class AbstractTestHiveFileSystem, method setup.

protected void setup(String host, int port, String databaseName, Function<HiveConfig, HdfsConfiguration> hdfsConfigurationProvider, boolean s3SelectPushdownEnabled) {
    database = databaseName;
    table = new SchemaTableName(database, "presto_test_external_fs");
    String random = UUID.randomUUID().toString().toLowerCase(ENGLISH).replace("-", "");
    temporaryCreateTable = new SchemaTableName(database, "tmp_presto_test_create_" + random);
    config = new HiveConfig().setS3SelectPushdownEnabled(s3SelectPushdownEnabled);
    String proxy = System.getProperty("hive.metastore.thrift.client.socks-proxy");
    if (proxy != null) {
        config.setMetastoreSocksProxy(HostAndPort.fromString(proxy));
    }
    // Thrift metastore wiring and worker pools for the test harness.
    MetastoreLocator metastoreLocator = new TestingMetastoreLocator(config, host, port);
    ExecutorService executors = newCachedThreadPool(daemonThreadsNamed("hive-%s"));
    ExecutorService executorRefresh = newCachedThreadPool(daemonThreadsNamed("hive-refresh-%s"));
    HivePartitionManager hivePartitionManager = new HivePartitionManager(TYPE_MANAGER, config);
    HdfsConfiguration hdfsConfiguration = hdfsConfigurationProvider.apply(config);
    hdfsEnvironment = new HdfsEnvironment(hdfsConfiguration, config, new NoHdfsAuthentication());
    metastoreClient = new TestingHiveMetastore(new BridgingHiveMetastore(new ThriftHiveMetastore(metastoreLocator, new ThriftHiveMetastoreConfig())), executors, executorRefresh, config, getBasePath(), hdfsEnvironment);
    locationService = new HiveLocationService(hdfsEnvironment);
    JsonCodec<PartitionUpdate> partitionUpdateCodec = JsonCodec.jsonCodec(PartitionUpdate.class);
    // Assemble the metadata, split, page-sink, and page-source plumbing that the tests exercise.
    metadataFactory = new HiveMetadataFactory(config, metastoreClient, hdfsEnvironment, hivePartitionManager, newDirectExecutorService(), vacuumExecutorService, heartbeatService, vacuumExecutorService, TYPE_MANAGER, locationService, partitionUpdateCodec, new HiveTypeTranslator(), new NodeVersion("test_version"), SqlStandardAccessControlMetadata::new);
    transactionManager = new HiveTransactionManager();
    splitManager = new HiveSplitManager(transactionHandle -> ((HiveMetadata) transactionManager.get(transactionHandle)).getMetastore(), hivePartitionManager, new NamenodeStats(), hdfsEnvironment, new CachingDirectoryLister(new HiveConfig()), new BoundedExecutor(executors, config.getMaxSplitIteratorThreads()), new HiveCoercionPolicy(TYPE_MANAGER), new CounterStat(), config.getMaxOutstandingSplits(), config.getMaxOutstandingSplitsSize(), config.getMinPartitionBatchSize(), config.getMaxPartitionBatchSize(), config.getMaxInitialSplits(), config.getSplitLoaderConcurrency(), config.getMaxSplitsPerSecond(), config.getRecursiveDirWalkerEnabled(), null, config);
    pageSinkProvider = new HivePageSinkProvider(getDefaultHiveFileWriterFactories(config), hdfsEnvironment, PAGE_SORTER, metastoreClient, new GroupByHashPageIndexerFactory(new JoinCompiler(createTestMetadataManager())), TYPE_MANAGER, config, locationService, partitionUpdateCodec, new TestingNodeManager("fake-environment"), new HiveEventClient(), new HiveSessionProperties(config, new OrcFileWriterConfig(), new ParquetFileWriterConfig()), new HiveWriterStats(), getDefaultOrcFileWriterFactory(config));
    pageSourceProvider = new HivePageSourceProvider(config, hdfsEnvironment, getDefaultHiveRecordCursorProvider(config), getDefaultHiveDataStreamFactories(config), TYPE_MANAGER, getNoOpIndexCache(), getDefaultHiveSelectiveFactories(config));
}
Also used : ConnectorMetadata(io.prestosql.spi.connector.ConnectorMetadata) HiveTestUtils.getDefaultHiveSelectiveFactories(io.prestosql.plugin.hive.HiveTestUtils.getDefaultHiveSelectiveFactories) FileSystem(org.apache.hadoop.fs.FileSystem) HdfsContext(io.prestosql.plugin.hive.HdfsEnvironment.HdfsContext) ConnectorSplitManager(io.prestosql.spi.connector.ConnectorSplitManager) NoHdfsAuthentication(io.prestosql.plugin.hive.authentication.NoHdfsAuthentication) ConnectorPageSink(io.prestosql.spi.connector.ConnectorPageSink) Test(org.testng.annotations.Test) MaterializedResult.materializeSourceDataStream(io.prestosql.testing.MaterializedResult.materializeSourceDataStream) MaterializedResult(io.prestosql.testing.MaterializedResult) Preconditions.checkArgument(com.google.common.base.Preconditions.checkArgument) ConnectorSession(io.prestosql.spi.connector.ConnectorSession) TableNotFoundException(io.prestosql.spi.connector.TableNotFoundException) BoundedExecutor(io.airlift.concurrent.BoundedExecutor) Executors.newScheduledThreadPool(java.util.concurrent.Executors.newScheduledThreadPool) Map(java.util.Map) Path(org.apache.hadoop.fs.Path) ConnectorPageSinkProvider(io.prestosql.spi.connector.ConnectorPageSinkProvider) BIGINT(io.prestosql.spi.type.BigintType.BIGINT) ENGLISH(java.util.Locale.ENGLISH) Assert.assertFalse(org.testng.Assert.assertFalse) Function(com.google.common.base.Function) CachingHiveMetastore(io.prestosql.plugin.hive.metastore.CachingHiveMetastore) UNGROUPED_SCHEDULING(io.prestosql.spi.connector.ConnectorSplitManager.SplitSchedulingStrategy.UNGROUPED_SCHEDULING) ImmutableMap(com.google.common.collect.ImmutableMap) MetadataManager.createTestMetadataManager(io.prestosql.metadata.MetadataManager.createTestMetadataManager) BeforeClass(org.testng.annotations.BeforeClass) Collection(java.util.Collection) SqlStandardAccessControlMetadata(io.prestosql.plugin.hive.security.SqlStandardAccessControlMetadata) ConnectorSplitSource(io.prestosql.spi.connector.ConnectorSplitSource) UUID(java.util.UUID) HiveTestUtils.getDefaultHiveFileWriterFactories(io.prestosql.plugin.hive.HiveTestUtils.getDefaultHiveFileWriterFactories) UncheckedIOException(java.io.UncheckedIOException) List(java.util.List) ConnectorPageSource(io.prestosql.spi.connector.ConnectorPageSource) Table(io.prestosql.plugin.hive.metastore.Table) HiveTestUtils.getTypes(io.prestosql.plugin.hive.HiveTestUtils.getTypes) Optional(java.util.Optional) AbstractTestHive.getAllSplits(io.prestosql.plugin.hive.AbstractTestHive.getAllSplits) TestingNodeManager(io.prestosql.testing.TestingNodeManager) AbstractTestHive.filterNonHiddenColumnHandles(io.prestosql.plugin.hive.AbstractTestHive.filterNonHiddenColumnHandles) JsonCodec(io.airlift.json.JsonCodec) Database(io.prestosql.plugin.hive.metastore.Database) Slice(io.airlift.slice.Slice) TYPE_MANAGER(io.prestosql.plugin.hive.HiveTestUtils.TYPE_MANAGER) ConnectorSplit(io.prestosql.spi.connector.ConnectorSplit) MetastoreLocator(io.prestosql.plugin.hive.metastore.thrift.MetastoreLocator) MoreExecutors.newDirectExecutorService(com.google.common.util.concurrent.MoreExecutors.newDirectExecutorService) HiveTransaction(io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction) ThriftHiveMetastoreConfig(io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastoreConfig) AbstractTestHive.createTableProperties(io.prestosql.plugin.hive.AbstractTestHive.createTableProperties) Assert.assertEquals(org.testng.Assert.assertEquals) CounterStat(io.airlift.stats.CounterStat) AbstractTestHive.filterNonHiddenColumnMetadata(io.prestosql.plugin.hive.AbstractTestHive.filterNonHiddenColumnMetadata) GroupByHashPageIndexerFactory(io.prestosql.GroupByHashPageIndexerFactory) BridgingHiveMetastore(io.prestosql.plugin.hive.metastore.thrift.BridgingHiveMetastore) SchemaTableName(io.prestosql.spi.connector.SchemaTableName) ImmutableList(com.google.common.collect.ImmutableList) TestingMetastoreLocator(io.prestosql.plugin.hive.metastore.thrift.TestingMetastoreLocator) Threads.daemonThreadsNamed(io.airlift.concurrent.Threads.daemonThreadsNamed) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) ThriftHiveMetastore(io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore) ImmutableMultimap(com.google.common.collect.ImmutableMultimap) HiveMetastore(io.prestosql.plugin.hive.metastore.HiveMetastore) ExecutorService(java.util.concurrent.ExecutorService) ConnectorOutputTableHandle(io.prestosql.spi.connector.ConnectorOutputTableHandle) AfterClass(org.testng.annotations.AfterClass) HiveTestUtils.getNoOpIndexCache(io.prestosql.plugin.hive.HiveTestUtils.getNoOpIndexCache) HiveIdentity(io.prestosql.plugin.hive.authentication.HiveIdentity) Executor(java.util.concurrent.Executor) ColumnMetadata(io.prestosql.spi.connector.ColumnMetadata) ConnectorTableHandle(io.prestosql.spi.connector.ConnectorTableHandle) HiveTestUtils.getDefaultHiveRecordCursorProvider(io.prestosql.plugin.hive.HiveTestUtils.getDefaultHiveRecordCursorProvider) ConnectorIdentity(io.prestosql.spi.security.ConnectorIdentity) IOException(java.io.IOException) Iterables.getOnlyElement(com.google.common.collect.Iterables.getOnlyElement) HiveTestUtils.getDefaultHiveDataStreamFactories(io.prestosql.plugin.hive.HiveTestUtils.getDefaultHiveDataStreamFactories) MoreFutures.getFutureValue(io.airlift.concurrent.MoreFutures.getFutureValue) PAGE_SORTER(io.prestosql.plugin.hive.HiveTestUtils.PAGE_SORTER) HostAndPort(com.google.common.net.HostAndPort) MaterializedRow(io.prestosql.testing.MaterializedRow) PrincipalPrivileges(io.prestosql.plugin.hive.metastore.PrincipalPrivileges) ConnectorTableMetadata(io.prestosql.spi.connector.ConnectorTableMetadata) Assertions.assertEqualsIgnoreOrder(io.airlift.testing.Assertions.assertEqualsIgnoreOrder) Transaction(io.prestosql.plugin.hive.AbstractTestHive.Transaction) ColumnHandle(io.prestosql.spi.connector.ColumnHandle) Executors.newCachedThreadPool(java.util.concurrent.Executors.newCachedThreadPool) JoinCompiler(io.prestosql.sql.gen.JoinCompiler) Assert.assertTrue(org.testng.Assert.assertTrue) TestingConnectorSession(io.prestosql.testing.TestingConnectorSession) HiveTestUtils.getDefaultOrcFileWriterFactory(io.prestosql.plugin.hive.HiveTestUtils.getDefaultOrcFileWriterFactory) ConnectorPageSourceProvider(io.prestosql.spi.connector.ConnectorPageSourceProvider)
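
One detail worth isolating from setup() is the temporary-table naming: a UUID lowercased in a fixed locale with dashes stripped keeps the name a valid Hive identifier and avoids collisions across runs. A minimal sketch (the class name and printed output are mine):

import java.util.Locale;
import java.util.UUID;

public class TempNameSketch {
    public static void main(String[] args) {
        // Same scheme as setup(): lowercase, dash-free UUID suffix.
        String random = UUID.randomUUID().toString().toLowerCase(Locale.ENGLISH).replace("-", "");
        System.out.println("tmp_presto_test_create_" + random);
    }
}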

Aggregations

Database (io.prestosql.plugin.hive.metastore.Database) 18
Path (org.apache.hadoop.fs.Path) 14
PrestoException (io.prestosql.spi.PrestoException) 11
HiveIdentity (io.prestosql.plugin.hive.authentication.HiveIdentity) 9
HdfsContext (io.prestosql.plugin.hive.HdfsEnvironment.HdfsContext) 8
ImmutableList (com.google.common.collect.ImmutableList) 7
ImmutableMap (com.google.common.collect.ImmutableMap) 7
IOException (java.io.IOException) 7
PrincipalPrivileges (io.prestosql.plugin.hive.metastore.PrincipalPrivileges) 6
Table (io.prestosql.plugin.hive.metastore.Table) 6
SchemaTableName (io.prestosql.spi.connector.SchemaTableName) 6
TableAlreadyExistsException (io.prestosql.spi.connector.TableAlreadyExistsException) 6
TableNotFoundException (io.prestosql.spi.connector.TableNotFoundException) 6
List (java.util.List) 6
Map (java.util.Map) 6
Optional (java.util.Optional) 6
ImmutableSet (com.google.common.collect.ImmutableSet) 5
JsonCodec (io.airlift.json.JsonCodec) 5
Slice (io.airlift.slice.Slice) 5
HiveUtil.toPartitionValues (io.prestosql.plugin.hive.HiveUtil.toPartitionValues) 5