
Example 1 with HiveMinioDataLake

Usage of io.trino.plugin.hive.containers.HiveMinioDataLake in the trino project by trinodb.

From the class BaseTestHiveOnDataLake, method createQueryRunner:

@Override
protected QueryRunner createQueryRunner() throws Exception {
    this.bucketName = "test-hive-insert-overwrite-" + randomTableSuffix();
    this.dockerizedS3DataLake = closeAfterClass(new HiveMinioDataLake(bucketName, ImmutableMap.of(), hiveHadoopImage));
    this.dockerizedS3DataLake.start();
    this.metastoreClient = new BridgingHiveMetastore(
            new ThriftHiveMetastore(
                    new TestingMetastoreLocator(Optional.empty(), this.dockerizedS3DataLake.getHiveHadoop().getHiveMetastoreEndpoint()),
                    new HiveConfig(),
                    new MetastoreConfig(),
                    new ThriftMetastoreConfig(),
                    new HdfsEnvironment(
                            new HiveHdfsConfiguration(
                                    new HdfsConfigurationInitializer(new HdfsConfig(), ImmutableSet.of()),
                                    ImmutableSet.of()),
                            new HdfsConfig(),
                            new NoHdfsAuthentication()),
                    false),
            HiveIdentity.none());
    return S3HiveQueryRunner.create(
            dockerizedS3DataLake,
            ImmutableMap.<String, String>builder()
                    .put("hive.insert-existing-partitions-behavior", "OVERWRITE")
                    .put("hive.non-managed-table-writes-enabled", "true")
                    .put("hive.metastore-cache-ttl", "1d")
                    .put("hive.metastore-refresh-interval", "1d")
                    .buildOrThrow());
}
Also used : MetastoreConfig(io.trino.plugin.hive.metastore.MetastoreConfig) ThriftMetastoreConfig(io.trino.plugin.hive.metastore.thrift.ThriftMetastoreConfig) TestingMetastoreLocator(io.trino.plugin.hive.metastore.thrift.TestingMetastoreLocator) ThriftHiveMetastore(io.trino.plugin.hive.metastore.thrift.ThriftHiveMetastore) HiveMinioDataLake(io.trino.plugin.hive.containers.HiveMinioDataLake) NoHdfsAuthentication(io.trino.plugin.hive.authentication.NoHdfsAuthentication) BridgingHiveMetastore(io.trino.plugin.hive.metastore.thrift.BridgingHiveMetastore)
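The example above wraps the data lake container in closeAfterClass(...), registering it for automatic shutdown once the test class finishes. A minimal sketch of that pattern follows; the ResourceTracker class here is an illustrative stand-in, not Trino's actual test base-class API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a closeAfterClass-style helper: resources are
// registered as they are created and closed in reverse order afterwards.
public class ResourceTracker
{
    private final Deque<AutoCloseable> resources = new ArrayDeque<>();

    // Register a resource and return it, so calls can be inlined:
    // this.dockerizedS3DataLake = closeAfterClass(new HiveMinioDataLake(...))
    public <T extends AutoCloseable> T closeAfterClass(T resource)
    {
        resources.push(resource);
        return resource;
    }

    // Close everything in reverse registration order, as an @AfterClass hook would.
    public void closeAll()
    {
        while (!resources.isEmpty()) {
            try {
                resources.pop().close();
            }
            catch (Exception e) {
                throw new RuntimeException(e);  // surface the first failure
            }
        }
    }
}
```

Returning the resource from the registration call is what lets the test assign and register in a single expression, as the example does.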

Example 2 with HiveMinioDataLake

Usage of io.trino.plugin.hive.containers.HiveMinioDataLake in the trino project by trinodb.

From the class TestHiveQueryFailureRecoveryTest, method createQueryRunner:

@Override
protected QueryRunner createQueryRunner(List<TpchTable<?>> requiredTpchTables, Map<String, String> configProperties, Map<String, String> coordinatorProperties) throws Exception {
    // randomizing bucket name to ensure cached TrinoS3FileSystem objects are not reused
    String bucketName = "test-hive-insert-overwrite-" + randomTableSuffix();
    this.dockerizedS3DataLake = new HiveMinioDataLake(bucketName, ImmutableMap.of(), HiveHadoop.DEFAULT_IMAGE);
    dockerizedS3DataLake.start();
    this.minioStorage = new MinioStorage("test-exchange-spooling-" + randomTableSuffix());
    minioStorage.start();
    return S3HiveQueryRunner.builder(dockerizedS3DataLake)
            .setInitialTables(requiredTpchTables)
            .setExtraProperties(configProperties)
            .setCoordinatorProperties(coordinatorProperties)
            .setAdditionalSetup(runner -> {
                runner.installPlugin(new FileSystemExchangePlugin());
                runner.loadExchangeManager("filesystem", getExchangeManagerProperties(minioStorage));
            })
            .setHiveProperties(ImmutableMap.<String, String>builder()
                    .put("hive.s3.streaming.enabled", "false")
                    .buildOrThrow())
            .build();
}
Also used : MinioStorage(io.trino.plugin.exchange.containers.MinioStorage) FileSystemExchangePlugin(io.trino.plugin.exchange.FileSystemExchangePlugin) HiveMinioDataLake(io.trino.plugin.hive.containers.HiveMinioDataLake)
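The comment in the example explains why the bucket name is randomized: cached TrinoS3FileSystem objects are keyed per bucket, so a fresh name per run prevents reuse. A randomTableSuffix-style helper can be sketched as below; this NameRandomizer class is an illustrative stand-in, not the actual Trino test utility:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative stand-in for randomTableSuffix(): a short lowercase
// alphanumeric suffix, safe to embed in S3 bucket and table names.
public final class NameRandomizer
{
    private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789";
    private static final int SUFFIX_LENGTH = 12;

    private NameRandomizer() {}

    public static String randomTableSuffix()
    {
        StringBuilder suffix = new StringBuilder(SUFFIX_LENGTH);
        for (int i = 0; i < SUFFIX_LENGTH; i++) {
            suffix.append(ALPHABET.charAt(ThreadLocalRandom.current().nextInt(ALPHABET.length())));
        }
        return suffix.toString();
    }
}
```

Restricting the alphabet to lowercase letters and digits matters here, because S3 bucket names reject uppercase characters.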

Example 3 with HiveMinioDataLake

Usage of io.trino.plugin.hive.containers.HiveMinioDataLake in the trino project by trinodb.

From the class TestHiveTaskFailureRecoveryTest, method createQueryRunner:

@Override
protected QueryRunner createQueryRunner(List<TpchTable<?>> requiredTpchTables, Map<String, String> configProperties, Map<String, String> coordinatorProperties) throws Exception {
    // randomizing bucket name to ensure cached TrinoS3FileSystem objects are not reused
    String bucketName = "test-hive-insert-overwrite-" + randomTableSuffix();
    this.dockerizedS3DataLake = new HiveMinioDataLake(bucketName, ImmutableMap.of(), HiveHadoop.DEFAULT_IMAGE);
    dockerizedS3DataLake.start();
    this.minioStorage = new MinioStorage("test-exchange-spooling-" + randomTableSuffix());
    minioStorage.start();
    return S3HiveQueryRunner.builder(dockerizedS3DataLake)
            .setInitialTables(requiredTpchTables)
            .setExtraProperties(ImmutableMap.<String, String>builder()
                    .putAll(configProperties)
                    .put("enable-dynamic-filtering", "false")
                    .buildOrThrow())
            .setCoordinatorProperties(coordinatorProperties)
            .setAdditionalSetup(runner -> {
                runner.installPlugin(new FileSystemExchangePlugin());
                runner.loadExchangeManager("filesystem", getExchangeManagerProperties(minioStorage));
            })
            .setHiveProperties(ImmutableMap.<String, String>builder()
                    .put("hive.s3.streaming.enabled", "false")
                    .buildOrThrow())
            .build();
}
Also used : MinioStorage(io.trino.plugin.exchange.containers.MinioStorage) FileSystemExchangePlugin(io.trino.plugin.exchange.FileSystemExchangePlugin) HiveMinioDataLake(io.trino.plugin.hive.containers.HiveMinioDataLake)
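All three examples assemble configuration with ImmutableMap.builder()...buildOrThrow(), which fails fast if the same property key is put twice, instead of silently keeping one value. The behavior can be sketched with plain JDK collections; this ConfigBuilder class is an illustrative stand-in for Guava's builder, not its implementation:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for ImmutableMap.builder()/buildOrThrow():
// collects entries in insertion order and rejects duplicate keys at build time.
public final class ConfigBuilder
{
    private final Map<String, String> entries = new LinkedHashMap<>();
    private String duplicateKey;

    public ConfigBuilder put(String key, String value)
    {
        if (entries.putIfAbsent(key, value) != null) {
            duplicateKey = key;  // remember the conflict; fail at build time
        }
        return this;
    }

    public Map<String, String> buildOrThrow()
    {
        if (duplicateKey != null) {
            throw new IllegalArgumentException("Multiple entries with same key: " + duplicateKey);
        }
        return Collections.unmodifiableMap(new LinkedHashMap<>(entries));
    }
}
```

Failing at build time rather than on the first duplicate put is also how Guava behaves, which makes accidental property overrides in test setup visible immediately.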

Aggregations

HiveMinioDataLake (io.trino.plugin.hive.containers.HiveMinioDataLake): 3 usages
FileSystemExchangePlugin (io.trino.plugin.exchange.FileSystemExchangePlugin): 2 usages
MinioStorage (io.trino.plugin.exchange.containers.MinioStorage): 2 usages
NoHdfsAuthentication (io.trino.plugin.hive.authentication.NoHdfsAuthentication): 1 usage
MetastoreConfig (io.trino.plugin.hive.metastore.MetastoreConfig): 1 usage
BridgingHiveMetastore (io.trino.plugin.hive.metastore.thrift.BridgingHiveMetastore): 1 usage
TestingMetastoreLocator (io.trino.plugin.hive.metastore.thrift.TestingMetastoreLocator): 1 usage
ThriftHiveMetastore (io.trino.plugin.hive.metastore.thrift.ThriftHiveMetastore): 1 usage
ThriftMetastoreConfig (io.trino.plugin.hive.metastore.thrift.ThriftMetastoreConfig): 1 usage