
Example 1 with Transaction

Use of io.prestosql.plugin.hive.AbstractTestHive.Transaction in project hetu-core by openlookeng.

From class AbstractTestHiveFileSystem, method dropTable:

private void dropTable(SchemaTableName table) {
    try (Transaction transaction = newTransaction()) {
        transaction.getMetastore(table.getSchemaName()).dropTable(newSession(), table.getSchemaName(), table.getTableName());
        transaction.commit();
    }
}
Also used: HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction), Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction)
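The try-with-resources block above relies on Transaction being AutoCloseable: if commit() is never reached (for example, because dropTable throws), closing the transaction rolls back its work. A minimal self-contained sketch of that commit-or-rollback-on-close pattern (TransactionSketch is a hypothetical stand-in, not the Presto SPI type):

```java
// Hypothetical stand-in for the Transaction type, illustrating the
// commit-or-rollback-on-close pattern that try-with-resources relies on.
class TransactionSketch implements AutoCloseable {
    private boolean committed;
    private boolean rolledBack;

    public void commit() {
        committed = true;
    }

    @Override
    public void close() {
        // If commit() was never reached (e.g. the body threw), undo the work.
        if (!committed) {
            rolledBack = true;
        }
    }

    public boolean rolledBack() {
        return rolledBack;
    }
}
```

Because close() runs on every exit path, the caller never has to write an explicit catch-and-rollback.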

Example 2 with Transaction

Use of io.prestosql.plugin.hive.AbstractTestHive.Transaction in project hetu-core by openlookeng.

From class AbstractTestHiveFileSystem, method createTable:

private void createTable(SchemaTableName tableName, HiveStorageFormat storageFormat) throws Exception {
    List<ColumnMetadata> columns = ImmutableList.<ColumnMetadata>builder().add(new ColumnMetadata("id", BIGINT)).build();
    MaterializedResult data = MaterializedResult.resultBuilder(newSession(), BIGINT).row(1L).row(3L).row(2L).build();
    try (Transaction transaction = newTransaction()) {
        ConnectorMetadata metadata = transaction.getMetadata();
        ConnectorSession session = newSession();
        // begin creating the table
        ConnectorTableMetadata tableMetadata = new ConnectorTableMetadata(tableName, columns, createTableProperties(storageFormat));
        ConnectorOutputTableHandle outputHandle = metadata.beginCreateTable(session, tableMetadata, Optional.empty());
        // write the records
        ConnectorPageSink sink = pageSinkProvider.createPageSink(transaction.getTransactionHandle(), session, outputHandle);
        sink.appendPage(data.toPage());
        Collection<Slice> fragments = getFutureValue(sink.finish());
        // commit the table
        metadata.finishCreateTable(session, outputHandle, fragments, ImmutableList.of());
        transaction.commit();
        // Hack to work around the metastore not being configured for S3 or other file systems.
        // The metastore tries to validate the location when creating the
        // table, which fails without explicit file system configuration.
        // We work around that by using a dummy location when creating the
        // table, and update it here to the correct location.
        metastoreClient.updateTableLocation(
                database,
                tableName.getTableName(),
                locationService.getTableWriteInfo(((HiveOutputTableHandle) outputHandle).getLocationHandle(), false)
                        .getTargetPath()
                        .toString());
    }
    try (Transaction transaction = newTransaction()) {
        ConnectorMetadata metadata = transaction.getMetadata();
        ConnectorSession session = newSession();
        // load the new table
        ConnectorTableHandle tableHandle = getTableHandle(metadata, tableName);
        List<ColumnHandle> columnHandles = filterNonHiddenColumnHandles(metadata.getColumnHandles(session, tableHandle).values());
        // verify the metadata
        ConnectorTableMetadata tableMetadata = metadata.getTableMetadata(session, getTableHandle(metadata, tableName));
        assertEquals(filterNonHiddenColumnMetadata(tableMetadata.getColumns()), columns);
        // verify the data
        ConnectorSplitSource splitSource = splitManager.getSplits(transaction.getTransactionHandle(), session, tableHandle, UNGROUPED_SCHEDULING);
        ConnectorSplit split = getOnlyElement(getAllSplits(splitSource));
        try (ConnectorPageSource pageSource = pageSourceProvider.createPageSource(transaction.getTransactionHandle(), session, split, tableHandle, columnHandles)) {
            MaterializedResult result = materializeSourceDataStream(session, pageSource, getTypes(columnHandles));
            assertEqualsIgnoreOrder(result.getMaterializedRows(), data.getMaterializedRows());
        }
    }
}
Also used: ColumnHandle (io.prestosql.spi.connector.ColumnHandle), AbstractTestHive.filterNonHiddenColumnMetadata (io.prestosql.plugin.hive.AbstractTestHive.filterNonHiddenColumnMetadata), ColumnMetadata (io.prestosql.spi.connector.ColumnMetadata), ConnectorSplitSource (io.prestosql.spi.connector.ConnectorSplitSource), ConnectorPageSource (io.prestosql.spi.connector.ConnectorPageSource), ConnectorTableHandle (io.prestosql.spi.connector.ConnectorTableHandle), ConnectorOutputTableHandle (io.prestosql.spi.connector.ConnectorOutputTableHandle), HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction), Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction), Slice (io.airlift.slice.Slice), ConnectorSession (io.prestosql.spi.connector.ConnectorSession), TestingConnectorSession (io.prestosql.testing.TestingConnectorSession), ConnectorMetadata (io.prestosql.spi.connector.ConnectorMetadata), MaterializedResult (io.prestosql.testing.MaterializedResult), ConnectorPageSink (io.prestosql.spi.connector.ConnectorPageSink), ConnectorSplit (io.prestosql.spi.connector.ConnectorSplit), ConnectorTableMetadata (io.prestosql.spi.connector.ConnectorTableMetadata)
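The verification step above compares rows with assertEqualsIgnoreOrder because split scheduling does not guarantee that rows come back in insertion order (they were written 1, 3, 2). One simple way such an order-insensitive comparison can be implemented, as a self-contained sketch (equalsIgnoreOrder here is a stand-in, not the Presto testing helper):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class IgnoreOrderCheck {
    // Multiset-style equality: two row lists match if their sorted copies are equal,
    // so duplicates still count but ordering does not.
    static <T extends Comparable<T>> boolean equalsIgnoreOrder(List<T> actual, List<T> expected) {
        List<T> a = new ArrayList<>(actual);
        List<T> b = new ArrayList<>(expected);
        Collections.sort(a);
        Collections.sort(b);
        return a.equals(b);
    }
}
```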

Example 3 with Transaction

Use of io.prestosql.plugin.hive.AbstractTestHive.Transaction in project boostkit-bigdata by kunpengcompute.

From class AbstractTestHiveFileSystem, method createTable:

private void createTable(SchemaTableName tableName, HiveStorageFormat storageFormat) throws Exception {
    List<ColumnMetadata> columns = ImmutableList.<ColumnMetadata>builder().add(new ColumnMetadata("id", BIGINT)).build();
    MaterializedResult data = MaterializedResult.resultBuilder(newSession(), BIGINT).row(1L).row(3L).row(2L).build();
    try (Transaction transaction = newTransaction()) {
        ConnectorMetadata metadata = transaction.getMetadata();
        ConnectorSession session = newSession();
        // begin creating the table
        ConnectorTableMetadata tableMetadata = new ConnectorTableMetadata(tableName, columns, createTableProperties(storageFormat));
        ConnectorOutputTableHandle outputHandle = metadata.beginCreateTable(session, tableMetadata, Optional.empty());
        // write the records
        ConnectorPageSink sink = pageSinkProvider.createPageSink(transaction.getTransactionHandle(), session, outputHandle);
        sink.appendPage(data.toPage());
        Collection<Slice> fragments = getFutureValue(sink.finish());
        // commit the table
        metadata.finishCreateTable(session, outputHandle, fragments, ImmutableList.of());
        transaction.commit();
        // Hack to work around the metastore not being configured for S3 or other file systems.
        // The metastore tries to validate the location when creating the
        // table, which fails without explicit file system configuration.
        // We work around that by using a dummy location when creating the
        // table, and update it here to the correct location.
        metastoreClient.updateTableLocation(
                database,
                tableName.getTableName(),
                locationService.getTableWriteInfo(((HiveOutputTableHandle) outputHandle).getLocationHandle(), false)
                        .getTargetPath()
                        .toString());
    }
    try (Transaction transaction = newTransaction()) {
        ConnectorMetadata metadata = transaction.getMetadata();
        ConnectorSession session = newSession();
        // load the new table
        ConnectorTableHandle tableHandle = getTableHandle(metadata, tableName);
        List<ColumnHandle> columnHandles = filterNonHiddenColumnHandles(metadata.getColumnHandles(session, tableHandle).values());
        // verify the metadata
        ConnectorTableMetadata tableMetadata = metadata.getTableMetadata(session, getTableHandle(metadata, tableName));
        assertEquals(filterNonHiddenColumnMetadata(tableMetadata.getColumns()), columns);
        // verify the data
        ConnectorSplitSource splitSource = splitManager.getSplits(transaction.getTransactionHandle(), session, tableHandle, UNGROUPED_SCHEDULING);
        ConnectorSplit split = getOnlyElement(getAllSplits(splitSource));
        try (ConnectorPageSource pageSource = pageSourceProvider.createPageSource(transaction.getTransactionHandle(), session, split, tableHandle, columnHandles)) {
            MaterializedResult result = materializeSourceDataStream(session, pageSource, getTypes(columnHandles));
            assertEqualsIgnoreOrder(result.getMaterializedRows(), data.getMaterializedRows());
        }
    }
}
Also used: ColumnHandle (io.prestosql.spi.connector.ColumnHandle), AbstractTestHive.filterNonHiddenColumnMetadata (io.prestosql.plugin.hive.AbstractTestHive.filterNonHiddenColumnMetadata), ColumnMetadata (io.prestosql.spi.connector.ColumnMetadata), ConnectorSplitSource (io.prestosql.spi.connector.ConnectorSplitSource), ConnectorPageSource (io.prestosql.spi.connector.ConnectorPageSource), ConnectorTableHandle (io.prestosql.spi.connector.ConnectorTableHandle), ConnectorOutputTableHandle (io.prestosql.spi.connector.ConnectorOutputTableHandle), HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction), Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction), Slice (io.airlift.slice.Slice), ConnectorSession (io.prestosql.spi.connector.ConnectorSession), TestingConnectorSession (io.prestosql.testing.TestingConnectorSession), ConnectorMetadata (io.prestosql.spi.connector.ConnectorMetadata), MaterializedResult (io.prestosql.testing.MaterializedResult), ConnectorPageSink (io.prestosql.spi.connector.ConnectorPageSink), ConnectorSplit (io.prestosql.spi.connector.ConnectorSplit), ConnectorTableMetadata (io.prestosql.spi.connector.ConnectorTableMetadata)

Example 4 with Transaction

Use of io.prestosql.plugin.hive.AbstractTestHive.Transaction in project hetu-core by openlookeng.

From class AbstractTestHiveFileSystem, method testGetRecords:

@Test
public void testGetRecords() throws Exception {
    try (Transaction transaction = newTransaction()) {
        ConnectorMetadata metadata = transaction.getMetadata();
        ConnectorSession session = newSession();
        ConnectorTableHandle tableHandle = getTableHandle(metadata, this.table);
        List<ColumnHandle> columnHandles = ImmutableList.copyOf(metadata.getColumnHandles(session, tableHandle).values());
        Map<String, Integer> columnIndex = indexColumns(columnHandles);
        ConnectorSplitSource splitSource = splitManager.getSplits(transaction.getTransactionHandle(), session, tableHandle, UNGROUPED_SCHEDULING);
        List<ConnectorSplit> splits = getAllSplits(splitSource);
        assertEquals(splits.size(), 1);
        long sum = 0;
        for (ConnectorSplit split : splits) {
            try (ConnectorPageSource pageSource = pageSourceProvider.createPageSource(transaction.getTransactionHandle(), session, split, tableHandle, columnHandles)) {
                MaterializedResult result = materializeSourceDataStream(session, pageSource, getTypes(columnHandles));
                for (MaterializedRow row : result) {
                    sum += (Long) row.getField(columnIndex.get("t_bigint"));
                }
            }
        }
        // The test table is made up of multiple S3 objects containing the same data in
        // different compression formats: uncompressed | .gz | .lz4 | .bz2
        assertEquals(sum, 78300 * 4);
    }
}
Also used: ColumnHandle (io.prestosql.spi.connector.ColumnHandle), ConnectorSplitSource (io.prestosql.spi.connector.ConnectorSplitSource), ConnectorPageSource (io.prestosql.spi.connector.ConnectorPageSource), ConnectorTableHandle (io.prestosql.spi.connector.ConnectorTableHandle), HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction), Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction), ConnectorSession (io.prestosql.spi.connector.ConnectorSession), TestingConnectorSession (io.prestosql.testing.TestingConnectorSession), ConnectorMetadata (io.prestosql.spi.connector.ConnectorMetadata), MaterializedResult (io.prestosql.testing.MaterializedResult), ConnectorSplit (io.prestosql.spi.connector.ConnectorSplit), MaterializedRow (io.prestosql.testing.MaterializedRow), Test (org.testng.annotations.Test)
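testGetRecords reads the t_bigint field by position via indexColumns, which presumably maps each column handle's name to its index in the handle list. A minimal stand-in over plain strings (ColumnIndex and its method are hypothetical, illustrating only the lookup shape the test depends on):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ColumnIndex {
    // Map each column name to its position in the list, mirroring how the test
    // later calls row.getField(columnIndex.get("t_bigint")).
    static Map<String, Integer> indexColumns(List<String> columnNames) {
        Map<String, Integer> index = new LinkedHashMap<>();
        for (int i = 0; i < columnNames.size(); i++) {
            index.put(columnNames.get(i), i);
        }
        return index;
    }
}
```

Keeping the map keyed by name lets the assertion survive reordering of the column handles.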

Example 5 with Transaction

Use of io.prestosql.plugin.hive.AbstractTestHive.Transaction in project boostkit-bigdata by kunpengcompute.

From class AbstractTestHiveFileSystem, method dropTable:

private void dropTable(SchemaTableName table) {
    try (Transaction transaction = newTransaction()) {
        transaction.getMetastore(table.getSchemaName()).dropTable(newSession(), table.getSchemaName(), table.getTableName());
        transaction.commit();
    }
}
Also used: HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction), Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction)

Aggregations

HiveTransaction (io.prestosql.plugin.hive.AbstractTestHive.HiveTransaction): 6
Transaction (io.prestosql.plugin.hive.AbstractTestHive.Transaction): 6
ColumnHandle (io.prestosql.spi.connector.ColumnHandle): 4
ConnectorMetadata (io.prestosql.spi.connector.ConnectorMetadata): 4
ConnectorPageSource (io.prestosql.spi.connector.ConnectorPageSource): 4
ConnectorSession (io.prestosql.spi.connector.ConnectorSession): 4
ConnectorSplit (io.prestosql.spi.connector.ConnectorSplit): 4
ConnectorSplitSource (io.prestosql.spi.connector.ConnectorSplitSource): 4
ConnectorTableHandle (io.prestosql.spi.connector.ConnectorTableHandle): 4
MaterializedResult (io.prestosql.testing.MaterializedResult): 4
TestingConnectorSession (io.prestosql.testing.TestingConnectorSession): 4
Slice (io.airlift.slice.Slice): 2
AbstractTestHive.filterNonHiddenColumnMetadata (io.prestosql.plugin.hive.AbstractTestHive.filterNonHiddenColumnMetadata): 2
ColumnMetadata (io.prestosql.spi.connector.ColumnMetadata): 2
ConnectorOutputTableHandle (io.prestosql.spi.connector.ConnectorOutputTableHandle): 2
ConnectorPageSink (io.prestosql.spi.connector.ConnectorPageSink): 2
ConnectorTableMetadata (io.prestosql.spi.connector.ConnectorTableMetadata): 2
MaterializedRow (io.prestosql.testing.MaterializedRow): 2
Test (org.testng.annotations.Test): 2