
Example 1 with TestHelper

Use of org.apache.iceberg.mr.TestHelper in the apache/hive project.

From the class TestTables, method appendIcebergTable.

/**
 * Append more data to the table.
 * @param configuration The configuration used during the table creation
 * @param table The table to append
 * @param format The file format used for writing the data
 * @param partition The partition to write to
 * @param records The records with which should be added to the table
 * @throws IOException If there is an error writing data
 */
public void appendIcebergTable(Configuration configuration, Table table, FileFormat format, StructLike partition, List<Record> records) throws IOException {
    TestHelper helper = new TestHelper(configuration, null, null, null, null, format, temp);
    helper.setTable(table);
    if (!records.isEmpty()) {
        helper.appendToTable(helper.writeFile(partition, records));
    }
}
Also used: TestHelper (org.apache.iceberg.mr.TestHelper)

Example 2 with TestHelper

Use of org.apache.iceberg.mr.TestHelper in the apache/hive project.

From the class TestIcebergInputFormats, method before.

@Before
public void before() throws IOException {
    conf = new Configuration();
    conf.set(InputFormatConfig.CATALOG, Catalogs.LOCATION);
    HadoopTables tables = new HadoopTables(conf);
    File location = temp.newFolder(testInputFormat.name(), fileFormat.name());
    Assert.assertTrue(location.delete());
    helper = new TestHelper(conf, tables, location.toString(), SCHEMA, SPEC, fileFormat, temp);
    builder = new InputFormatConfig.ConfigBuilder(conf).readFrom(location.toString());
}
Also used: TestHelper (org.apache.iceberg.mr.TestHelper), Configuration (org.apache.hadoop.conf.Configuration), HadoopTables (org.apache.iceberg.hadoop.HadoopTables), DataFile (org.apache.iceberg.DataFile), File (java.io.File), Before (org.junit.Before)

Example 3 with TestHelper

Use of org.apache.iceberg.mr.TestHelper in the apache/hive project.

From the class TestTables, method createIcebergTable.

/**
 * Creates an Iceberg table and its data without creating the corresponding Hive table. The table will be in the
 * 'default' namespace.
 * @param configuration The configuration used during the table creation
 * @param tableName The name of the test table
 * @param schema The schema used for the table creation
 * @param fileFormat The file format used for writing the data
 * @param additionalTableProps Additional table properties to set at creation time
 * @param records The records with which the table is populated
 * @return The created table
 * @throws IOException If there is an error writing data
 */
public Table createIcebergTable(Configuration configuration, String tableName, Schema schema, FileFormat fileFormat, Map<String, String> additionalTableProps, List<Record> records) throws IOException {
    String identifier = identifier("default." + tableName);
    TestHelper helper = new TestHelper(new Configuration(configuration), tables(), identifier, schema, PartitionSpec.unpartitioned(), fileFormat, additionalTableProps, temp);
    Table table = helper.createTable();
    if (records != null && !records.isEmpty()) {
        helper.appendToTable(helper.writeFile(null, records));
    }
    return table;
}
Also used: TestHelper (org.apache.iceberg.mr.TestHelper), Table (org.apache.iceberg.Table), Configuration (org.apache.hadoop.conf.Configuration)
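Taken together, Examples 1 and 3 suggest a typical flow in the hive test suite: create a pre-populated table with createIcebergTable, then add further batches with appendIcebergTable. The sketch below is a hypothetical caller, not code from the project; the schema, table name, and the TestTables handle are illustrative assumptions, and only the method signatures shown in the examples above are relied on.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.FileFormat;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.data.GenericRecord;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.types.Types;

public class TestHelperUsageSketch {

    // Illustrative schema (an assumption): a single required long column "id".
    private static final Schema SCHEMA = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()));

    void createAndAppend(TestTables testTables, Configuration conf) throws IOException {
        GenericRecord record = GenericRecord.create(SCHEMA);
        record.setField("id", 1L);
        List<Record> initialRows = Collections.singletonList(record.copy());

        // Create an unpartitioned table in the 'default' namespace, pre-populated
        // with one row (Example 3); no additional table properties are passed.
        Table table = testTables.createIcebergTable(
            conf, "sketch_table", SCHEMA, FileFormat.PARQUET,
            Collections.emptyMap(), initialRows);

        // Append a second batch; the partition argument is null because the
        // table is unpartitioned (Example 1).
        record.setField("id", 2L);
        testTables.appendIcebergTable(
            conf, table, FileFormat.PARQUET, null,
            Collections.singletonList(record.copy()));
    }
}
```

The sketch compiles only against the hive test sources (TestTables is a test class, not a published API), so it is a reading aid rather than a drop-in snippet.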

Aggregations

TestHelper (org.apache.iceberg.mr.TestHelper): 3
Configuration (org.apache.hadoop.conf.Configuration): 2
File (java.io.File): 1
DataFile (org.apache.iceberg.DataFile): 1
Table (org.apache.iceberg.Table): 1
HadoopTables (org.apache.iceberg.hadoop.HadoopTables): 1
Before (org.junit.Before): 1