
Example 1 with InputDataTransformer

Use of org.apache.drill.metastore.iceberg.transform.InputDataTransformer in project drill by apache.

From the class TestTablesInputDataTransformer, method testValidDataSeveralRecords.

@Test
public void testValidDataSeveralRecords() {
    List<TableMetadataUnit> units = Arrays.asList(
        TableMetadataUnit.builder().storagePlugin("dfs").workspace("tmp").tableName("nation")
            .metadataKey(MetadataInfo.GENERAL_INFO_KEY).column("a").build(),
        TableMetadataUnit.builder().storagePlugin("dfs").workspace("tmp").tableName("nation")
            .metadataKey(MetadataInfo.GENERAL_INFO_KEY).column("b").build(),
        TableMetadataUnit.builder().storagePlugin("dfs").workspace("tmp").tableName("nation")
            .metadataKey(MetadataInfo.GENERAL_INFO_KEY).column("c").build());
    WriteData writeData = new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
        .units(units)
        .execute();
    Record tableRecord1 = GenericRecord.create(metastoreSchema);
    tableRecord1.setField("storagePlugin", "dfs");
    tableRecord1.setField("workspace", "tmp");
    tableRecord1.setField("tableName", "nation");
    tableRecord1.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    tableRecord1.setField("column", "a");
    Record tableRecord2 = GenericRecord.create(metastoreSchema);
    tableRecord2.setField("storagePlugin", "dfs");
    tableRecord2.setField("workspace", "tmp");
    tableRecord2.setField("tableName", "nation");
    tableRecord2.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    tableRecord2.setField("column", "b");
    Record tableRecord3 = GenericRecord.create(metastoreSchema);
    tableRecord3.setField("storagePlugin", "dfs");
    tableRecord3.setField("workspace", "tmp");
    tableRecord3.setField("tableName", "nation");
    tableRecord3.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    tableRecord3.setField("column", "c");
    Record partitionRecord = GenericRecord.create(partitionSchema);
    partitionRecord.setField("storagePlugin", "dfs");
    partitionRecord.setField("workspace", "tmp");
    partitionRecord.setField("tableName", "nation");
    partitionRecord.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    assertEquals(Arrays.asList(tableRecord1, tableRecord2, tableRecord3), writeData.records());
    assertEquals(partitionRecord, writeData.partition());
}
Also used: InputDataTransformer (org.apache.drill.metastore.iceberg.transform.InputDataTransformer), TableMetadataUnit (org.apache.drill.metastore.components.tables.TableMetadataUnit), Record (org.apache.iceberg.data.Record), GenericRecord (org.apache.iceberg.data.GenericRecord), WriteData (org.apache.drill.metastore.iceberg.transform.WriteData), Test (org.junit.Test), IcebergBaseTest (org.apache.drill.metastore.iceberg.IcebergBaseTest)
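
The fields metastoreSchema, partitionSchema and unitGetters referenced in the test are initialized by the test fixture and are not shown in this snippet. As a rough orientation, the Iceberg schemas could look like the sketch below; the field ids, the optionality and the exact column set are illustrative assumptions, not the schemas Drill actually derives from TableMetadataUnit.

import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

// Hypothetical schemas mirroring only the fields set in the test above.
// Field ids and optionality are assumptions for illustration.
Schema metastoreSchema = new Schema(
    Types.NestedField.optional(1, "storagePlugin", Types.StringType.get()),
    Types.NestedField.optional(2, "workspace", Types.StringType.get()),
    Types.NestedField.optional(3, "tableName", Types.StringType.get()),
    Types.NestedField.optional(4, "metadataKey", Types.StringType.get()),
    Types.NestedField.optional(5, "column", Types.StringType.get()));

// The partition schema is a subset of the table columns; the values of these
// columns end up in the single record returned by writeData.partition().
Schema partitionSchema = new Schema(
    Types.NestedField.optional(1, "storagePlugin", Types.StringType.get()),
    Types.NestedField.optional(2, "workspace", Types.StringType.get()),
    Types.NestedField.optional(3, "tableName", Types.StringType.get()),
    Types.NestedField.optional(4, "metadataKey", Types.StringType.get()));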

Example 2 with InputDataTransformer

Use of org.apache.drill.metastore.iceberg.transform.InputDataTransformer in project drill by apache.

From the class TestTablesInputDataTransformer, method testNoData.

@Test
public void testNoData() {
    WriteData writeData = new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
        .units(Collections.emptyList())
        .execute();
    assertEquals(Collections.emptyList(), writeData.records());
    assertNull(writeData.partition());
}
Also used: InputDataTransformer (org.apache.drill.metastore.iceberg.transform.InputDataTransformer), WriteData (org.apache.drill.metastore.iceberg.transform.WriteData), Test (org.junit.Test), IcebergBaseTest (org.apache.drill.metastore.iceberg.IcebergBaseTest)
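
Because an empty unit list yields no records and a null partition, a caller would typically check writeData.partition() before acting on the result. A minimal sketch, assuming the same transformer fields as in the tests:

// Minimal sketch of a caller guarding the empty-input case shown above.
// metastoreSchema, partitionSchema and unitGetters are assumed to come
// from the surrounding context, as in the tests.
WriteData writeData = new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
    .units(units)
    .execute();
if (writeData.partition() == null) {
    // No units were supplied, so there is neither data nor a partition to write.
    return;
}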

Example 3 with InputDataTransformer

Use of org.apache.drill.metastore.iceberg.transform.InputDataTransformer in project drill by apache.

From the class TestTablesInputDataTransformer, method testValidDataOneRecord.

@Test
public void testValidDataOneRecord() {
    Map<String, String> partitionKeys = new HashMap<>();
    partitionKeys.put("dir0", "2018");
    partitionKeys.put("dir1", "2019");
    List<String> partitionValues = Arrays.asList("a", "b", "c");
    Long lastModifiedTime = System.currentTimeMillis();
    TableMetadataUnit unit = TableMetadataUnit.builder()
        .storagePlugin("dfs")
        .workspace("tmp")
        .tableName("nation")
        .metadataKey(MetadataInfo.GENERAL_INFO_KEY)
        .partitionKeys(partitionKeys)
        .partitionValues(partitionValues)
        .lastModifiedTime(lastModifiedTime)
        .build();
    WriteData writeData = new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
        .units(Collections.singletonList(unit))
        .execute();
    Record tableRecord = GenericRecord.create(metastoreSchema);
    tableRecord.setField("storagePlugin", "dfs");
    tableRecord.setField("workspace", "tmp");
    tableRecord.setField("tableName", "nation");
    tableRecord.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    tableRecord.setField("partitionKeys", partitionKeys);
    tableRecord.setField("partitionValues", partitionValues);
    tableRecord.setField("lastModifiedTime", lastModifiedTime);
    Record partitionRecord = GenericRecord.create(partitionSchema);
    partitionRecord.setField("storagePlugin", "dfs");
    partitionRecord.setField("workspace", "tmp");
    partitionRecord.setField("tableName", "nation");
    partitionRecord.setField("metadataKey", MetadataInfo.GENERAL_INFO_KEY);
    assertEquals(Collections.singletonList(tableRecord), writeData.records());
    assertEquals(partitionRecord, writeData.partition());
}
Also used: InputDataTransformer (org.apache.drill.metastore.iceberg.transform.InputDataTransformer), TableMetadataUnit (org.apache.drill.metastore.components.tables.TableMetadataUnit), HashMap (java.util.HashMap), Record (org.apache.iceberg.data.Record), GenericRecord (org.apache.iceberg.data.GenericRecord), WriteData (org.apache.drill.metastore.iceberg.transform.WriteData), Test (org.junit.Test), IcebergBaseTest (org.apache.drill.metastore.iceberg.IcebergBaseTest)
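
Example 3 also writes a map, a list and a long into the record (partitionKeys, partitionValues, lastModifiedTime). If such columns were modeled by hand in an Iceberg schema, the field declarations could look like the sketch below; field ids and types are assumptions for illustration, not Drill's actual schema definition.

import org.apache.iceberg.types.Types;

// Possible Iceberg declarations for the non-string columns used above.
// Field ids and optionality are illustrative assumptions only.
Types.NestedField partitionKeysField = Types.NestedField.optional(6, "partitionKeys",
    Types.MapType.ofOptional(10, 11, Types.StringType.get(), Types.StringType.get()));
Types.NestedField partitionValuesField = Types.NestedField.optional(7, "partitionValues",
    Types.ListType.ofOptional(12, Types.StringType.get()));
Types.NestedField lastModifiedTimeField = Types.NestedField.optional(8, "lastModifiedTime",
    Types.LongType.get());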

Example 4 with InputDataTransformer

Use of org.apache.drill.metastore.iceberg.transform.InputDataTransformer in project drill by apache.

From the class TestTablesInputDataTransformer, method testInvalidPartition.

@Test
public void testInvalidPartition() {
    TableMetadataUnit unit = TableMetadataUnit.builder()
        .storagePlugin("dfs").workspace("tmp").tableName("nation").build();
    thrown.expect(IcebergMetastoreException.class);
    new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
        .units(Collections.singletonList(unit))
        .execute();
}
Also used: InputDataTransformer (org.apache.drill.metastore.iceberg.transform.InputDataTransformer), TableMetadataUnit (org.apache.drill.metastore.components.tables.TableMetadataUnit), Test (org.junit.Test), IcebergBaseTest (org.apache.drill.metastore.iceberg.IcebergBaseTest)
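
The same expectation can be written without the ExpectedException rule (thrown), assuming JUnit 4.13+ where org.junit.Assert.assertThrows is available. A minimal sketch with a hypothetical method name:

import static org.junit.Assert.assertThrows;

@Test
public void testInvalidPartitionWithAssertThrows() {
    // The unit below lacks metadataKey, one of the partition columns, so the
    // transformer is expected to fail while building the partition record,
    // mirroring the original test above.
    TableMetadataUnit unit = TableMetadataUnit.builder()
        .storagePlugin("dfs").workspace("tmp").tableName("nation").build();
    assertThrows(IcebergMetastoreException.class,
        () -> new InputDataTransformer<TableMetadataUnit>(metastoreSchema, partitionSchema, unitGetters)
            .units(Collections.singletonList(unit))
            .execute());
}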

Aggregations

IcebergBaseTest (org.apache.drill.metastore.iceberg.IcebergBaseTest): 4 uses
InputDataTransformer (org.apache.drill.metastore.iceberg.transform.InputDataTransformer): 4 uses
Test (org.junit.Test): 4 uses
TableMetadataUnit (org.apache.drill.metastore.components.tables.TableMetadataUnit): 3 uses
WriteData (org.apache.drill.metastore.iceberg.transform.WriteData): 3 uses
GenericRecord (org.apache.iceberg.data.GenericRecord): 2 uses
Record (org.apache.iceberg.data.Record): 2 uses
HashMap (java.util.HashMap): 1 use