
Example 16 with HoodieSparkTable

use of org.apache.hudi.table.HoodieSparkTable in project hudi by apache.

the class TestUpdateSchemaEvolution method testSchemaEvolutionOnUpdateMisMatchWithChangeColumnType.

@Test
public void testSchemaEvolutionOnUpdateMisMatchWithChangeColumnType() throws Exception {
    final WriteStatus insertResult = prepareFirstRecordCommit(generateOneRecordForExampleSchema());
    // Now try an update with an evolved schema
    // The evolved schema is not guaranteed to preserve the original field ordering
    final HoodieWriteConfig config = makeHoodieClientConfig("/exampleEvolvedSchemaColumnType.avsc");
    final HoodieSparkTable table = HoodieSparkTable.create(config, context);
    String recordStr = "{\"_row_key\":\"8eb5b87a-1feh-4edd-87b4-6ec96dc405a0\"," + "\"time\":\"2016-01-31T03:16:41.415Z\",\"number\":\"12\"}";
    List<HoodieRecord> updateRecords = buildUpdateRecords(recordStr, insertResult.getFileId());
    String assertMsg = "UpdateFunction when change column type, org.apache.parquet.avro.AvroConverters$FieldUTF8Converter";
    assertSchemaEvolutionOnUpdateResult(insertResult, table, updateRecords, assertMsg, true, ParquetDecodingException.class);
}
Also used : HoodieRecord(org.apache.hudi.common.model.HoodieRecord) HoodieWriteConfig(org.apache.hudi.config.HoodieWriteConfig) HoodieSparkTable(org.apache.hudi.table.HoodieSparkTable) Test(org.junit.jupiter.api.Test)
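
The failure asserted above follows from Avro's schema-resolution rules: int cannot be promoted to string, so data written with the old schema cannot be decoded through the evolved reader schema. A minimal standalone sketch of that incompatibility (the field layouts below are assumed for illustration, not copied from the Hudi test resources):

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class ColumnTypeChangeSketch {
    public static void main(String[] args) {
        // Writer schema: "number" is an int (assumed stand-in for exampleSchema.avsc)
        Schema writer = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"triprec\",\"fields\":["
                + "{\"name\":\"_row_key\",\"type\":\"string\"},"
                + "{\"name\":\"time\",\"type\":\"string\"},"
                + "{\"name\":\"number\",\"type\":\"int\"}]}");
        // Reader schema: "number" changed to string (assumed stand-in for exampleEvolvedSchemaColumnType.avsc)
        Schema reader = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"triprec\",\"fields\":["
                + "{\"name\":\"_row_key\",\"type\":\"string\"},"
                + "{\"name\":\"time\",\"type\":\"string\"},"
                + "{\"name\":\"number\",\"type\":\"string\"}]}");
        // int -> string is not a legal Avro type promotion, so this prints INCOMPATIBLE;
        // at read time the mismatch surfaces as the ParquetDecodingException asserted above
        System.out.println(SchemaCompatibility.checkReaderWriterCompatibility(reader, writer).getType());
    }
}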

Example 17 with HoodieSparkTable

use of org.apache.hudi.table.HoodieSparkTable in project hudi by apache.

the class TestUpdateSchemaEvolution method prepareFirstRecordCommit.

private WriteStatus prepareFirstRecordCommit(List<String> recordsStrs) throws IOException {
    // Create a bunch of records with an old version of the schema
    final HoodieWriteConfig config = makeHoodieClientConfig("/exampleSchema.avsc");
    final HoodieSparkTable table = HoodieSparkTable.create(config, context);
    final List<WriteStatus> statuses = jsc.parallelize(Arrays.asList(1)).map(x -> {
        List<HoodieRecord> insertRecords = new ArrayList<>();
        for (String recordStr : recordsStrs) {
            RawTripTestPayload rowChange = new RawTripTestPayload(recordStr);
            insertRecords.add(new HoodieAvroRecord(new HoodieKey(rowChange.getRowKey(), rowChange.getPartitionPath()), rowChange));
        }
        Map<String, HoodieRecord> insertRecordMap = insertRecords.stream().collect(Collectors.toMap(r -> r.getRecordKey(), Function.identity()));
        HoodieCreateHandle<?, ?, ?, ?> createHandle = new HoodieCreateHandle(config, "100", table, insertRecords.get(0).getPartitionPath(), "f1-0", insertRecordMap, supplier);
        createHandle.write();
        return createHandle.close().get(0);
    }).collect();
    // Manually create an empty completed-commit file so the timeline treats
    // instant "100" as committed (only the marker file's name matters here)
    final Path commitFile = new Path(config.getBasePath() + "/.hoodie/" + HoodieTimeline.makeCommitFileName("100"));
    FSUtils.getFs(basePath, HoodieTestUtils.getDefaultHadoopConf()).create(commitFile);
    return statuses.get(0);
}
Also used : Assertions.assertThrows(org.junit.jupiter.api.Assertions.assertThrows) BeforeEach(org.junit.jupiter.api.BeforeEach) Arrays(java.util.Arrays) BaseFileUtils(org.apache.hudi.common.util.BaseFileUtils) ParquetDecodingException(org.apache.parquet.io.ParquetDecodingException) HoodieUpsertException(org.apache.hudi.exception.HoodieUpsertException) Option(org.apache.hudi.common.util.Option) Function(java.util.function.Function) HoodieClientTestHarness(org.apache.hudi.testutils.HoodieClientTestHarness) ArrayList(java.util.ArrayList) HoodieSparkTable(org.apache.hudi.table.HoodieSparkTable) HoodieMergeHandle(org.apache.hudi.io.HoodieMergeHandle) Map(java.util.Map) Path(org.apache.hadoop.fs.Path) HoodieTimeline(org.apache.hudi.common.table.timeline.HoodieTimeline) SchemaTestUtil.getSchemaFromResource(org.apache.hudi.common.testutils.SchemaTestUtil.getSchemaFromResource) HoodieRecord(org.apache.hudi.common.model.HoodieRecord) GenericRecord(org.apache.avro.generic.GenericRecord) Schema(org.apache.avro.Schema) HoodieWriteConfig(org.apache.hudi.config.HoodieWriteConfig) RawTripTestPayload(org.apache.hudi.common.testutils.RawTripTestPayload) HoodieCreateHandle(org.apache.hudi.io.HoodieCreateHandle) InvalidRecordException(org.apache.parquet.io.InvalidRecordException) IOException(java.io.IOException) Collectors(java.util.stream.Collectors) HoodieAvroRecord(org.apache.hudi.common.model.HoodieAvroRecord) Test(org.junit.jupiter.api.Test) AfterEach(org.junit.jupiter.api.AfterEach) List(java.util.List) HoodieRecordLocation(org.apache.hudi.common.model.HoodieRecordLocation) FileSystemViewStorageConfig(org.apache.hudi.common.table.view.FileSystemViewStorageConfig) Executable(org.junit.jupiter.api.function.Executable) HoodieKey(org.apache.hudi.common.model.HoodieKey) HoodieTestUtils(org.apache.hudi.common.testutils.HoodieTestUtils) Assertions.assertDoesNotThrow(org.junit.jupiter.api.Assertions.assertDoesNotThrow) FSUtils(org.apache.hudi.common.fs.FSUtils)
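
A brief, hypothetical usage note for the helper above: the returned WriteStatus carries the file id that the schema-evolution tests target with their updates, and it can be checked for per-record failures before proceeding (names below match the tests in this section):

// Hypothetical follow-up inside a test body
WriteStatus insertResult = prepareFirstRecordCommit(generateOneRecordForExampleSchema());
assertFalse(insertResult.hasErrors());     // no per-record write failures
String fileId = insertResult.getFileId();  // base file targeted by buildUpdateRecords(...)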

Example 18 with HoodieSparkTable

use of org.apache.hudi.table.HoodieSparkTable in project hudi by apache.

the class TestUpdateSchemaEvolution method testSchemaEvolutionOnUpdateSuccessWithAddColumnHaveDefault.

@Test
public void testSchemaEvolutionOnUpdateSuccessWithAddColumnHaveDefault() throws Exception {
    final WriteStatus insertResult = prepareFirstRecordCommit(generateMultipleRecordsForExampleSchema());
    // Now try an update with an evolved schema
    // The evolved schema is not guaranteed to preserve the original field ordering
    final HoodieWriteConfig config = makeHoodieClientConfig("/exampleEvolvedSchema.avsc");
    final HoodieSparkTable table = HoodieSparkTable.create(config, context);
    // New content with values for the newly added field
    String recordStr = "{\"_row_key\":\"8eb5b87a-1feh-4edd-87b4-6ec96dc405a0\"," + "\"time\":\"2016-01-31T03:16:41.415Z\",\"number\":12,\"added_field\":1}";
    List<HoodieRecord> updateRecords = buildUpdateRecords(recordStr, insertResult.getFileId());
    String assertMsg = "UpdateFunction could not read records written with exampleSchema.avsc using the " + "exampleEvolvedSchema.avsc";
    assertSchemaEvolutionOnUpdateResult(insertResult, table, updateRecords, assertMsg, false, null);
}
Also used : HoodieRecord(org.apache.hudi.common.model.HoodieRecord) HoodieWriteConfig(org.apache.hudi.config.HoodieWriteConfig) HoodieSparkTable(org.apache.hudi.table.HoodieSparkTable) Test(org.junit.jupiter.api.Test)
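
In contrast to Example 16, this update succeeds because the added field carries a default value, which keeps the evolved reader schema backward compatible with data written under the old schema. A minimal sketch (field layouts assumed for illustration, not copied from exampleEvolvedSchema.avsc):

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class AddColumnWithDefaultSketch {
    public static void main(String[] args) {
        // Writer schema without the new field (assumed stand-in for exampleSchema.avsc)
        Schema writer = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"triprec\",\"fields\":["
                + "{\"name\":\"_row_key\",\"type\":\"string\"},"
                + "{\"name\":\"time\",\"type\":\"string\"},"
                + "{\"name\":\"number\",\"type\":\"int\"}]}");
        // The reader adds "added_field" WITH a default, so records missing the
        // field decode fine: Avro fills it in from the default value
        Schema reader = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"triprec\",\"fields\":["
                + "{\"name\":\"_row_key\",\"type\":\"string\"},"
                + "{\"name\":\"time\",\"type\":\"string\"},"
                + "{\"name\":\"number\",\"type\":\"int\"},"
                + "{\"name\":\"added_field\",\"type\":\"int\",\"default\":0}]}");
        // Prints COMPATIBLE
        System.out.println(SchemaCompatibility.checkReaderWriterCompatibility(reader, writer).getType());
    }
}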

Example 19 with HoodieSparkTable

use of org.apache.hudi.table.HoodieSparkTable in project hudi by apache.

the class TestClientRollback method testSavepointAndRollback.

/**
 * Test case for rollback-savepoint interaction.
 */
@Test
public void testSavepointAndRollback() throws Exception {
    HoodieWriteConfig cfg = getConfigBuilder().withCompactionConfig(HoodieCompactionConfig.newBuilder().withCleanerPolicy(HoodieCleaningPolicy.KEEP_LATEST_COMMITS).retainCommits(1).build()).build();
    try (SparkRDDWriteClient client = getHoodieWriteClient(cfg)) {
        HoodieTestDataGenerator.writePartitionMetadataDeprecated(fs, HoodieTestDataGenerator.DEFAULT_PARTITION_PATHS, basePath);
        /**
         * Write 1 (only inserts)
         */
        String newCommitTime = "001";
        client.startCommitWithTime(newCommitTime);
        List<HoodieRecord> records = dataGen.generateInserts(newCommitTime, 200);
        JavaRDD<HoodieRecord> writeRecords = jsc.parallelize(records, 1);
        List<WriteStatus> statuses = client.upsert(writeRecords, newCommitTime).collect();
        assertNoWriteErrors(statuses);
        /**
         * Write 2 (updates)
         */
        newCommitTime = "002";
        client.startCommitWithTime(newCommitTime);
        records = dataGen.generateUpdates(newCommitTime, records);
        statuses = client.upsert(jsc.parallelize(records, 1), newCommitTime).collect();
        // Verify there are no errors
        assertNoWriteErrors(statuses);
        client.savepoint("hoodie-unit-test", "test");
        /**
         * Write 3 (updates)
         */
        newCommitTime = "003";
        client.startCommitWithTime(newCommitTime);
        records = dataGen.generateUpdates(newCommitTime, records);
        statuses = client.upsert(jsc.parallelize(records, 1), newCommitTime).collect();
        // Verify there are no errors
        assertNoWriteErrors(statuses);
        HoodieWriteConfig config = getConfig();
        List<String> partitionPaths = FSUtils.getAllPartitionPaths(context, config.getMetadataConfig(), cfg.getBasePath());
        metaClient = HoodieTableMetaClient.reload(metaClient);
        HoodieSparkTable table = HoodieSparkTable.create(getConfig(), context, metaClient);
        final BaseFileOnlyView view1 = table.getBaseFileOnlyView();
        List<HoodieBaseFile> dataFiles = partitionPaths.stream().flatMap(s -> {
            return view1.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("003"));
        }).collect(Collectors.toList());
        assertEquals(3, dataFiles.size(), "The data files for commit 003 should be present");
        dataFiles = partitionPaths.stream().flatMap(s -> {
            return view1.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("002"));
        }).collect(Collectors.toList());
        assertEquals(3, dataFiles.size(), "The data files for commit 002 should be present");
        /**
         * Write 4 (updates)
         */
        newCommitTime = "004";
        client.startCommitWithTime(newCommitTime);
        records = dataGen.generateUpdates(newCommitTime, records);
        statuses = client.upsert(jsc.parallelize(records, 1), newCommitTime).collect();
        // Verify there are no errors
        assertNoWriteErrors(statuses);
        metaClient = HoodieTableMetaClient.reload(metaClient);
        table = HoodieSparkTable.create(getConfig(), context, metaClient);
        final BaseFileOnlyView view2 = table.getBaseFileOnlyView();
        dataFiles = partitionPaths.stream().flatMap(s -> view2.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("004"))).collect(Collectors.toList());
        assertEquals(3, dataFiles.size(), "The data files for commit 004 should be present");
        // rolling back to a non-existent savepoint must not succeed
        assertThrows(HoodieRollbackException.class, () -> {
            client.restoreToSavepoint("001");
        }, "Rolling back to non-existent savepoint should not be allowed");
        // rollback to savepoint 002
        HoodieInstant savepoint = table.getCompletedSavepointTimeline().getInstants().findFirst().get();
        client.restoreToSavepoint(savepoint.getTimestamp());
        metaClient = HoodieTableMetaClient.reload(metaClient);
        table = HoodieSparkTable.create(getConfig(), context, metaClient);
        final BaseFileOnlyView view3 = table.getBaseFileOnlyView();
        dataFiles = partitionPaths.stream().flatMap(s -> view3.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("002"))).collect(Collectors.toList());
        assertEquals(3, dataFiles.size(), "The data files for commit 002 should be available");
        dataFiles = partitionPaths.stream().flatMap(s -> view3.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("003"))).collect(Collectors.toList());
        assertEquals(0, dataFiles.size(), "The data files for commit 003 should be rolled back");
        dataFiles = partitionPaths.stream().flatMap(s -> view3.getAllBaseFiles(s).filter(f -> f.getCommitTime().equals("004"))).collect(Collectors.toList());
        assertEquals(0, dataFiles.size(), "The data files for commit 004 should be rolled back");
    }
}
Also used : HoodieClientTestBase(org.apache.hudi.testutils.HoodieClientTestBase) Assertions.assertThrows(org.junit.jupiter.api.Assertions.assertThrows) HoodieCleaningPolicy(org.apache.hudi.common.model.HoodieCleaningPolicy) Arrays(java.util.Arrays) HoodieFailedWritesCleaningPolicy(org.apache.hudi.common.model.HoodieFailedWritesCleaningPolicy) HoodieInstant(org.apache.hudi.common.table.timeline.HoodieInstant) HoodieTestDataGenerator(org.apache.hudi.common.testutils.HoodieTestDataGenerator) HashMap(java.util.HashMap) HoodieMetadataTestTable(org.apache.hudi.common.testutils.HoodieMetadataTestTable) HoodieSparkTable(org.apache.hudi.table.HoodieSparkTable) Assertions.assertFalse(org.junit.jupiter.api.Assertions.assertFalse) HoodieTableMetaClient(org.apache.hudi.common.table.HoodieTableMetaClient) Map(java.util.Map) SparkHoodieBackedTableMetadataWriter(org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter) Assertions.assertEquals(org.junit.jupiter.api.Assertions.assertEquals) HoodieTimeline(org.apache.hudi.common.table.timeline.HoodieTimeline) JavaRDD(org.apache.spark.api.java.JavaRDD) HoodieRecord(org.apache.hudi.common.model.HoodieRecord) HoodieRollbackException(org.apache.hudi.exception.HoodieRollbackException) Assertions.assertNoWriteErrors(org.apache.hudi.testutils.Assertions.assertNoWriteErrors) HoodieWriteConfig(org.apache.hudi.config.HoodieWriteConfig) BaseFileOnlyView(org.apache.hudi.common.table.view.TableFileSystemView.BaseFileOnlyView) HoodieTestTable(org.apache.hudi.common.testutils.HoodieTestTable) HoodieCommitMetadata(org.apache.hudi.common.model.HoodieCommitMetadata) HoodieRollbackPlan(org.apache.hudi.avro.model.HoodieRollbackPlan) Collectors(java.util.stream.Collectors) HoodieInstantInfo(org.apache.hudi.avro.model.HoodieInstantInfo) FileCreateUtils(org.apache.hudi.common.testutils.FileCreateUtils) HoodieIndex(org.apache.hudi.index.HoodieIndex) HoodieCompactionConfig(org.apache.hudi.config.HoodieCompactionConfig) Test(org.junit.jupiter.api.Test) HoodieBaseFile(org.apache.hudi.common.model.HoodieBaseFile) List(java.util.List) Assertions.assertTrue(org.junit.jupiter.api.Assertions.assertTrue) HoodieIndexConfig(org.apache.hudi.config.HoodieIndexConfig) HoodieTableMetadataWriter(org.apache.hudi.metadata.HoodieTableMetadataWriter) WriteOperationType(org.apache.hudi.common.model.WriteOperationType) Collections(java.util.Collections) FSUtils(org.apache.hudi.common.fs.FSUtils) Pair(org.apache.hudi.common.util.collection.Pair)
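
The test above reduces to a short savepoint lifecycle. A condensed sketch, assuming an initialized SparkRDDWriteClient named client and the commit sequence from the test:

// Pin the latest completed commit ("002") as a savepoint; its files are
// protected from cleaning and become a valid restore target
client.savepoint("hoodie-unit-test", "test");

// ... commits "003" and "004" are written afterwards ...

// Restoring to an instant that was never savepointed fails fast:
// client.restoreToSavepoint("001");   // throws HoodieRollbackException

// Restore to the savepointed instant: commits after "002" are rolled back
// and their base files disappear from the file-system view
client.restoreToSavepoint("002");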

Example 20 with HoodieSparkTable

use of org.apache.hudi.table.HoodieSparkTable in project hudi by apache.

the class HoodieClientTestBase method getHoodieTable.

public HoodieSparkTable getHoodieTable(HoodieTableMetaClient metaClient, HoodieWriteConfig config) {
    HoodieSparkTable table = HoodieSparkTable.create(config, context, metaClient);
    ((SyncableFileSystemView) (table.getSliceView())).reset();
    return table;
}
Also used : SyncableFileSystemView(org.apache.hudi.common.table.view.SyncableFileSystemView) HoodieSparkTable(org.apache.hudi.table.HoodieSparkTable)
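
A typical call site for this helper might look as follows (hypothetical sketch; the meta client is reloaded first so the returned table reflects the latest timeline):

metaClient = HoodieTableMetaClient.reload(metaClient);
HoodieSparkTable table = getHoodieTable(metaClient, getConfig());
// getSliceView() was reset inside the helper, so file-slice lookups hit the current state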

Aggregations

HoodieSparkTable (org.apache.hudi.table.HoodieSparkTable) 24
HoodieWriteConfig (org.apache.hudi.config.HoodieWriteConfig) 22
HoodieRecord (org.apache.hudi.common.model.HoodieRecord) 17
Test (org.junit.jupiter.api.Test) 14
Map (java.util.Map) 9
Arrays (java.util.Arrays) 7
HashMap (java.util.HashMap) 7
HoodieTableMetaClient (org.apache.hudi.common.table.HoodieTableMetaClient) 7
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest) 7
IOException (java.io.IOException) 6
List (java.util.List) 6
Schema (org.apache.avro.Schema) 6
HoodieAvroRecord (org.apache.hudi.common.model.HoodieAvroRecord) 6
Option (org.apache.hudi.common.util.Option) 6
ArrayList (java.util.ArrayList) 5
Collectors (java.util.stream.Collectors) 5
Path (org.apache.hadoop.fs.Path) 5
HoodieKey (org.apache.hudi.common.model.HoodieKey) 5
HoodieTestDataGenerator (org.apache.hudi.common.testutils.HoodieTestDataGenerator) 5
RawTripTestPayload (org.apache.hudi.common.testutils.RawTripTestPayload) 5