
Example 31 with DumpMetaData

use of org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData in project hive by apache.

the class ReplDumpTask method shouldResumePreviousDump.

private boolean shouldResumePreviousDump(Path lastDumpPath, boolean isBootStrap) throws IOException {
    if (validDump(lastDumpPath)) {
        return false;
    }
    Path hiveDumpPath = new Path(lastDumpPath, ReplUtils.REPL_HIVE_BASE_DIR);
    DumpMetaData dumpMetaData = new DumpMetaData(hiveDumpPath, conf);
    if (tableExpressionModified(dumpMetaData)) {
        return false;
    }
    if (isBootStrap) {
        return shouldResumePreviousDump(dumpMetaData);
    }
    // In case of incremental we should resume if _events_dump file is present and is valid
    Path lastEventFile = new Path(hiveDumpPath, ReplAck.EVENTS_DUMP.toString());
    long resumeFrom = 0;
    try {
        resumeFrom = getResumeFrom(lastEventFile);
    } catch (SemanticException ex) {
        LOG.info("Could not get last repl id from {}, because of:", lastEventFile, ex.getMessage());
    }
    return resumeFrom > 0L;
}
Also used : Path(org.apache.hadoop.fs.Path) DumpMetaData(org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData) SemanticException(org.apache.hadoop.hive.ql.parse.SemanticException)
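
A minimal sketch of what getResumeFrom could look like, assuming the _events_dump ack file stores the last dumped event id as a single plain-text long; the real ReplDumpTask method also adds retry handling, so treat this as illustrative only. It reuses the conf field from the snippet above and assumes imports for BufferedReader, InputStreamReader, StandardCharsets, IOException and FSDataInputStream.

private long getResumeFrom(Path ackFile) throws SemanticException {
    try (FSDataInputStream in = ackFile.getFileSystem(conf).open(ackFile);
         BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
        String line = reader.readLine();
        // An empty or absent value means nothing was dumped yet, so the caller's
        // resumeFrom > 0L check will correctly fall back to a fresh dump.
        return (line == null || line.trim().isEmpty()) ? 0L : Long.parseLong(line.trim());
    } catch (IOException | NumberFormatException e) {
        throw new SemanticException("Unable to read resume point from " + ackFile, e);
    }
}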

Example 32 with DumpMetaData

use of org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData in project hive by apache.

the class AlterPartitionHandler method handle.

@Override
public void handle(Context withinContext) throws Exception {
    LOG.info("Processing#{} ALTER_PARTITION message : {}", fromEventId(), eventMessageAsJSON);
    // Do not dump partition-related events for a metadata-only dump.
    if (withinContext.hiveConf.getBoolVar(HiveConf.ConfVars.REPL_DUMP_METADATA_ONLY)) {
        return;
    }
    Table qlMdTable = new Table(tableObject);
    if (!Utils.shouldReplicate(withinContext.replicationSpec, qlMdTable, true, withinContext.getTablesForBootstrap(), withinContext.oldReplScope, withinContext.hiveConf)) {
        return;
    }
    if (Scenario.ALTER == scenario) {
        withinContext.replicationSpec.setIsMetadataOnly(true);
        List<Partition> partitions = new ArrayList<>();
        partitions.add(new Partition(qlMdTable, after));
        Path metaDataPath = new Path(withinContext.eventRoot, EximUtil.METADATA_NAME);
        EximUtil.createExportDump(metaDataPath.getFileSystem(withinContext.hiveConf), metaDataPath, qlMdTable, partitions, withinContext.replicationSpec, withinContext.hiveConf);
    }
    DumpMetaData dmd = withinContext.createDmd(this);
    dmd.setPayload(eventMessageAsJSON);
    dmd.write();
}
Also used : Path(org.apache.hadoop.fs.Path) Partition(org.apache.hadoop.hive.ql.metadata.Partition) Table(org.apache.hadoop.hive.ql.metadata.Table) DumpMetaData(org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData) ArrayList(java.util.ArrayList)
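
Every handler finishes with the same createDmd/setPayload/write sequence. The sketch below shows what withinContext.createDmd(this) plausibly assembles; the six-argument constructor, the DumpType.EVENT_ALTER_PARTITION constant and the cmRoot path are assumptions about this Hive version's API, not confirmed internals of createDmd.

// cmRoot stands in for the change-management root path carried by the context (assumed).
DumpMetaData dmd = new DumpMetaData(
    withinContext.eventRoot,        // per-event dump directory
    DumpType.EVENT_ALTER_PARTITION, // dump type matching this handler's event
    fromEventId(), toEventId(),     // event id range covered by this dump
    cmRoot,                         // assumed change-management root
    withinContext.hiveConf);
dmd.setPayload(eventMessageAsJSON); // raw metastore notification message as JSON
dmd.write();                        // persists the dump metadata file under eventRoot

The load side then reads the same metadata back through the two-argument constructor seen in Example 31, new DumpMetaData(hiveDumpPath, conf).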

Example 33 with DumpMetaData

use of org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData in project hive by apache.

the class AddCheckConstraintHandler method handle.

@Override
public void handle(Context withinContext) throws Exception {
    LOG.debug("Processing#{} ADD_CHECKCONSTRAINT_MESSAGE message : {}", fromEventId(), eventMessageAsJSON);
    if (shouldReplicate(withinContext)) {
        DumpMetaData dmd = withinContext.createDmd(this);
        dmd.setPayload(eventMessageAsJSON);
        dmd.write();
    }
}
Also used : DumpMetaData(org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData)

Example 34 with DumpMetaData

use of org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData in project hive by apache.

the class AddForeignKeyHandler method handle.

@Override
public void handle(Context withinContext) throws Exception {
    LOG.debug("Processing#{} ADD_FOREIGNKEY_MESSAGE message : {}", fromEventId(), eventMessageAsJSON);
    if (shouldReplicate(withinContext)) {
        DumpMetaData dmd = withinContext.createDmd(this);
        dmd.setPayload(eventMessageAsJSON);
        dmd.write();
    }
}
Also used : DumpMetaData(org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData)

Example 35 with DumpMetaData

use of org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData in project hive by apache.

the class AddNotNullConstraintHandler method handle.

@Override
public void handle(Context withinContext) throws Exception {
    LOG.debug("Processing#{} ADD_NOTNULLCONSTRAINT_MESSAGE message : {}", fromEventId(), eventMessageAsJSON);
    if (shouldReplicate(withinContext)) {
        DumpMetaData dmd = withinContext.createDmd(this);
        dmd.setPayload(eventMessageAsJSON);
        dmd.write();
    }
}
Also used : DumpMetaData(org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData)
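
Examples 33 through 35 differ only in the tag they log; the guard-then-dump body is otherwise identical. A consolidated helper, if one were factored out, might look like the sketch below; the name writeConstraintEvent is illustrative and not part of Hive.

private void writeConstraintEvent(Context withinContext, String messageTag) throws Exception {
    LOG.debug("Processing#{} {} message : {}", fromEventId(), messageTag, eventMessageAsJSON);
    // Skip constraints whose table falls outside the replication scope.
    if (!shouldReplicate(withinContext)) {
        return;
    }
    DumpMetaData dmd = withinContext.createDmd(this);
    dmd.setPayload(eventMessageAsJSON); // the raw notification JSON becomes the payload
    dmd.write();
}

Each concrete handler would then reduce to a one-line call such as writeConstraintEvent(withinContext, "ADD_FOREIGNKEY_MESSAGE").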

Aggregations

DumpMetaData (org.apache.hadoop.hive.ql.parse.repl.load.DumpMetaData) 39
Path (org.apache.hadoop.fs.Path) 17
FileSystem (org.apache.hadoop.fs.FileSystem) 6
Test (org.junit.Test) 6
ArrayList (java.util.ArrayList) 5
Table (org.apache.hadoop.hive.ql.metadata.Table) 5
Database (org.apache.hadoop.hive.metastore.api.Database) 4
SemanticException (org.apache.hadoop.hive.ql.parse.SemanticException) 4
IOException (java.io.IOException) 3
ReplScope (org.apache.hadoop.hive.common.repl.ReplScope) 3
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException) 3
HiveException (org.apache.hadoop.hive.ql.metadata.HiveException) 3
Partition (org.apache.hadoop.hive.ql.metadata.Partition) 3
HashMap (java.util.HashMap) 2
List (java.util.List) 2
Task (org.apache.hadoop.hive.ql.exec.Task) 2
InvalidTableException (org.apache.hadoop.hive.ql.metadata.InvalidTableException) 2
FailoverMetaData (org.apache.hadoop.hive.ql.parse.repl.load.FailoverMetaData) 2
FileNotFoundException (java.io.FileNotFoundException) 1
URI (java.net.URI) 1