Example 1 with TxnOpenException

Use of org.apache.hadoop.hive.metastore.api.TxnOpenException in project hive by apache.

The class TestTxnHandler, method testUnlockWithTxn.

@Test
public void testUnlockWithTxn() throws Exception {
    LOG.debug("Starting testUnlockWithTxn");
    // Test that attempting to unlock locks associated with a transaction
    // generates an error
    long txnid = openTxn();
    LockComponent comp = new LockComponent(LockType.SHARED_WRITE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    comp.setPartitionname("mypartition=myvalue");
    comp.setOperationType(DataOperationType.DELETE);
    List<LockComponent> components = new ArrayList<LockComponent>(1);
    components.add(comp);
    LockRequest req = new LockRequest(components, "me", "localhost");
    req.setTxnid(txnid);
    LockResponse res = txnHandler.lock(req);
    long lockid = res.getLockid();
    try {
        txnHandler.unlock(new UnlockRequest(lockid));
        fail("Allowed to unlock lock associated with transaction.");
    } catch (TxnOpenException e) {
        // expected: the lock is bound to an open transaction, so unlock() must refuse
    }
}
Also used : LockComponent(org.apache.hadoop.hive.metastore.api.LockComponent) LockResponse(org.apache.hadoop.hive.metastore.api.LockResponse) ArrayList(java.util.ArrayList) LockRequest(org.apache.hadoop.hive.metastore.api.LockRequest) CheckLockRequest(org.apache.hadoop.hive.metastore.api.CheckLockRequest) UnlockRequest(org.apache.hadoop.hive.metastore.api.UnlockRequest) TxnOpenException(org.apache.hadoop.hive.metastore.api.TxnOpenException) Test(org.junit.Test)
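
For contrast, here is a minimal sketch (not part of the Hive test above; txnid is the value returned by openTxn() in the test) of the supported way to release a lock bound to a transaction: end the transaction itself, which drops its locks as a side effect. CommitTxnRequest and AbortTxnRequest live in org.apache.hadoop.hive.metastore.api alongside the classes listed above.

// Sketch under the assumptions above: locks tied to a txn are released by
// ending the txn, never by unlock().
txnHandler.commitTxn(new CommitTxnRequest(txnid));
// or, to discard the work instead:
// txnHandler.abortTxn(new AbortTxnRequest(txnid));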

Example 2 with TxnOpenException

Use of org.apache.hadoop.hive.metastore.api.TxnOpenException in project hive by apache.

The class TxnHandler, method unlock.

/**
 * This would have been made simpler if all locks were associated with a txn.  Then only the txn
 * needs to be heartbeated, committed, etc., and the client has no need to track individual locks.
 * When removing locks not associated with a txn, this potentially conflicts with
 * heartbeat/performTimeout, which update/delete HIVE_LOCKS and are thus serialized as needed by the db.
 * Since this only removes rows from HIVE_LOCKS, at worst some lock acquisition is delayed.
 */
@RetrySemantics.Idempotent
public void unlock(UnlockRequest rqst) throws TxnOpenException, MetaException {
    try {
        Connection dbConn = null;
        Statement stmt = null;
        long extLockId = rqst.getLockid();
        try {
            /**
             * This method is logically like commit for read-only auto commit queries.
             * READ_COMMITTED since this only has 1 delete statement and no new entries with the
             * same hl_lock_ext_id can be added, i.e. all rows with a given hl_lock_ext_id are
             * created in a single atomic operation.
             * Theoretically, this competes with {@link #lock(org.apache.hadoop.hive.metastore.api.LockRequest)}
             * but hl_lock_ext_id is not known until that method returns.
             * Also competes with {@link #checkLock(org.apache.hadoop.hive.metastore.api.CheckLockRequest)}
             * but using SERIALIZABLE doesn't materially change the interaction.
             * If the "delete" stmt misses, the additional logic below is a best effort to produce a meaningful error msg.
             */
            dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
            stmt = dbConn.createStatement();
            // hl_txnid <> 0 means it's associated with a transaction
            String s = "DELETE FROM \"HIVE_LOCKS\" WHERE \"HL_LOCK_EXT_ID\" = " + extLockId
                + " AND (\"HL_TXNID\" = 0 OR"
                + " (\"HL_TXNID\" <> 0 AND \"HL_LOCK_STATE\" = '" + LOCK_WAITING + "'))";
            // (hl_txnid <> 0 AND hl_lock_state = '" + LOCK_WAITING + "') is for multi-statement txns where
            // some query attempted to lock (thus LOCK_WAITING state) but is giving up due to timeout for example
            LOG.debug("Going to execute update <" + s + ">");
            int rc = stmt.executeUpdate(s);
            if (rc < 1) {
                LOG.info("Failure to unlock any locks with extLockId={}.", extLockId);
                dbConn.rollback();
                Optional<LockInfo> optLockInfo = getLockFromLockId(dbConn, extLockId);
                if (!optLockInfo.isPresent()) {
                    // didn't find any lock with extLockId, but at READ_COMMITTED it's possible that
                    // it existed when the delete above ran but didn't have the expected state.
                    LOG.info("No lock in " + LOCK_WAITING + " mode found for unlock(" + JavaUtils.lockIdToString(rqst.getLockid()) + ")");
                    // bail here to make the operation idempotent
                    return;
                }
                LockInfo lockInfo = optLockInfo.get();
                if (isValidTxn(lockInfo.txnId)) {
                    String msg = "Unlocking locks associated with transaction not permitted.  " + lockInfo;
                    // if a lock is associated with a txn we can only "unlock" it if it's in WAITING state,
                    // which really means that the caller wants to give up waiting for the lock
                    LOG.error(msg);
                    throw new TxnOpenException(msg);
                } else {
                    // we didn't see this lock when running the DELETE stmt above but now it showed up,
                    // so the "should never happen" case happened...
                    String msg = "Found lock in unexpected state " + lockInfo;
                    LOG.error(msg);
                    throw new MetaException(msg);
                }
            }
            LOG.debug("Successfully unlocked at least 1 lock with extLockId={}", extLockId);
            dbConn.commit();
        } catch (SQLException e) {
            LOG.error("Unlock failed for request={}. Exception msg: {}", rqst, getMessage(e));
            rollbackDBConn(dbConn);
            checkRetryable(e, "unlock(" + rqst + ")");
            throw new MetaException("Unable to update transaction database " + JavaUtils.lockIdToString(extLockId) + " " + StringUtils.stringifyException(e));
        } finally {
            closeStmt(stmt);
            closeDbConn(dbConn);
        }
    } catch (RetryException e) {
        unlock(rqst);
    }
}
Also used : SQLException(java.sql.SQLException) PreparedStatement(java.sql.PreparedStatement) Statement(java.sql.Statement) Connection(java.sql.Connection) Savepoint(java.sql.Savepoint) TxnOpenException(org.apache.hadoop.hive.metastore.api.TxnOpenException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
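
A hedged usage sketch (hypothetical, not from the Hive sources) of the one case the javadoc allows for txn-associated locks: a caller holding an open txn may unlock a lock that is still in WAITING state to give up waiting, whereas unlocking an ACQUIRED lock owned by an open txn triggers TxnOpenException. Here req is assumed to be a LockRequest carrying an open txnid, as in Example 1.

// Sketch under the assumptions above: giving up on a lock that never got acquired.
LockResponse res = txnHandler.lock(req);
if (res.getState() == LockState.WAITING) {
    // allowed: matches the hl_txnid <> 0 AND hl_lock_state = LOCK_WAITING branch of the DELETE
    txnHandler.unlock(new UnlockRequest(res.getLockid()));
}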

Example 3 with TxnOpenException

Use of org.apache.hadoop.hive.metastore.api.TxnOpenException in project hive by apache.

The class Cleaner, method removeFiles.

private boolean removeFiles(String location, long minOpenTxnGLB, CompactionInfo ci, boolean dropPartition) throws MetaException, IOException, NoSuchObjectException, NoSuchTxnException {
    if (dropPartition) {
        LockRequest lockRequest = createLockRequest(ci, 0, LockType.EXCL_WRITE, DataOperationType.DELETE);
        LockResponse res = null;
        try {
            res = txnHandler.lock(lockRequest);
            if (res.getState() == LockState.ACQUIRED) {
                // check if partition wasn't recreated
                if (resolvePartition(ci) == null) {
                    return removeFiles(location, ci);
                }
            }
        } catch (NoSuchTxnException | TxnAbortedException e) {
            LOG.error(e.getMessage());
        } finally {
            if (res != null && res.getState() != LockState.NOT_ACQUIRED) {
                try {
                    txnHandler.unlock(new UnlockRequest(res.getLockid()));
                } catch (NoSuchLockException | TxnOpenException e) {
                    LOG.error(e.getMessage());
                }
            }
        }
    }
    ValidTxnList validTxnList = TxnUtils.createValidTxnListForCleaner(txnHandler.getOpenTxns(), minOpenTxnGLB);
    // save it so that getAcidState() sees it
    conf.set(ValidTxnList.VALID_TXNS_KEY, validTxnList.writeToString());
    /**
     * {@code validTxnList} is capped by minOpenTxnGLB so if
     * {@link AcidUtils#getAcidState(Path, Configuration, ValidWriteIdList)} sees a base/delta
     * produced by a compactor, that means every reader that could be active right now sees it
     * as well.  That means if this base/delta shadows some earlier base/delta, then it will be
     * used in favor of any files that it shadows.  Thus the shadowed files are safe to delete.
     *
     * The metadata about aborted writeIds (and consequently aborted txn IDs) cannot be deleted
     * above COMPACTION_QUEUE.CQ_HIGHEST_WRITE_ID.
     * See {@link TxnStore#markCleaned(CompactionInfo)} for details.
     * For example given partition P1, txnid:150 starts and sees txnid:149 as open.
     * Say compactor runs in txnid:160, but 149 is still open and P1 has the largest resolved
     * writeId:17.  Compactor will produce base_17_c160.
     * Suppose txnid:149 writes delta_18_18
     * to P1 and aborts.  Compactor can only remove TXN_COMPONENTS entries
     * up to (inclusive) writeId:17 since delta_18_18 may be on disk (and perhaps corrupted) but
     * not visible based on 'validTxnList' capped at minOpenTxn, so it will not be cleaned by
     * {@link #removeFiles(String, ValidWriteIdList, CompactionInfo)} and so we must keep the
     * metadata that says that 18 is aborted.
     * In a slightly different case, whatever txn created delta_18 (and all other txn) may have
     * committed by the time cleaner runs and so cleaner will indeed see delta_18_18 and remove
     * it (since it has nothing but aborted data).  But we can't tell which actually happened
     * in markCleaned() so make sure it doesn't delete meta above CQ_HIGHEST_WRITE_ID.
     *
     * We could perhaps combine cleaning of aborted and obsolete data and remove all aborted files
     * up to the current Min Open Write Id; this way aborted TXN_COMPONENTS meta can be removed
     * as well up to that point, which may be higher than CQ_HIGHEST_WRITE_ID.  This could be
     * useful if there is all of a sudden a flood of aborted txns.  (For another day).
     */
    // Creating 'reader' list since we are interested in the set of 'obsolete' files
    ValidReaderWriteIdList validWriteIdList = getValidCleanerWriteIdList(ci, validTxnList);
    LOG.debug("Cleaning based on writeIdList: {}", validWriteIdList);
    return removeFiles(location, validWriteIdList, ci);
}
Also used : NoSuchLockException(org.apache.hadoop.hive.metastore.api.NoSuchLockException) TxnAbortedException(org.apache.hadoop.hive.metastore.api.TxnAbortedException) LockResponse(org.apache.hadoop.hive.metastore.api.LockResponse) NoSuchTxnException(org.apache.hadoop.hive.metastore.api.NoSuchTxnException) ValidTxnList(org.apache.hadoop.hive.common.ValidTxnList) ValidReaderWriteIdList(org.apache.hadoop.hive.common.ValidReaderWriteIdList) LockRequest(org.apache.hadoop.hive.metastore.api.LockRequest) UnlockRequest(org.apache.hadoop.hive.metastore.api.UnlockRequest) TxnOpenException(org.apache.hadoop.hive.metastore.api.TxnOpenException)
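
A hedged illustration (hypothetical, not from the Hive sources) of the capping described in the comment above: a write-id list whose high watermark is the largest resolved writeId (17 in the example) accepts the compactor's base but hides delta_18_18, which is why the aborted metadata for writeId 18 must survive markCleaned(). The table name and the 4-arg constructor shape of org.apache.hadoop.hive.common.ValidReaderWriteIdList are assumptions here.

// Sketch of the comment's scenario: writeIds up to 17 resolved, writeId 18 unresolved.
// Assumed constructor: (tableName, exceptions, abortedBits, highWatermark).
ValidReaderWriteIdList capped =
    new ValidReaderWriteIdList("mydb.mytable", new long[0], new java.util.BitSet(), 17L);
boolean baseUsable = capped.isValidBase(17);      // true: base_17_c160 is readable
boolean deltaVisible = capped.isWriteIdValid(18); // false: delta_18_18 stays invisible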

Aggregations

TxnOpenException (org.apache.hadoop.hive.metastore.api.TxnOpenException): 3
LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest): 2
LockResponse (org.apache.hadoop.hive.metastore.api.LockResponse): 2
UnlockRequest (org.apache.hadoop.hive.metastore.api.UnlockRequest): 2
Connection (java.sql.Connection): 1
PreparedStatement (java.sql.PreparedStatement): 1
SQLException (java.sql.SQLException): 1
Savepoint (java.sql.Savepoint): 1
Statement (java.sql.Statement): 1
ArrayList (java.util.ArrayList): 1
ValidReaderWriteIdList (org.apache.hadoop.hive.common.ValidReaderWriteIdList): 1
ValidTxnList (org.apache.hadoop.hive.common.ValidTxnList): 1
CheckLockRequest (org.apache.hadoop.hive.metastore.api.CheckLockRequest): 1
LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent): 1
MetaException (org.apache.hadoop.hive.metastore.api.MetaException): 1
NoSuchLockException (org.apache.hadoop.hive.metastore.api.NoSuchLockException): 1
NoSuchTxnException (org.apache.hadoop.hive.metastore.api.NoSuchTxnException): 1
TxnAbortedException (org.apache.hadoop.hive.metastore.api.TxnAbortedException): 1
Test (org.junit.Test): 1