Example 31 with RawTransaction

use of org.apache.derby.iapi.store.raw.xact.RawTransaction in project derby by apache.

the class RawStore method applyBulkCryptoOperation.

/*
     * Configure the database for encryption, with the specified
     * encryption properties.
     *
     * The basic idea is to encrypt all the containers with the new
     * password/key specified by the user, while keeping old versions of
     * the data so that the database can be rolled back to the state it
     * was in before it was configured with the new encryption
     * attributes. Users can configure the database with new encryption
     * attributes at boot time only; the advantage of this approach is
     * that there are no concurrency issues to handle, because no users
     * will be modifying the data.
     *
     * The first step is to encrypt the existing data with the new
     * encryption attributes and then update the encryption properties
     * for the database. Configuring an un-encrypted database for
     * encryption is a minor variation of re-encrypting an encrypted
     * database with a new encryption key. The reconfiguration with new
     * encryption attributes is done under one transaction; if there is
     * a crash/error before it is committed, it is rolled back and the
     * database is brought back to the state it was in before the
     * encryption.
     *
     * One tricky aspect of database (re)encryption is that, unlike the
     * standard protocol where a transaction commit means all is done,
     * the (re)encryption process also has to perform a checkpoint with
     * the newly generated key before (re)encryption is complete.
     * Otherwise recovery would have to deal with a transaction log that
     * is encrypted partly with the old key and partly with the new key.
     * This problem is avoided by writing the COMMIT and the new
     * CHECKPOINT log record to a new log file and encrypting them with
     * the new key. If there is a crash before the checkpoint records
     * are written, then on the next boot the log file after the
     * checkpoint is deleted before recovery; that is the file written
     * with the new encryption key, and it also contains the COMMIT
     * record, so the COMMIT record is gone when the log file is
     * deleted. Recovery will not see the commit, so it will roll back
     * the (re)encryption and revert all the containers to their
     * original versions.
     *
     * Old container versions are deleted only when the checkpoint with
     * the new encryption key is successful, not on post-commit.
     *
     * @param properties  properties related to this database.
     * @exception StandardException Standard Derby Error Policy
     */
private void applyBulkCryptoOperation(Properties properties, CipherFactory newCipherFactory) throws StandardException {
    boolean decryptDatabase = (isEncryptedDatabase && isTrue(properties, Attribute.DECRYPT_DATABASE));
    boolean reEncrypt = (isEncryptedDatabase && (isSet(properties, Attribute.NEW_BOOT_PASSWORD) || isSet(properties, Attribute.NEW_CRYPTO_EXTERNAL_KEY)));
    if (SanityManager.DEBUG) {
        SanityManager.ASSERT(decryptDatabase || reEncrypt || (!isEncryptedDatabase && isSet(properties, Attribute.DATA_ENCRYPTION)));
    }
    // Check if the cryptographic operation can be performed.
    cryptoOperationAllowed(reEncrypt, decryptDatabase);
    boolean externalKeyEncryption = isSet(properties, Attribute.CRYPTO_EXTERNAL_KEY);
    // checkpoint the database, so that encryption does not have
    // to encrypt the existing transaction logs.
    logFactory.checkpoint(this, dataFactory, xactFactory, true);
    // start a transaction that is to be used for (re)encrypting or decrypting the database
    RawTransaction transaction = xactFactory.startTransaction(this, getContextService().getCurrentContextManager(), AccessFactoryGlobals.USER_TRANS_NAME);
    try {
        if (decryptDatabase) {
            dataFactory.decryptAllContainers(transaction);
        } else {
            dataFactory.encryptAllContainers(transaction);
        }
        if (SanityManager.DEBUG) {
            crashOnDebugFlag(TEST_REENCRYPT_CRASH_BEFORE_COMMT, reEncrypt);
        }
        // make sure the most recent checkpoint is in the last log file
        // before setting up a new encryption key
        if (!logFactory.isCheckpointInLastLogFile()) {
            // perform a checkpoint; this is a reference checkpoint
            // used to find out whether the (re)encryption is complete.
            logFactory.checkpoint(this, dataFactory, xactFactory, true);
        }
        // let the log factory and the data factory know whether the
        // database is encrypted.
        if (decryptDatabase) {
            isEncryptedDatabase = false;
            logFactory.setDatabaseEncrypted(false, true);
            dataFactory.setDatabaseEncrypted(false);
        } else {
            // Let the log factory know that database is
            // (re-)encrypted and ask it to flush the log
            // before enabling encryption of the log with
            // the new key.
            logFactory.setDatabaseEncrypted(true, true);
            if (reEncrypt) {
                // Switch the encryption/decryption engine to the new ones.
                decryptionEngine = newDecryptionEngine;
                encryptionEngine = newEncryptionEngine;
                currentCipherFactory = newCipherFactory;
            } else {
                // Mark in the raw store that the database is encrypted.
                isEncryptedDatabase = true;
                dataFactory.setDatabaseEncrypted(true);
            }
        }
        // make the log factory ready to encrypt
        // the transaction log with the new encryption
        // key by switching to a new log file.
        // If re-encryption is aborted for any reason,
        // this new log file will be deleted, during
        // recovery.
        logFactory.startNewLogFile();
        // mark in service.properties that (re)encryption is in
        // progress, so that (re)encryption changes that can not
        // be undone using the transaction log can be undone before
        // recovery starts (like the changes to service.properties
        // and any log files that can not be understood with the
        // old encryption key), in case the engine crashes after
        // this point.
        // If the crash occurs before this point, recovery
        // will roll back the changes using the transaction
        // log.
        properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS, String.valueOf(RawStoreFactory.DB_ENCRYPTION_IN_PROGRESS));
        if (reEncrypt) {
            if (externalKeyEncryption) {
                // save the current copy of verify key file.
                StorageFile verifyKeyFile = storageFactory.newStorageFile(Attribute.CRYPTO_EXTERNAL_KEY_VERIFY_FILE);
                StorageFile oldVerifyKeyFile = storageFactory.newStorageFile(RawStoreFactory.CRYPTO_OLD_EXTERNAL_KEY_VERIFY_FILE);
                if (!privCopyFile(verifyKeyFile, oldVerifyKeyFile))
                    throw StandardException.newException(SQLState.RAWSTORE_ERROR_COPYING_FILE, verifyKeyFile, oldVerifyKeyFile);
                // update the verify key file with the new key info.
                currentCipherFactory.verifyKey(reEncrypt, storageFactory, properties);
            } else {
                // save the current generated encryption key
                String keyString = properties.getProperty(RawStoreFactory.ENCRYPTED_KEY);
                if (keyString != null)
                    properties.put(RawStoreFactory.OLD_ENCRYPTED_KEY, keyString);
            }
        } else if (decryptDatabase) {
            // We cannot remove the encryption properties here, as we may
            // have to revert back to the encrypted database. Instead we set
            // dataEncryption to false and leave all other encryption
            // attributes unchanged. This requires that Derby doesn't store
            // dataEncryption=false for un-encrypted databases, otherwise
            // handleIncompleteDbCryptoOperation will be confused.
            properties.put(Attribute.DATA_ENCRYPTION, "false");
        } else {
            // save the encryption block size.
            properties.put(RawStoreFactory.ENCRYPTION_BLOCKSIZE, String.valueOf(encryptionBlockSize));
        }
        // save the new encryption properties into service.properties
        currentCipherFactory.saveProperties(properties);
        if (SanityManager.DEBUG) {
            crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_SWITCH_TO_NEWKEY, reEncrypt);
        }
        // commit the transaction that is used to
        // (re) encrypt the database. Note that
        // this will be logged with newly generated
        // encryption key in the new log file created
        // above.
        transaction.commit();
        if (SanityManager.DEBUG) {
            crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_COMMT, reEncrypt);
        }
        // force the checkpoint with new encryption key.
        logFactory.checkpoint(this, dataFactory, xactFactory, true);
        if (SanityManager.DEBUG) {
            crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_CHECKPOINT, reEncrypt);
        }
        // once the checkpoint makes it to the log, (re)encryption
        // is complete; only cleanup is remaining. Update the
        // (re)encryption status flag to cleanup.
        properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS, String.valueOf(RawStoreFactory.DB_ENCRYPTION_IN_CLEANUP));
        // database is (re)encrypted successfully,
        // remove the old versions of the container files.
        dataFactory.removeOldVersionOfContainers();
        if (decryptDatabase) {
            // By now we can remove all cryptographic properties.
            removeCryptoProperties(properties);
        } else if (reEncrypt) {
            if (externalKeyEncryption) {
                // remove the saved copy of the verify.key file
                StorageFile oldVerifyKeyFile = storageFactory.newStorageFile(RawStoreFactory.CRYPTO_OLD_EXTERNAL_KEY_VERIFY_FILE);
                if (!privDelete(oldVerifyKeyFile))
                    throw StandardException.newException(SQLState.UNABLE_TO_DELETE_FILE, oldVerifyKeyFile);
            } else {
                // remove the old encryption key property.
                properties.remove(RawStoreFactory.OLD_ENCRYPTED_KEY);
            }
        }
        // (re)encryption is done, remove the (re)encryption
        // status property.
        properties.remove(RawStoreFactory.DB_ENCRYPTION_STATUS);
        // close the transaction.
        transaction.close();
    } catch (StandardException se) {
        throw StandardException.newException(SQLState.DATABASE_ENCRYPTION_FAILED, se, se.getMessage());
    } finally {
        // clear the new encryption engines.
        newDecryptionEngine = null;
        newEncryptionEngine = null;
    }
}
Also used : StandardException(org.apache.derby.shared.common.error.StandardException) RawTransaction(org.apache.derby.iapi.store.raw.xact.RawTransaction) StorageFile(org.apache.derby.io.StorageFile)
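
A minimal, hedged sketch of how this boot-time operation is reached from the user level: Derby applies cryptographic attributes only while the database is being booted, so the request arrives as connection URL attributes. The attribute names (dataEncryption, bootPassword, newBootPassword, shutdown) are standard Derby boot attributes; the database name, passwords, and the shutdown-between-boots sequence are illustrative assumptions, not taken from the snippet above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch only: issues boot-time (re)encryption requests against a
// hypothetical embedded database named myDB.
public class CryptoBootSketch {

    public static void main(String[] args) throws SQLException {
        // Encrypt a previously un-encrypted database at boot time.
        try (Connection c = DriverManager.getConnection(
                "jdbc:derby:myDB;dataEncryption=true;bootPassword=oldSecret123")) {
            // By the time the connection is returned, the store has committed
            // the bulk crypto transaction and checkpointed with the new key.
        }

        // Shut the database down so the next connection is a fresh boot.
        shutdown("jdbc:derby:myDB;shutdown=true");

        // On the next boot, re-encrypt with a new boot password.
        try (Connection c = DriverManager.getConnection(
                "jdbc:derby:myDB;bootPassword=oldSecret123;newBootPassword=newSecret456")) {
            // Old container versions are removed only after the checkpoint
            // written with the new key has made it to the log.
        }
    }

    private static void shutdown(String url) {
        try {
            DriverManager.getConnection(url);
        } catch (SQLException expected) {
            // Derby reports a successful single-database shutdown by
            // throwing an SQLException (SQLState 08006).
        }
    }
}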

Example 32 with RawTransaction

use of org.apache.derby.iapi.store.raw.xact.RawTransaction in project derby by apache.

the class RawStore method backup.

/**
 * Backup the database to a backup directory.
 *
 * @param backupDir the name of the directory where the backup should be
 *                  stored. This directory will be created if it
 *                  does not exist.
 * @param wait if <tt>true</tt>, waits for  all the backup blocking
 *             operations in progress to finish.
 * @exception StandardException thrown on error
 */
public void backup(String backupDir, boolean wait) throws StandardException {
    if (backupDir == null || backupDir.equals("")) {
        throw StandardException.newException(SQLState.RAWSTORE_CANNOT_CREATE_BACKUP_DIRECTORY, (File) null);
    }
    // in case this is a URL form
    String backupDirURL = null;
    try {
        URL url = new URL(backupDir);
        backupDirURL = url.getFile();
    } catch (MalformedURLException ex) {
        // not a URL; use backupDir as a plain directory name
    }
    if (backupDirURL != null)
        backupDir = backupDirURL;
    // find the user transaction; it is necessary for online backup
    // to open the containers through the page cache
    RawTransaction t = xactFactory.findUserTransaction(this, getContextService().getCurrentContextManager(), AccessFactoryGlobals.USER_TRANS_NAME);
    try {
        if (t.isBlockingBackup()) {
            throw StandardException.newException(SQLState.BACKUP_OPERATIONS_NOT_ALLOWED);
        }
        // block backup blocking operations that are in progress
        // and stop new ones from starting until the backup is completed.
        if (!xactFactory.blockBackupBlockingOperations(wait)) {
            throw StandardException.newException(SQLState.BACKUP_BLOCKING_OPERATIONS_IN_PROGRESS);
        }
        // perform backup
        backup(t, new File(backupDir));
    } finally {
        // let the xactFactory know that the backup is done, so that
        // it can allow backup blocking operations.
        xactFactory.unblockBackupBlockingOperations();
    }
}
Also used : MalformedURLException(java.net.MalformedURLException) RawTransaction(org.apache.derby.iapi.store.raw.xact.RawTransaction) StorageFile(org.apache.derby.io.StorageFile) File(java.io.File) URL(java.net.URL)
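
In practice RawStore.backup(String, boolean) is reached through Derby's backup system procedures rather than called directly. The sketch below is a hedged illustration: SYSCS_UTIL.SYSCS_BACKUP_DATABASE is a standard Derby procedure, while the database name and backup directory are made up, and the assumption that the NOWAIT variant corresponds to wait == false is inferred from the javadoc above rather than confirmed by this snippet.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch only: take an online backup of a hypothetical database myDB.
public class BackupSketch {

    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection("jdbc:derby:myDB");
             CallableStatement cs = c.prepareCall(
                     "CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
            // Waits for in-progress backup-blocking operations to finish,
            // matching backup(backupDir, wait == true) in the code above.
            cs.setString(1, "/tmp/derby-backups");
            cs.execute();
        }
    }
}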

Example 33 with RawTransaction

use of org.apache.derby.iapi.store.raw.xact.RawTransaction in project derby by apache.

the class LogToFile method recover.

/**
 *		Recover the rawStore to a consistent state using the log.
 *
 *		<P>
 *		In this implementation, the log is a stream of log records stored in
 *		one or more flat files.  Recovery is done in 2 passes: redo and undo.
 *		<BR> <B>Redo pass</B>
 *		<BR> In the redo pass, reconstruct the state of the rawstore by
 *		repeating exactly what happened before as recorded in the log.
 *		<BR><B>Undo pass</B>
 *		<BR> In the undo pass, all incomplete transactions are rolled back in
 *		the order from the most recently started to the oldest.
 *
 *		<P>MT - synchronization provided by caller - RawStore boot.
 *		This method is guaranteed to be the only method being called and can
 *		assume single thread access on all fields.
 *
 *		@see Loggable#needsRedo
 *		@see FileLogger#redo
 *
 *		@exception StandardException Standard Derby error policy
 */
public void recover(DataFactory df, TransactionFactory tf) throws StandardException {
    if (SanityManager.DEBUG) {
        SanityManager.ASSERT(df != null, "data factory == null");
    }
    checkCorrupt();
    dataFactory = df;
    // to encrypt checksum log records.
    if (firstLog != null)
        logOut = new LogAccessFile(this, firstLog, logBufferSize);
    // initialization without causing serialization conflicts.
    if (inReplicationSlaveMode) {
        synchronized (slaveRecoveryMonitor) {
            // while this thread waited on the monitor
            while (inReplicationSlaveMode && (allowedToReadFileNumber < bootTimeLogFileNumber)) {
                // Wait until the first log file can be read.
                if (replicationSlaveException != null) {
                    throw replicationSlaveException;
                }
                try {
                    slaveRecoveryMonitor.wait();
                } catch (InterruptedException ie) {
                    InterruptStatus.setInterrupted();
                }
            }
        }
    }
    if (recoveryNeeded) {
        try {
            // ///////////////////////////////////////////////////////////
            // 
            // During boot time, the log control file is accessed and
            // bootTimeLogFileNumber is determined.  LogOut is not set up.
            // bootTimeLogFileNumber is the log file the latest checkpoint
            // lives in, or 1.  It may not be the latest log file (the
            // system may have crashed between the time a new log file was
            // generated and the checkpoint log record was written); that
            // can only be determined at the end of recovery redo.
            // 
            // ///////////////////////////////////////////////////////////
            FileLogger logger = (FileLogger) getLogger();
            // ///////////////////////////////////////////////////////////
            if (checkpointInstant != LogCounter.INVALID_LOG_INSTANT) {
                currentCheckpoint = findCheckpoint(checkpointInstant, logger);
            }
            // if we are only interested in dumping the log, start from the
            // beginning of the first log file
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(DUMP_LOG_ONLY)) {
                    currentCheckpoint = null;
                    System.out.println("Dump log only");
                    // unless otherwise specified, 1st log file starts at 1
                    String beginLogFileNumber = PropertyUtil.getSystemProperty(DUMP_LOG_FROM_LOG_FILE);
                    if (beginLogFileNumber != null) {
                        bootTimeLogFileNumber = Long.valueOf(beginLogFileNumber).longValue();
                    } else {
                        bootTimeLogFileNumber = 1;
                    }
                }
            }
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON("setCheckpoint")) {
                    currentCheckpoint = null;
                    System.out.println("Set Checkpoint.");
                    // unless otherwise specified, 1st log file starts at 1
                    String checkpointStartLogStr = PropertyUtil.getSystemProperty("derby.storage.checkpointStartLog");
                    String checkpointStartOffsetStr = PropertyUtil.getSystemProperty("derby.storage.checkpointStartOffset");
                    if ((checkpointStartLogStr != null) && (checkpointStartOffsetStr != null)) {
                        checkpointInstant = LogCounter.makeLogInstantAsLong(Long.valueOf(checkpointStartLogStr).longValue(), Long.valueOf(checkpointStartOffsetStr).longValue());
                    } else {
                        SanityManager.THROWASSERT("must set derby.storage.checkpointStartLog and derby.storage.checkpointStartOffset, if setting setCheckpoint.");
                    }
                    currentCheckpoint = findCheckpoint(checkpointInstant, logger);
                }
            }
            long redoLWM = LogCounter.INVALID_LOG_INSTANT;
            long undoLWM = LogCounter.INVALID_LOG_INSTANT;
            long ttabInstant = LogCounter.INVALID_LOG_INSTANT;
            StreamLogScan redoScan = null;
            if (currentCheckpoint != null) {
                Formatable transactionTable = null;
                // RESOLVE: sku
                // currentCheckpoint.getTransactionTable();
                // need to set the transaction table before the undo
                tf.useTransactionTable(transactionTable);
                redoLWM = currentCheckpoint.redoLWM();
                undoLWM = currentCheckpoint.undoLWM();
                if (transactionTable != null)
                    ttabInstant = checkpointInstant;
                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(DBG_FLAG)) {
                        SanityManager.DEBUG(DBG_FLAG, "Found checkpoint at " + LogCounter.toDebugString(checkpointInstant) + " " + currentCheckpoint.toString());
                    }
                }
                firstLogFileNumber = LogCounter.getLogFileNumber(redoLWM);
                // figure out where the first interesting log file is.
                if (LogCounter.getLogFileNumber(undoLWM) < firstLogFileNumber) {
                    firstLogFileNumber = LogCounter.getLogFileNumber(undoLWM);
                }
                // if the checkpoint record doesn't have a transaction
                // table, we need to rebuild it by scanning the log from
                // the undoLWM.  If it does have a transaction table, we
                // only need to scan the log from the redoLWM
                redoScan = (StreamLogScan) openForwardsScan(undoLWM, (LogInstant) null);
            } else {
                // no checkpoint
                tf.useTransactionTable((Formatable) null);
                long start = LogCounter.makeLogInstantAsLong(bootTimeLogFileNumber, LOG_FILE_HEADER_SIZE);
                // no checkpoint, start redo from the beginning of the
                // file - assume this is the first log file
                firstLogFileNumber = bootTimeLogFileNumber;
                redoScan = (StreamLogScan) openForwardsScan(start, (LogInstant) null);
            }
            // open a transaction that is used for redo and rollback
            RawTransaction recoveryTransaction = tf.startTransaction(rawStoreFactory, getContextService().getCurrentContextManager(), AccessFactoryGlobals.USER_TRANS_NAME);
            // make this transaction aware that it is a recovery transaction
            // and don't spew forth post commit work while replaying the log
            recoveryTransaction.recoveryTransaction();
            // ///////////////////////////////////////////////////////////
            // 
            // Redo loop - in FileLogger
            // 
            // ///////////////////////////////////////////////////////////
            // 
            // set log factory state to inRedo so that if redo caused any
            // dirty page to be written from the cache, it won't flush the
            // log since the end of the log has not been determined and we
            // know the log record that caused the page to change has
            // already been written to the log.  We need the page write to
            // go through the log factory because if the redo has a problem,
            // the log factory is corrupt and the only way we know not to
            // write out the page in a checkpoint is if it checks with the
            // log factory, and that is done via a flush - we use the WAL
            // protocol to stop corrupt pages from being written to disk.
            // 
            inRedo = true;
            long logEnd = logger.redo(recoveryTransaction, tf, redoScan, redoLWM, ttabInstant);
            inRedo = false;
            // Replication slave: When recovery has completed the
            // redo pass, the database is no longer in replication
            // slave mode and only the recover thread will access
            // this object until recovery has completed. We
            // therefore do not need two versions of the log file
            // number anymore. From this point on, logFileNumber
            // is used for all references to the current log file
            // number; bootTimeLogFileNumber is no longer used.
            logFileNumber = bootTimeLogFileNumber;
            // if we are only dumping the log, don't recover
            // the database and prevent anyone from using the log
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(LogToFile.DUMP_LOG_ONLY)) {
                    Monitor.logMessage("_____________________________________________________");
                    Monitor.logMessage("\n\t\t Log dump finished");
                    Monitor.logMessage("_____________________________________________________");
                    // just in case, it has not been set anyway
                    logOut = null;
                    return;
                }
            }
            // ///////////////////////////////////////////////////////////
            // 
            // determine where the log ends
            // 
            // ///////////////////////////////////////////////////////////
            StorageRandomAccessFile theLog = null;
            // if logEnd == LogCounter.INVALID_LOG_INSTANT, there is no
            // log record in the log - most likely it is corrupted in
            // some way ...
            if (logEnd == LogCounter.INVALID_LOG_INSTANT) {
                Monitor.logTextMessage(MessageId.LOG_LOG_NOT_FOUND);
                StorageFile logFile = getLogFileName(logFileNumber);
                if (privExists(logFile)) {
                    // if the stale log file cannot be deleted, skip it
                    // and move on to the next log file number
                    if (!privDelete(logFile)) {
                        logFile = getLogFileName(++logFileNumber);
                    }
                }
                IOException accessException = null;
                try {
                    theLog = privRandomAccessFile(logFile, "rw");
                } catch (IOException ioe) {
                    theLog = null;
                    accessException = ioe;
                }
                if (theLog == null || !privCanWrite(logFile)) {
                    if (theLog != null)
                        theLog.close();
                    theLog = null;
                    Monitor.logTextMessage(MessageId.LOG_CHANGED_DB_TO_READ_ONLY);
                    if (accessException != null)
                        Monitor.logThrowable(accessException);
                    ReadOnlyDB = true;
                } else {
                    try {
                        // no previous log file or previous log position
                        if (!initLogFile(theLog, logFileNumber, LogCounter.INVALID_LOG_INSTANT)) {
                            throw markCorrupt(StandardException.newException(SQLState.LOG_SEGMENT_NOT_EXIST, logFile.getPath()));
                        }
                    } catch (IOException ioe) {
                        throw markCorrupt(StandardException.newException(SQLState.LOG_IO_ERROR, ioe));
                    }
                    // successfully init'd the log file - set up markers,
                    // and position at the end of the log.
                    setEndPosition(theLog.getFilePointer());
                    lastFlush = endPosition;
                    // and reopen the file in rwd mode.
                    if (isWriteSynced) {
                        // extend the file by writing zeros to it
                        preAllocateNewLogFile(theLog);
                        theLog.close();
                        theLog = openLogFileInWriteMode(logFile);
                        // position the log at the current end position
                        theLog.seek(endPosition);
                    }
                    if (SanityManager.DEBUG) {
                        SanityManager.ASSERT(endPosition == LOG_FILE_HEADER_SIZE, "empty log file has wrong size");
                    }
                    // because we are already incrementing the log number
                    // here, no special log switch is required for
                    // backup recoveries.
                    logSwitchRequired = false;
                }
            } else {
                // logEnd is the instant of the next log record in the log
                // it is used to determine the last known good position of
                // the log
                logFileNumber = LogCounter.getLogFileNumber(logEnd);
                ReadOnlyDB = df.isReadOnly();
                StorageFile logFile = getLogFileName(logFileNumber);
                if (!ReadOnlyDB) {
                    // if the data factory doesn't think it is read-only, we
                    // can do some further tests of our own
                    IOException accessException = null;
                    try {
                        if (isWriteSynced)
                            theLog = openLogFileInWriteMode(logFile);
                        else
                            theLog = privRandomAccessFile(logFile, "rw");
                    } catch (IOException ioe) {
                        theLog = null;
                        accessException = ioe;
                    }
                    if (theLog == null || !privCanWrite(logFile)) {
                        if (theLog != null)
                            theLog.close();
                        theLog = null;
                        Monitor.logTextMessage(MessageId.LOG_CHANGED_DB_TO_READ_ONLY);
                        if (accessException != null)
                            Monitor.logThrowable(accessException);
                        ReadOnlyDB = true;
                    }
                }
                if (!ReadOnlyDB) {
                    setEndPosition(LogCounter.getLogFilePosition(logEnd));
                    // find out if log had incomplete log records at the end.
                    if (redoScan.isLogEndFuzzy()) {
                        theLog.seek(endPosition);
                        long eof = theLog.length();
                        Monitor.logTextMessage(MessageId.LOG_INCOMPLETE_LOG_RECORD, logFile, endPosition, eof);
                        /* Write zeros from incomplete log record to end of file */
                        long nWrites = (eof - endPosition) / logBufferSize;
                        int rBytes = (int) ((eof - endPosition) % logBufferSize);
                        byte[] zeroBuf = new byte[logBufferSize];
                        // write the zeros to file
                        while (nWrites-- > 0) theLog.write(zeroBuf);
                        if (rBytes != 0)
                            theLog.write(zeroBuf, 0, rBytes);
                        if (!isWriteSynced)
                            syncFile(theLog);
                    }
                    if (SanityManager.DEBUG) {
                        if (theLog.length() != endPosition) {
                            SanityManager.ASSERT(theLog.length() > endPosition, "log end > log file length, bad scan");
                        }
                    }
                    // set the log to the true end position,
                    // and not the end of the file
                    lastFlush = endPosition;
                    theLog.seek(endPosition);
                }
            }
            if (theLog != null) {
                if (logOut != null) {
                    // Close the currently open log file, if there is
                    // one. DERBY-5937.
                    logOut.close();
                }
                logOut = new LogAccessFile(this, theLog, logBufferSize);
            }
            if (logSwitchRequired)
                switchLogFile();
            boolean noInFlightTransactions = tf.noActiveUpdateTransaction();
            if (ReadOnlyDB) {
                // a read-only database cannot perform the undo pass, since
                // that would require writing to the log and flushing any
                // dirty buffer
                if (!noInFlightTransactions) {
                    throw StandardException.newException(SQLState.LOG_READ_ONLY_DB_NEEDS_UNDO);
                }
            }
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                    SanityManager.DEBUG(LogToFile.DBG_FLAG, "About to call undo(), transaction table =" + tf.getTransactionTable());
            }
            if (!noInFlightTransactions) {
                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                        SanityManager.DEBUG(LogToFile.DBG_FLAG, "In recovery undo, rollback inflight transactions");
                }
                tf.rollbackAllTransactions(recoveryTransaction, rawStoreFactory);
                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                        SanityManager.DEBUG(LogToFile.DBG_FLAG, "finish recovery undo,");
                }
            } else {
                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                        SanityManager.DEBUG(LogToFile.DBG_FLAG, "No in flight transaction, no recovery undo work");
                }
            }
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                    SanityManager.DEBUG(LogToFile.DBG_FLAG, "About to call rePrepare(), transaction table =" + tf.getTransactionTable());
            }
            tf.handlePreparedXacts(rawStoreFactory);
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG))
                    SanityManager.DEBUG(LogToFile.DBG_FLAG, "Finished rePrepare(), transaction table =" + tf.getTransactionTable());
            }
            // ///////////////////////////////////////////////////////////
            // 
            // End of recovery.
            // 
            // ///////////////////////////////////////////////////////////
            // recovery is finished.  Close the transaction
            recoveryTransaction.close();
            // notify the dataFactory that recovery is completed,
            // but before the checkpoint is written.
            dataFactory.postRecovery();
            // ////////////////////////////////////////////////////////////
            // set the transaction factory short id, we have seen all the
            // transactions in the log, and at the minimum, the checkpoint
            // transaction will be there.  Set the shortId to the next
            // value.
            // ////////////////////////////////////////////////////////////
            tf.resetTranId();
            // if we can't checkpoint for some reason, flush the log and carry on
            if (!ReadOnlyDB) {
                boolean needCheckpoint = true;
                // If the log has barely advanced since the last checkpoint
                // and there were no rollbacks, then don't checkpoint.
                // Otherwise checkpoint.
                if (currentCheckpoint != null && noInFlightTransactions && redoLWM != LogCounter.INVALID_LOG_INSTANT && undoLWM != LogCounter.INVALID_LOG_INSTANT) {
                    if ((logFileNumber == LogCounter.getLogFileNumber(redoLWM)) && (logFileNumber == LogCounter.getLogFileNumber(undoLWM)) && (endPosition < (LogCounter.getLogFilePosition(redoLWM) + 1000)))
                        needCheckpoint = false;
                }
                if (needCheckpoint && !checkpoint(rawStoreFactory, df, tf, false))
                    flush(logFileNumber, endPosition);
            }
            logger.close();
            recoveryNeeded = false;
        } catch (IOException ioe) {
            if (SanityManager.DEBUG)
                ioe.printStackTrace();
            throw markCorrupt(StandardException.newException(SQLState.LOG_IO_ERROR, ioe));
        } catch (ClassNotFoundException cnfe) {
            throw markCorrupt(StandardException.newException(SQLState.LOG_CORRUPTED, cnfe));
        } catch (StandardException se) {
            throw markCorrupt(se);
        } catch (Throwable th) {
            if (SanityManager.DEBUG) {
                SanityManager.showTrace(th);
                th.printStackTrace();
            }
            throw markCorrupt(StandardException.newException(SQLState.LOG_RECOVERY_FAILED, th));
        }
    } else {
        tf.useTransactionTable((Formatable) null);
        // set the transaction factory short id
        tf.resetTranId();
    }
    // done with recovery
    // ///////////////////////////////////////////////////////////
    // setup checkpoint daemon and cache cleaner
    // ///////////////////////////////////////////////////////////
    checkpointDaemon = rawStoreFactory.getDaemon();
    if (checkpointDaemon != null) {
        myClientNumber = checkpointDaemon.subscribe(this, true);
        // use the same daemon for the cache cleaner
        dataFactory.setupCacheCleaner(checkpointDaemon);
    }
}
Also used : IOException(java.io.IOException) StorageRandomAccessFile(org.apache.derby.io.StorageRandomAccessFile) StandardException(org.apache.derby.shared.common.error.StandardException) Formatable(org.apache.derby.iapi.services.io.Formatable) RawTransaction(org.apache.derby.iapi.store.raw.xact.RawTransaction) StorageFile(org.apache.derby.io.StorageFile)
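
Much of recover() is arithmetic on packed log instants: a single long that combines a log file number with a byte position in that file, composed with LogCounter.makeLogInstantAsLong and decomposed with LogCounter.getLogFileNumber / LogCounter.getLogFilePosition, exactly as done for redoLWM, undoLWM and logEnd above. Below is a small hedged sketch; the import path is assumed to be the usual org.apache.derby.iapi.store.raw.log package, and the file number and offset are invented for illustration.

import org.apache.derby.iapi.store.raw.log.LogCounter;

// Sketch only: compose and decompose a log instant the way recover()
// derives firstLogFileNumber from redoLWM/undoLWM and endPosition from logEnd.
public class LogInstantSketch {

    public static void main(String[] args) {
        long fileNumber = 7L;    // hypothetical log file number
        long position = 1024L;   // hypothetical byte offset within that file

        long instant = LogCounter.makeLogInstantAsLong(fileNumber, position);

        System.out.println(LogCounter.getLogFileNumber(instant));   // 7
        System.out.println(LogCounter.getLogFilePosition(instant)); // 1024
        System.out.println(LogCounter.toDebugString(instant));      // debug form
    }
}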

Example 34 with RawTransaction

use of org.apache.derby.iapi.store.raw.xact.RawTransaction in project derby by apache.

the class BeginXact method doMe.

/**
 *		Loggable methods
 *		@see Loggable
 */
/**
 *		Apply the change indicated by this operation and optional data.
 *
 *		@param xact			the Transaction
 *		@param instant		the log instant of this operation
 *		@param in			optional data
 */
public void doMe(Transaction xact, LogInstant instant, LimitObjectInput in) {
    RawTransaction rt = (RawTransaction) xact;
    // If we are not doing fake logging for an in-memory database
    if (instant != null) {
        rt.setFirstLogInstant(instant);
        // need to do this here rather than in the transaction object for
        // recovery.
        rt.addUpdateTransaction(transactionStatus);
    }
}
Also used : RawTransaction(org.apache.derby.iapi.store.raw.xact.RawTransaction)

Example 35 with RawTransaction

use of org.apache.derby.iapi.store.raw.xact.RawTransaction in project derby by apache.

the class FileContainer method prepareForBulkLoad.

protected void prepareForBulkLoad(BaseContainerHandle handle, int numPage) {
    clearPreallocThreshold();
    RawTransaction tran = handle.getTransaction();
    // find the last allocation page - do not invalidate the alloc cache,
    // we don't want to prevent other people from reading or writing
    // pages.
    AllocPage allocPage = findLastAllocPage(handle, tran);
    // it is not guaranteed that we can pre-allocate that
    // many pages, we only promise to try.
    if (allocPage != null) {
        allocPage.preAllocatePage(this, 0, numPage);
        allocPage.unlatch();
    }
}
Also used : RawTransaction(org.apache.derby.iapi.store.raw.xact.RawTransaction)

Aggregations

RawTransaction (org.apache.derby.iapi.store.raw.xact.RawTransaction) 40
RecordHandle (org.apache.derby.iapi.store.raw.RecordHandle) 10
LockingPolicy (org.apache.derby.iapi.store.raw.LockingPolicy) 6
PageKey (org.apache.derby.iapi.store.raw.PageKey) 6
StandardException (org.apache.derby.shared.common.error.StandardException) 6
ContextManager (org.apache.derby.iapi.services.context.ContextManager) 5
StorageFile (org.apache.derby.io.StorageFile) 5
IOException (java.io.IOException) 4
DynamicByteArrayOutputStream (org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream) 3
RawContainerHandle (org.apache.derby.iapi.store.raw.data.RawContainerHandle) 3
Serviceable (org.apache.derby.iapi.services.daemon.Serviceable) 2
FormatableBitSet (org.apache.derby.iapi.services.io.FormatableBitSet) 2
CompatibilitySpace (org.apache.derby.iapi.services.locks.CompatibilitySpace) 2
LogicalUndo (org.apache.derby.iapi.store.access.conglomerate.LogicalUndo) 2
ByteArrayOutputStream (java.io.ByteArrayOutputStream) 1
File (java.io.File) 1
OutputStream (java.io.OutputStream) 1
MalformedURLException (java.net.MalformedURLException) 1
URL (java.net.URL) 1
Properties (java.util.Properties) 1