use of org.apache.derby.iapi.services.io.ArrayInputStream in project derby by apache.
the class LogToFile, method initializeReplicationSlaveRole.
/**
* Initializes logOut so that log received from the replication
* master can be appended to the log file.
*
* Normally, logOut (the file log records are appended to) is set
* up as part of the recovery process. When the database is booted
* in replication slave mode, however, recovery will not get to
* the point where logOut is initialized until this database is no
* longer in slave mode. Since logOut is needed to append log
* records received from the master, logOut needs to be set up for
* replication slave mode.
*
* This method finds the last log record in the log file with the
* highest number. logOut is set up so that log records will be
* appended to the end of that file, and the endPosition and
* lastFlush variables are set to point to the end of the same
* file. All this is normally done as part of recovery.
*
* After the first log file switch resulting from applying log
* received from the master, recovery will be allowed to read up
* to, but not including, the current log file which is the file
* numbered logFileNumber.
*
* Note that this method must not be called until LogToFile#boot()
* has completed. Currently, this is ensured because RawStore#boot
* starts the SlaveFactory (in turn calling this method) after
* LogFactory.boot() has completed. Race conditions for
* logFileNumber may occur if this is changed.
*
* @exception StandardException Standard Derby error policy
*/
public void initializeReplicationSlaveRole() throws StandardException {
    if (SanityManager.DEBUG) {
        SanityManager.ASSERT(inReplicationSlaveMode,
            "This method should only be used when in slave replication mode");
    }
    try {
        // Find the log file with the highest file number on disk
        while (getLogFileAtBeginning(logFileNumber + 1) != null) {
            logFileNumber++;
        }

        // Scan the highest log file to find its end.
        long startInstant =
            LogCounter.makeLogInstantAsLong(logFileNumber, LOG_FILE_HEADER_SIZE);
        long logEndInstant = LOG_FILE_HEADER_SIZE;

        StreamLogScan scanOfHighestLogFile =
            (StreamLogScan) openForwardsScan(startInstant, (LogInstant) null);
        ArrayInputStream scanInputStream = new ArrayInputStream();
        while (scanOfHighestLogFile.getNextRecord(scanInputStream, null, 0) != null) {
            logEndInstant = scanOfHighestLogFile.getLogRecordEnd();
        }

        setEndPosition(LogCounter.getLogFilePosition(logEndInstant));
        // endPosition and logFileNumber now point to the end of the
        // highest log file. This is where a new log record should be
        // appended.

        /*
         * Open the highest log file and make sure log records are
         * appended at the end of it
         */
        StorageRandomAccessFile logFile = null;
        if (isWriteSynced) {
            logFile = openLogFileInWriteMode(getLogFileName(logFileNumber));
        } else {
            logFile = privRandomAccessFile(getLogFileName(logFileNumber), "rw");
        }
        logOut = new LogAccessFile(this, logFile, logBufferSize);
        lastFlush = endPosition;
        // append log records at the end of the file
        logFile.seek(endPosition);
    } catch (IOException ioe) {
        throw StandardException.newException(
            SQLState.REPLICATION_UNEXPECTED_EXCEPTION, ioe);
    }
}
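The instant arithmetic above relies on LogCounter packing a log file number and a byte offset within that file into a single long. A minimal sketch of that encoding, assuming the common layout of the file number in the high half and the position in the low half (the exact bit widths in Derby's LogCounter may differ):

// Hypothetical re-implementation of LogCounter's packing, for illustration
// only; Derby's actual LogCounter may use different bit widths.
public final class InstantEncodingDemo {
    private static final int FILE_NUMBER_SHIFT = 32;
    private static final long FILE_POSITION_MASK = 0xFFFFFFFFL;

    // Pack a log file number and a byte offset into one instant.
    static long makeLogInstantAsLong(long fileNumber, long filePosition) {
        return (fileNumber << FILE_NUMBER_SHIFT) | filePosition;
    }

    // Extract the byte offset within the log file.
    static long getLogFilePosition(long instant) {
        return instant & FILE_POSITION_MASK;
    }

    // Extract the log file number.
    static long getLogFileNumber(long instant) {
        return instant >>> FILE_NUMBER_SHIFT;
    }

    public static void main(String[] args) {
        long instant = makeLogInstantAsLong(7L, 24L); // file 7, just past the header
        System.out.println(getLogFileNumber(instant));   // 7
        System.out.println(getLogFilePosition(instant)); // 24
    }
}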
use of org.apache.derby.iapi.services.io.ArrayInputStream in project derby by apache.
the class FileLogger, method readLogRecord.
/**
* Read the next log record from the scan.
*
* <P>MT - caller must provide synchronization (right now, it is only
* called in recovery to find the checkpoint log record. When this method
* is called by a more general audience, MT must be revisited).
*
* @param scan an opened log scan
* @param size estimated size of the log record
*
* @return the log operation that is next in the scan, or null if no
* more log operations remain in the log scan
*
* @exception IOException Error reading the log file
* @exception StandardException Standard Derby error policy
* @exception ClassNotFoundException log corrupted
*/
protected Loggable readLogRecord(StreamLogScan scan, int size)
    throws IOException, StandardException, ClassNotFoundException {
    Loggable lop = null;
    ArrayInputStream logInputBuffer = new ArrayInputStream(new byte[size]);
    LogRecord record = scan.getNextRecord(logInputBuffer, null, 0);
    if (record != null)
        lop = record.getLoggable();
    return lop;
}
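A hypothetical call site, sketching how recovery code might use readLogRecord to fetch a single record, such as the checkpoint record, from an already opened scan; the helper name and the size estimate are assumptions, not Derby's actual recovery code:

// Illustrative only: a hypothetical helper inside FileLogger. The 1000-byte
// size is just an initial estimate; the scan resizes the buffer if the
// record turns out to be larger.
private Loggable readCheckpointRecord(StreamLogScan scan)
    throws IOException, StandardException, ClassNotFoundException {
    Loggable checkpoint = readLogRecord(scan, 1000);
    if (checkpoint == null) {
        // scan exhausted: no record exists at or after the requested position
    }
    return checkpoint;
}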
use of org.apache.derby.iapi.services.io.ArrayInputStream in project derby by apache.
the class FileLogger, method reprepare.
/**
* During recovery re-prepare a transaction.
* <p>
* After redo() and undo(), this routine is called on all outstanding
* in-doubt (prepared) transactions. This routine re-acquires all
* logical write locks for operations in the xact, and then modifies
* the transaction table entry to make the transaction look as if it
* had just been prepared following startup after recovery.
* <p>
*
* @param t is the transaction performing the re-prepare
* @param prepareId is the transaction ID to be re-prepared
* @param prepareStopAt is the log instant (inclusive) where the
* re-prepare should stop.
* @param prepareStartAt is the log instant (inclusive) where re-prepare
* should begin; this is normally the log instant
* of the last log record of the transaction that
* is to be re-prepared. If null, re-prepare
* starts from the end of the log.
*
* @exception StandardException Standard exception policy.
*/
public void reprepare(RawTransaction t, TransactionId prepareId,
                      LogInstant prepareStopAt, LogInstant prepareStartAt)
    throws StandardException {

    if (SanityManager.DEBUG) {
        if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
            if (prepareStartAt != null) {
                SanityManager.DEBUG(LogToFile.DBG_FLAG,
                    "----------------------------------------------------\n" +
                    "\nBegin of Reprepare: " + prepareId.toString() +
                    " start at " + prepareStartAt.toString() +
                    " stop at " + prepareStopAt.toString() +
                    "\n----------------------------------------------------\n");
            } else {
                SanityManager.DEBUG(LogToFile.DBG_FLAG,
                    "----------------------------------------------------\n" +
                    "\nBegin of Reprepare: " + prepareId.toString() +
                    " start at end of log stop at " + prepareStopAt.toString() +
                    "\n----------------------------------------------------\n");
            }
        }
    }

    // statistics
    int clrskipped = 0;
    int logrecordseen = 0;

    RePreparable lop = null;

    // stream to read the log record - initial size 4096, scanLog needs
    // to resize if the log record is larger than that.
    ArrayInputStream rawInput = null;

    try {
        StreamLogScan scanLog;
        if (prepareStartAt == null) {
            // don't know where to start, scan from end of log
            scanLog = (StreamLogScan) logFactory.openBackwardsScan(prepareStopAt);
        } else {
            if (prepareStartAt.lessThan(prepareStopAt)) {
                // nothing to prepare!
                return;
            }
            scanLog = (StreamLogScan) logFactory.openBackwardsScan(
                ((LogCounter) prepareStartAt).getValueAsLong(), prepareStopAt);
        }

        if (SanityManager.DEBUG)
            SanityManager.ASSERT(scanLog != null, "cannot open log for prepare");

        rawInput = new ArrayInputStream(new byte[4096]);

        LogRecord record;
        while ((record = scanLog.getNextRecord(rawInput, prepareId, 0)) != null) {
            if (SanityManager.DEBUG) {
                SanityManager.ASSERT(record.getTransactionId().equals(prepareId),
                    "getNextRecord return unqualified log rec for prepare");
            }

            logrecordseen++;

            if (record.isCLR()) {
                clrskipped++;

                // the loggable is still in the input stream, get rid of it
                record.skipLoggable();

                // read the prepareInstant
                long prepareInstant = rawInput.readLong();

                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
                        SanityManager.DEBUG(LogToFile.DBG_FLAG,
                            "Skipping over CLRs, reset scan to " +
                            LogCounter.toDebugString(prepareInstant));
                    }
                }

                scanLog.resetPosition(new LogCounter(prepareInstant));
                continue;
            }

            if (record.requiresPrepareLocks()) {
                lop = record.getRePreparable();
            } else {
                continue;
            }

            if (lop != null) {
                // Reget locks based on the log record. Reclaim all locks
                // with a serializable locking policy; since we are only
                // reclaiming write locks, isolation level does not matter
                // much.
                lop.reclaimPrepareLocks(t,
                    t.newLockingPolicy(LockingPolicy.MODE_RECORD,
                        TransactionController.ISOLATION_REPEATABLE_READ, true));

                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
                        SanityManager.DEBUG(LogToFile.DBG_FLAG,
                            "Reprepare log record at instant " +
                            scanLog.getInstant() + " : " + lop);
                    }
                }
            }
        }
    } catch (ClassNotFoundException cnfe) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_CORRUPTED, cnfe));
    } catch (IOException ioe) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_READ_LOG_FOR_UNDO, ioe));
    } catch (StandardException se) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_UNDO_FAILED, se,
                prepareId, lop, (Object) null));
    } finally {
        if (rawInput != null) {
            try {
                rawInput.close();
            } catch (IOException ioe) {
                throw logFactory.markCorrupt(
                    StandardException.newException(
                        SQLState.LOG_READ_LOG_FOR_UNDO, ioe, prepareId));
            }
        }
    }

    if (SanityManager.DEBUG) {
        if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
            SanityManager.DEBUG(LogToFile.DBG_FLAG,
                "Finish prepare" +
                ", clr skipped = " + clrskipped +
                ", record seen = " + logrecordseen + "\n");
        }
    }

    if (SanityManager.DEBUG) {
        if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
            SanityManager.DEBUG(LogToFile.DBG_FLAG,
                "----------------------------------------------------\n" +
                "End of recovery rePrepare\n" +
                ", clr skipped = " + clrskipped +
                ", record seen = " + logrecordseen +
                "\n----------------------------------------------------\n");
        }
    }
}
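The CLR handling above is the classic ARIES-style trick: a compensation log record carries the instant of the record preceding the one it undid, so a backwards scan can jump over already-compensated work instead of re-reading it. A stripped-down sketch of that loop shape, using hypothetical record and scan types rather than Derby's:

// Hypothetical types; this only illustrates the skip-over-CLR loop shape
// shared by reprepare() above and undo() below.
interface Rec { boolean isCLR(); long undoInstant(); }
interface BackScan { Rec next(); void resetPosition(long instant); }

final class ClrSkipDemo {
    // Walk a backwards scan, jumping over work a CLR already compensated.
    static int walk(BackScan scan) {
        int seen = 0;
        Rec r;
        while ((r = scan.next()) != null) {
            seen++;
            if (r.isCLR()) {
                // a CLR stores the instant preceding the record it undid;
                // repositioning there skips the already-compensated range
                scan.resetPosition(r.undoInstant());
                continue;
            }
            // ... process the record (reclaim locks, generate undo, ...) ...
        }
        return seen;
    }
}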
use of org.apache.derby.iapi.services.io.ArrayInputStream in project derby by apache.
the class FileLogger, method undo.
/**
* Undo part of, or the entire, transaction. Rolling back begins at the log
* record at undoStartAt and stops at (inclusive) the log record at
* undoStopAt.
*
* <P>MT - Not needed. A transaction must be single threaded thru undo,
* each RawTransaction has its own logger, therefore no need to
* synchronize. The RawTransaction must handle synchronizing with
* multiple threads during rollback.
*
* @param t the transaction that needs to be rolled back
* @param undoId the transaction ID
* @param undoStopAt the last log record that should be rolled back
* @param undoStartAt the first log record that should be rolled back
*
* @exception StandardException Standard Derby error policy
*
* @see Logger#undo
*/
public void undo(RawTransaction t, TransactionId undoId,
                 LogInstant undoStopAt, LogInstant undoStartAt)
    throws StandardException {

    if (SanityManager.DEBUG) {
        if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
            if (undoStartAt != null) {
                SanityManager.DEBUG(LogToFile.DBG_FLAG,
                    "\nUndo transaction: " + undoId.toString() +
                    " start at " + undoStartAt.toString() +
                    " stop at " + undoStopAt.toString());
            } else {
                SanityManager.DEBUG(LogToFile.DBG_FLAG,
                    "\nUndo transaction: " + undoId.toString() +
                    " start at end of log stop at " + undoStopAt.toString());
            }
        }
    }

    // statistics
    int clrgenerated = 0;
    int clrskipped = 0;
    int logrecordseen = 0;

    StreamLogScan scanLog;
    Compensation compensation = null;
    Undoable lop = null;

    // stream to read the log record - initial size 4096, scanLog needs
    // to resize if the log record is larger than that.
    ArrayInputStream rawInput = null;

    try {
        if (undoStartAt == null) {
            // don't know where to start, rollback from end of log
            scanLog = (StreamLogScan) logFactory.openBackwardsScan(undoStopAt);
        } else {
            if (undoStartAt.lessThan(undoStopAt)) {
                // nothing to undo!
                return;
            }
            long undoStartInstant = ((LogCounter) undoStartAt).getValueAsLong();
            scanLog = (StreamLogScan) logFactory.openBackwardsScan(
                undoStartInstant, undoStopAt);
        }

        if (SanityManager.DEBUG)
            SanityManager.ASSERT(scanLog != null, "cannot open log for undo");

        rawInput = new ArrayInputStream(new byte[4096]);

        LogRecord record;
        while ((record = scanLog.getNextRecord(rawInput, undoId, 0)) != null) {
            if (SanityManager.DEBUG) {
                SanityManager.ASSERT(record.getTransactionId().equals(undoId),
                    "getNextRecord return unqualified log record for undo");
            }

            logrecordseen++;

            if (record.isCLR()) {
                clrskipped++;

                // the loggable is still in the input stream, get rid of it
                record.skipLoggable();

                // read the undoInstant
                long undoInstant = rawInput.readLong();

                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
                        SanityManager.DEBUG(LogToFile.DBG_FLAG,
                            "Skipping over CLRs, reset scan to " +
                            LogCounter.toDebugString(undoInstant));
                    }
                }

                scanLog.resetPosition(new LogCounter(undoInstant));
                continue;
            }

            lop = record.getUndoable();

            if (lop != null) {
                int optionalDataLength = rawInput.readInt();
                int savePosition = rawInput.getPosition();
                rawInput.setLimit(optionalDataLength);

                compensation = lop.generateUndo(t, rawInput);

                if (SanityManager.DEBUG) {
                    if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
                        SanityManager.DEBUG(LogToFile.DBG_FLAG,
                            "Rollback log record at instant " +
                            LogCounter.toDebugString(scanLog.getInstant()) +
                            " : " + lop);
                    }
                }

                clrgenerated++;

                if (compensation != null) {
                    // generateUndo may have read stuff off the
                    // stream, reset it for the undo operation.
                    rawInput.setLimit(savePosition, optionalDataLength);

                    // log the compensation op that rolls back the
                    // operation at this instant
                    t.logAndUndo(compensation,
                        new LogCounter(scanLog.getInstant()), rawInput);

                    compensation.releaseResource(t);
                    compensation = null;
                }
                // if compensation is null, log operation is redo only
            }
            // if this is not an undoable operation, continue with next log
            // record
        }
    } catch (ClassNotFoundException cnfe) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_CORRUPTED, cnfe));
    } catch (IOException ioe) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_READ_LOG_FOR_UNDO, ioe));
    } catch (StandardException se) {
        throw logFactory.markCorrupt(
            StandardException.newException(SQLState.LOG_UNDO_FAILED, se,
                undoId, lop, compensation));
    } finally {
        if (compensation != null) {
            // errored out
            compensation.releaseResource(t);
        }

        if (rawInput != null) {
            try {
                rawInput.close();
            } catch (IOException ioe) {
                throw logFactory.markCorrupt(
                    StandardException.newException(
                        SQLState.LOG_READ_LOG_FOR_UNDO, ioe, undoId));
            }
        }
    }

    if (SanityManager.DEBUG) {
        if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {
            SanityManager.DEBUG(LogToFile.DBG_FLAG,
                "Finish undo" +
                ", clr generated = " + clrgenerated +
                ", clr skipped = " + clrskipped +
                ", record seen = " + logrecordseen + "\n");
        }
    }
}
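ArrayInputStream's limit API is what makes the two-phase read above work: the optional data is fenced off with setLimit(length), generateUndo may consume part of it, and setLimit(offset, length) rewinds to the saved offset so logAndUndo sees the full payload again. A minimal sketch of that save-and-rewind pattern; firstPass and secondPass are hypothetical consumers:

import java.io.IOException;
import org.apache.derby.iapi.services.io.ArrayInputStream;

final class LimitRewindDemo {
    // Read the same fenced-off region of the stream twice.
    static void readTwice(ArrayInputStream in) throws IOException {
        int optionalDataLength = in.readInt();   // length prefix
        int savePosition = in.getPosition();     // where the optional data starts
        in.setLimit(optionalDataLength);         // fence off the optional data
        firstPass(in);                           // may consume part of the region
        in.setLimit(savePosition, optionalDataLength); // rewind to the start
        secondPass(in);                          // sees the full region again
    }

    // Hypothetical consumers standing in for generateUndo and logAndUndo.
    static void firstPass(ArrayInputStream in) throws IOException { /* ... */ }
    static void secondPass(ArrayInputStream in) throws IOException { /* ... */ }
}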
use of org.apache.derby.iapi.services.io.ArrayInputStream in project derby by apache.
the class FileContainer, method readHeaderFromArray.
/**
* Read containerInfo from a byte array.
* The container header array must have been written by, or be in
* the same format as produced by, writeHeaderFromArray.
*
* @exception StandardException Derby Standard error policy
* @exception IOException error in reading the header from file
*/
private void readHeaderFromArray(byte[] a) throws StandardException, IOException {
    ArrayInputStream inStream = new ArrayInputStream(a);

    inStream.setLimit(CONTAINER_INFO_SIZE);
    int fid = inStream.readInt();
    if (fid != formatIdInteger) {
        throw StandardException.newException(
            SQLState.DATA_UNKNOWN_CONTAINER_FORMAT, getIdentity(), fid);
    }

    int status = inStream.readInt();
    pageSize = inStream.readInt();
    spareSpace = inStream.readInt();
    minimumRecordSize = inStream.readInt();
    initialPages = inStream.readShort();
    PreAllocSize = inStream.readShort();
    firstAllocPageNumber = inStream.readLong();
    firstAllocPageOffset = inStream.readLong();
    containerVersion = inStream.readLong();
    estimatedRowCount = inStream.readLong();
    reusableRecordIdSequenceNumber = inStream.readLong();
    lastLogInstant = null;

    if (PreAllocSize == 0)  // pre 2.0, we don't store this.
        PreAllocSize = DEFAULT_PRE_ALLOC_SIZE;

    // read spare long
    long spare3 = inStream.readLong();

    // default of 1.
    if (initialPages == 0)
        initialPages = 1;

    // container read in from disk, reset preAllocation values
    PreAllocThreshold = PRE_ALLOC_THRESHOLD;

    // validate checksum
    long onDiskChecksum = inStream.readLong();
    checksum.reset();
    checksum.update(a, 0, CONTAINER_INFO_SIZE - CHECKSUM_SIZE);

    if (onDiskChecksum != checksum.getValue()) {
        PageKey pk = new PageKey(identity, FIRST_ALLOC_PAGE_NUMBER);
        throw dataFactory.markCorrupt(
            StandardException.newException(SQLState.FILE_BAD_CHECKSUM,
                pk, checksum.getValue(), onDiskChecksum,
                org.apache.derby.iapi.util.StringUtil.hexDump(a)));
    }

    allocCache.reset();

    // set the in memory state
    setDroppedState((status & FILE_DROPPED) != 0);
    setCommittedDropState((status & FILE_COMMITTED_DROP) != 0);
    setReusableRecordIdState((status & FILE_REUSABLE_RECORDID) != 0);
}
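The trailing-checksum validation works because the checksum covers every header byte except the checksum slot itself, so corruption in any stored field shows up as a mismatch. A self-contained sketch of the same scheme using java.util.zip.CRC32; the 64-byte header and the 8-byte checksum slot are assumptions for the example, not FileContainer's actual layout:

import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public final class HeaderChecksumDemo {
    static final int HEADER_SIZE = 64;   // assumed, not Derby's CONTAINER_INFO_SIZE
    static final int CHECKSUM_SIZE = 8;  // checksum stored as a long at the end

    // Checksum everything before the checksum slot, then store it there.
    static void sealHeader(byte[] header) {
        CRC32 crc = new CRC32();
        crc.update(header, 0, HEADER_SIZE - CHECKSUM_SIZE);
        ByteBuffer.wrap(header, HEADER_SIZE - CHECKSUM_SIZE, CHECKSUM_SIZE)
                  .putLong(crc.getValue());
    }

    // Recompute over the same range and compare with the stored value.
    static boolean verifyHeader(byte[] header) {
        long onDisk = ByteBuffer.wrap(header, HEADER_SIZE - CHECKSUM_SIZE, CHECKSUM_SIZE)
                                .getLong();
        CRC32 crc = new CRC32();
        crc.update(header, 0, HEADER_SIZE - CHECKSUM_SIZE);
        return onDisk == crc.getValue();
    }

    public static void main(String[] args) {
        byte[] header = new byte[HEADER_SIZE];
        header[0] = 42;            // pretend this is the format id
        sealHeader(header);
        System.out.println(verifyHeader(header)); // true
        header[4] ^= 1;            // corrupt one byte
        System.out.println(verifyHeader(header)); // false
    }
}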