use of org.apache.derby.iapi.store.raw.RecordHandle in project derby by apache.
the class FileContainer method newPage.
/**
* Create a new page in the container.
*
* <BR> MT - thread aware - It is assumed that our caller (our super class)
* has already arranged a logical lock on page allocation to only allow a
* single thread through here.
*
* Adding a new page involves 2 transactions and 2 pages.
* The User Transaction (UT) initiated the addPage call and expects a
* latched page (owned by the UT) to be returned.
* The Nested Top Transaction (NTT) is the transaction started by RawStore
* inside an addPage call. This NTT is committed before the page is
* returned. The NTT is used to access high-traffic data structures such
* as the AllocPage.
*
* This is an outline of the algorithm used in adding a page:
* 1) find or make an allocPage which can handle the adding of a new page.
* Latch the allocPage with the NTT.
* 2) invalidate the allocation information cached by the container.
* Without the cache no page can be gotten from the container. Pages
* already in the page cache are not affected. Thus by latching the
* allocPage and invalidating the allocation cache, this NTT blocks out
* all page gets from this container until it commits.
* 3) the allocPage determines which page can be allocated, marks that in its
* data structure (the alloc extent) and returns the page number of the
* new page. This change is associated with the NTT.
* 4) the NTT gets or creates the new page in the page cache (bypassing the
* lookup of the allocPage since that is already latched by the NTT and
* will deadlock).
* 5) the NTT initializes the page (marks it as being a VALID page).
* 6) the page latch is transferred to the UT from the NTT.
* 7) the new page is returned, latched by the UT.
*
* If we use an NTT, the caller has to commit the NTT to release the
* allocPage latch. If we don't use an NTT, the allocPage latch is released
* as this routine returns.
*
* @param userHandle - the container handle opened by the user transaction,
* use this to latch the new user page
* @param ntt - the nested top transaction for the purpose of allocating the new page
* If ntt is null, use the user transaction for allocation.
* @param allocHandle - the container handle opened by the ntt,
* use this to latch the alloc page
*
* @exception StandardException Standard Derby error policy
*/
protected BasePage newPage(BaseContainerHandle userHandle, RawTransaction ntt, BaseContainerHandle allocHandle, boolean isOverflow) throws StandardException {
// NOTE: we are single threaded thru this method, see MT comment
boolean useNTT = (ntt != null);
// if ntt is null, use user transaction
if (!useNTT)
ntt = userHandle.getTransaction();
// last allocated page
long lastPage;
// last pre-allocated page
long lastPreallocPage;
// the page number of the new page (init to appease compiler)
long pageNumber = ContainerHandle.INVALID_PAGE_NUMBER;
// the identity of the new page
PageKey pkey;
// if true, we are trying to reuse a page
boolean reuse;
/* in case the page recommended by allocPage is not committed yet, we may
 * need to retry a couple of times */
boolean retry;
int numtries = 0;
int maxTries = InterruptStatus.MAX_INTERRUPT_RETRIES;
long startSearch = lastAllocatedPage;
// the alloc page
AllocPage allocPage = null;
// the new page
BasePage page = null;
try {
do {
// we don't expect we need to retry
retry = false;
synchronized (allocCache) {
if (SanityManager.DEBUG) {
SanityManager.ASSERT(ntt.getId().equals(allocHandle.getTransaction().getId()));
if (useNTT)
SanityManager.ASSERT(!ntt.getId().equals(userHandle.getTransaction().getId()));
}
/* find an allocation page that can handle adding a new
* page.
*
* allocPage is unlatched when the ntt commits. The new
* page is initialized by the ntt but the latch is
* transferred to the user transaction before the allocPage
* is unlatched. The allocPage latch prevents almost any
* other reader or writer from finding the new page until
* the ntt is committed and the new page is latched by the
* user transaction.
*
* (If the page is being reused, it is possible for another
* xact which kept a handle on the reused page to find the
* page during the transfer UT -> NTT. If this unlikely
* event occurs and the transfer fails [see code relating
* to transfer below], we retry from the beginning.)
*
* After the NTT commits a reader (getNextPageNumber) may
* get the page number of the newly allocated page and it
* will wait for the new page and latch it when the user
* transaction commits, aborts or unlatches the new page.
* Whether the user transaction commits or aborts, the new
* page stays allocated.
*
* RESOLVE: before NTT rolls back (or commits) the latch is
* released. To repopulate the allocation cache, need to
* get either the container lock on add page, or get a per
* allocation page lock.
*
* This blocks all page read (getPage) from accessing this
* alloc page in this container until the alloc page is
* unlatched. Those who already have a page handle into
* this container are unaffected.
*
* In other words, allocation blocks out reader (of any
* page that is managed by this alloc page) by the latch
* on the allocation page.
*
* Note that page writes can proceed as usual.
*/
try {
allocPage = findAllocPageForAdd(allocHandle, ntt, startSearch);
} catch (InterruptDetectedException e) {
// Interrupted while finding the alloc page. If we have retries
// left, back off and restart the loop via "continue" below.
if (--maxTries > 0) {
// Clear firstAllocPageNumber, i.e. undo side
// effect of makeAllocPage, so retry will work
firstAllocPageNumber = ContainerHandle.INVALID_PAGE_NUMBER;
retry = true;
// Sleep briefly before retrying so we don't spin on the CPU.
try {
Thread.sleep(InterruptStatus.INTERRUPT_RETRY_SLEEP);
} catch (InterruptedException ee) {
// This thread received an interrupt as
// well, make a note.
InterruptStatus.setInterrupted();
}
continue;
} else {
throw StandardException.newException(SQLState.FILE_IO_INTERRUPTED, e);
}
}
allocCache.invalidate(allocPage, allocPage.getPageNumber());
}
if (SanityManager.DEBUG) {
if (allocPage == null)
allocCache.dumpAllocationCache();
SanityManager.ASSERT(allocPage != null, "findAllocPageForAdd returned a null alloc page");
}
//
// get the next free page's number.
// for case 1, page number > lastPreallocPage
// for case 2, page number <= lastPage
// for case 3, lastPage < page number <= lastPreallocPage
//
pageNumber = allocPage.nextFreePageNumber(startSearch);
// need to distinguish between the following 3 cases:
// 1) the page has not been allocated or initialized.
// Create it in the page cache and sync it to disk.
// 2) the page is being re-allocated.
// We need to read it in to re-initialize it
// 3) the page has been preallocated.
// Create it in the page cache and don't sync it to disk
//
// first find out the current last initialized page and
// preallocated page before the new page is added
lastPage = allocPage.getLastPagenum();
lastPreallocPage = allocPage.getLastPreallocPagenum();
reuse = pageNumber <= lastPage;
// no address translation necessary
pkey = new PageKey(identity, pageNumber);
if (reuse) {
// if re-using a page, make sure the deallocLock on the new
// page is not held. We only need a zero duration lock on
// the new page because the allocPage is latched and this
// is the only thread which can be looking at this
// pageNumber.
RecordHandle deallocLock = BasePage.MakeRecordHandle(pkey, RecordHandle.DEALLOCATE_PROTECTION_HANDLE);
if (!getDeallocLock(allocHandle, deallocLock,
        false /* nowait */, true /* zero duration */)) {
// Could not get the zero-duration dealloc lock: the transaction
// which deallocated this page has not committed yet, so the page
// cannot be reused. Adjust the search start and retry, possibly
// until we get a brand new page.
if (numtries == 0) {
startSearch = ContainerHandle.INVALID_PAGE_NUMBER;
lastAllocatedPage = pageNumber;
} else
// continue from where we were
startSearch = pageNumber;
numtries++;
// We have to unlatch the allocPage so that if that
// transaction rolls back, it won't deadlock with this
// transaction.
allocPage.unlatch();
allocPage = null;
retry = true;
} else {
// we got the lock, next time start from there
lastAllocatedPage = pageNumber;
}
} else {
// Not reusing a page. If we had to retry, we may have skipped
// over reusable deallocated pages, so restart the search from
// the beginning next time; otherwise remember where we are.
if (numtries > 0)
lastAllocatedPage = ContainerHandle.INVALID_PAGE_NUMBER;
else
lastAllocatedPage = pageNumber;
}
// Retry from the beginning if necessary.
if (retry)
continue;
// If we get past here we must have (retry == false)
if (SanityManager.DEBUG) {
SanityManager.ASSERT(retry == false);
}
if (SanityManager.DEBUG) {
// ASSERT lastPage <= lastPreallocPage
if (lastPage > lastPreallocPage) {
SanityManager.THROWASSERT("last page " + lastPage + " > lastPreallocPage " + lastPreallocPage);
}
}
// No I/O at all if this new page is requested as part of a
// create and load statement or this new page is in a temporary
// container.
//
// In the former case, BaseContainer will allow the
// MODE_UNLOGGED bit to go through to the nested top transaction
// alloc handle. In the latter case, there is no nested top
// transaction and the alloc handle is the user handle, which
// is UNLOGGED.
boolean noIO = (allocHandle.getMode() & ContainerHandle.MODE_UNLOGGED) == ContainerHandle.MODE_UNLOGGED;
// Preallocate a block of pages if we are bulk-increasing the
// container size, or if this page is beyond the preallocated
// range and past the preallocation threshold.
if (!noIO && (bulkIncreaseContainerSize || (pageNumber > lastPreallocPage && pageNumber > PreAllocThreshold))) {
allocPage.preAllocatePage(this, PreAllocThreshold, PreAllocSize);
}
// Update the last preallocated page; it may have been changed by
// the preAllocatePage call. We don't want to do the sync if
// preAllocatePage already took care of it.
lastPreallocPage = allocPage.getLastPreallocPagenum();
boolean prealloced = pageNumber <= lastPreallocPage;
// Arguments for creating the page. They are only used for new page
// creation or for creating a preallocated page, not for reuse:
// - the page format
// - whether or not to sync the page to disk
// - the page size
// - the spare space
PageCreationArgs createPageArgs = new PageCreationArgs(StoredPage.FORMAT_NUMBER, prealloced ? 0 : (noIO ? 0 : CachedPage.WRITE_SYNC), pageSize, spareSpace, minimumRecordSize, 0);
// RESOLVE: right now, there is no re-mapping of pages, so
// pageOffset = pageNumber*pageSize
long pageOffset = pageNumber * pageSize;
try {
page = initPage(allocHandle, pkey, createPageArgs, pageOffset, reuse, isOverflow);
} catch (StandardException se) {
if (SanityManager.DEBUG) {
SanityManager.DEBUG_PRINT("FileContainer", "got exception from initPage:" + "\nreuse = " + reuse + "\nsyncFlag = " + createPageArgs.syncFlag + "\nallocPage = " + allocPage);
}
allocCache.dumpAllocationCache();
throw se;
}
if (SanityManager.DEBUG) {
SanityManager.ASSERT(page != null, "initPage returns null page");
SanityManager.ASSERT(page.isLatched(), "initPage returns unlatched page");
}
// allocate the page in the allocation page bit map
allocPage.addPage(this, pageNumber, ntt, userHandle);
if (useNTT) {
// transfer the page latch from NTT to UT.
//
// after the page is unlatched by NTT, it is still
// protected from being found by almost everybody else
// because the alloc page is still latched and the alloc
// cache is invalidated.
//
// However it is possible for the page to be
// found by threads who specifically ask for this
// pagenumber (e.g. HeapPostCommit).
// We may find that such a thread has latched the page.
// We shouldn't wait for it because we have the alloc page
// latch, and this could cause deadlock (e.g.
// HeapPostCommit might call removePage and this would wait
// on the alloc page).
//
// We may instead find that we can latch the page, but that
// another thread has managed to get hold of it during the
// transfer and either deallocated it or otherwise changed it
// (add rows, delete rows etc.)
//
// Since this doesn't happen very often, we retry in these
// 2 cases (we give up the alloc page and page and we start
// this method from scratch).
//
// If the lock manager were changed to allow latches to be
// transferred between transactions, we wouldn't need to
// unlatch to do the transfer, and we would avoid having to
// retry in these cases (DERBY-2337).
page.unlatch();
page = null;
// need to find it in the cache again, since unlatching also
// released our "keep" on the page in the cache
page = (BasePage) pageCache.find(pkey);
page = latchPage(userHandle, page, false);
if (page == null ||
    // page has rows (including deleted rows)
    page.recordCount() != 0 ||
    page.getPageStatus() != BasePage.VALID_PAGE) {
retry = true;
if (page != null) {
page.unlatch();
page = null;
}
allocPage.unlatch();
allocPage = null;
}
}
// if ntt is null, no need to transfer. Page is latched by user
// transaction already. There will be no need to retry.
// the alloc page is unlatched in the finally block.
} while (retry == true);
// At this point, should have a page suitable for returning
if (SanityManager.DEBUG)
SanityManager.ASSERT(page.isLatched());
} catch (StandardException se) {
if (page != null)
page.unlatch();
page = null;
// rethrow error
throw se;
} finally {
if (!useNTT && allocPage != null) {
allocPage.unlatch();
allocPage = null;
}
// NTT is committed by the caller
}
if (SanityManager.DEBUG)
SanityManager.ASSERT(page.isLatched());
// Reset the bulk-preallocation state; it applies to a single
// bulk size increase at a time.
if (bulkIncreaseContainerSize) {
bulkIncreaseContainerSize = false;
PreAllocSize = DEFAULT_PRE_ALLOC_SIZE;
}
if (!isOverflow && page != null)
setLastInsertedPage(pageNumber);
// Bump the estimated page count; it is maintained without
// logging, so this is an estimate only.
if (estimatedPageCount >= 0)
estimatedPageCount++;
if (!this.identity.equals(page.getPageId().getContainerId())) {
if (SanityManager.DEBUG) {
SanityManager.THROWASSERT("just created a new page from a different container" + "\n this.identity = " + this.identity + "\n page.getPageId().getContainerId() = " + page.getPageId().getContainerId() + "\n userHandle is: " + userHandle + "\n allocHandle is: " + allocHandle + "\n this container is: " + this);
}
throw StandardException.newException(SQLState.DATA_DIFFERENT_CONTAINER, this.identity, page.getPageId().getContainerId());
}
// return the newly added page
return page;
}
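The sketch below (not Derby code; the class and all of its members are invented for illustration) models the seven-step latch-transfer shape of newPage with java.util.concurrent locks: latch the allocator, pick a page number, initialize and unlatch the page on the NTT side, re-latch it on the user side, and retry from the top if the transfer is beaten. In a single-threaded run the retry branch never fires; it is only there to show where newPage starts over.

import java.util.concurrent.locks.ReentrantLock;

class LatchTransferSketch {
    // stand-ins for the alloc page latch and the new page latch
    private final ReentrantLock allocLatch = new ReentrantLock();
    private final ReentrantLock pageLatch = new ReentrantLock();
    private long nextPageNumber = 1;

    /** Allocate a "page" for a user thread, retrying if the latch transfer fails. */
    long newPage() {
        long pageNumber = -1;
        boolean retry;
        do {
            retry = false;
            allocLatch.lock();                  // steps 1-2: latch the allocator, block other getters
            try {
                pageNumber = nextPageNumber++;  // step 3: pick the page number to allocate
                pageLatch.lock();               // steps 4-5: create and initialize the new page
                pageLatch.unlock();             // step 6: the initializing side gives up its latch...
                if (!pageLatch.tryLock()) {     // ...and the user side re-latches it
                    retry = true;               // another thread got in during the transfer: retry
                }
            } finally {
                allocLatch.unlock();            // committing the NTT releases the alloc page latch
            }
        } while (retry);
        return pageNumber;                      // step 7: returned "latched" by the user side
    }
}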
use of org.apache.derby.iapi.store.raw.RecordHandle in project derby by apache.
the class DeleteOperation method writeOptionalDataToBuffer.
/**
* If logical undo may be needed, writes out the row that was deleted
*
* @exception IOException Can be thrown by any of the methods of ObjectOutput
* @exception StandardException Standard Derby policy.
*/
private void writeOptionalDataToBuffer(RawTransaction t) throws StandardException, IOException {
if (SanityManager.DEBUG) {
SanityManager.ASSERT(this.page != null);
}
DynamicByteArrayOutputStream logBuffer = t.getLogBuffer();
int optionalDataStart = logBuffer.getPosition();
if (SanityManager.DEBUG) {
SanityManager.ASSERT(optionalDataStart == 0, "Buffer for writing the optional data should start at position 0");
}
if (undo != null)
this.page.logRecord(doMeSlot, BasePage.LOG_RECORD_DEFAULT, recordId, (FormatableBitSet) null, logBuffer, (RecordHandle) null);
int optionalDataLength = logBuffer.getPosition() - optionalDataStart;
if (SanityManager.DEBUG) {
if (optionalDataLength != logBuffer.getUsed())
SanityManager.THROWASSERT("wrong optional data length, optionalDataLength = " + optionalDataLength + ", logBuffer.getUsed() = " + logBuffer.getUsed());
}
// set the position to the beginning of the buffer
logBuffer.setPosition(optionalDataStart);
this.preparedLog = new ByteArray(logBuffer.getByteArray(), optionalDataStart, optionalDataLength);
}
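The same write-then-slice pattern used by writeOptionalDataToBuffer can be shown with plain JDK classes. The sketch below is hypothetical (ByteArrayOutputStream and the Slice class stand in for DynamicByteArrayOutputStream and ByteArray, and a raw byte[] stands in for page.logRecord): note the start position in a shared buffer, write the before-image, compute the length, and hand back an (array, offset, length) description of the optional data.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

class OptionalDataSliceSketch {

    /** Describes a region of a shared buffer: (array, offset, length). */
    static final class Slice {
        final byte[] array;
        final int offset;
        final int length;
        Slice(byte[] array, int offset, int length) {
            this.array = array;
            this.offset = offset;
            this.length = length;
        }
    }

    /** Write the deleted row image into a log buffer and describe it as a slice. */
    static Slice writeOptionalData(byte[] deletedRowImage) throws IOException {
        ByteArrayOutputStream logBuffer = new ByteArrayOutputStream();
        int optionalDataStart = logBuffer.size();   // expected to be 0, as the assert above checks
        logBuffer.write(deletedRowImage);           // stands in for page.logRecord(...)
        int optionalDataLength = logBuffer.size() - optionalDataStart;
        // Derby wraps the buffer's backing array directly; toByteArray() copies,
        // which is good enough for this illustration.
        return new Slice(logBuffer.toByteArray(), optionalDataStart, optionalDataLength);
    }
}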
use of org.apache.derby.iapi.store.raw.RecordHandle in project derby by apache.
the class ReclaimSpaceHelper method reclaimSpace.
/**
* Reclaim space based on work.
*/
public static int reclaimSpace(BaseDataFileFactory dataFactory, RawTransaction tran, ReclaimSpace work) throws StandardException {
if (work.reclaimWhat() == ReclaimSpace.CONTAINER)
return reclaimContainer(dataFactory, tran, work);
// Else, not reclaiming container. Get a no-wait shared lock on the
// container regardless of how the user transaction had the
// container opened.
LockingPolicy container_rlock = tran.newLockingPolicy(LockingPolicy.MODE_RECORD, TransactionController.ISOLATION_SERIALIZABLE, true);
if (SanityManager.DEBUG)
SanityManager.ASSERT(container_rlock != null);
ContainerHandle containerHdl = openContainerNW(tran, container_rlock, work.getContainerId());
if (containerHdl == null) {
tran.abort();
if (SanityManager.DEBUG) {
if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace)) {
SanityManager.DEBUG(DaemonService.DaemonTrace, " aborted " + work + " because container is locked or dropped");
}
}
// Retry this several times. It is, however, unlikely that three
// tries will be enough because there is no delay between retries.
// See DERBY-4059 and DERBY-4055 for details.
if (work.incrAttempts() < 3) {
return Serviceable.REQUEUE;
} else {
if (SanityManager.DEBUG) {
if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace)) {
SanityManager.DEBUG(DaemonService.DaemonTrace, " gave up after 3 tries to get container lock " + work);
}
}
return Serviceable.DONE;
}
}
if (work.reclaimWhat() == ReclaimSpace.PAGE) {
// Reclaiming a page - called by undo of insert which purged the
// last row off an overflow page. It is safe to reclaim the page
// without first locking the head row because unlike post commit
// work, this is post abort work. Abort is guaranteed to happen
// and to happen only once, if at all.
Page p = containerHdl.getPageNoWait(work.getPageId().getPageNumber());
if (p != null)
containerHdl.removePage(p);
tran.commit();
return Serviceable.DONE;
}
// We are reclaiming row space or long column.
// First get an xlock on the head row piece.
RecordHandle headRecord = work.getHeadRowHandle();
if (!container_rlock.lockRecordForWrite(tran, headRecord,
        false /* not insert */, false /* don't wait */)) {
// cannot get the row lock, retry
tran.abort();
if (work.incrAttempts() < 3) {
return Serviceable.REQUEUE;
} else {
if (SanityManager.DEBUG) {
if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace)) {
SanityManager.DEBUG(DaemonService.DaemonTrace, " gave up after 3 tries to get row lock " + work);
}
}
return Serviceable.DONE;
}
}
if (work.reclaimWhat() == ReclaimSpace.ROW_RESERVE) {
// This row may benefit from compaction.
containerHdl.compactRecord(headRecord);
// This work is being done post commit; there is no user
// transaction that depends on the commit being sync'd. It is safe
// to commitNoSync() because one of 2 things will happen:
//
// 1) if any data page associated with this transaction is
// moved from cache to disk, then the transaction log
// must be sync'd to the log record for that change and
// all log records including the commit of this xact must
// be sync'd before returning.
//
// 2) if the data page is never written then the log record
// for the commit may never be written, and the xact will
// never make it to disk. This is ok as no subsequent action
// depends on this operation being committed.
//
tran.commitNoSync(Transaction.RELEASE_LOCKS);
return Serviceable.DONE;
} else {
if (SanityManager.DEBUG)
SanityManager.ASSERT(work.reclaimWhat() == ReclaimSpace.COLUMN_CHAIN);
// Reclaiming a long column chain due to update. The long column
// chain being reclaimed is the before image of the update
// operation.
//
long headPageId = ((PageKey) headRecord.getPageId()).getPageNumber();
// DERBY-4050 - we wait for the page so we don't have to retry.
// Prior to the 4050 fix, we called getPageNoWait and just
// retried 3 times. This left unreclaimed space if we were
// not successful after three tries.
StoredPage headRowPage = (StoredPage) containerHdl.getPage(headPageId);
if (headRowPage == null) {
if (SanityManager.DEBUG) {
if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace)) {
SanityManager.DEBUG(DaemonService.DaemonTrace, "gave up because hadRowPage was null" + work);
}
}
tran.abort();
return Serviceable.DONE;
}
try {
headRowPage.removeOrphanedColumnChain(work, containerHdl);
} finally {
headRowPage.unlatch();
}
// This work is being done post commit; there is no user
// transaction that depends on the commit being sync'd. It is safe
// to commitNoSync() because one of 2 things will happen:
//
// 1) if any data page associated with this transaction is
// moved from cache to disk, then the transaction log
// must be sync'd to the log record for that change and
// all log records including the commit of this xact must
// be sync'd before returning.
//
// 2) if the data page is never written then the log record
// for the commit may never be written, and the xact will
// never make it to disk. This is ok as no subsequent action
// depends on this operation being committed.
//
tran.commitNoSync(Transaction.RELEASE_LOCKS);
return Serviceable.DONE;
}
}
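A minimal sketch of the bounded-retry contract between reclaimSpace and the post-commit daemon follows. The names are invented; only the shape matches the code above: work that cannot get its lock is requeued, but at most three times, and without any delay between attempts.

class ReclaimRetrySketch {
    static final int REQUEUE = 1;   // ask the daemon to schedule the work again
    static final int DONE = 2;      // finished, or gave up

    static final class Work {
        private int attempts;
        int incrAttempts() { return ++attempts; }
    }

    /** Decide whether post-commit work that failed to get its lock should be retried. */
    static int service(Work work, boolean gotLock) {
        if (!gotLock) {
            // Without a delay between retries, three tries are unlikely to be
            // enough (compare DERBY-4059 / DERBY-4055), which is why the long
            // column chain case above waits for the page instead of retrying.
            return (work.incrAttempts() < 3) ? REQUEUE : DONE;
        }
        // ...perform the actual reclamation here, then commit...
        return DONE;
    }
}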
use of org.apache.derby.iapi.store.raw.RecordHandle in project derby by apache.
the class B2IRowLocking3 method lockRowOnPage.
/**
* Lock a btree row (row is at given slot in page).
* <p>
* Lock the row at the given slot in the page. Meant to be used if caller
* only has the slot on the page to be locked, and has not read the row
* yet. This routine fetches the row location field from the page, and then
* locks that row location in the base container.
* <p>
* Lock a btree row, enforcing the standard lock/latch protocol.
* On return the row is locked. Return status indicates if the lock
* was waited for, which will mean a latch was dropped while waiting.
* In general a false status means that the caller will have
* to re-search the tree, unless some protocol has been implemented that
* ensures that the row will not have moved while the latch was dropped.
* <p>
* This routine requests a row lock NOWAIT on the in-memory row
* "current_row". If the lock is granted the routine will return true.
* If the lock cannot be granted NOWAIT, then the routine will release
* the latch on "current_leaf" and "aux_leaf" (if aux_leaf is non-null),
* and then it will request a WAIT lock on the row.
* <p>
*
* @param current_leaf Latched current leaf where "current" key is.
* @param aux_leaf If non-null, this leaf is unlatched if the
* routine has to wait on the lock.
* @param current_slot Slot of row to lock.
* @param lock_fetch_desc Descriptor for fetching just the RowLocation,
* used for locking.
* @param position The position to lock if the lock is requested
* while performing a scan, null otherwise.
* @param lock_operation Whether lock is for key prev to insert or not.
* @param lock_duration For what duration should the lock be held,
* if INSTANT_DURATION, then the routine will
* guarantee that lock was acquired while holding
* the latch, but then immediately release the
* lock. If COMMIT_DURATION or MANUAL_DURATION
* then the lock will be held when the routine returns
* successfully.
*
* @exception StandardException Standard exception policy.
*/
private boolean lockRowOnPage(LeafControlRow current_leaf, LeafControlRow aux_leaf, int current_slot, BTreeRowPosition position, FetchDescriptor lock_fetch_desc, DataValueDescriptor[] lock_template, RowLocation lock_row_loc, int lock_operation, int lock_duration) throws StandardException {
if (SanityManager.DEBUG) {
SanityManager.ASSERT(current_leaf != null);
if (current_slot <= 0 || current_slot >= current_leaf.getPage().recordCount()) {
SanityManager.THROWASSERT("current_slot = " + current_slot + "; current_leaf.getPage().recordCount() = " + current_leaf.getPage().recordCount());
}
SanityManager.ASSERT(lock_template != null, "template is null");
// For now the RowLocation is expected to be the object located in
// the last column of the lock_template, this may change if we
// ever support rows with RowLocations somewhere else.
SanityManager.ASSERT(lock_row_loc == lock_template[lock_template.length - 1], "row_loc is not the object in last column of lock_template.");
if (position != null) {
SanityManager.ASSERT(current_leaf == position.current_leaf);
SanityManager.ASSERT(current_slot == position.current_slot);
}
}
// Fetch the row location to lock.
RecordHandle rec_handle = current_leaf.getPage().fetchFromSlot((RecordHandle) null, current_slot, lock_template, lock_fetch_desc, true);
// First try to get the lock NOWAIT, while latch is held.
boolean ret_status = base_cc.lockRow(lock_row_loc, lock_operation,
        false /* NOWAIT */, lock_duration);
if (!ret_status) {
if (position != null) {
// since we're releasing the latch in the middle of a scan,
// save the current position of the scan before releasing the
// latch
position.saveMeAndReleasePage();
} else if (current_leaf != null) {
// otherwise, just release the latch
current_leaf.release();
current_leaf = null;
}
if (aux_leaf != null) {
aux_leaf.release();
aux_leaf = null;
}
if ((((HeapController) base_cc).getOpenConglomerate().getOpenMode() & TransactionManager.OPENMODE_LOCK_ROW_NOWAIT) != 0) {
throw StandardException.newException(SQLState.LOCK_TIMEOUT);
}
base_cc.lockRow(lock_row_loc, lock_operation,
        true /* WAIT */, lock_duration);
}
return (ret_status);
}
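The lock/latch protocol enforced by lockRowOnPage reduces to the following self-contained sketch (java.util.concurrent locks and invented names; not the Derby lock manager): try the row lock NOWAIT while the leaf latch is held, and if that fails, drop the latch, wait for the lock, and return false so the caller knows it must re-search the tree.

import java.util.concurrent.locks.ReentrantLock;

class LockLatchProtocolSketch {
    private final ReentrantLock leafLatch = new ReentrantLock(); // stands in for the page latch
    private final ReentrantLock rowLock = new ReentrantLock();   // stands in for the row lock

    /**
     * Returns true if the row lock was granted NOWAIT while the leaf latch was
     * held (the latch is kept); returns false if the latch had to be dropped
     * and the lock was waited for, in which case the caller must re-search.
     */
    boolean lockRowOnLeaf() {
        leafLatch.lock();                // the caller's latch on the leaf page
        if (rowLock.tryLock()) {         // NOWAIT attempt while still latched
            return true;                 // latch and lock both held; position still valid
        }
        leafLatch.unlock();              // never wait on a lock while holding a latch
        rowLock.lock();                  // WAIT for the row lock with the latch dropped
        return false;                    // lock held, latch gone: the row may have moved
    }
}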
use of org.apache.derby.iapi.store.raw.RecordHandle in project derby by apache.
the class GenericScanController method fetch.
/**
* Fetch the row at the current position of the Scan.
*
* @param row The row into which the value of the current
* position in the scan is to be stored.
*
* @param qualify Indicates whether the qualifiers should be applied.
*
* @exception StandardException Standard exception policy.
*/
private void fetch(DataValueDescriptor[] row, boolean qualify) throws StandardException {
if (scan_state != SCAN_INPROGRESS)
throw StandardException.newException(SQLState.AM_SCAN_NOT_POSITIONED);
if (!open_conglom.latchPage(scan_position)) {
throw StandardException.newException(SQLState.AM_RECORD_NOT_FOUND, open_conglom.getContainer().getId(), scan_position.current_rh.getPageNumber(), scan_position.current_rh.getId());
}
// RESOLVE (mikem) - should this call apply the qualifiers again?
RecordHandle rh = scan_position.current_page.fetchFromSlot(scan_position.current_rh, scan_position.current_slot, row, qualify ? init_fetchDesc : null, false);
scan_position.unlatch();
if (rh == null) {
throw StandardException.newException(SQLState.AM_RECORD_NOT_FOUND, open_conglom.getContainer().getId(), scan_position.current_rh.getPageNumber(), scan_position.current_rh.getId());
}
return;
}
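A simplified sketch of the fetch pattern above, with invented interfaces standing in for the Derby scan position and page types: re-latch the page under the saved scan position, copy the row out of the saved slot, unlatch, and treat a missing page or record as an error.

class ScanFetchSketch {

    interface Page {
        /** Returns null if the slot no longer holds a valid record. */
        Object[] fetchFromSlot(int slot);
        void unlatch();
    }

    interface ScanPosition {
        /** Re-latches the page under this position; false if that fails. */
        boolean latchPage();
        Page page();
        int slot();
    }

    /** Copy the row at the current scan position, latching only for the copy. */
    static Object[] fetch(ScanPosition pos) {
        if (!pos.latchPage()) {
            throw new IllegalStateException("record not found: page could not be latched");
        }
        Object[] row;
        try {
            row = pos.page().fetchFromSlot(pos.slot());
        } finally {
            pos.page().unlatch();        // hold the latch only while copying the row out
        }
        if (row == null) {
            throw new IllegalStateException("record not found at saved scan position");
        }
        return row;
    }
}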