Use of org.apache.geode.cache.PartitionedRegionStorageException in project geode by apache.
The class PartitionedRegion, method virtualPut.
@Override
boolean virtualPut(EntryEventImpl event, boolean ifNew, boolean ifOld, Object expectedOldValue,
    boolean requireOldValue, long lastModified, boolean overwriteDestroyed)
    throws TimeoutException, CacheWriterException {
  final long startTime = PartitionedRegionStats.startTime();
  boolean result = false;
  final DistributedPutAllOperation putAllOp_save = event.setPutAllOperation(null);
  if (event.getEventId() == null) {
    event.setNewEventId(this.cache.getDistributedSystem());
  }
  boolean bucketStorageAssigned = true;
  try {
    final Integer bucketId = event.getKeyInfo().getBucketId();
    assert bucketId != KeyInfo.UNKNOWN_BUCKET;
    // check in bucket2Node region
    InternalDistributedMember targetNode = getNodeForBucketWrite(bucketId, null);
    if (logger.isDebugEnabled()) {
      logger.debug("PR.virtualPut putting event={}", event);
    }
    if (targetNode == null) {
      try {
        bucketStorageAssigned = false;
        targetNode = createBucket(bucketId, event.getNewValSizeForPR(), null);
      } catch (PartitionedRegionStorageException e) {
        // try not to throw a PRSE if the cache is closing or this region was
        // destroyed during createBucket() (bug 36574)
        this.checkReadiness();
        if (this.cache.isClosed()) {
          throw new RegionDestroyedException(toString(), getFullPath());
        }
        throw e;
      }
    }
    if (event.isBridgeEvent() && bucketStorageAssigned) {
      setNetworkHopType(bucketId, targetNode);
    }
    if (putAllOp_save == null) {
      result = putInBucket(targetNode, bucketId, event, ifNew, ifOld, expectedOldValue,
          requireOldValue, (ifNew ? 0L : lastModified));
      if (logger.isDebugEnabled()) {
        logger.debug("PR.virtualPut event={} ifNew={} ifOld={} result={}", event, ifNew, ifOld,
            result);
      }
    } else {
      // fix for 40502
      checkIfAboveThreshold(event);
      // putAll: save the bucket id into the DPAO, then wait for postPutAll to send the message.
      // At this point the DPAO's PutAllEntryData should be empty, so add the entry here with its
      // bucket id. The message is packed in postPutAll, including the one for the local bucket,
      // because the buckets could change by then.
      putAllOp_save.addEntry(event, bucketId);
      if (logger.isDebugEnabled()) {
        logger.debug("PR.virtualPut PutAll added event={} into bucket {}", event, bucketId);
      }
      result = true;
    }
  } catch (RegionDestroyedException rde) {
    if (!rde.getRegionFullPath().equals(getFullPath())) {
      throw new RegionDestroyedException(toString(), getFullPath(), rde);
    }
  } finally {
    if (putAllOp_save == null) {
      // only for a normal put
      if (ifNew) {
        this.prStats.endCreate(startTime);
      } else {
        this.prStats.endPut(startTime);
      }
    }
  }
  if (!result) {
    checkReadiness();
    if (!ifNew && !ifOld && !this.concurrencyChecksEnabled) {
      // the put may have failed due to a concurrency conflict, or for an unknown reason
      // throw new PartitionedRegionStorageException("unable to execute operation");
      logger.warn(
          LocalizedMessage.create(
              LocalizedStrings.PartitionedRegion_PRVIRTUALPUT_RETURNING_FALSE_WHEN_IFNEW_AND_IFOLD_ARE_BOTH_FALSE),
          new Exception(LocalizedStrings.PartitionedRegion_STACK_TRACE.toLocalizedString()));
    }
  }
  return result;
}
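For orientation, here is a minimal caller-side sketch, not taken from the Geode source, showing where the PartitionedRegionStorageException handled above can surface to an application: a plain Region.put on a partitioned region is dispatched through PR.virtualPut, and on an accessor-style (PARTITION_PROXY) member with no data stores in the cluster, the createBucket() path cannot assign storage. The region name and key are illustrative.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionedRegionStorageException;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class VirtualPutCallerSketch {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    // PARTITION_PROXY holds no local data; every bucket must be hosted remotely.
    Region<String, String> region =
        cache.<String, String>createRegionFactory(RegionShortcut.PARTITION_PROXY)
            .create("example");
    try {
      // Routed through PR.virtualPut; with no data store available, bucket
      // creation cannot assign storage and the exception propagates here.
      region.put("key-1", "value-1");
    } catch (PartitionedRegionStorageException e) {
      System.err.println("no member could host the bucket: " + e.getMessage());
    } finally {
      cache.close();
    }
  }
}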
Use of org.apache.geode.cache.PartitionedRegionStorageException in project geode by apache.
The class PRHARedundancyProvider, method createBucketAtomically.
/**
 * Creates a bucket atomically by creating all the copies needed to satisfy redundancy. If all
 * copies can not be created, a PartitionedRegionStorageException is thrown to the user and a
 * BucketBackupMessage is sent to the nodes to make copies of a bucket that was only partially
 * created. Other VMs are informed of bucket creation through updates to their
 * {@link BucketAdvisor.BucketProfile}s.
 *
 * <p>
 * This method is synchronized to enforce a single-threaded ordering, allowing for a more accurate
 * picture of bucket distribution in the face of concurrency. See bug 37275.
 * </p>
 *
 * This method is now slightly misnamed: another member could be in the process of creating this
 * same bucket at the same time.
 *
 * @param bucketId Id of the bucket to be created.
 * @param newBucketSize size of the first entry.
 * @param startTime a time stamp prior to calling the method, used to update bucket creation stats
 * @return the primary member for the newly created bucket
 * @throws PartitionedRegionStorageException if the required number of buckets can not be created
 *         to satisfy redundancy.
 * @throws PartitionedRegionException if the d-lock can not be acquired to create the bucket.
 * @throws PartitionOfflineException if persistent data recovery is not complete for a partitioned
 *         region referred to in the query.
 */
public InternalDistributedMember createBucketAtomically(final int bucketId,
    final int newBucketSize, final long startTime, final boolean finishIncompleteCreation,
    String partitionName)
    throws PartitionedRegionStorageException, PartitionedRegionException,
    PartitionOfflineException {
  final boolean isDebugEnabled = logger.isDebugEnabled();
  prRegion.checkPROffline();
  // If there are insufficient stores, throw *before* we try acquiring the
  // (very expensive) bucket lock or the (somewhat expensive) monitor on this
  earlySufficientStoresCheck(partitionName);
  synchronized (this) {
    if (this.prRegion.getCache().isCacheAtShutdownAll()) {
      throw new CacheClosedException("Cache is shutting down");
    }
    if (isDebugEnabled) {
      logger.debug("Starting atomic creation of bucketId={}",
          this.prRegion.bucketStringForLogs(bucketId));
    }
    Collection<InternalDistributedMember> acceptedMembers =
        new ArrayList<InternalDistributedMember>(); // ArrayList<DataBucketStores>
    Set<InternalDistributedMember> excludedMembers = new HashSet<InternalDistributedMember>();
    ArrayListWithClearState<InternalDistributedMember> failedMembers =
        new ArrayListWithClearState<InternalDistributedMember>();
    final long timeOut = System.currentTimeMillis() + computeTimeout();
    BucketMembershipObserver observer = null;
    boolean needToElectPrimary = true;
    InternalDistributedMember bucketPrimary = null;
    try {
      this.prRegion.checkReadiness();
      Bucket toCreate = this.prRegion.getRegionAdvisor().getBucket(bucketId);
      if (!finishIncompleteCreation) {
        bucketPrimary = this.prRegion.getBucketPrimary(bucketId);
        if (bucketPrimary != null) {
          if (isDebugEnabled) {
            logger.debug(
                "during atomic creation, discovered that the primary already exists {} returning early",
                bucketPrimary);
          }
          needToElectPrimary = false;
          return bucketPrimary;
        }
      }
      observer = new BucketMembershipObserver(toCreate).beginMonitoring();
      // track whether insufficient data stores have been detected
      boolean loggedInsufficentStores = false;
      for (;;) {
        this.prRegion.checkReadiness();
        if (this.prRegion.getCache().isCacheAtShutdownAll()) {
          if (isDebugEnabled) {
            logger.debug("Aborted createBucketAtomically due to ShutdownAll");
          }
          throw new CacheClosedException("Cache is shutting down");
        }
        // this.prRegion.getCache().getLogger().config(
        // "DEBUG createBucketAtomically: "
        // + " bucketId=" + this.prRegion.getBucketName(bucketId) +
        // " accepted: " + acceptedMembers +
        // " failed: " + failedMembers);
        long timeLeft = timeOut - System.currentTimeMillis();
        if (timeLeft < 0) {
          // It took too long.
          timedOut(this.prRegion, getAllStores(partitionName), acceptedMembers,
              ALLOCATE_ENOUGH_MEMBERS_TO_HOST_BUCKET.toLocalizedString(), computeTimeout());
          // NOTREACHED
        }
        if (isDebugEnabled) {
          logger.debug("createBucketAtomically: have {} ms left to finish this", timeLeft);
        }
        // Always go back to the advisor to see if any fresh data stores are present.
        Set<InternalDistributedMember> allStores = getAllStores(partitionName);
        loggedInsufficentStores = checkSufficientStores(allStores, loggedInsufficentStores);
        InternalDistributedMember candidate = createBucketInstance(bucketId, newBucketSize,
            excludedMembers, acceptedMembers, failedMembers, timeOut, allStores);
        if (candidate != null) {
          if (this.prRegion.getDistributionManager().enforceUniqueZone()) {
            // The enforceUniqueZone property has no effect for a loner. Fix for defect #47181.
            if (!(this.prRegion.getDistributionManager() instanceof LonerDistributionManager)) {
              Set<InternalDistributedMember> exm = getBuddyMembersInZone(candidate, allStores);
              exm.remove(candidate);
              exm.removeAll(acceptedMembers);
              excludedMembers.addAll(exm);
            } else {
              // log a warning if a loner
              logger.warn(LocalizedMessage.create(
                  LocalizedStrings.GemFireCache_ENFORCE_UNIQUE_HOST_NOT_APPLICABLE_FOR_LONER));
            }
          }
        }
        // Get an updated list of bucket owners, which should include
        // buckets created concurrently with this createBucketAtomically call.
        acceptedMembers = prRegion.getRegionAdvisor().getBucketOwners(bucketId);
        if (isDebugEnabled) {
          logger.debug("Accepted members: {}", acceptedMembers);
        }
        // the candidate has accepted
        if (bucketPrimary == null && acceptedMembers.contains(candidate)) {
          bucketPrimary = candidate;
        }
        // prune out the stores that have left
        verifyBucketNodes(excludedMembers, partitionName);
        // Note - we used to wait here for the created bucket to become primary
        // if this is a colocated region. We no longer need to do that, because
        // the EndBucketMessage is sent out after bucket creation completes to
        // select the primary.
        // Have we exhausted all candidates?
        final int potentialCandidateCount = (allStores.size()
            - (excludedMembers.size() + acceptedMembers.size() + failedMembers.size()));
        // Determining exhausted members competes with bucket balancing; it's
        // important to re-visit all failed members since the "failed" set may
        // contain data stores which are imbalanced at the moment but could yet
        // be candidates. If the failed members list is empty, it's expected
        // that the next iteration clears the (already empty) list.
        final boolean exhaustedPotentialCandidates =
            failedMembers.wasCleared() && potentialCandidateCount <= 0;
        final boolean redundancySatisfied =
            acceptedMembers.size() > this.prRegion.getRedundantCopies();
        final boolean bucketNotCreated = acceptedMembers.size() == 0;
        if (isDebugEnabled) {
          logger.debug(
              "potentialCandidateCount={}, exhaustedPotentialCandidates={}, redundancySatisfied={}, bucketNotCreated={}",
              potentialCandidateCount, exhaustedPotentialCandidates, redundancySatisfied,
              bucketNotCreated);
        }
        if (bucketNotCreated) {
          // if we haven't managed to create the bucket on any nodes, retry
          continue;
        }
        if (exhaustedPotentialCandidates && !redundancySatisfied) {
          insufficientStores(allStores, acceptedMembers, true);
        }
        // Fix for bug 39283
        if (redundancySatisfied || exhaustedPotentialCandidates) {
          // Tell one of the members to become primary.
          // The rest of the members will be allowed to volunteer for primary.
          endBucketCreation(bucketId, acceptedMembers, bucketPrimary, partitionName);
          final int expectedRemoteHosts = acceptedMembers.size()
              - (acceptedMembers.contains(this.prRegion.getMyId()) ? 1 : 0);
          boolean interrupted = Thread.interrupted();
          try {
            BucketMembershipObserverResults results = observer
                .waitForOwnersGetPrimary(expectedRemoteHosts, acceptedMembers, partitionName);
            if (results.problematicDeparture) {
              // BZZZT! A member left. Start over.
              continue;
            }
            bucketPrimary = results.primary;
          } catch (InterruptedException e) {
            interrupted = true;
            this.prRegion.getCancelCriterion().checkCancelInProgress(e);
          } finally {
            if (interrupted) {
              Thread.currentThread().interrupt();
            }
          }
          needToElectPrimary = false;
          return bucketPrimary;
        } // almost done
      } // for
    } catch (CancelException e) {
      // Fix for 43544 - We don't need to elect a primary if the cache was
      // closed. The other members will take care of it. This ensures we
      // don't compromise redundancy.
      needToElectPrimary = false;
      throw e;
    } catch (RegionDestroyedException e) {
      // Fix for 43544 - We don't need to elect a primary if the region was
      // destroyed. The other members will take care of it. This ensures we
      // don't compromise redundancy.
      needToElectPrimary = false;
      throw e;
    } catch (PartitionOfflineException e) {
      throw e;
    } catch (RuntimeException e) {
      if (isDebugEnabled) {
        logger.debug("Unable to create new bucket {}: {}", bucketId, e.getMessage(), e);
      }
      if (!finishIncompleteCreation) {
        cleanUpBucket(bucketId);
      }
      throw e;
    } finally {
      if (observer != null) {
        observer.stopMonitoring();
      }
      // Try to make sure everyone that created the bucket can volunteer for primary
      if (needToElectPrimary) {
        try {
          endBucketCreation(bucketId, prRegion.getRegionAdvisor().getBucketOwners(bucketId),
              bucketPrimary, partitionName);
        } catch (Exception e) {
          // if the region is going down, then no warning-level logs
          if (e instanceof CancelException || e instanceof CacheClosedException
              || (prRegion.getCancelCriterion().isCancelInProgress())) {
            logger.debug("Exception trying to choose a primary after bucket creation failure", e);
          } else {
            logger.warn("Exception trying to choose a primary after bucket creation failure", e);
          }
        }
      }
    }
  } // synchronized(this)
}
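The contract in the javadoc above can be exercised from the public API. The following is a hedged sketch, with an illustrative region name and attribute values, of how redundancy settings feed into bucket creation: the first write to a previously unused bucket drives atomic creation of a primary plus the configured redundant copies, and per the javadoc the exception is raised when the required copies cannot be created.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionAttributes;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.PartitionedRegionStorageException;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class BucketCreationDemo {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    // One redundant copy per bucket: atomic bucket creation tries to place a
    // primary plus one extra copy on distinct members.
    PartitionAttributes<String, String> attributes =
        new PartitionAttributesFactory<String, String>()
            .setRedundantCopies(1)
            .setTotalNumBuckets(113)
            .create();
    Region<String, String> region =
        cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
            .setPartitionAttributes(attributes)
            .create("demo");
    try {
      // The first write to a previously unused bucket triggers bucket creation
      // (ultimately createBucketAtomically above) before the value is stored.
      region.put("first-key", "first-value");
    } catch (PartitionedRegionStorageException e) {
      // Raised when the required copies cannot be created (see javadoc above).
      System.err.println("Bucket creation failed: " + e.getMessage());
    } finally {
      cache.close();
    }
  }
}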
Use of org.apache.geode.cache.PartitionedRegionStorageException in project geode by apache.
The class AbstractBaseController, method getValue.
protected <T> T getValue(final String regionNamePath, final Object key, boolean postProcess) {
  Assert.notNull(key, "The Cache Region key to read the value for cannot be null!");
  Region r = getRegion(regionNamePath);
  try {
    Object value = r.get(key);
    if (postProcess) {
      return (T) securityService.postProcess(regionNamePath, key, value, false);
    } else {
      return (T) value;
    }
  } catch (SerializationException se) {
    throw new DataTypeNotSupportedException(
        "The resource identified could not convert into the supported content characteristics (JSON)!",
        se);
  } catch (NullPointerException npe) {
    throw new GemfireRestException(
        String.format("Resource (%1$s) configuration does not allow null keys!", regionNamePath),
        npe);
  } catch (IllegalArgumentException iae) {
    throw new GemfireRestException(String.format(
        "Resource (%1$s) configuration does not allow requested operation on specified key!",
        regionNamePath), iae);
  } catch (LeaseExpiredException lee) {
    throw new GemfireRestException("Server has encountered error while processing this request!",
        lee);
  } catch (TimeoutException te) {
    throw new GemfireRestException(
        "Server has encountered timeout error while processing this request!", te);
  } catch (CacheLoaderException cle) {
    throw new GemfireRestException(
        "Server has encountered CacheLoader error while processing this request!", cle);
  } catch (PartitionedRegionStorageException prse) {
    throw new GemfireRestException("CacheLoader could not be invoked on partitioned region!",
        prse);
  }
}
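A stripped-down, hypothetical sketch of the translation pattern used by getValue above: Geode-level failures from Region.get are caught individually and rethrown as a caller-facing exception type, so internal exceptions never leak through the REST layer. The RestReadException class, region name, and key here are illustrative, not part of the Geode controller.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionedRegionStorageException;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class SafeRegionReader {
  /** Thrown to callers instead of Geode-internal exception types. */
  public static class RestReadException extends RuntimeException {
    public RestReadException(String message, Throwable cause) {
      super(message, cause);
    }
  }

  /** Reads a key, translating storage failures the way getValue(...) does. */
  public static Object read(Region<String, Object> region, String key) {
    try {
      return region.get(key);
    } catch (PartitionedRegionStorageException prse) {
      // Mirrors the controller's mapping of PRSE to a REST-level exception.
      throw new RestReadException(
          "CacheLoader could not be invoked on partitioned region!", prse);
    }
  }

  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    Region<String, Object> region =
        cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION).create("data");
    region.put("greeting", "hello");
    System.out.println(read(region, "greeting"));
    cache.close();
  }
}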
Use of org.apache.geode.cache.PartitionedRegionStorageException in project geode by apache.
The class PaginationDUnitTest, method partitionedRegionStorageExceptionWhenAllDataStoreAreClosedWhilePagination.
@Test
@Parameters(method = "getListOfRegionTestTypes")
public void partitionedRegionStorageExceptionWhenAllDataStoreAreClosedWhilePagination(
    RegionTestableType regionTestType) {
  SerializableRunnableIF createIndex = () -> {
    LuceneService luceneService = LuceneServiceProvider.get(getCache());
    luceneService.createIndexFactory().setFields("text").create(INDEX_NAME, REGION_NAME);
  };
  dataStore1.invoke(() -> initDataStore(createIndex, regionTestType));
  accessor.invoke(() -> initAccessor(createIndex, regionTestType));
  putEntryInEachBucket();
  assertTrue(waitForFlushBeforeExecuteTextSearch(dataStore1, FLUSH_WAIT_TIME_MS));
  accessor.invoke(() -> {
    Cache cache = getCache();
    LuceneService service = LuceneServiceProvider.get(cache);
    LuceneQuery<Integer, TestObject> query;
    query = service.createLuceneQueryFactory().setLimit(1000).setPageSize(PAGE_SIZE)
        .create(INDEX_NAME, REGION_NAME, "world", "text");
    PageableLuceneQueryResults<Integer, TestObject> pages = query.findPages();
    assertTrue(pages.hasNext());
    List<LuceneResultStruct<Integer, TestObject>> page = pages.next();
    assertEquals(PAGE_SIZE, page.size());
    dataStore1.invoke(() -> closeCache());
    try {
      page = pages.next();
      fail();
    } catch (Exception e) {
      Assert.assertEquals(
          "Expected Exception = PartitionedRegionStorageException but hit " + e.toString(), true,
          e instanceof PartitionedRegionStorageException);
    }
  });
}
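Outside the test harness, the same failure mode can be handled defensively by client code. Below is a hedged sketch, with illustrative index and region names and query string, of paginating Lucene results while tolerating the loss of all data stores between page fetches:

import java.util.List;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.PartitionedRegionStorageException;
import org.apache.geode.cache.lucene.LuceneQuery;
import org.apache.geode.cache.lucene.LuceneQueryException;
import org.apache.geode.cache.lucene.LuceneResultStruct;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;
import org.apache.geode.cache.lucene.PageableLuceneQueryResults;

public class DefensivePagination {
  public static void printAllPages(Cache cache) throws LuceneQueryException {
    LuceneService service = LuceneServiceProvider.get(cache);
    LuceneQuery<Integer, String> query = service.createLuceneQueryFactory()
        .setLimit(1000)
        .setPageSize(10)
        .create("textIndex", "exampleRegion", "world", "text");
    PageableLuceneQueryResults<Integer, String> pages = query.findPages();
    try {
      while (pages.hasNext()) {
        List<LuceneResultStruct<Integer, String>> page = pages.next();
        System.out.println("fetched " + page.size() + " results");
      }
    } catch (PartitionedRegionStorageException e) {
      // All data stores hosting the region departed mid-iteration, as the
      // test above provokes by closing the only data store's cache.
      System.err.println("Pagination aborted: " + e.getMessage());
    }
  }
}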