
Example 1 with StoppableWriteLock

Use of org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock in project geode by apache.

From the class PartitionedRegionDataStore, method cacheLoaderChanged:

/**
   * Sent by the partitioned region when its loader has changed.
   */
protected void cacheLoaderChanged(final CacheLoader newLoader, final CacheLoader oldLoader) {
    StoppableWriteLock lock = this.bucketCreationLock.writeLock();
    lock.lock();
    try {
        this.loader = newLoader;
        visitBuckets(new BucketVisitor() {

            @Override
            public void visit(Integer bucketId, Region r) {
                AttributesMutator mut = r.getAttributesMutator();
                if (logger.isDebugEnabled()) {
                    logger.debug("setting new cache loader in bucket region: {}", newLoader);
                }
                mut.setCacheLoader(newLoader);
            }
        });
    } finally {
        lock.unlock();
    }
}
Also used: StoppableWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock), AtomicInteger (java.util.concurrent.atomic.AtomicInteger)
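Stripped of the bucket visitation, the core of this example is the stoppable write-lock idiom: acquire, mutate shared state, release in a finally block. Below is a minimal sketch of that idiom under one assumption, namely that StoppableReentrantReadWriteLock is constructed from the cache's CancelCriterion so that lock acquisition can abort once the cache shuts down; LoaderHolder and its fields are hypothetical stand-ins, not Geode classes.

import org.apache.geode.CancelCriterion;
import org.apache.geode.cache.CacheLoader;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock;

class LoaderHolder {
    private final StoppableReentrantReadWriteLock bucketCreationLock;
    private volatile CacheLoader<?, ?> loader;

    LoaderHolder(CancelCriterion stopper) {
        // assumption: the stopper-taking constructor, as the Geode internals suggest
        this.bucketCreationLock = new StoppableReentrantReadWriteLock(stopper);
    }

    void cacheLoaderChanged(CacheLoader<?, ?> newLoader) {
        StoppableWriteLock lock = bucketCreationLock.writeLock();
        lock.lock(); // may abort with a CancelException if the cache is closing
        try {
            this.loader = newLoader; // mutate shared state only under the write lock
        } finally {
            lock.unlock(); // always release, even if the update throws
        }
    }
}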

Example 2 with StoppableWriteLock

Use of org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock in project geode by apache.

From the class PartitionedRegionDataStore, method cleanUp:

/**
   * This method cleans up after a closed or locally destroyed PartitionedRegion. It removes the
   * closed PartitionedRegion's node from the b2n (bucket-to-node) region and locally destroys the
   * b2n region (if removeBucketMapping is true). It locally destroys each bucket region and
   * clears the localBucket2RegionMap to avoid stale references to locally destroyed bucket
   * regions.
   * 
   * @param removeBucketMapping whether to destroy this region's b2n mapping
   * @param removeFromDisk whether to also remove the buckets' persisted data from disk
   */
void cleanUp(boolean removeBucketMapping, boolean removeFromDisk) {
    if (logger.isDebugEnabled()) {
        logger.debug("cleanUp: Starting cleanup for {}", this.partitionedRegion);
    }
    try {
        if (removeBucketMapping) {
            if (logger.isDebugEnabled()) {
                logger.debug("cleanUp: Done destroyBucket2NodeRegionLocally for {}", this.partitionedRegion);
            }
        } else {
            if (logger.isDebugEnabled()) {
                logger.debug("cleanUp: not removing node from b2n region");
            }
        }
        // Lock out bucket creation while doing this :-)
        StoppableWriteLock lock = this.bucketCreationLock.writeLock();
        lock.lock();
        try {
            ProxyBucketRegion[] proxyBuckets = getPartitionedRegion().getRegionAdvisor().getProxyBucketArray();
            if (proxyBuckets != null) {
                for (ProxyBucketRegion pbr : proxyBuckets) {
                    Integer bucketId = Integer.valueOf(pbr.getBucketId());
                    BucketRegion buk = localBucket2RegionMap.get(bucketId);
                    // concurrent entry iterator does not guarantee value, key pairs
                    if (buk != null) {
                        try {
                            buk.getBucketAdvisor().getProxyBucketRegion().setHosting(false);
                            if (removeFromDisk) {
                                buk.localDestroyRegion();
                            } else {
                                buk.close();
                            }
                            if (logger.isDebugEnabled()) {
                                logger.debug("cleanup: Locally destroyed bucket {}", buk.getFullPath());
                            }
                            // Fix for defect #49012
                            if (buk instanceof AbstractBucketRegionQueue && buk.getPartitionedRegion().isShadowPR()) {
                                if (buk.getPartitionedRegion().getColocatedWithRegion() != null) {
                                    buk.getPartitionedRegion().getColocatedWithRegion().getRegionAdvisor().getBucketAdvisor(bucketId).setShadowBucketDestroyed(true);
                                }
                            }
                        } catch (RegionDestroyedException ignore) {
                        } catch (Exception ex) {
                            logger.warn(LocalizedMessage.create(LocalizedStrings.PartitionedRegion_PARTITIONEDREGION_0_CLEANUP_PROBLEM_DESTROYING_BUCKET_1, new Object[] { this.partitionedRegion.getFullPath(), Integer.valueOf(buk.getId()) }), ex);
                        }
                        localBucket2RegionMap.remove(bucketId);
                    } else if (removeFromDisk) {
                        DiskRegion diskRegion = pbr.getDiskRegion();
                        if (diskRegion != null) {
                            diskRegion.beginDestroy(null);
                            diskRegion.endDestroy(null);
                        }
                    }
                } // end bucket loop
            }
        } finally {
            lock.unlock();
        }
    } catch (Exception ex) {
        logger.warn(LocalizedMessage.create(LocalizedStrings.PartitionedRegionDataStore_PARTITIONEDREGION_0_CAUGHT_UNEXPECTED_EXCEPTION_DURING_CLEANUP, this.partitionedRegion.getFullPath()), ex);
    } finally {
        this.partitionedRegion.getPrStats().setBucketCount(0);
        this.bucketStats.close();
    }
}
Also used: StoppableWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), BucketMovedException (org.apache.geode.internal.cache.execute.BucketMovedException), InternalGemFireException (org.apache.geode.InternalGemFireException), QueryInvalidException (org.apache.geode.cache.query.QueryInvalidException), FunctionException (org.apache.geode.cache.execute.FunctionException), IOException (java.io.IOException)
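Beyond the write lock, the loop above shows a cleanup idiom worth noting: each bucket is torn down in its own try/catch so that one failure cannot abort the sweep, with RegionDestroyedException treated as benign. A compact sketch of that shape follows; BucketCleanup and its parameters are hypothetical stand-ins, and only the lock types come from the source.

import java.util.List;
import org.apache.geode.cache.RegionDestroyedException;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock;
import org.apache.logging.log4j.Logger;

class BucketCleanup {
    private final StoppableReentrantReadWriteLock bucketCreationLock;
    private final Logger logger;

    BucketCleanup(StoppableReentrantReadWriteLock lock, Logger logger) {
        this.bucketCreationLock = lock;
        this.logger = logger;
    }

    /** Destroys every bucket, isolating per-bucket failures. */
    void cleanUp(List<? extends AutoCloseable> buckets) {
        StoppableWriteLock lock = bucketCreationLock.writeLock();
        lock.lock(); // block concurrent bucket creation for the whole sweep
        try {
            for (AutoCloseable bucket : buckets) {
                try {
                    bucket.close();
                } catch (RegionDestroyedException ignore) {
                    // already destroyed elsewhere; benign during cleanup
                } catch (Exception ex) {
                    // log and continue: one bad bucket must not abort the rest
                    logger.warn("cleanup problem destroying bucket {}", bucket, ex);
                }
            }
        } finally {
            lock.unlock();
        }
    }
}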

Example 3 with StoppableWriteLock

Use of org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock in project geode by apache.

From the class PartitionedRegionDataStore, method removeBucket:

/**
   * Removes a redundant bucket hosted by this data store. The rebalancer invokes this method
   * directly or sends this member a message to invoke it.
   * 
   * From the spec:
   * 
   * How to Remove a Redundant Bucket
   * 
   * This operation is done by the rebalancer (REB) and can only be done on non-primary buckets.
   * If you want to remove a primary bucket, first send one of its peers "become primary" and then
   * send it "unhost" (we could offer an "unhost" option on "become primary" or a "becomePrimary"
   * option on "create redundant"). The member that hosts the bucket being removed is called the
   * bucket host (BH).
   * 
   * 1. REB sends an "unhostBucket" message to BH. BH rejects the message with a failure reply to
   * REB if it finds itself to be the primary or if it does not host the bucket.
   * 2. BH marks itself as "not-hosting". This causes any incoming read operations to not start
   * and to retry. BH also updates the advisor so it knows BH is no longer hosting the bucket.
   * 3. BH then waits for any in-progress reads (which read ops to wait for are TBD) to complete.
   * 4. BH then removes the bucket region from its cache.
   * 5. BH then sends a success reply to REB.
   * 
   * This method is now also used by the PartitionManager. For the PartitionManager, it does
   * remove the primary bucket.
   * 
   * @param bucketId the id of the bucket to remove
   * @param forceRemovePrimary true to remove the bucket even when this member is the primary
   * 
   * @return true if the bucket was removed; false if unable to remove or if bucket is not hosted
   */
public boolean removeBucket(int bucketId, boolean forceRemovePrimary) {
    waitForInProgressBackup();
    // don't proceed until we have finished recovering all colocated regions from disk
    if (!this.partitionedRegion.getRedundancyProvider().isPersistentRecoveryComplete()) {
        if (logger.isDebugEnabled()) {
            logger.debug("Returning false from removeBucket because we have not finished recovering all colocated regions from disk");
        }
        return false;
    }
    // Lock out bucket creation while doing this :-)
    StoppableWriteLock lock = this.bucketCreationLock.writeLock();
    lock.lock();
    try {
        BucketRegion bucketRegion = this.localBucket2RegionMap.get(Integer.valueOf(bucketId));
        if (bucketRegion == null) {
            if (logger.isDebugEnabled()) {
                logger.debug("Returning true from removeBucket because we don't have the bucket we've been told to remove");
            }
            return true;
        }
        BucketAdvisor bucketAdvisor = bucketRegion.getBucketAdvisor();
        Lock writeLock = bucketAdvisor.getActiveWriteLock();
        // Fix for 43613 - don't remove the bucket
        // if we are primary. We hold the lock here
        // to prevent this member from becoming primary until this
        // member is no longer hosting the bucket.
        writeLock.lock();
        try {
            if (!forceRemovePrimary && bucketAdvisor.isPrimary()) {
                return false;
            }
            // recurse down to each tier of children to remove first
            removeBucketForColocatedChildren(bucketId, forceRemovePrimary);
            if (bucketRegion.getPartitionedRegion().isShadowPR()) {
                if (bucketRegion.getPartitionedRegion().getColocatedWithRegion() != null) {
                    bucketRegion.getPartitionedRegion().getColocatedWithRegion().getRegionAdvisor().getBucketAdvisor(bucketId).setShadowBucketDestroyed(true);
                }
            }
            bucketAdvisor.getProxyBucketRegion().removeBucket();
        } finally {
            writeLock.unlock();
        }
        if (logger.isDebugEnabled()) {
            logger.debug("Removed bucket {} from advisor", bucketRegion);
        }
        // Flush the state of the primary. This makes sure we have processed
        // all operations that were sent out before we removed our profile from
        // our peers.
        //
        // Another option, instead of using the StateFlushOperation, could
        // be to send a message that waits until it acquires the
        // activePrimaryMoveLock on the primary bucket region. That would also
        // wait for in-progress writes. I chose to use the StateFlushOperation
        // because it won't block write operations while we're trying to acquire
        // the activePrimaryMoveLock.
        InternalDistributedMember primary = bucketAdvisor.getPrimary();
        InternalDistributedMember myId = this.partitionedRegion.getDistributionManager().getDistributionManagerId();
        if (!myId.equals(primary)) {
            StateFlushOperation flush = new StateFlushOperation(bucketRegion);
            int executor = DistributionManager.WAITING_POOL_EXECUTOR;
            try {
                flush.flush(Collections.singleton(primary), myId, executor, false);
            } catch (InterruptedException e) {
                this.partitionedRegion.getCancelCriterion().checkCancelInProgress(e);
                Thread.currentThread().interrupt();
                throw new InternalGemFireException("Interrupted while flushing state");
            }
            if (logger.isDebugEnabled()) {
                logger.debug("Finished state flush for removal of {}", bucketRegion);
            }
        } else {
            if (logger.isDebugEnabled()) {
                logger.debug("We became primary while destroying the bucket. Too late to stop now.");
            }
        }
        bucketRegion.invokePartitionListenerAfterBucketRemoved();
        bucketRegion.preDestroyBucket(bucketId);
        bucketRegion.localDestroyRegion();
        bucketAdvisor.getProxyBucketRegion().finishRemoveBucket();
        if (logger.isDebugEnabled()) {
            logger.debug("Destroyed {}", bucketRegion);
        }
        this.localBucket2RegionMap.remove(Integer.valueOf(bucketId));
        this.partitionedRegion.getPrStats().incBucketCount(-1);
        return true;
    } finally {
        lock.unlock();
    }
}
Also used: StoppableWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock), InternalDistributedMember (org.apache.geode.distributed.internal.membership.InternalDistributedMember), InternalGemFireException (org.apache.geode.InternalGemFireException), StoppableReentrantReadWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock), BucketLock (org.apache.geode.internal.cache.PartitionedRegion.BucketLock), StoppableReadLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableReadLock), Lock (java.util.concurrent.locks.Lock)
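Example 3 layers two locks: the coarse bucketCreationLock write lock around the whole removal, and the advisor's activeWriteLock around just the primary check and advisor update, released before the region itself is destroyed. The sketch below shows that acquisition order in isolation; BucketRemoval, removeFromAdvisor, and destroyRegion are hypothetical names, and a plain ReentrantLock stands in for the advisor's lock.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock;
import org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock;

class BucketRemoval {
    private final StoppableReentrantReadWriteLock bucketCreationLock;
    private final Lock activeWriteLock = new ReentrantLock(); // stand-in for the advisor's lock

    BucketRemoval(StoppableReentrantReadWriteLock creationLock) {
        this.bucketCreationLock = creationLock;
    }

    boolean remove(boolean isPrimary, boolean forceRemovePrimary,
                   Runnable removeFromAdvisor, Runnable destroyRegion) {
        StoppableWriteLock creation = bucketCreationLock.writeLock();
        creation.lock(); // 1. block concurrent bucket creation for the whole removal
        try {
            activeWriteLock.lock(); // 2. keep this member from becoming primary mid-removal
            try {
                if (!forceRemovePrimary && isPrimary) {
                    return false; // never silently drop the primary copy
                }
                removeFromAdvisor.run(); // stop advertising the bucket while both locks are held
            } finally {
                activeWriteLock.unlock(); // the teardown below does not need the inner lock
            }
            destroyRegion.run(); // actual teardown, still under the creation lock
            return true;
        } finally {
            creation.unlock();
        }
    }
}

Acquiring the locks in a fixed order (creation lock first, advisor lock second) and releasing the inner lock before the slow destroy mirrors the structure of removeBucket above and keeps the window in which primary promotion is blocked as short as possible.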

Aggregations

StoppableWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableWriteLock): 3
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 2
InternalGemFireException (org.apache.geode.InternalGemFireException): 2
IOException (java.io.IOException): 1
Lock (java.util.concurrent.locks.Lock): 1
FunctionException (org.apache.geode.cache.execute.FunctionException): 1
QueryInvalidException (org.apache.geode.cache.query.QueryInvalidException): 1
InternalDistributedMember (org.apache.geode.distributed.internal.membership.InternalDistributedMember): 1
BucketLock (org.apache.geode.internal.cache.PartitionedRegion.BucketLock): 1
BucketMovedException (org.apache.geode.internal.cache.execute.BucketMovedException): 1
StoppableReentrantReadWriteLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock): 1
StoppableReadLock (org.apache.geode.internal.util.concurrent.StoppableReentrantReadWriteLock.StoppableReadLock): 1