Example 11 with PartitionOfflineException

use of org.apache.geode.cache.persistence.PartitionOfflineException in project geode by apache.

the class PRBasicQueryDUnitTest method testColocatedPRQueryDuringRecovery.

/**
   * A basic dunit test that: <br>
   * 1. Creates a PR and a colocated child region, with an Accessor and a Data Store and
   * redundantCopies = 0. <br>
   * 2. Populates the region with test data. <br>
   * 3. Fires a query on the accessor VM and verifies the result. <br>
   * 4. Shuts down the caches, then restarts them asynchronously. <br>
   * 5. Attempts the query while the regions are still being recovered.
   * 
   * @throws Exception
   */
@Test
public void testColocatedPRQueryDuringRecovery() throws Exception {
    Host host = Host.getHost(0);
    VM vm0 = host.getVM(0);
    VM vm1 = host.getVM(1);
    setCacheInVMs(vm0, vm1);
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Querying PR Test with DACK Started");
    // Creating PRs on the participating VMs.
    // Creating the Accessor node on VM0.
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Creating the Accessor node in the PR");
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForColocatedPRCreate(name, redundancy, PortfolioData.class, true));
    // Creating local region on vm0 to compare the results of query.
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForLocalRegionCreation(localName, PortfolioData.class));
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Successfully created the Accessor node in the PR");
    // Creating the Datastore node in VM1.
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest:testColocatedPRBasicQuerying ----- Creating the Datastore node in the PR");
    vm1.invoke(PRQHelp.getCacheSerializableRunnableForColocatedPRCreate(name, redundancy, PortfolioData.class, true));
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Successfully Created the Datastore node in the PR");
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Successfully Created PR's across all VM's");
    // Generating the portfolio object array to be populated across the PR and
    // local regions.
    final PortfolioData[] portfolio = createPortfolioData(cnt, cntDest);
    // Putting the data into the PR's created
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRPuts(name, portfolio, cnt, cntDest));
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRDuplicatePuts(name, portfolio, cnt, cntDest));
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Inserted Portfolio data across PR's");
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRPuts(localName, portfolio, cnt, cntDest));
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRDuplicatePuts(localName, portfolio, cnt, cntDest));
    // Querying the VM for data and comparing the result with the query result
    // of the local region.
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRQueryAndCompareResults(name, localName));
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Querying PR's 1st pass ENDED");
    // Shut everything down and then restart to test queries during recovery
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForCloseCache());
    vm1.invoke(PRQHelp.getCacheSerializableRunnableForCloseCache());
    // Re-create the regions - only create the parent regions on the datastores
    setCacheInVMs(vm0, vm1);
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Creating the Accessor node in the PR");
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForColocatedParentCreate(name, redundancy, PortfolioData.class, true));
    // Creating local region on vm0 to compare the results of query.
    vm0.invoke(PRQHelp.getCacheSerializableRunnableForLocalRegionCreation(localName, PortfolioData.class));
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Successfully created the Accessor node in the PR");
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest:testColocatedPRBasicQuerying: re-creating the Datastore node in the PR");
    vm1.invoke(PRQHelp.getCacheSerializableRunnableForColocatedParentCreate(name, redundancy, PortfolioData.class, true));
    // Now start the child regions asynchronously so queries will happen during persistent recovery
    AsyncInvocation vm0PR = vm0.invokeAsync(PRQHelp.getCacheSerializableRunnableForColocatedChildCreate(name, redundancy, PortfolioData.class, true));
    AsyncInvocation vm1PR = vm1.invokeAsync(PRQHelp.getCacheSerializableRunnableForColocatedChildCreate(name, redundancy, PortfolioData.class, true));
    // delay the query to let the recovery get underway
    Thread.sleep(100);
    try {
        // Repeat the original query from before the datastores were closed and restarted.
        // This time it should fail because persistent recovery has not yet completed.
        vm0.invoke(PRQHelp.getCacheSerializableRunnableForPRQueryAndCompareResults(name, localName, true));
        fail("Expected PartitionOfflineException when queryiong a region with offline colocated child");
    } catch (Exception e) {
        if (!(e.getCause() instanceof PartitionOfflineException)) {
            e.printStackTrace();
            throw e;
        }
    }
    LogWriterUtils.getLogWriter().info("PRQBasicQueryDUnitTest#testColocatedPRBasicQuerying: Querying PR's 2nd pass (after restarting regions) ENDED");
}
Also used : PartitionOfflineException(org.apache.geode.cache.persistence.PartitionOfflineException) VM(org.apache.geode.test.dunit.VM) Host(org.apache.geode.test.dunit.Host) PortfolioData(org.apache.geode.cache.query.data.PortfolioData) AsyncInvocation(org.apache.geode.test.dunit.AsyncInvocation) Test(org.junit.Test) DistributedTest(org.apache.geode.test.junit.categories.DistributedTest)
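
For reference, outside the dunit harness the same failure mode can be observed with a plain OQL query. The sketch below is illustrative only: the region name "portfolios" and the surrounding class are assumptions, not taken from the test above; only the QueryService API and PartitionOfflineException come from Geode.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.persistence.PartitionOfflineException;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.SelectResults;

public class QueryDuringRecoverySketch {

    // Runs a single query; PartitionOfflineException is unchecked, so it
    // propagates directly if the bucket's persistent copies are still offline.
    public static SelectResults<?> queryPortfolios(Cache cache) throws Exception {
        Query query = cache.getQueryService().newQuery("SELECT * FROM /portfolios");
        try {
            return (SelectResults<?>) query.execute();
        } catch (PartitionOfflineException e) {
            // Raised while a persistent PR (or a colocated child) is still being
            // recovered from disk and no member hosts the required buckets yet.
            System.err.println("Data not yet recovered: " + e.getMessage());
            throw e;
        }
    }
}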

Example 12 with PartitionOfflineException

use of org.apache.geode.cache.persistence.PartitionOfflineException in project geode by apache.

the class AsyncEventListenerDUnitTest method testCacheClosedBeforeAEQWrite.

@Test
public void testCacheClosedBeforeAEQWrite() {
    Integer lnPort = (Integer) vm0.invoke(() -> AsyncEventQueueTestBase.createFirstLocatorWithDSId(1));
    vm1.invoke(createCacheRunnable(lnPort));
    vm2.invoke(createCacheRunnable(lnPort));
    vm3.invoke(createCacheRunnable(lnPort));
    final DistributedMember member1 = vm1.invoke(() -> cache.getDistributedSystem().getDistributedMember());
    vm1.invoke(() -> addAEQWithCacheCloseFilter());
    vm2.invoke(() -> addAEQWithCacheCloseFilter());
    vm1.invoke(() -> createPersistentPartitionRegion());
    vm2.invoke(() -> createPersistentPartitionRegion());
    vm3.invoke(() -> {
        AttributesFactory fact = new AttributesFactory();
        PartitionAttributesFactory pfact = new PartitionAttributesFactory();
        pfact.setTotalNumBuckets(16);
        pfact.setLocalMaxMemory(0);
        fact.setPartitionAttributes(pfact.create());
        fact.setOffHeap(isOffHeap());
        Region r = cache.createRegionFactory(fact.create()).addAsyncEventQueueId("ln").create(getTestMethodName() + "_PR");
    });
    vm3.invoke(() -> {
        Region r = cache.getRegion(Region.SEPARATOR + getTestMethodName() + "_PR");
        r.put(1, 1);
        r.put(2, 2);
        // This will trigger the gateway event filter to close the cache
        try {
            r.removeAll(Collections.singleton(1));
            fail("Should have received a partition offline exception");
        } catch (PartitionOfflineException expected) {
        }
    });
}
Also used : PartitionAttributesFactory(org.apache.geode.cache.PartitionAttributesFactory) AttributesFactory(org.apache.geode.cache.AttributesFactory) PartitionOfflineException(org.apache.geode.cache.persistence.PartitionOfflineException) DistributedMember(org.apache.geode.distributed.DistributedMember) Region(org.apache.geode.cache.Region) FlakyTest(org.apache.geode.test.junit.categories.FlakyTest) Test(org.junit.Test) DistributedTest(org.apache.geode.test.junit.categories.DistributedTest)
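
The helper addAEQWithCacheCloseFilter() invoked above is defined elsewhere in the test code and is not shown here. As a rough, hypothetical sketch of what such a filter could look like (the class name and constructor are assumptions; only GatewayEventFilter, GatewayQueueEvent, and AsyncEventQueueFactory.addGatewayEventFilter(...) are real Geode APIs), closing the cache from beforeEnqueue is one possible way to force the PartitionOfflineException that removeAll() sees in the test:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.wan.GatewayEventFilter;
import org.apache.geode.cache.wan.GatewayQueueEvent;

// Hypothetical filter; the real test helper may differ.
public class CacheClosingFilter implements GatewayEventFilter {

    private final Cache cache;

    public CacheClosingFilter(Cache cache) {
        this.cache = cache;
    }

    @Override
    public boolean beforeEnqueue(GatewayQueueEvent event) {
        // Closing the cache while the AEQ is persisting the event takes the
        // persistent bucket offline, which is what surfaces as a
        // PartitionOfflineException to the removeAll() caller above.
        cache.close();
        return true;
    }

    @Override
    public boolean beforeTransmit(GatewayQueueEvent event) {
        return true;
    }

    @Override
    public void afterAcknowledgement(GatewayQueueEvent event) {
    }

    @Override
    public void close() {
    }
}

Such a filter would be attached when building the queue, for example via cache.createAsyncEventQueueFactory().addGatewayEventFilter(new CacheClosingFilter(cache)) before calling create("ln", listener).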

Example 13 with PartitionOfflineException

use of org.apache.geode.cache.persistence.PartitionOfflineException in project geode by apache.

the class PersistentColocatedPartitionedRegionDUnitTest method replaceOfflineMemberAndRestartCreateColocatedPRLate.

/**
   * Test for support issue 7870. <br>
   * 1. Run three members with redundancy 1 and recovery delay 0. <br>
   * 2. Kill one of the members to trigger replacement of buckets. <br>
   * 3. Shut down all members and restart.
   * 
   * What was happening is that in the parent PR one member discarded its offline data, but in the
   * child PR the other members ended up waiting for the child bucket to be created in the member
   * that had discarded its offline data.
   * 
   * In this test case, we create the child PR later, after the parent buckets have already been
   * completely created.
   * 
   * @throws Throwable
   */
public void replaceOfflineMemberAndRestartCreateColocatedPRLate(SerializableRunnable createParentPR, SerializableRunnable createChildPR) throws Throwable {
    IgnoredException.addIgnoredException("PartitionOfflineException");
    IgnoredException.addIgnoredException("RegionDestroyedException");
    Host host = Host.getHost(0);
    VM vm0 = host.getVM(0);
    VM vm1 = host.getVM(1);
    VM vm2 = host.getVM(2);
    // Create the PRs on three members
    vm0.invoke(createParentPR);
    vm1.invoke(createParentPR);
    vm2.invoke(createParentPR);
    vm0.invoke(createChildPR);
    vm1.invoke(createChildPR);
    vm2.invoke(createChildPR);
    // Create some buckets.
    createData(vm0, 0, NUM_BUCKETS, "a");
    createData(vm0, 0, NUM_BUCKETS, "a", "region2");
    // Close one of the members to trigger redundancy recovery.
    closeCache(vm2);
    // Wait until redundancy is recovered.
    waitForRedundancyRecovery(vm0, 1, PR_REGION_NAME);
    waitForRedundancyRecovery(vm0, 1, "region2");
    createData(vm0, 0, NUM_BUCKETS, "b");
    createData(vm0, 0, NUM_BUCKETS, "b", "region2");
    // Close the remaining members.
    vm0.invoke(new SerializableCallable() {

        public Object call() throws Exception {
            InternalDistributedSystem ds = (InternalDistributedSystem) getCache().getDistributedSystem();
            AdminDistributedSystemImpl.shutDownAllMembers(ds.getDistributionManager(), 0);
            return null;
        }
    });
    // Make sure that vm-1 is completely disconnected
    // The shutdown all asynchronously finishes the disconnect after
    // replying to the admin member.
    vm1.invoke(new SerializableRunnable() {

        public void run() {
            basicGetSystem().disconnect();
        }
    });
    // Recreate the parent region. Try to make sure that
    // the member with the latest copy of the buckets
    // is the one that decides to throw away its copy
    // by starting it last.
    AsyncInvocation async2 = vm2.invokeAsync(createParentPR);
    AsyncInvocation async1 = vm1.invokeAsync(createParentPR);
    Wait.pause(2000);
    AsyncInvocation async0 = vm0.invokeAsync(createParentPR);
    async0.getResult(MAX_WAIT);
    async1.getResult(MAX_WAIT);
    async2.getResult(MAX_WAIT);
    // Wait for async tasks
    Wait.pause(2000);
    // Recreate the child region.
    async2 = vm2.invokeAsync(createChildPR);
    async1 = vm1.invokeAsync(createChildPR);
    async0 = vm0.invokeAsync(createChildPR);
    async0.getResult(MAX_WAIT);
    async1.getResult(MAX_WAIT);
    async2.getResult(MAX_WAIT);
    // Validate the data
    checkData(vm0, 0, NUM_BUCKETS, "b");
    checkData(vm0, 0, NUM_BUCKETS, "b", "region2");
    // Make sure we can actually use the buckets in the child region.
    createData(vm0, 0, NUM_BUCKETS, "c", "region2");
    waitForRedundancyRecovery(vm0, 1, PR_REGION_NAME);
    waitForRedundancyRecovery(vm0, 1, "region2");
    // Make sure we don't have any extra buckets after the restart
    int totalBucketCount = getBucketList(vm0).size();
    totalBucketCount += getBucketList(vm1).size();
    totalBucketCount += getBucketList(vm2).size();
    assertEquals(2 * NUM_BUCKETS, totalBucketCount);
    totalBucketCount = getBucketList(vm0, "region2").size();
    totalBucketCount += getBucketList(vm1, "region2").size();
    totalBucketCount += getBucketList(vm2, "region2").size();
    assertEquals(2 * NUM_BUCKETS, totalBucketCount);
}
Also used : VM(org.apache.geode.test.dunit.VM) SerializableCallable(org.apache.geode.test.dunit.SerializableCallable) SerializableRunnable(org.apache.geode.test.dunit.SerializableRunnable) Host(org.apache.geode.test.dunit.Host) InternalDistributedSystem(org.apache.geode.distributed.internal.InternalDistributedSystem) AsyncInvocation(org.apache.geode.test.dunit.AsyncInvocation) IgnoredException(org.apache.geode.test.dunit.IgnoredException) PartitionedRegionStorageException(org.apache.geode.cache.PartitionedRegionStorageException) RMIException(org.apache.geode.test.dunit.RMIException) CacheClosedException(org.apache.geode.cache.CacheClosedException) PartitionOfflineException(org.apache.geode.cache.persistence.PartitionOfflineException) IOException(java.io.IOException)
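
Callers that hit PartitionOfflineException during a restart like the one above typically back off and retry once recovery completes. A minimal, generic sketch of such a retry helper follows; it assumes nothing beyond the public Region API, and the class and method names are illustrative rather than part of the test.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.persistence.PartitionOfflineException;

public final class RecoveryRetry {

    // Retries a put until the persistent copies of the target bucket come back
    // online or maxAttempts (>= 1) is exhausted. Purely illustrative.
    public static <K, V> void putWithRetry(Region<K, V> region, K key, V value,
            int maxAttempts, long backoffMillis) throws InterruptedException {
        PartitionOfflineException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                region.put(key, value);
                return;
            } catch (PartitionOfflineException e) {
                // Every persistent member hosting this bucket is still offline;
                // back off and try again once more members have restarted.
                last = e;
                Thread.sleep(backoffMillis);
            }
        }
        throw last;
    }
}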

Example 14 with PartitionOfflineException

use of org.apache.geode.cache.persistence.PartitionOfflineException in project geode by apache.

the class PersistentColocatedPartitionedRegionDUnitTest method testParentRegionGetWithRecoveryInProgress.

@Test
public void testParentRegionGetWithRecoveryInProgress() throws Throwable {
    SerializableRunnable createParentPR = new SerializableRunnable("createParentPR") {

        public void run() {
            String oldRetryTimeout = System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", "10000");
            try {
                Cache cache = getCache();
                DiskStore ds = cache.findDiskStore("disk");
                if (ds == null) {
                    ds = cache.createDiskStoreFactory().setDiskDirs(getDiskDirs()).create("disk");
                }
                AttributesFactory af = new AttributesFactory();
                PartitionAttributesFactory paf = new PartitionAttributesFactory();
                paf.setRedundantCopies(0);
                paf.setRecoveryDelay(0);
                af.setPartitionAttributes(paf.create());
                af.setDataPolicy(DataPolicy.PERSISTENT_PARTITION);
                af.setDiskStoreName("disk");
                cache.createRegion(PR_REGION_NAME, af.create());
            } finally {
                System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", String.valueOf(PartitionedRegionHelper.DEFAULT_TOTAL_WAIT_RETRY_ITERATION));
                System.out.println("oldRetryTimeout = " + oldRetryTimeout);
            }
        }
    };
    SerializableRunnable createChildPR = new SerializableRunnable("createChildPR") {

        public void run() throws InterruptedException {
            String oldRetryTimeout = System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", "10000");
            try {
                Cache cache = getCache();
                AttributesFactory af = new AttributesFactory();
                PartitionAttributesFactory paf = new PartitionAttributesFactory();
                paf.setRedundantCopies(0);
                paf.setRecoveryDelay(0);
                paf.setColocatedWith(PR_REGION_NAME);
                af.setDataPolicy(DataPolicy.PERSISTENT_PARTITION);
                af.setDiskStoreName("disk");
                af.setPartitionAttributes(paf.create());
                cache.createRegion("region2", af.create());
            } finally {
                System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", String.valueOf(PartitionedRegionHelper.DEFAULT_TOTAL_WAIT_RETRY_ITERATION));
            }
        }
    };
    boolean caughtException = false;
    try {
        // Expect a get() on the un-recovered (due to offline child) parent region to fail
        regionGetWithOfflineChild(createParentPR, createChildPR, false);
    } catch (Exception e) {
        caughtException = true;
        assertTrue(e instanceof RMIException);
        assertTrue(e.getCause() instanceof PartitionOfflineException);
    }
    if (!caughtException) {
        fail("Expected TimeoutException from remote");
    }
}
Also used : DiskStore(org.apache.geode.cache.DiskStore) PartitionAttributesFactory(org.apache.geode.cache.PartitionAttributesFactory) RMIException(org.apache.geode.test.dunit.RMIException) AttributesFactory(org.apache.geode.cache.AttributesFactory) PartitionOfflineException(org.apache.geode.cache.persistence.PartitionOfflineException) SerializableRunnable(org.apache.geode.test.dunit.SerializableRunnable) IgnoredException(org.apache.geode.test.dunit.IgnoredException) PartitionedRegionStorageException(org.apache.geode.cache.PartitionedRegionStorageException) CacheClosedException(org.apache.geode.cache.CacheClosedException) IOException(java.io.IOException) Cache(org.apache.geode.cache.Cache) DistributedTest(org.apache.geode.test.junit.categories.DistributedTest) FlakyTest(org.apache.geode.test.junit.categories.FlakyTest) Test(org.junit.Test)

Example 15 with PartitionOfflineException

use of org.apache.geode.cache.persistence.PartitionOfflineException in project geode by apache.

the class PersistentColocatedPartitionedRegionDUnitTest method testParentRegionGetWithOfflineChildRegion.

@Test
public void testParentRegionGetWithOfflineChildRegion() throws Throwable {
    SerializableRunnable createParentPR = new SerializableRunnable("createParentPR") {

        public void run() {
            String oldRetryTimeout = System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", "10000");
            try {
                Cache cache = getCache();
                DiskStore ds = cache.findDiskStore("disk");
                if (ds == null) {
                    ds = cache.createDiskStoreFactory().setDiskDirs(getDiskDirs()).create("disk");
                }
                AttributesFactory af = new AttributesFactory();
                PartitionAttributesFactory paf = new PartitionAttributesFactory();
                paf.setRedundantCopies(0);
                paf.setRecoveryDelay(0);
                af.setPartitionAttributes(paf.create());
                af.setDataPolicy(DataPolicy.PERSISTENT_PARTITION);
                af.setDiskStoreName("disk");
                cache.createRegion(PR_REGION_NAME, af.create());
            } finally {
                System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", String.valueOf(PartitionedRegionHelper.DEFAULT_TOTAL_WAIT_RETRY_ITERATION));
            }
        }
    };
    SerializableRunnable createChildPR = new SerializableRunnable("createChildPR") {

        public void run() throws InterruptedException {
            String oldRetryTimeout = System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", "10000");
            try {
                Cache cache = getCache();
                AttributesFactory af = new AttributesFactory();
                PartitionAttributesFactory paf = new PartitionAttributesFactory();
                paf.setRedundantCopies(0);
                paf.setRecoveryDelay(0);
                paf.setColocatedWith(PR_REGION_NAME);
                af.setDataPolicy(DataPolicy.PERSISTENT_PARTITION);
                af.setDiskStoreName("disk");
                af.setPartitionAttributes(paf.create());
                // delay child region creations to cause a delay in persistent recovery
                Thread.sleep(100);
                cache.createRegion("region2", af.create());
            } finally {
                System.setProperty(DistributionConfig.GEMFIRE_PREFIX + "partitionedRegionRetryTimeout", String.valueOf(PartitionedRegionHelper.DEFAULT_TOTAL_WAIT_RETRY_ITERATION));
            }
        }
    };
    boolean caughtException = false;
    try {
        // Expect a get() on the un-recovered (due to offline child) parent region to fail
        regionGetWithOfflineChild(createParentPR, createChildPR, false);
    } catch (Exception e) {
        caughtException = true;
        assertTrue(e instanceof RMIException);
        assertTrue(e.getCause() instanceof PartitionOfflineException);
    }
    if (!caughtException) {
        fail("Expected TimeoutException from remote");
    }
}
Also used : DiskStore(org.apache.geode.cache.DiskStore) PartitionAttributesFactory(org.apache.geode.cache.PartitionAttributesFactory) RMIException(org.apache.geode.test.dunit.RMIException) AttributesFactory(org.apache.geode.cache.AttributesFactory) PartitionOfflineException(org.apache.geode.cache.persistence.PartitionOfflineException) SerializableRunnable(org.apache.geode.test.dunit.SerializableRunnable) IgnoredException(org.apache.geode.test.dunit.IgnoredException) PartitionedRegionStorageException(org.apache.geode.cache.PartitionedRegionStorageException) CacheClosedException(org.apache.geode.cache.CacheClosedException) IOException(java.io.IOException) Cache(org.apache.geode.cache.Cache) DistributedTest(org.apache.geode.test.junit.categories.DistributedTest) FlakyTest(org.apache.geode.test.junit.categories.FlakyTest) Test(org.junit.Test)

Aggregations

PartitionOfflineException (org.apache.geode.cache.persistence.PartitionOfflineException): 19 usages
DistributedTest (org.apache.geode.test.junit.categories.DistributedTest): 12 usages
Test (org.junit.Test): 12 usages
Host (org.apache.geode.test.dunit.Host): 10 usages
IgnoredException (org.apache.geode.test.dunit.IgnoredException): 10 usages
RMIException (org.apache.geode.test.dunit.RMIException): 10 usages
VM (org.apache.geode.test.dunit.VM): 10 usages
FlakyTest (org.apache.geode.test.junit.categories.FlakyTest): 10 usages
IOException (java.io.IOException): 9 usages
CacheClosedException (org.apache.geode.cache.CacheClosedException): 9 usages
PartitionedRegionStorageException (org.apache.geode.cache.PartitionedRegionStorageException): 9 usages
SerializableRunnable (org.apache.geode.test.dunit.SerializableRunnable): 9 usages
AsyncInvocation (org.apache.geode.test.dunit.AsyncInvocation): 7 usages
AttributesFactory (org.apache.geode.cache.AttributesFactory): 6 usages
Cache (org.apache.geode.cache.Cache): 6 usages
PartitionAttributesFactory (org.apache.geode.cache.PartitionAttributesFactory): 6 usages
DiskStore (org.apache.geode.cache.DiskStore): 5 usages
Region (org.apache.geode.cache.Region): 4 usages
InternalDistributedSystem (org.apache.geode.distributed.internal.InternalDistributedSystem): 3 usages
CancelException (org.apache.geode.CancelException): 2 usages
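
Across all of these examples, PartitionOfflineException is an unchecked exception raised when every persistent copy of a bucket is offline. A small hedged sketch of inspecting it before rethrowing follows; it assumes the getOfflineMembers() accessor exposed by the exception class, and the class and method names below are illustrative.

import java.util.Set;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.persistence.PartitionOfflineException;
import org.apache.geode.cache.persistence.PersistentID;

public final class OfflineBucketDiagnostics {

    // Attempts a read and, on failure, reports which persistent disk stores are
    // believed to hold the missing bucket before rethrowing.
    public static <K, V> V getOrReport(Region<K, V> region, K key) {
        try {
            return region.get(key);
        } catch (PartitionOfflineException e) {
            Set<PersistentID> offline = e.getOfflineMembers();
            for (PersistentID id : offline) {
                System.err.println("Waiting on disk store " + id.getUUID()
                        + " on host " + id.getHost() + " in " + id.getDirectory());
            }
            throw e;
        }
    }
}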