Example 71 with TimeoutException

Use of java.util.concurrent.TimeoutException in project platform_frameworks_base by android.

From class UiAutomation, method executeAndWaitForEvent.

/**
 * Executes a command and waits for a specific accessibility event up to a
 * given wait timeout. To detect a sequence of events one can implement a
 * filter that keeps track of seen events of the expected sequence and
 * returns true after the last event of that sequence is received.
 * <p>
 * <strong>Note:</strong> It is the caller's responsibility to recycle the returned event.
 * </p>
 *
 * @param command The command to execute.
 * @param filter Filter that recognizes the expected event.
 * @param timeoutMillis The wait timeout in milliseconds.
 * @return The first event accepted by the filter.
 * @throws TimeoutException If the expected event is not received within the timeout.
 */
public AccessibilityEvent executeAndWaitForEvent(Runnable command, AccessibilityEventFilter filter, long timeoutMillis) throws TimeoutException {
    // Acquire the lock and prepare for receiving events.
    synchronized (mLock) {
        throwIfNotConnectedLocked();
        mEventQueue.clear();
        // Prepare to wait for an event.
        mWaitingForEventDelivery = true;
    }
    // Note: We have to release the lock since calling out with this lock held
    // can bite. We will correctly filter out events from other interactions,
    // so starting to collect events before running the action is just fine.
    // We will ignore events from previous interactions.
    final long executionStartTimeMillis = SystemClock.uptimeMillis();
    // Execute the command *without* the lock being held.
    command.run();
    // Acquire the lock and wait for the event.
    synchronized (mLock) {
        try {
            // Wait for the event.
            final long startTimeMillis = SystemClock.uptimeMillis();
            while (true) {
                // Drain the event queue
                while (!mEventQueue.isEmpty()) {
                    AccessibilityEvent event = mEventQueue.remove(0);
                    // Ignore events from previous interactions.
                    if (event.getEventTime() < executionStartTimeMillis) {
                        continue;
                    }
                    if (filter.accept(event)) {
                        return event;
                    }
                    event.recycle();
                }
                // Check if timed out and if not wait.
                final long elapsedTimeMillis = SystemClock.uptimeMillis() - startTimeMillis;
                final long remainingTimeMillis = timeoutMillis - elapsedTimeMillis;
                if (remainingTimeMillis <= 0) {
                    throw new TimeoutException("Expected event not received within: " + timeoutMillis + " ms.");
                }
                try {
                    mLock.wait(remainingTimeMillis);
                } catch (InterruptedException ie) {
                    /* ignore */
                }
            }
        } finally {
            mWaitingForEventDelivery = false;
            mEventQueue.clear();
            mLock.notifyAll();
        }
    }
}
Also used: AccessibilityEvent (android.view.accessibility.AccessibilityEvent), TimeoutException (java.util.concurrent.TimeoutException)
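
A usage sketch for executeAndWaitForEvent above, as it might appear in an instrumentation test. This is illustrative only: `button`, the event type, and the 5-second timeout are assumptions, and InstrumentationRegistry comes from the AndroidX test library rather than the source above.

// Illustrative call site; `button` stands for any view under test.
final UiAutomation uiAutomation =
        InstrumentationRegistry.getInstrumentation().getUiAutomation();
AccessibilityEvent event = uiAutomation.executeAndWaitForEvent(
        () -> button.performClick(),                                   // command
        e -> e.getEventType() == AccessibilityEvent.TYPE_VIEW_CLICKED, // filter
        5000 /* timeoutMillis */);  // throws TimeoutException on expiry
try {
    // ... assertions on the matched event ...
} finally {
    event.recycle();  // per the javadoc note, the caller must recycle it
}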

Example 72 with TimeoutException

Use of java.util.concurrent.TimeoutException in project platform_frameworks_base by android.

From class ViewDebug, method exportMethods.

private static void exportMethods(Context context, Object view, BufferedWriter out, Class<?> klass, String prefix) throws IOException {
    final Method[] methods = getExportedPropertyMethods(klass);
    int count = methods.length;
    for (int i = 0; i < count; i++) {
        final Method method = methods[i];
        //noinspection EmptyCatchBlock
        try {
            Object methodValue = callMethodOnAppropriateTheadBlocking(method, view);
            final Class<?> returnType = method.getReturnType();
            final ExportedProperty property = sAnnotations.get(method);
            String categoryPrefix = property.category().length() != 0 ? property.category() + ":" : "";
            if (returnType == int.class) {
                if (property.resolveId() && context != null) {
                    final int id = (Integer) methodValue;
                    methodValue = resolveId(context, id);
                } else {
                    final FlagToString[] flagsMapping = property.flagMapping();
                    if (flagsMapping.length > 0) {
                        final int intValue = (Integer) methodValue;
                        final String valuePrefix = categoryPrefix + prefix + method.getName() + '_';
                        exportUnrolledFlags(out, flagsMapping, intValue, valuePrefix);
                    }
                    final IntToString[] mapping = property.mapping();
                    if (mapping.length > 0) {
                        final int intValue = (Integer) methodValue;
                        boolean mapped = false;
                        int mappingCount = mapping.length;
                        for (int j = 0; j < mappingCount; j++) {
                            final IntToString mapper = mapping[j];
                            if (mapper.from() == intValue) {
                                methodValue = mapper.to();
                                mapped = true;
                                break;
                            }
                        }
                        if (!mapped) {
                            methodValue = intValue;
                        }
                    }
                }
            } else if (returnType == int[].class) {
                final int[] array = (int[]) methodValue;
                final String valuePrefix = categoryPrefix + prefix + method.getName() + '_';
                final String suffix = "()";
                exportUnrolledArray(context, out, property, array, valuePrefix, suffix);
                continue;
            } else if (returnType == String[].class) {
                final String[] array = (String[]) methodValue;
                if (property.hasAdjacentMapping() && array != null) {
                    for (int j = 0; j < array.length; j += 2) {
                        if (array[j] != null) {
                            writeEntry(out, categoryPrefix + prefix, array[j], "()", array[j + 1] == null ? "null" : array[j + 1]);
                        }
                    }
                }
                continue;
            } else if (!returnType.isPrimitive()) {
                if (property.deepExport()) {
                    dumpViewProperties(context, methodValue, out, prefix + property.prefix());
                    continue;
                }
            }
            writeEntry(out, categoryPrefix + prefix, method.getName(), "()", methodValue);
        } catch (IllegalAccessException e) {
        } catch (InvocationTargetException e) {
        } catch (TimeoutException e) {
        }
    }
}
Also used: Method (java.lang.reflect.Method), InvocationTargetException (java.lang.reflect.InvocationTargetException), AccessibleObject (java.lang.reflect.AccessibleObject), TimeoutException (java.util.concurrent.TimeoutException)
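
For context, a hedged sketch of the kind of getter exportMethods discovers. Only the annotation shapes (@ViewDebug.ExportedProperty with an IntToString mapping) come from the real ViewDebug API; the class, field, and mapped values are assumptions.

import android.content.Context;
import android.view.View;
import android.view.ViewDebug;

public class OrientedView extends View {
    private int mOrientation = 0;

    public OrientedView(Context context) {
        super(context);
    }

    // getExportedPropertyMethods() above finds this no-arg getter via
    // reflection; the IntToString mapping branch then writes roughly
    // "layout:getOrientation()=HORIZONTAL" instead of the raw int.
    @ViewDebug.ExportedProperty(category = "layout", mapping = {
            @ViewDebug.IntToString(from = 0, to = "HORIZONTAL"),
            @ViewDebug.IntToString(from = 1, to = "VERTICAL")
    })
    public int getOrientation() {
        return mOrientation;
    }
}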

Example 73 with TimeoutException

Use of java.util.concurrent.TimeoutException in project hadoop by apache.

From class StorageLocationChecker, method check.

/**
 * Initiate a check of the supplied storage volumes and return
 * the list of healthy volumes.
 *
 * StorageLocations are returned in the same order as the input
 * for compatibility with existing unit tests.
 *
 * @param conf HDFS configuration.
 * @param dataDirs list of volumes to check.
 * @return the volumes that passed the check, in input order; failed
 *         volumes are excluded. (Note: the method returns the good
 *         locations, not the failed ones.)
 *
 * @throws InterruptedException if the check was interrupted.
 * @throws IOException if the number of failed volumes exceeds the
 *                     maximum allowed or if there are no good
 *                     volumes.
 */
public List<StorageLocation> check(final Configuration conf, final Collection<StorageLocation> dataDirs) throws InterruptedException, IOException {
    final HashMap<StorageLocation, Boolean> goodLocations = new LinkedHashMap<>();
    final Set<StorageLocation> failedLocations = new HashSet<>();
    final Map<StorageLocation, ListenableFuture<VolumeCheckResult>> futures = Maps.newHashMap();
    final LocalFileSystem localFS = FileSystem.getLocal(conf);
    final CheckContext context = new CheckContext(localFS, expectedPermission);
    // Start parallel disk check operations on all StorageLocations.
    for (StorageLocation location : dataDirs) {
        goodLocations.put(location, true);
        Optional<ListenableFuture<VolumeCheckResult>> olf = delegateChecker.schedule(location, context);
        if (olf.isPresent()) {
            futures.put(location, olf.get());
        }
    }
    if (maxVolumeFailuresTolerated >= dataDirs.size()) {
        throw new DiskErrorException("Invalid value configured for " + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - " + maxVolumeFailuresTolerated + ". Value configured is >= " + "to the number of configured volumes (" + dataDirs.size() + ").");
    }
    final long checkStartTimeMs = timer.monotonicNow();
    // Retrieve the results of the disk checks.
    for (Map.Entry<StorageLocation, ListenableFuture<VolumeCheckResult>> entry : futures.entrySet()) {
        // Determine how much time we can allow for this check to complete.
        // The cumulative wait time cannot exceed maxAllowedTimeForCheck.
        final long waitSoFarMs = (timer.monotonicNow() - checkStartTimeMs);
        final long timeLeftMs = Math.max(0, maxAllowedTimeForCheckMs - waitSoFarMs);
        final StorageLocation location = entry.getKey();
        try {
            final VolumeCheckResult result = entry.getValue().get(timeLeftMs, TimeUnit.MILLISECONDS);
            switch(result) {
                case HEALTHY:
                    break;
                case DEGRADED:
                    LOG.warn("StorageLocation {} appears to be degraded.", location);
                    break;
                case FAILED:
                    LOG.warn("StorageLocation {} detected as failed.", location);
                    failedLocations.add(location);
                    goodLocations.remove(location);
                    break;
                default:
                    LOG.error("Unexpected health check result {} for StorageLocation {}", result, location);
            }
        } catch (ExecutionException | TimeoutException e) {
            LOG.warn("Exception checking StorageLocation " + location, e.getCause());
            failedLocations.add(location);
            goodLocations.remove(location);
        }
    }
    if (failedLocations.size() > maxVolumeFailuresTolerated) {
        throw new DiskErrorException("Too many failed volumes - " + "current valid volumes: " + goodLocations.size() + ", volumes configured: " + dataDirs.size() + ", volumes failed: " + failedLocations.size() + ", volume failures tolerated: " + maxVolumeFailuresTolerated);
    }
    if (goodLocations.size() == 0) {
        throw new DiskErrorException("All directories in " + DFS_DATANODE_DATA_DIR_KEY + " are invalid: " + failedLocations);
    }
    return new ArrayList<>(goodLocations.keySet());
}
Also used: CheckContext (org.apache.hadoop.hdfs.server.datanode.StorageLocation.CheckContext), DiskErrorException (org.apache.hadoop.util.DiskChecker.DiskErrorException), ArrayList (java.util.ArrayList), LinkedHashMap (java.util.LinkedHashMap), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), ListenableFuture (com.google.common.util.concurrent.ListenableFuture), StorageLocation (org.apache.hadoop.hdfs.server.datanode.StorageLocation), ExecutionException (java.util.concurrent.ExecutionException), HashMap (java.util.HashMap), Map (java.util.Map), HashSet (java.util.HashSet), TimeoutException (java.util.concurrent.TimeoutException)
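
The retrieval loop in check() spreads one overall budget (maxAllowedTimeForCheckMs) across all futures: each get() only waits for whatever time remains. A minimal standalone sketch of that pattern, with illustrative names and plain JDK futures standing in for Guava's ListenableFuture:

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class SharedDeadline {
    // Waits on every future under a single cumulative budget, mirroring
    // the timeLeftMs computation in check() above.
    static <T> void awaitAll(List<Future<T>> futures, long budgetMs)
            throws InterruptedException {
        final long startNanos = System.nanoTime();
        for (Future<T> future : futures) {
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
            long leftMs = Math.max(0, budgetMs - elapsedMs);
            try {
                future.get(leftMs, TimeUnit.MILLISECONDS);
            } catch (ExecutionException | TimeoutException e) {
                // A failed or slow task consumes no extra budget; record it
                // and move on, as check() does with failedLocations.
            }
        }
    }
}

Note that get(0, TimeUnit.MILLISECONDS) still returns immediately when a future is already done, which is why later entries in the loop can succeed even after the budget is exhausted.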

Example 74 with TimeoutException

Use of java.util.concurrent.TimeoutException in project hadoop by apache.

From class MiniDFSCluster, method setDataNodeStorageCapacities.

private synchronized void setDataNodeStorageCapacities(final int curDnIdx, final DataNode curDn, long[][] storageCapacities) throws IOException {
    if (storageCapacities == null || storageCapacities.length == 0) {
        return;
    }
    try {
        waitDataNodeFullyStarted(curDn);
    } catch (TimeoutException | InterruptedException e) {
        throw new IOException(e);
    }
    try (FsDatasetSpi.FsVolumeReferences volumes = curDn.getFSDataset().getFsVolumeReferences()) {
        assert storageCapacities[curDnIdx].length == storagesPerDatanode;
        assert volumes.size() == storagesPerDatanode;
        int j = 0;
        for (FsVolumeSpi fvs : volumes) {
            FsVolumeImpl volume = (FsVolumeImpl) fvs;
            LOG.info("setCapacityForTesting " + storageCapacities[curDnIdx][j] + " for [" + volume.getStorageType() + "]" + volume.getStorageID());
            volume.setCapacityForTesting(storageCapacities[curDnIdx][j]);
            j++;
        }
    }
    DataNodeTestUtils.triggerHeartbeat(curDn);
}
Also used: FsVolumeImpl (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl), FsDatasetSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi), FsVolumeSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi), IOException (java.io.IOException), TimeoutException (java.util.concurrent.TimeoutException)
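
The try/catch at the top of setDataNodeStorageCapacities shows a common pattern: translating the checked TimeoutException and InterruptedException into the IOException the method already declares. A hedged sketch of that translation with hypothetical helper names; unlike the source above, it also re-asserts the interrupt flag, which production code usually wants:

import java.io.IOException;
import java.util.concurrent.TimeoutException;

public final class StartupWait {
    // Hypothetical helper mirroring the wrap-and-rethrow in the source.
    static void ensureStarted() throws IOException {
        try {
            waitUntilStarted();
        } catch (TimeoutException e) {
            throw new IOException(e);  // preserve the cause, keep the IOException contract
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // re-assert the flag before wrapping
            throw new IOException(e);
        }
    }

    // Stand-in for waitDataNodeFullyStarted(curDn); a real version would
    // poll a readiness signal and only time out when a deadline passes.
    static void waitUntilStarted() throws TimeoutException, InterruptedException {
        throw new TimeoutException("not started in time");  // placeholder body
    }
}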

Example 75 with TimeoutException

Use of java.util.concurrent.TimeoutException in project hadoop by apache.

From class TestReplication, method testNoExtraReplicationWhenBlockReceivedIsLate.

/**
 * This test makes sure that, when a file is closed before all
 * of the datanodes in the pipeline have reported their replicas,
 * the NameNode doesn't consider the block under-replicated too
 * aggressively. It is a regression test for HDFS-1172.
 */
@Test(timeout = 60000)
public void testNoExtraReplicationWhenBlockReceivedIsLate() throws Exception {
    LOG.info("Test block replication when blockReceived is late");
    final short numDataNodes = 3;
    final short replication = 3;
    final Configuration conf = new Configuration();
    conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 1024);
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    final String testFile = "/replication-test-file";
    final Path testPath = new Path(testFile);
    final BlockManager bm = cluster.getNameNode().getNamesystem().getBlockManager();
    try {
        cluster.waitActive();
        // Artificially delay IBR from 1 DataNode.
        // This ensures that the client's completeFile() RPC will get to the
        // NN before some of the replicas are reported.
        NameNode nn = cluster.getNameNode();
        DataNode dn = cluster.getDataNodes().get(0);
        DatanodeProtocolClientSideTranslatorPB spy = InternalDataNodeTestUtils.spyOnBposToNN(dn, nn);
        DelayAnswer delayer = new GenericTestUtils.DelayAnswer(LOG);
        Mockito.doAnswer(delayer).when(spy).blockReceivedAndDeleted(Mockito.<DatanodeRegistration>anyObject(), Mockito.anyString(), Mockito.<StorageReceivedDeletedBlocks[]>anyObject());
        FileSystem fs = cluster.getFileSystem();
        // Create and close a small file with two blocks
        DFSTestUtil.createFile(fs, testPath, 1500, replication, 0);
        // schedule replication via BlockManager#computeReplicationWork
        BlockManagerTestUtil.computeAllPendingWork(bm);
        // Initially, should have some pending replication since the close()
        // is earlier than at least one of the reportReceivedDeletedBlocks calls
        assertTrue(pendingReplicationCount(bm) > 0);
        // release pending IBR.
        delayer.waitForCall();
        delayer.proceed();
        delayer.waitForResult();
        // make sure DataNodes do replication work if any exists
        for (DataNode d : cluster.getDataNodes()) {
            DataNodeTestUtils.triggerHeartbeat(d);
        }
        // Wait until there is nothing pending
        try {
            GenericTestUtils.waitFor(new Supplier<Boolean>() {

                @Override
                public Boolean get() {
                    return pendingReplicationCount(bm) == 0;
                }
            }, 100, 3000);
        } catch (TimeoutException e) {
            fail("timed out while waiting for no pending replication.");
        }
        // Check that none of the datanodes have serviced a replication request.
        // i.e. that the NameNode didn't schedule any spurious replication.
        assertNoReplicationWasPerformed(cluster);
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), NameNode (org.apache.hadoop.hdfs.server.namenode.NameNode), Configuration (org.apache.hadoop.conf.Configuration), MetricsRecordBuilder (org.apache.hadoop.metrics2.MetricsRecordBuilder), DelayAnswer (org.apache.hadoop.test.GenericTestUtils.DelayAnswer), DatanodeProtocolClientSideTranslatorPB (org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB), BlockManager (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager), DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode), FileSystem (org.apache.hadoop.fs.FileSystem), StorageReceivedDeletedBlocks (org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks), TimeoutException (java.util.concurrent.TimeoutException), Test (org.junit.Test)
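
GenericTestUtils.waitFor, used in the test above, polls a Supplier<Boolean> at a fixed interval and throws TimeoutException if the condition never turns true. A minimal sketch of the same polling loop, assuming only the JDK; the class and method names are illustrative:

import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public final class PollUntil {
    // Poll-until-true in the spirit of GenericTestUtils.waitFor: check the
    // condition every intervalMs, fail with TimeoutException after timeoutMs.
    static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws InterruptedException, TimeoutException {
        final long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }
}

The test above passes a 100 ms poll interval and a 3000 ms timeout to the real helper, then converts the TimeoutException into a JUnit failure.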

Aggregations

TimeoutException (java.util.concurrent.TimeoutException): 788
ExecutionException (java.util.concurrent.ExecutionException): 249
IOException (java.io.IOException): 184
Test (org.junit.Test): 149
ArrayList (java.util.ArrayList): 75
CountDownLatch (java.util.concurrent.CountDownLatch): 73
ExecutorService (java.util.concurrent.ExecutorService): 71
Future (java.util.concurrent.Future): 54
CancellationException (java.util.concurrent.CancellationException): 44
Test (org.testng.annotations.Test): 44
List (java.util.List): 39
HashMap (java.util.HashMap): 38
Map (java.util.Map): 38
File (java.io.File): 36
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 36
TimeUnit (java.util.concurrent.TimeUnit): 34
AtomicReference (java.util.concurrent.atomic.AtomicReference): 26
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 22
URI (java.net.URI): 21
RejectedExecutionException (java.util.concurrent.RejectedExecutionException): 21