Example 1 with FlinkRuntimeException

Use of org.apache.flink.util.FlinkRuntimeException in project flink by apache.

From class StreamContextEnvironment, method serializeConfig.

private static byte[] serializeConfig(Serializable config) {
    try (final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        final ObjectOutputStream oos = new ObjectOutputStream(bos)) {
        oos.writeObject(config);
        oos.flush();
        return bos.toByteArray();
    } catch (IOException e) {
        throw new FlinkRuntimeException("Cannot serialize configuration.", e);
    }
}
Also used: FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException), ByteArrayOutputStream (java.io.ByteArrayOutputStream), IOException (java.io.IOException), ObjectOutputStream (java.io.ObjectOutputStream)
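
For context, a minimal sketch of the inverse operation, assuming the same Serializable configuration type that serializeConfig wrote. The method name deserializeConfig is hypothetical and not part of the Flink source above; it only illustrates the matching read path and the same wrap-checked-IO-in-FlinkRuntimeException convention.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import org.apache.flink.util.FlinkRuntimeException;

// Hypothetical counterpart to serializeConfig: reads the object back and
// wraps any checked failure in the same unchecked FlinkRuntimeException.
private static Serializable deserializeConfig(byte[] bytes) {
    try (final ObjectInputStream ois =
            new ObjectInputStream(new ByteArrayInputStream(bytes))) {
        return (Serializable) ois.readObject();
    } catch (IOException | ClassNotFoundException e) {
        throw new FlinkRuntimeException("Cannot deserialize configuration.", e);
    }
}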

Example 2 with FlinkRuntimeException

Use of org.apache.flink.util.FlinkRuntimeException in project flink by apache.

From class FileSystemLookupFunction, method checkCacheReload.

private void checkCacheReload() {
    if (nextLoadTime > System.currentTimeMillis()) {
        return;
    }
    if (nextLoadTime > 0) {
        LOG.info("Lookup join cache has expired after {} minute(s), reloading", reloadInterval.toMinutes());
    } else {
        LOG.info("Populating lookup join cache");
    }
    int numRetry = 0;
    while (true) {
        cache.clear();
        try {
            long count = 0;
            GenericRowData reuse = new GenericRowData(rowType.getFieldCount());
            partitionReader.open(partitionFetcher.fetch(fetcherContext));
            RowData row;
            while ((row = partitionReader.read(reuse)) != null) {
                count++;
                RowData rowData = serializer.copy(row);
                RowData key = extractLookupKey(rowData);
                List<RowData> rows = cache.computeIfAbsent(key, k -> new ArrayList<>());
                rows.add(rowData);
            }
            partitionReader.close();
            nextLoadTime = System.currentTimeMillis() + reloadInterval.toMillis();
            LOG.info("Loaded {} row(s) into lookup join cache", count);
            return;
        } catch (Exception e) {
            if (numRetry >= MAX_RETRIES) {
                throw new FlinkRuntimeException(String.format("Failed to load table into cache after %d retries", numRetry), e);
            }
            numRetry++;
            long toSleep = numRetry * RETRY_INTERVAL.toMillis();
            LOG.warn(String.format("Failed to load table into cache, will retry in %d seconds", toSleep / 1000), e);
            try {
                Thread.sleep(toSleep);
            } catch (InterruptedException ex) {
                LOG.warn("Interrupted while waiting to retry failed cache load, aborting");
                throw new FlinkRuntimeException(ex);
            }
        }
    }
}
Also used: RowData (org.apache.flink.table.data.RowData), GenericRowData (org.apache.flink.table.data.GenericRowData), FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException)
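
The retry logic above follows a common shape: a bounded retry count, linear backoff, and FlinkRuntimeException as the unchecked wrapper for the final failure. A minimal, self-contained sketch of that shape factored into a generic helper; the class and method names are illustrative, not Flink API.

import java.time.Duration;
import java.util.concurrent.Callable;
import org.apache.flink.util.FlinkRuntimeException;

// Illustrative helper (not Flink API): retries an action with linear
// backoff, as checkCacheReload does above, and rethrows the last failure
// wrapped in an unchecked FlinkRuntimeException once the budget is spent.
final class RetrySketch {

    static <T> T retryWithBackoff(Callable<T> action, int maxRetries, Duration interval) {
        int numRetry = 0;
        while (true) {
            try {
                return action.call();
            } catch (Exception e) {
                if (numRetry >= maxRetries) {
                    throw new FlinkRuntimeException(
                            String.format("Failed after %d retries", numRetry), e);
                }
                numRetry++;
                try {
                    // Linear backoff, matching the example: 1x, 2x, 3x the interval.
                    Thread.sleep(numRetry * interval.toMillis());
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                    throw new FlinkRuntimeException(ex);
                }
            }
        }
    }
}

A call such as RetrySketch.retryWithBackoff(() -> reloadCache(), 3, Duration.ofSeconds(10)) would mirror the behavior of the loop above, with reloadCache() standing in for the fetch-and-populate body.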

Example 3 with FlinkRuntimeException

Use of org.apache.flink.util.FlinkRuntimeException in project flink by apache.

From class AvroFactory, method getSpecificDataForClass.

/**
 * Creates a {@link SpecificData} object for a given class. Possibly uses the specific data from
 * the generated class with logical conversions applied (avro >= 1.9.x).
 *
 * <p>Copied over from {@code SpecificData#getForClass(Class<T> c)}. We do not use the method
 * directly, because we want to stay API backwards compatible with older Avro versions, which did
 * not have this method.
 */
public static <T extends SpecificData> SpecificData getSpecificDataForClass(Class<T> type, ClassLoader cl) {
    try {
        Field specificDataField = type.getDeclaredField("MODEL$");
        specificDataField.setAccessible(true);
        return (SpecificData) specificDataField.get((Object) null);
    } catch (IllegalAccessException e) {
        throw new FlinkRuntimeException("Could not access the MODEL$ field of avro record", e);
    } catch (NoSuchFieldException e) {
        return new SpecificData(cl);
    }
}
Also used: Field (java.lang.reflect.Field), SpecificData (org.apache.avro.specific.SpecificData), FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException)
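
A runnable sketch of the same reflective fallback in isolation. ModelHolder is a hypothetical stand-in for an Avro-generated record class: with avro >= 1.9, generated classes declare a static MODEL$ field of type SpecificData, which is exactly what the lookup above targets.

import java.lang.reflect.Field;
import org.apache.avro.specific.SpecificData;
import org.apache.flink.util.FlinkRuntimeException;

// Stand-in for an Avro-generated record class (avro >= 1.9), which
// declares a static MODEL$ field carrying its logical-type conversions.
final class ModelHolder {
    static final SpecificData MODEL$ = new SpecificData();
}

final class ModelLookupSketch {
    public static void main(String[] args) {
        try {
            Field f = ModelHolder.class.getDeclaredField("MODEL$");
            f.setAccessible(true);
            // MODEL$ is static, so the instance argument is null,
            // exactly as in getSpecificDataForClass above.
            SpecificData model = (SpecificData) f.get(null);
            System.out.println("Found MODEL$: " + model);
        } catch (NoSuchFieldException e) {
            // Older Avro: no MODEL$ field; getSpecificDataForClass falls
            // back to new SpecificData(cl) in this case.
            System.out.println("No MODEL$ field present");
        } catch (IllegalAccessException e) {
            throw new FlinkRuntimeException("Could not access the MODEL$ field of avro record", e);
        }
    }
}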

Example 4 with FlinkRuntimeException

Use of org.apache.flink.util.FlinkRuntimeException in project flink by apache.

From class KeyGroupPartitionedPriorityQueue, method getSubsetForKeyGroup.

@Nonnull
@Override
public Set<T> getSubsetForKeyGroup(int keyGroupId) {
    HashSet<T> result = new HashSet<>();
    PQ partitionQueue = keyGroupedHeaps[globalKeyGroupToLocalIndex(keyGroupId)];
    try (CloseableIterator<T> iterator = partitionQueue.iterator()) {
        while (iterator.hasNext()) {
            result.add(iterator.next());
        }
    } catch (Exception e) {
        throw new FlinkRuntimeException("Exception while iterating key group.", e);
    }
    return result;
}
Also used: FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException), HashSet (java.util.HashSet), Nonnull (javax.annotation.Nonnull)
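
The pattern here is drain-and-wrap: exhaust a CloseableIterator inside try-with-resources, and convert the checked Exception that close() may throw into an unchecked FlinkRuntimeException. A minimal sketch against a list-backed iterator, assuming the CloseableIterator.fromList factory in org.apache.flink.util; the data and class name are illustrative.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.flink.util.CloseableIterator;
import org.apache.flink.util.FlinkRuntimeException;

// Illustrative drain-and-wrap; the close callback has nothing to release here.
final class DrainSketch {
    public static void main(String[] args) {
        Set<String> result = new HashSet<>();
        try (CloseableIterator<String> it =
                CloseableIterator.fromList(Arrays.asList("a", "b", "c"), s -> {})) {
            while (it.hasNext()) {
                result.add(it.next());
            }
        } catch (Exception e) {
            // close() declares Exception, so the checked type must be
            // wrapped to keep the enclosing method signature clean.
            throw new FlinkRuntimeException("Exception while iterating.", e);
        }
        System.out.println(result);
    }
}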

Example 5 with FlinkRuntimeException

Use of org.apache.flink.util.FlinkRuntimeException in project flink by apache.

From class FileSourceSplitState, method toFileSourceSplit.

/**
 * Creates a new FileSourceSplit that carries the current reader position
 * (offset and records to skip after the offset) as its checkpointed position.
 */
@SuppressWarnings("unchecked")
public SplitT toFileSourceSplit() {
    final CheckpointedPosition position = (offset == CheckpointedPosition.NO_OFFSET && recordsToSkipAfterOffset == 0) ? null : new CheckpointedPosition(offset, recordsToSkipAfterOffset);
    final FileSourceSplit updatedSplit = split.updateWithCheckpointedPosition(position);
    // some sanity checks to avoid surprises and not accidentally lose split information
    if (updatedSplit == null) {
        throw new FlinkRuntimeException("Split returned 'null' in updateWithCheckpointedPosition(): " + split);
    }
    if (updatedSplit.getClass() != split.getClass()) {
        throw new FlinkRuntimeException(String.format("Split returned different type in updateWithCheckpointedPosition(). " + "Split type is %s, returned type is %s", split.getClass().getName(), updatedSplit.getClass().getName()));
    }
    return (SplitT) updatedSplit;
}
Also used: CheckpointedPosition (org.apache.flink.connector.file.src.util.CheckpointedPosition), FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException)
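
The two checks guard the contract of updateWithCheckpointedPosition: the returned split must be non-null and of the same concrete class, otherwise split state would be silently lost or mistyped after the unchecked cast. A sketch of the same checks factored into a reusable generic guard; the class and method names are illustrative, not Flink API.

import org.apache.flink.util.FlinkRuntimeException;

// Illustrative generic guard capturing the two sanity checks from
// toFileSourceSplit: reject null and reject concrete-type drift.
final class SplitGuards {
    static <T> T checkSameTypeNonNull(T original, T updated, String method) {
        if (updated == null) {
            throw new FlinkRuntimeException(
                    "Split returned 'null' in " + method + "(): " + original);
        }
        if (updated.getClass() != original.getClass()) {
            throw new FlinkRuntimeException(String.format(
                    "Split returned different type in %s(). Split type is %s, returned type is %s",
                    method, original.getClass().getName(), updated.getClass().getName()));
        }
        return updated;
    }
}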

Aggregations

FlinkRuntimeException (org.apache.flink.util.FlinkRuntimeException) 78
IOException (java.io.IOException) 28
Test (org.junit.Test) 13
JobID (org.apache.flink.api.common.JobID) 10
HashMap (java.util.HashMap) 8
ArrayList (java.util.ArrayList) 7
CompletableFuture (java.util.concurrent.CompletableFuture) 7
ExecutionException (java.util.concurrent.ExecutionException) 7
Nonnull (javax.annotation.Nonnull) 7
Configuration (org.apache.flink.configuration.Configuration) 6
Collectors (java.util.stream.Collectors) 5
JobGraph (org.apache.flink.runtime.jobgraph.JobGraph) 5
JobResultStore (org.apache.flink.runtime.highavailability.JobResultStore) 4
RocksDBException (org.rocksdb.RocksDBException) 4
List (java.util.List) 3
Map (java.util.Map) 3
CheckpointMetrics (org.apache.flink.runtime.checkpoint.CheckpointMetrics) 3
TaskStateSnapshot (org.apache.flink.runtime.checkpoint.TaskStateSnapshot) 3
ExecutionAttemptID (org.apache.flink.runtime.executiongraph.ExecutionAttemptID) 3
JobResult (org.apache.flink.runtime.jobmaster.JobResult) 3