Example 76 with Stream

use of java.util.stream.Stream in project ddf by codice.

the class Associated method getAssociations.

public Collection<Edge> getAssociations(String metacardId) throws UnsupportedQueryException, SourceUnavailableException, FederationException {
    // Query for the root metacard and its parents in a single request.
    Map<String, Metacard> metacardMap = query(withNonrestrictedTags(forRootAndParents(metacardId)));
    if (metacardMap.isEmpty()) {
        return Collections.emptyList();
    }
    Metacard root = metacardMap.get(metacardId);
    // Every result other than the root itself is a parent.
    Collection<Metacard> parents = metacardMap.values().stream().filter(m -> !m.getId().equals(metacardId)).collect(Collectors.toList());
    Map<String, Metacard> childMetacardMap = query(withNonrestrictedTags(forChildAssociations(root)));
    Collection<Edge> parentEdges = createParentEdges(parents, root);
    Collection<Edge> childrenEdges = createChildEdges(childMetacardMap.values(), root);
    // Concatenate the parent and child edge collections into one result.
    Collection<Edge> edges = Stream.of(parentEdges, childrenEdges).flatMap(Collection::stream).collect(Collectors.toList());
    return edges;
}
Also used : QueryRequestImpl(ddf.catalog.operation.impl.QueryRequestImpl) SourceUnavailableException(ddf.catalog.source.SourceUnavailableException) CatalogFramework(ddf.catalog.CatalogFramework) UpdateRequestImpl(ddf.catalog.operation.impl.UpdateRequestImpl) UnsupportedQueryException(ddf.catalog.source.UnsupportedQueryException) AttributeImpl(ddf.catalog.data.impl.AttributeImpl) HashMap(java.util.HashMap) Function(java.util.function.Function) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) MetacardVersion(ddf.catalog.core.versioning.MetacardVersion) SortBy(org.opengis.filter.sort.SortBy) EndpointUtil(org.codice.ddf.catalog.ui.util.EndpointUtil) Metacard(ddf.catalog.data.Metacard) Map(java.util.Map) HashCodeBuilder(org.apache.commons.lang3.builder.HashCodeBuilder) Result(ddf.catalog.data.Result) EqualsBuilder(org.apache.commons.lang3.builder.EqualsBuilder) QueryImpl(ddf.catalog.operation.impl.QueryImpl) ImmutableSet(com.google.common.collect.ImmutableSet) Collection(java.util.Collection) IngestException(ddf.catalog.source.IngestException) DeletedMetacard(ddf.catalog.core.versioning.DeletedMetacard) Set(java.util.Set) FederationException(ddf.catalog.federation.FederationException) Collectors(java.util.stream.Collectors) Sets(com.google.common.collect.Sets) Objects(java.util.Objects) TimeUnit(java.util.concurrent.TimeUnit) QueryResponse(ddf.catalog.operation.QueryResponse) List(java.util.List) Stream(java.util.stream.Stream) Attribute(ddf.catalog.data.Attribute) Optional(java.util.Optional) Filter(org.opengis.filter.Filter) Collections(java.util.Collections) Metacard(ddf.catalog.data.Metacard) DeletedMetacard(ddf.catalog.core.versioning.DeletedMetacard)
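
Note: the Stream idiom this example showcases is concatenating two collections into one list with Stream.of(...).flatMap(Collection::stream). A minimal self-contained sketch of the same pattern follows; the class name and element values are illustrative and not taken from ddf.

import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatMapConcatSketch {
    public static void main(String[] args) {
        Collection<String> parentEdges = Arrays.asList("parent-edge-1", "parent-edge-2");
        Collection<String> childEdges = Arrays.asList("child-edge-1");
        // Flatten both collections into a single list, preserving encounter order.
        List<String> edges = Stream.of(parentEdges, childEdges)
                .flatMap(Collection::stream)
                .collect(Collectors.toList());
        // Prints: [parent-edge-1, parent-edge-2, child-edge-1]
        System.out.println(edges);
    }
}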

Example 77 with Stream

use of java.util.stream.Stream in project cassandra by apache.

the class SecondaryIndexManager method reload.

/**
     * Drops and adds new indexes associated with the underlying CF
     */
public void reload() {
    // figure out what needs to be added and dropped.
    Indexes tableIndexes = baseCfs.metadata().indexes;
    indexes.keySet().stream().filter(indexName -> !tableIndexes.has(indexName)).forEach(this::removeIndex);
    // some may not have been created here yet, only added to schema
    for (IndexMetadata tableIndex : tableIndexes) addIndex(tableIndex);
}
Also used : java.util(java.util) Iterables(com.google.common.collect.Iterables) MoreExecutors(com.google.common.util.concurrent.MoreExecutors) JMXEnabledThreadPoolExecutor(org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutor) CompactionManager(org.apache.cassandra.db.compaction.CompactionManager) ColumnMetadata(org.apache.cassandra.schema.ColumnMetadata) SSTableSet(org.apache.cassandra.db.lifecycle.SSTableSet) LoggerFactory(org.slf4j.LoggerFactory) org.apache.cassandra.db(org.apache.cassandra.db) Constructor(java.lang.reflect.Constructor) Function(java.util.function.Function) StringUtils(org.apache.commons.lang3.StringUtils) SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) Indexes(org.apache.cassandra.schema.Indexes) org.apache.cassandra.db.rows(org.apache.cassandra.db.rows) OpOrder(org.apache.cassandra.utils.concurrent.OpOrder) Strings(com.google.common.base.Strings) org.apache.cassandra.index.transactions(org.apache.cassandra.index.transactions) IndexTarget(org.apache.cassandra.cql3.statements.IndexTarget) ProtocolVersion(org.apache.cassandra.transport.ProtocolVersion) Refs(org.apache.cassandra.utils.concurrent.Refs) DatabaseDescriptor(org.apache.cassandra.config.DatabaseDescriptor) InvalidRequestException(org.apache.cassandra.exceptions.InvalidRequestException) org.apache.cassandra.db.partitions(org.apache.cassandra.db.partitions) Longs(com.google.common.primitives.Longs) ImmutableSet(com.google.common.collect.ImmutableSet) Logger(org.slf4j.Logger) RowFilter(org.apache.cassandra.db.filter.RowFilter) FBUtilities(org.apache.cassandra.utils.FBUtilities) IndexMetadata(org.apache.cassandra.schema.IndexMetadata) java.util.concurrent(java.util.concurrent) Tracing(org.apache.cassandra.tracing.Tracing) Collectors(java.util.stream.Collectors) Maps(com.google.common.collect.Maps) Sets(com.google.common.collect.Sets) Futures(com.google.common.util.concurrent.Futures) NamedThreadFactory(org.apache.cassandra.concurrent.NamedThreadFactory) CassandraIndex(org.apache.cassandra.index.internal.CassandraIndex) Stream(java.util.stream.Stream) SinglePartitionPager(org.apache.cassandra.service.pager.SinglePartitionPager) StageManager(org.apache.cassandra.concurrent.StageManager) Joiner(com.google.common.base.Joiner) View(org.apache.cassandra.db.lifecycle.View) Indexes(org.apache.cassandra.schema.Indexes) IndexMetadata(org.apache.cassandra.schema.IndexMetadata)
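
Note: the Stream call here is effectively a set difference: every locally held index name that no longer appears in the schema's Indexes is handed to removeIndex via a method reference. Below is a rough standalone sketch of that diff-and-remove idiom, assuming a plain HashMap as the local registry (an assumption for illustration; the real field type is not shown above).

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReloadDiffSketch {

    // Hypothetical local registry of index name -> index object (String stands in for the real type).
    private final Map<String, String> indexes = new HashMap<>();

    // Remove every locally registered index whose name is no longer in the authoritative set.
    public void reload(Set<String> tableIndexNames) {
        // Copy the key set first: with a plain HashMap, removing entries while streaming the
        // live key set would throw ConcurrentModificationException.
        new HashSet<>(indexes.keySet()).stream()
                .filter(name -> !tableIndexNames.contains(name))
                .forEach(this::removeIndex);
    }

    private void removeIndex(String indexName) {
        indexes.remove(indexName);
    }
}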

Example 78 with Stream

use of java.util.stream.Stream in project cassandra by apache.

the class SSTableExport method main.

/**
     * Given arguments specifying an SSTable, and optionally an output file, export the contents of the SSTable to JSON.
     *
     * @param args
     *            command line arguments
     * @throws ConfigurationException
     *             on configuration failure (wrong params given)
     */
public static void main(String[] args) throws ConfigurationException {
    CommandLineParser parser = new PosixParser();
    try {
        cmd = parser.parse(options, args);
    } catch (ParseException e1) {
        System.err.println(e1.getMessage());
        printUsage();
        System.exit(1);
    }
    if (cmd.getArgs().length != 1) {
        System.err.println("You must supply exactly one sstable");
        printUsage();
        System.exit(1);
    }
    String[] keys = cmd.getOptionValues(KEY_OPTION);
    HashSet<String> excludes = new HashSet<>(Arrays.asList(cmd.getOptionValues(EXCLUDE_KEY_OPTION) == null ? new String[0] : cmd.getOptionValues(EXCLUDE_KEY_OPTION)));
    String ssTableFileName = new File(cmd.getArgs()[0]).getAbsolutePath();
    if (!new File(ssTableFileName).exists()) {
        System.err.println("Cannot find file " + ssTableFileName);
        System.exit(1);
    }
    Descriptor desc = Descriptor.fromFilename(ssTableFileName);
    try {
        TableMetadata metadata = metadataFromSSTable(desc);
        if (cmd.hasOption(ENUMERATE_KEYS_OPTION)) {
            try (KeyIterator iter = new KeyIterator(desc, metadata)) {
                JsonTransformer.keysToJson(null, iterToStream(iter), cmd.hasOption(RAW_TIMESTAMPS), metadata, System.out);
            }
        } else {
            SSTableReader sstable = SSTableReader.openNoValidation(desc, TableMetadataRef.forOfflineTools(metadata));
            IPartitioner partitioner = sstable.getPartitioner();
            final ISSTableScanner currentScanner;
            if ((keys != null) && (keys.length > 0)) {
                List<AbstractBounds<PartitionPosition>> bounds = Arrays.stream(keys).filter(key -> !excludes.contains(key)).map(metadata.partitionKeyType::fromString).map(partitioner::decorateKey).sorted().map(DecoratedKey::getToken).map(token -> new Bounds<>(token.minKeyBound(), token.maxKeyBound())).collect(Collectors.toList());
                currentScanner = sstable.getScanner(bounds.iterator());
            } else {
                currentScanner = sstable.getScanner();
            }
            Stream<UnfilteredRowIterator> partitions = iterToStream(currentScanner).filter(i -> excludes.isEmpty() || !excludes.contains(metadata.partitionKeyType.getString(i.partitionKey().getKey())));
            if (cmd.hasOption(DEBUG_OUTPUT_OPTION)) {
                AtomicLong position = new AtomicLong();
                partitions.forEach(partition -> {
                    position.set(currentScanner.getCurrentPosition());
                    if (!partition.partitionLevelDeletion().isLive()) {
                        System.out.println("[" + metadata.partitionKeyType.getString(partition.partitionKey().getKey()) + "]@" + position.get() + " " + partition.partitionLevelDeletion());
                    }
                    if (!partition.staticRow().isEmpty()) {
                        System.out.println("[" + metadata.partitionKeyType.getString(partition.partitionKey().getKey()) + "]@" + position.get() + " " + partition.staticRow().toString(metadata, true));
                    }
                    partition.forEachRemaining(row -> {
                        System.out.println("[" + metadata.partitionKeyType.getString(partition.partitionKey().getKey()) + "]@" + position.get() + " " + row.toString(metadata, false, true));
                        position.set(currentScanner.getCurrentPosition());
                    });
                });
            } else {
                JsonTransformer.toJson(currentScanner, partitions, cmd.hasOption(RAW_TIMESTAMPS), metadata, System.out);
            }
        }
    } catch (IOException e) {
        // throwing exception outside main with broken pipe causes windows cmd to hang
        e.printStackTrace(System.err);
    }
    System.exit(0);
}
Also used : ISSTableScanner(org.apache.cassandra.io.sstable.ISSTableScanner) java.util(java.util) org.apache.commons.cli(org.apache.commons.cli) SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) KeyIterator(org.apache.cassandra.io.sstable.KeyIterator) UTF8Type(org.apache.cassandra.db.marshal.UTF8Type) DecoratedKey(org.apache.cassandra.db.DecoratedKey) ConfigurationException(org.apache.cassandra.exceptions.ConfigurationException) UnfilteredRowIterator(org.apache.cassandra.db.rows.UnfilteredRowIterator) MetadataComponent(org.apache.cassandra.io.sstable.metadata.MetadataComponent) Descriptor(org.apache.cassandra.io.sstable.Descriptor) StreamSupport(java.util.stream.StreamSupport) DatabaseDescriptor(org.apache.cassandra.config.DatabaseDescriptor) SerializationHeader(org.apache.cassandra.db.SerializationHeader) FBUtilities(org.apache.cassandra.utils.FBUtilities) MetadataType(org.apache.cassandra.io.sstable.metadata.MetadataType) ISSTableScanner(org.apache.cassandra.io.sstable.ISSTableScanner) IOException(java.io.IOException) org.apache.cassandra.dht(org.apache.cassandra.dht) Collectors(java.util.stream.Collectors) File(java.io.File) AtomicLong(java.util.concurrent.atomic.AtomicLong) Stream(java.util.stream.Stream) PartitionPosition(org.apache.cassandra.db.PartitionPosition) ColumnIdentifier(org.apache.cassandra.cql3.ColumnIdentifier) TableMetadataRef(org.apache.cassandra.schema.TableMetadataRef) TableMetadata(org.apache.cassandra.schema.TableMetadata) UnfilteredRowIterator(org.apache.cassandra.db.rows.UnfilteredRowIterator) SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) TableMetadata(org.apache.cassandra.schema.TableMetadata) KeyIterator(org.apache.cassandra.io.sstable.KeyIterator) DecoratedKey(org.apache.cassandra.db.DecoratedKey) IOException(java.io.IOException) AtomicLong(java.util.concurrent.atomic.AtomicLong) Descriptor(org.apache.cassandra.io.sstable.Descriptor) DatabaseDescriptor(org.apache.cassandra.config.DatabaseDescriptor) File(java.io.File)
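
Note: this example leans on an iterToStream helper (not shown in the snippet) to lift KeyIterator and ISSTableScanner into Streams. A plausible minimal implementation using only JDK calls is sketched below; the method name matches the call sites above, but the body is a guess rather than the project's actual code.

import java.util.Arrays;
import java.util.Iterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class IterToStreamSketch {

    // Wrap an Iterator in a sequential, ordered Stream without copying its elements.
    static <T> Stream<T> iterToStream(Iterator<T> iter) {
        Spliterator<T> spliterator =
                Spliterators.spliteratorUnknownSize(iter, Spliterator.ORDERED | Spliterator.IMMUTABLE);
        return StreamSupport.stream(spliterator, false);
    }

    public static void main(String[] args) {
        Iterator<String> iter = Arrays.asList("a", "b", "c").iterator();
        // Prints "a" and "c"; "b" is filtered out.
        iterToStream(iter).filter(s -> !"b".equals(s)).forEach(System.out::println);
    }
}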

Example 79 with Stream

use of java.util.stream.Stream in project crate by crate.

the class TransportKillNodeAction method broadcast.

public void broadcast(Request request, ActionListener<KillResponse> listener, Collection<String> excludedNodeIds) {
    Stream<DiscoveryNode> nodes = StreamSupport.stream(clusterService.state().nodes().spliterator(), false);
    Collection<DiscoveryNode> filteredNodes = nodes.filter(node -> !excludedNodeIds.contains(node.getId())).collect(Collectors.toList());
    listener = new MultiActionListener<>(filteredNodes.size(), KillResponse.MERGE_FUNCTION, listener);
    DefaultTransportResponseHandler<KillResponse> responseHandler = new DefaultTransportResponseHandler<KillResponse>(listener) {

        @Override
        public KillResponse newInstance() {
            return new KillResponse(0);
        }
    };
    for (DiscoveryNode node : filteredNodes) {
        transportService.sendRequest(node, name, request, responseHandler);
    }
}
Also used : java.util(java.util) ListenableFuture(com.google.common.util.concurrent.ListenableFuture) AbstractComponent(org.elasticsearch.common.component.AbstractComponent) TransportRequest(org.elasticsearch.transport.TransportRequest) DefaultTransportResponseHandler(io.crate.executor.transport.DefaultTransportResponseHandler) Callable(java.util.concurrent.Callable) MultiActionListener(io.crate.executor.MultiActionListener) Collectors(java.util.stream.Collectors) FutureCallback(com.google.common.util.concurrent.FutureCallback) NodeActionRequestHandler(io.crate.executor.transport.NodeActionRequestHandler) Futures(com.google.common.util.concurrent.Futures) DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) Settings(org.elasticsearch.common.settings.Settings) Stream(java.util.stream.Stream) NodeAction(io.crate.executor.transport.NodeAction) ClusterService(org.elasticsearch.cluster.ClusterService) JobContextService(io.crate.jobs.JobContextService) ThreadPool(org.elasticsearch.threadpool.ThreadPool) StreamSupport(java.util.stream.StreamSupport) TransportService(org.elasticsearch.transport.TransportService) Nonnull(javax.annotation.Nonnull) ActionListener(org.elasticsearch.action.ActionListener) Nullable(javax.annotation.Nullable) DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) DefaultTransportResponseHandler(io.crate.executor.transport.DefaultTransportResponseHandler)
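
Note: the Stream here comes from StreamSupport.stream(iterable.spliterator(), false), which turns the cluster's node Iterable into a sequential Stream before filtering out excluded node ids. A small generic sketch of that filter-by-exclusion pattern, with node ids reduced to plain Strings for illustration:

import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public class BroadcastFilterSketch {
    public static void main(String[] args) {
        // Placeholder node ids standing in for DiscoveryNode instances.
        Iterable<String> nodes = Arrays.asList("node-1", "node-2", "node-3");
        Set<String> excludedNodeIds = Collections.singleton("node-2");
        // Turn the Iterable into a sequential Stream and drop the excluded ids.
        Collection<String> targets = StreamSupport.stream(nodes.spliterator(), false)
                .filter(id -> !excludedNodeIds.contains(id))
                .collect(Collectors.toList());
        // Prints: [node-1, node-3]
        System.out.println(targets);
    }
}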

Example 80 with Stream

use of java.util.stream.Stream in project crate by crate.

the class BlobPathITest method testDataIsStoredInTableSpecificBlobPath.

@Test
public void testDataIsStoredInTableSpecificBlobPath() throws Exception {
    launchNodeAndInitClient(configureGlobalBlobPath());
    Path tableBlobPath = createTempDir("tableBlobPath");
    Settings indexSettings = Settings.builder().put(oneShardAndZeroReplicas()).put(BlobIndicesService.SETTING_INDEX_BLOBS_PATH, tableBlobPath.toString()).build();
    blobAdminClient.createBlobTable("test", indexSettings).get();
    client.put("test", "abcdefg");
    String digest = "2fb5e13419fc89246865e7a324f476ec624e8740";
    try (Stream<Path> files = Files.walk(tableBlobPath)) {
        assertThat(files.anyMatch(i -> digest.equals(i.getFileName().toString())), is(true));
    }
}
Also used : Path(java.nio.file.Path) HttpServerTransport(org.elasticsearch.http.HttpServerTransport) Files(java.nio.file.Files) InetSocketTransportAddress(org.elasticsearch.common.transport.InetSocketTransportAddress) SETTING_NUMBER_OF_SHARDS(org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS) Matchers(org.hamcrest.Matchers) Test(org.junit.Test) IOException(java.io.IOException) BlobIndicesService(io.crate.blob.v2.BlobIndicesService) InetSocketAddress(java.net.InetSocketAddress) Collectors(java.util.stream.Collectors) BlobAdminClient(io.crate.blob.v2.BlobAdminClient) SETTING_NUMBER_OF_REPLICAS(org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS) List(java.util.List) Settings(org.elasticsearch.common.settings.Settings) Stream(java.util.stream.Stream) ESIntegTestCase(org.elasticsearch.test.ESIntegTestCase) Matchers.lessThan(org.hamcrest.Matchers.lessThan) Is.is(org.hamcrest.core.Is.is) Matchers.greaterThan(org.hamcrest.Matchers.greaterThan) Path(java.nio.file.Path) Settings(org.elasticsearch.common.settings.Settings) Test(org.junit.Test)
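
Note: the Stream points worth copying from this test are that Files.walk must be closed (hence the try-with-resources, since the Stream holds open directory handles) and that anyMatch short-circuits on the first hit. A standalone sketch of that idiom, with a placeholder directory and digest:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class WalkAnyMatchSketch {
    public static void main(String[] args) throws IOException {
        // Placeholder directory; point this at any existing path to try it out.
        Path blobPath = Paths.get("/tmp/blobs");
        String digest = "2fb5e13419fc89246865e7a324f476ec624e8740";
        // Files.walk lazily traverses the tree; anyMatch stops as soon as a match is found.
        try (Stream<Path> files = Files.walk(blobPath)) {
            boolean found = files.anyMatch(p -> digest.equals(p.getFileName().toString()));
            System.out.println(found);
        }
    }
}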

Aggregations

Stream (java.util.stream.Stream): 161
Collectors (java.util.stream.Collectors): 98
List (java.util.List): 89
ArrayList (java.util.ArrayList): 66
Map (java.util.Map): 66
Set (java.util.Set): 59
IOException (java.io.IOException): 58
Optional (java.util.Optional): 45
Collections (java.util.Collections): 43
HashMap (java.util.HashMap): 43
Arrays (java.util.Arrays): 33
HashSet (java.util.HashSet): 33
File (java.io.File): 32
Path (java.nio.file.Path): 32
Function (java.util.function.Function): 28
Logger (org.slf4j.Logger): 26
LoggerFactory (org.slf4j.LoggerFactory): 26
java.util (java.util): 25
Predicate (java.util.function.Predicate): 23
Objects (java.util.Objects): 22