
Example 41 with MessageInput

Use of org.graylog2.plugin.inputs.MessageInput in project graylog2-server by Graylog2.

From the class InputEventListenerTest, method inputUpdatedDoesNotStartLocalInputOnOtherNode.

@Test
public void inputUpdatedDoesNotStartLocalInputOnOtherNode() throws Exception {
    final String inputId = "input-id";
    final Input input = mock(Input.class);
    @SuppressWarnings("unchecked") final IOState<MessageInput> inputState = mock(IOState.class);
    when(inputState.getState()).thenReturn(IOState.Type.RUNNING);
    when(inputService.find(inputId)).thenReturn(input);
    when(nodeId.toString()).thenReturn("node-id");
    when(input.getNodeId()).thenReturn("other-node-id");
    when(input.isGlobal()).thenReturn(false);
    final MessageInput messageInput = mock(MessageInput.class);
    when(inputService.getMessageInput(input)).thenReturn(messageInput);
    listener.inputUpdated(InputUpdated.create(inputId));
    verify(inputLauncher, never()).launch(messageInput);
}
Also used: MessageInput(org.graylog2.plugin.inputs.MessageInput) Test(org.junit.Test)
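
The test above stubs its collaborators with Mockito and then asserts that the launcher was never invoked. Below is a minimal, self-contained sketch of the same stub-then-verify-never pattern; the Registry and Launcher interfaces are hypothetical stand-ins for InputService and InputLauncher, not the Graylog classes.

import static org.mockito.Mockito.*;

import org.junit.Test;

public class VerifyNeverSketchTest {

    // Hypothetical collaborators standing in for InputService and InputLauncher.
    interface Registry { String ownerOf(String id); }
    interface Launcher { void launch(String id); }

    @Test
    public void doesNotLaunchWhenOwnedByAnotherNode() {
        final Registry registry = mock(Registry.class);
        final Launcher launcher = mock(Launcher.class);
        when(registry.ownerOf("input-id")).thenReturn("other-node-id");

        // Code under test would consult the registry and skip the launch on this node.
        if ("node-id".equals(registry.ownerOf("input-id"))) {
            launcher.launch("input-id");
        }

        // verify(..., never()) fails the test if launch() was invoked at all.
        verify(launcher, never()).launch("input-id");
    }
}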

Example 42 with MessageInput

Use of org.graylog2.plugin.inputs.MessageInput in project graylog2-server by Graylog2.

From the class KafkaTransport, method doLaunch.

@Override
public void doLaunch(final MessageInput input) throws MisfireException {
    serverStatus.awaitRunning(new Runnable() {

        @Override
        public void run() {
            lifecycleStateChange(Lifecycle.RUNNING);
        }
    });
    // listen for lifecycle changes
    serverEventBus.register(this);
    final Properties props = new Properties();
    props.put("group.id", GROUP_ID);
    props.put("client.id", "gl2-" + nodeId + "-" + input.getId());
    props.put("fetch.min.bytes", String.valueOf(configuration.getInt(CK_FETCH_MIN_BYTES)));
    props.put("fetch.wait.max.ms", String.valueOf(configuration.getInt(CK_FETCH_WAIT_MAX)));
    props.put("zookeeper.connect", configuration.getString(CK_ZOOKEEPER));
    // Default auto commit interval is 60 seconds. Reduce to 1 second to minimize message duplication
    // if something breaks.
    props.put("auto.commit.interval.ms", "1000");
    // Set a consumer timeout to avoid blocking on the consumer iterator.
    props.put("consumer.timeout.ms", "1000");
    final int numThreads = configuration.getInt(CK_THREADS);
    final ConsumerConfig consumerConfig = new ConsumerConfig(props);
    cc = Consumer.createJavaConsumerConnector(consumerConfig);
    final TopicFilter filter = new Whitelist(configuration.getString(CK_TOPIC_FILTER));
    final List<KafkaStream<byte[], byte[]>> streams = cc.createMessageStreamsByFilter(filter, numThreads);
    final ExecutorService executor = executorService(numThreads);
    // this is being used during shutdown to first stop all submitted jobs before committing the offsets back to zookeeper
    // and then shutting down the connection.
    // this is to avoid yanking away the connection from the consumer runnables
    stopLatch = new CountDownLatch(streams.size());
    for (final KafkaStream<byte[], byte[]> stream : streams) {
        executor.submit(new Runnable() {

            @Override
            public void run() {
                final ConsumerIterator<byte[], byte[]> consumerIterator = stream.iterator();
                boolean retry;
                do {
                    retry = false;
                    try {
                        // noinspection WhileLoopReplaceableByForEach
                        while (consumerIterator.hasNext()) {
                            if (paused) {
                                // we try not to spin here, so we wait until the lifecycle goes back to running.
                                LOG.debug("Message processing is paused, blocking until message processing is turned back on.");
                                Uninterruptibles.awaitUninterruptibly(pausedLatch);
                            }
                            // check for being stopped before actually getting the message, otherwise we could end up losing that message
                            if (stopped) {
                                break;
                            }
                            if (isThrottled()) {
                                blockUntilUnthrottled();
                            }
                            // process the message, this will immediately mark the message as having been processed. this gets tricky
                            // if we get an exception about processing it down below.
                            final MessageAndMetadata<byte[], byte[]> message = consumerIterator.next();
                            final byte[] bytes = message.message();
                            // it is possible that the message is null
                            if (bytes == null) {
                                continue;
                            }
                            totalBytesRead.addAndGet(bytes.length);
                            lastSecBytesReadTmp.addAndGet(bytes.length);
                            final RawMessage rawMessage = new RawMessage(bytes);
                            // TODO implement throttling
                            input.processRawMessage(rawMessage);
                        }
                    } catch (ConsumerTimeoutException e) {
                        // Happens when there is nothing to consume, retry to check again.
                        retry = true;
                    } catch (Exception e) {
                        LOG.error("Kafka consumer error, stopping consumer thread.", e);
                    }
                } while (retry && !stopped);
                // explicitly commit our offsets when stopping.
                // this might trigger a couple of times, but it won't hurt
                cc.commitOffsets();
                stopLatch.countDown();
            }
        });
    }
    scheduler.scheduleAtFixedRate(new Runnable() {

        @Override
        public void run() {
            lastSecBytesRead.set(lastSecBytesReadTmp.getAndSet(0));
        }
    }, 1, 1, TimeUnit.SECONDS);
}
Also used: TopicFilter(kafka.consumer.TopicFilter) MessageAndMetadata(kafka.message.MessageAndMetadata) KafkaStream(kafka.consumer.KafkaStream) Properties(java.util.Properties) CountDownLatch(java.util.concurrent.CountDownLatch) ConsumerTimeoutException(kafka.consumer.ConsumerTimeoutException) MisfireException(org.graylog2.plugin.inputs.MisfireException) ConsumerIterator(kafka.consumer.ConsumerIterator) InstrumentedExecutorService(com.codahale.metrics.InstrumentedExecutorService) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) ExecutorService(java.util.concurrent.ExecutorService) Whitelist(kafka.consumer.Whitelist) ConsumerConfig(kafka.consumer.ConsumerConfig) RawMessage(org.graylog2.plugin.journal.RawMessage)
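
The consumer wiring above hinges on a handful of properties. In the transport, the ZooKeeper address and fetch settings come from the input configuration (the CK_* keys) and the group id from a constant; the sketch below hard-codes placeholder values (not taken from the example) just to show the resulting Properties in isolation.

import java.util.Properties;

public class LegacyKafkaConsumerProps {

    public static Properties build(String nodeId, String inputId) {
        final Properties props = new Properties();
        // Consumers sharing a group id divide the topic partitions among themselves.
        props.put("group.id", "graylog2");                        // placeholder group id
        props.put("client.id", "gl2-" + nodeId + "-" + inputId);  // unique per node and input
        props.put("zookeeper.connect", "localhost:2181");         // placeholder ZooKeeper address
        props.put("fetch.min.bytes", "1");                        // placeholder fetch settings
        props.put("fetch.wait.max.ms", "100");
        // Commit offsets every second so a crash replays at most about one second of messages.
        props.put("auto.commit.interval.ms", "1000");
        // Time out the consumer iterator instead of blocking forever; this is what lets
        // the loop in doLaunch() notice the stopped and paused flags.
        props.put("consumer.timeout.ms", "1000");
        return props;
    }
}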

Example 43 with MessageInput

Use of org.graylog2.plugin.inputs.MessageInput in project graylog2-server by Graylog2.

From the class UdpTransportTest, method launchTransportForBootStrapTest.

private UdpTransport launchTransportForBootStrapTest(final ChannelHandler channelHandler) throws MisfireException {
    final UdpTransport transport = new UdpTransport(CONFIGURATION, throughputCounter, new LocalMetricRegistry()) {

        @Override
        protected LinkedHashMap<String, Callable<? extends ChannelHandler>> getBaseChannelHandlers(MessageInput input) {
            final LinkedHashMap<String, Callable<? extends ChannelHandler>> handlers = new LinkedHashMap<>();
            handlers.put("counter", Callables.returning(channelHandler));
            handlers.putAll(super.getFinalChannelHandlers(input));
            return handlers;
        }
    };
    final MessageInput messageInput = mock(MessageInput.class);
    when(messageInput.getId()).thenReturn("TEST");
    when(messageInput.getName()).thenReturn("TEST");
    transport.launch(messageInput);
    return transport;
}
Also used: MessageInput(org.graylog2.plugin.inputs.MessageInput) ChannelHandler(org.jboss.netty.channel.ChannelHandler) Callable(java.util.concurrent.Callable) LocalMetricRegistry(org.graylog2.plugin.LocalMetricRegistry) LinkedHashMap(java.util.LinkedHashMap)
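
launchTransportForBootStrapTest injects a counting handler by subclassing the transport and overriding its protected handler factory. Below is a generic sketch of that technique (override a protected factory method in a test-only subclass); the Transport class and Runnable handler type are hypothetical, not the Netty or Graylog ones.

import java.util.LinkedHashMap;
import java.util.concurrent.Callable;

public class FactoryOverrideSketch {

    // Hypothetical production class exposing a protected factory hook.
    static class Transport {
        protected LinkedHashMap<String, Callable<Runnable>> baseHandlers() {
            return new LinkedHashMap<>();
        }
    }

    // Test-only subclass: prepend a probe handler, keep the rest of the chain intact.
    static Transport withProbe(final Runnable probe) {
        return new Transport() {
            @Override
            protected LinkedHashMap<String, Callable<Runnable>> baseHandlers() {
                final LinkedHashMap<String, Callable<Runnable>> handlers = new LinkedHashMap<>();
                handlers.put("probe", () -> probe);   // same idea as Callables.returning(channelHandler)
                handlers.putAll(super.baseHandlers());
                return handlers;
            }
        };
    }
}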

Example 44 with MessageInput

Use of org.graylog2.plugin.inputs.MessageInput in project graylog2-server by Graylog2.

From the class AbstractTcpTransportTest, method getChildChannelHandlersGeneratesSelfSignedCertificates.

@Test
public void getChildChannelHandlersGeneratesSelfSignedCertificates() {
    final Configuration configuration = new Configuration(ImmutableMap.of("bind_address", "localhost", "port", 12345, "tls_enable", true));
    final AbstractTcpTransport transport = new AbstractTcpTransport(configuration, throughputCounter, localRegistry, eventLoopGroup, eventLoopGroupFactory, nettyTransportConfiguration, tlsConfiguration) {
    };
    final MessageInput input = mock(MessageInput.class);
    assertThat(transport.getChildChannelHandlers(input)).containsKey("tls");
}
Also used: Configuration(org.graylog2.plugin.configuration.Configuration) TLSProtocolsConfiguration(org.graylog2.configuration.TLSProtocolsConfiguration) NettyTransportConfiguration(org.graylog2.inputs.transports.NettyTransportConfiguration) MessageInput(org.graylog2.plugin.inputs.MessageInput) Test(org.junit.Test)
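
The final assertion uses AssertJ's map assertions. A small standalone sketch of assertThat(map).containsKey(...), run here against a plain HashMap instead of the transport's channel-handler map:

import static org.assertj.core.api.Assertions.assertThat;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class ContainsKeySketchTest {

    @Test
    public void mapAssertionChecksForAKey() {
        final Map<String, String> handlers = new HashMap<>();
        handlers.put("tls", "placeholder handler");

        // Fails with a descriptive message if the key is absent.
        assertThat(handlers).containsKey("tls");
    }
}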

Example 45 with MessageInput

Use of org.graylog2.plugin.inputs.MessageInput in project graylog2-server by Graylog2.

From the class InputEventListenerTest, method inputCreatedStartsGlobalInputOnOtherNode.

@Test
public void inputCreatedStartsGlobalInputOnOtherNode() throws Exception {
    final String inputId = "input-id";
    final Input input = mock(Input.class);
    when(inputService.find(inputId)).thenReturn(input);
    when(nodeId.toString()).thenReturn("node-id");
    when(input.getNodeId()).thenReturn("other-node-id");
    when(input.isGlobal()).thenReturn(true);
    final MessageInput messageInput = mock(MessageInput.class);
    when(inputService.getMessageInput(input)).thenReturn(messageInput);
    listener.inputCreated(InputCreated.create(inputId));
    verify(inputLauncher, times(1)).launch(messageInput);
}
Also used: MessageInput(org.graylog2.plugin.inputs.MessageInput) Test(org.junit.Test)
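
InputCreated and InputUpdated events reach the listener through an event bus (the aggregation list below shows Guava's EventBus and @Subscribe among the usages). A minimal sketch of that publish/subscribe wiring with a hypothetical event and handler, not the actual InputEventListener:

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

public class EventBusSketch {

    // Hypothetical event, standing in for InputCreated.
    static class ThingCreated {
        final String id;
        ThingCreated(String id) { this.id = id; }
    }

    static class Listener {
        @Subscribe
        public void thingCreated(ThingCreated event) {
            System.out.println("would launch input " + event.id);
        }
    }

    public static void main(String[] args) {
        final EventBus eventBus = new EventBus();
        eventBus.register(new Listener());      // same registration call KafkaTransport uses above
        eventBus.post(new ThingCreated("input-id"));
    }
}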

Aggregations

MessageInput (org.graylog2.plugin.inputs.MessageInput): 47
Test (org.junit.Test): 18
Callable (java.util.concurrent.Callable): 17
NotFoundException (org.graylog2.database.NotFoundException): 10
Configuration (org.graylog2.plugin.configuration.Configuration): 9
ChannelHandler (io.netty.channel.ChannelHandler): 8
LinkedHashMap (java.util.LinkedHashMap): 8
Input (org.graylog2.inputs.Input): 8
MisfireException (org.graylog2.plugin.inputs.MisfireException): 7
ChannelHandler (org.jboss.netty.channel.ChannelHandler): 7
Timed (com.codahale.metrics.annotation.Timed): 6
ApiOperation (io.swagger.annotations.ApiOperation): 6
ApiResponses (io.swagger.annotations.ApiResponses): 6
EventBus (com.google.common.eventbus.EventBus): 5
AuditEvent (org.graylog2.audit.jersey.AuditEvent): 5
Subscribe (com.google.common.eventbus.Subscribe): 4
Produces (javax.ws.rs.Produces): 4
IOState (org.graylog2.plugin.IOState): 4
LocalMetricRegistry (org.graylog2.plugin.LocalMetricRegistry): 4
Extractor (org.graylog2.plugin.inputs.Extractor): 4