Search in sources: org.apache.camel.impl.DefaultScheduledPollConsumerScheduler

Example 1 with DefaultScheduledPollConsumerScheduler

Use of org.apache.camel.impl.DefaultScheduledPollConsumerScheduler in the Apache Camel project.

From the class SqsEndpoint, the method createConsumer:

public Consumer createConsumer(Processor processor) throws Exception {
    SqsConsumer sqsConsumer = new SqsConsumer(this, processor);
    configureConsumer(sqsConsumer);
    sqsConsumer.setMaxMessagesPerPoll(maxMessagesPerPoll);
    DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler();
    scheduler.setConcurrentTasks(configuration.getConcurrentConsumers());
    sqsConsumer.setScheduler(scheduler);
    return sqsConsumer;
}
Also used : DefaultScheduledPollConsumerScheduler(org.apache.camel.impl.DefaultScheduledPollConsumerScheduler)
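For context, a minimal sketch (not from the Camel sources) of the route-side configuration that drives the code above, assuming the camel-aws-sqs component is on the classpath; the queue name and credential values are illustrative, and concurrentConsumers and maxMessagesPerPoll are the URI options backing configuration.getConcurrentConsumers() and setMaxMessagesPerPoll:

import org.apache.camel.builder.RouteBuilder;

// Sketch only: consumes from an SQS queue with two concurrent scheduler tasks.
public class SqsRouteSketch extends RouteBuilder {

    @Override
    public void configure() {
        // concurrentConsumers ends up in DefaultScheduledPollConsumerScheduler.setConcurrentTasks()
        from("aws-sqs://myQueue?accessKey=xxx&secretKey=yyy"
                + "&concurrentConsumers=2&maxMessagesPerPoll=10")
            .to("log:sqs");
    }
}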

Example 2 with DefaultScheduledPollConsumerScheduler

Use of org.apache.camel.impl.DefaultScheduledPollConsumerScheduler in the Apache Camel project.

From the class IronMQEndpoint, the method createConsumer:

public Consumer createConsumer(Processor processor) throws Exception {
    IronMQConsumer ironMQConsumer = new IronMQConsumer(this, processor, getClient().queue(configuration.getQueueName()));
    configureConsumer(ironMQConsumer);
    ironMQConsumer.setMaxMessagesPerPoll(configuration.getMaxMessagesPerPoll());
    DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler();
    scheduler.setConcurrentTasks(configuration.getConcurrentConsumers());
    ironMQConsumer.setScheduler(scheduler);
    return ironMQConsumer;
}
Also used : DefaultScheduledPollConsumerScheduler(org.apache.camel.impl.DefaultScheduledPollConsumerScheduler)
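Both endpoint examples follow the same pattern: create the consumer, apply the endpoint's common consumer options, then replace the default single-threaded scheduler with one sized to the configured concurrency. A minimal, hypothetical sketch of that pattern for a custom scheduled-poll endpoint (MyConsumer and getConcurrentConsumers() are illustrative names, not Camel API):

public Consumer createConsumer(Processor processor) throws Exception {
    // MyConsumer would extend org.apache.camel.impl.ScheduledPollConsumer
    MyConsumer consumer = new MyConsumer(this, processor);
    // applies the generic consumer options from the URI (delay, initialDelay, scheduler, ...)
    configureConsumer(consumer);
    // poll with several concurrent tasks instead of the default single task
    DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler();
    scheduler.setConcurrentTasks(getConcurrentConsumers());
    consumer.setScheduler(scheduler);
    return consumer;
}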

Example 3 with DefaultScheduledPollConsumerScheduler

Use of org.apache.camel.impl.DefaultScheduledPollConsumerScheduler in the Apache Camel project.

From the class HdfsConsumerTest (hdfs component), the method testReadWithReadSuffix:

@Test
public void testReadWithReadSuffix() throws Exception {
    if (!canTest()) {
        return;
    }
    String[] beforeFiles = new File("target/test").list();
    int before = beforeFiles != null ? beforeFiles.length : 0;
    final Path file = new Path(new File("target/test/test-camel-boolean").getAbsolutePath());
    Configuration conf = new Configuration();
    FileSystem fs1 = FileSystem.get(file.toUri(), conf);
    SequenceFile.Writer writer = createWriter(fs1, conf, file, NullWritable.class, BooleanWritable.class);
    NullWritable keyWritable = NullWritable.get();
    BooleanWritable valueWritable = new BooleanWritable();
    valueWritable.set(true);
    writer.append(keyWritable, valueWritable);
    writer.sync();
    writer.close();
    context.addRoutes(new RouteBuilder() {

        public void configure() {
            from("hdfs:localhost/" + file.getParent().toUri() + "?scheduler=#myScheduler&pattern=*&fileSystemType=LOCAL&fileType=SEQUENCE_FILE&initialDelay=0&readSuffix=handled").to("mock:result");
        }
    });
    ScheduledExecutorService pool = context.getExecutorServiceManager().newScheduledThreadPool(null, "unitTestPool", 1);
    DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler(pool);
    ((JndiRegistry) ((PropertyPlaceholderDelegateRegistry) context.getRegistry()).getRegistry()).bind("myScheduler", scheduler);
    context.start();
    MockEndpoint resultEndpoint = context.getEndpoint("mock:result", MockEndpoint.class);
    resultEndpoint.expectedMessageCount(1);
    resultEndpoint.assertIsSatisfied();
    // synchronize on pool that was used to run hdfs consumer thread
    scheduler.getScheduledExecutorService().shutdown();
    scheduler.getScheduledExecutorService().awaitTermination(5000, TimeUnit.MILLISECONDS);
    Set<String> files = new HashSet<String>(Arrays.asList(new File("target/test").list()));
    // there may be some leftover files before, so test that we only added 2 new files
    assertThat(files.size() - before, equalTo(2));
    assertTrue(files.remove("test-camel-boolean.handled"));
    assertTrue(files.remove(".test-camel-boolean.handled.crc"));
}
Also used : Path(org.apache.hadoop.fs.Path) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) Configuration(org.apache.hadoop.conf.Configuration) RouteBuilder(org.apache.camel.builder.RouteBuilder) MockEndpoint(org.apache.camel.component.mock.MockEndpoint) NullWritable(org.apache.hadoop.io.NullWritable) JndiRegistry(org.apache.camel.impl.JndiRegistry) SequenceFile(org.apache.hadoop.io.SequenceFile) BooleanWritable(org.apache.hadoop.io.BooleanWritable) FileSystem(org.apache.hadoop.fs.FileSystem) ArrayFile(org.apache.hadoop.io.ArrayFile) File(java.io.File) DefaultScheduledPollConsumerScheduler(org.apache.camel.impl.DefaultScheduledPollConsumerScheduler) HashSet(java.util.HashSet) Test(org.junit.Test)
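The test above binds the scheduler by unwrapping the context's PropertyPlaceholderDelegateRegistry after the routes are added. A common alternative in CamelTestSupport-based tests is to bind it when the registry is created; a minimal sketch, assuming JUnit 4 and camel-test (MySchedulerTest is an illustrative name, and the Executors pool stands in for the context's ExecutorServiceManager, which is not yet available at that point):

import java.util.concurrent.Executors;
import org.apache.camel.impl.DefaultScheduledPollConsumerScheduler;
import org.apache.camel.impl.JndiRegistry;
import org.apache.camel.test.junit4.CamelTestSupport;

public class MySchedulerTest extends CamelTestSupport {

    @Override
    protected JndiRegistry createRegistry() throws Exception {
        JndiRegistry jndi = super.createRegistry();
        // bind under the name referenced by scheduler=#myScheduler in the endpoint URI
        DefaultScheduledPollConsumerScheduler scheduler =
                new DefaultScheduledPollConsumerScheduler(Executors.newScheduledThreadPool(1));
        jndi.bind("myScheduler", scheduler);
        return jndi;
    }
}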

Example 4 with DefaultScheduledPollConsumerScheduler

Use of org.apache.camel.impl.DefaultScheduledPollConsumerScheduler in the Apache Camel project.

From the class HdfsConsumerTest (hdfs2 component), the method testReadWithReadSuffix:

@Test
public void testReadWithReadSuffix() throws Exception {
    if (!canTest()) {
        return;
    }
    String[] beforeFiles = new File("target/test").list();
    int before = beforeFiles != null ? beforeFiles.length : 0;
    final Path file = new Path(new File("target/test/test-camel-boolean").getAbsolutePath());
    Configuration conf = new Configuration();
    SequenceFile.Writer writer = createWriter(conf, file, NullWritable.class, BooleanWritable.class);
    NullWritable keyWritable = NullWritable.get();
    BooleanWritable valueWritable = new BooleanWritable();
    valueWritable.set(true);
    writer.append(keyWritable, valueWritable);
    writer.sync();
    writer.close();
    context.addRoutes(new RouteBuilder() {

        public void configure() {
            from("hdfs2:localhost/" + file.getParent().toUri() + "?scheduler=#myScheduler&pattern=*&fileSystemType=LOCAL&fileType=SEQUENCE_FILE&initialDelay=0&readSuffix=handled").to("mock:result");
        }
    });
    ScheduledExecutorService pool = context.getExecutorServiceManager().newScheduledThreadPool(null, "unitTestPool", 1);
    DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler(pool);
    ((JndiRegistry) ((PropertyPlaceholderDelegateRegistry) context.getRegistry()).getRegistry()).bind("myScheduler", scheduler);
    context.start();
    MockEndpoint resultEndpoint = context.getEndpoint("mock:result", MockEndpoint.class);
    resultEndpoint.expectedMessageCount(1);
    resultEndpoint.assertIsSatisfied();
    // synchronize on pool that was used to run hdfs consumer thread
    scheduler.getScheduledExecutorService().shutdown();
    scheduler.getScheduledExecutorService().awaitTermination(5000, TimeUnit.MILLISECONDS);
    Set<String> files = new HashSet<String>(Arrays.asList(new File("target/test").list()));
    // there may be some leftover files before, so test that we only added 2 new files
    assertThat(files.size() - before, equalTo(2));
    assertTrue(files.remove("test-camel-boolean.handled"));
    assertTrue(files.remove(".test-camel-boolean.handled.crc"));
}
Also used : Path(org.apache.hadoop.fs.Path) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) Configuration(org.apache.hadoop.conf.Configuration) RouteBuilder(org.apache.camel.builder.RouteBuilder) MockEndpoint(org.apache.camel.component.mock.MockEndpoint) NullWritable(org.apache.hadoop.io.NullWritable) JndiRegistry(org.apache.camel.impl.JndiRegistry) SequenceFile(org.apache.hadoop.io.SequenceFile) BooleanWritable(org.apache.hadoop.io.BooleanWritable) ArrayFile(org.apache.hadoop.io.ArrayFile) File(java.io.File) Writer(org.apache.hadoop.io.SequenceFile.Writer) DefaultScheduledPollConsumerScheduler(org.apache.camel.impl.DefaultScheduledPollConsumerScheduler) HashSet(java.util.HashSet) Test(org.junit.Test)

Aggregations

DefaultScheduledPollConsumerScheduler (org.apache.camel.impl.DefaultScheduledPollConsumerScheduler): 4 uses
File (java.io.File): 2 uses
HashSet (java.util.HashSet): 2 uses
ScheduledExecutorService (java.util.concurrent.ScheduledExecutorService): 2 uses
RouteBuilder (org.apache.camel.builder.RouteBuilder): 2 uses
MockEndpoint (org.apache.camel.component.mock.MockEndpoint): 2 uses
JndiRegistry (org.apache.camel.impl.JndiRegistry): 2 uses
Configuration (org.apache.hadoop.conf.Configuration): 2 uses
Path (org.apache.hadoop.fs.Path): 2 uses
ArrayFile (org.apache.hadoop.io.ArrayFile): 2 uses
BooleanWritable (org.apache.hadoop.io.BooleanWritable): 2 uses
NullWritable (org.apache.hadoop.io.NullWritable): 2 uses
SequenceFile (org.apache.hadoop.io.SequenceFile): 2 uses
Test (org.junit.Test): 2 uses
FileSystem (org.apache.hadoop.fs.FileSystem): 1 use
Writer (org.apache.hadoop.io.SequenceFile.Writer): 1 use
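
Taken together, the four usages show two configuration styles: setting the scheduler programmatically on the consumer inside createConsumer (examples 1 and 2), or binding a pre-configured scheduler in the registry and referring to it with the scheduler=#name URI option (examples 3 and 4). A minimal sketch of the registry style outside a test harness, assuming Camel 2.x and a file endpoint; the bean name, directory, and timing values are illustrative:

import java.util.concurrent.TimeUnit;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.DefaultScheduledPollConsumerScheduler;
import org.apache.camel.impl.JndiRegistry;
import org.apache.camel.util.jndi.JndiContext;

public class SchedulerFromRegistrySketch {

    public static void main(String[] args) throws Exception {
        // configure the scheduler up front: poll every 2 seconds with 2 concurrent tasks
        DefaultScheduledPollConsumerScheduler scheduler = new DefaultScheduledPollConsumerScheduler();
        scheduler.setDelay(2000);
        scheduler.setTimeUnit(TimeUnit.MILLISECONDS);
        scheduler.setConcurrentTasks(2);

        JndiRegistry registry = new JndiRegistry(new JndiContext());
        registry.bind("myScheduler", scheduler);

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // the consumer looks up #myScheduler in the registry, as in the HDFS tests above
                from("file:target/inbox?scheduler=#myScheduler").to("log:poll");
            }
        });

        context.start();
        Thread.sleep(10000);
        context.stop();
    }
}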