
Example 36 with Kryo

Use of com.esotericsoftware.kryo.Kryo in project apex-malhar by apache.

In the class XmlFormatterTest, method testOperatorSerialization.

@Test
public void testOperatorSerialization() {
    Kryo kryo = new Kryo();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Output output = new Output(baos);
    kryo.writeObject(output, this.operator);
    output.close();
    Input input = new Input(baos.toByteArray());
    XmlFormatter tba1 = kryo.readObject(input, XmlFormatter.class);
    Assert.assertNotNull("XML formatter not null", tba1);
}
Also used: Input(com.esotericsoftware.kryo.io.Input) XmlFormatter(org.apache.apex.malhar.contrib.formatter.XmlFormatter) Output(com.esotericsoftware.kryo.io.Output) ByteArrayOutputStream(java.io.ByteArrayOutputStream) Kryo(com.esotericsoftware.kryo.Kryo) Test(org.junit.Test)
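For readers who want to run this write/read round trip outside the Malhar test harness, here is a minimal, self-contained sketch of the same pattern. The Config class and its fields are hypothetical stand-ins for the operator under test, and the explicit register() call is an assumption for Kryo 5.x defaults; the older Kryo releases used by Apex serialize unregistered classes without it.

import java.io.ByteArrayOutputStream;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

public class KryoRoundTripSketch {
    // Hypothetical stand-in for the operator being serialized.
    static class Config {
        String name;
        int batchSize;
    }

    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        // Needed under Kryo 5.x defaults; older versions serialize unregistered classes.
        kryo.register(Config.class);

        Config original = new Config();
        original.name = "xml-formatter";
        original.batchSize = 2;

        // Serialize the object graph into an in-memory buffer.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        Output output = new Output(baos);
        kryo.writeObject(output, original);
        output.close();

        // Read the same concrete class back from the buffer.
        Input input = new Input(baos.toByteArray());
        Config restored = kryo.readObject(input, Config.class);
        input.close();

        System.out.println(restored.name + " / " + restored.batchSize);
    }
}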

Example 37 with Kryo

Use of com.esotericsoftware.kryo.Kryo in project apex-malhar by apache.

In the class AbstractFileInputOperatorTest, method testPartitioningStateTransferFailure.

/**
 * Test for dynamic partitioning interrupting an ongoing read.
 * - Create 4 files with 3 records each.
 * - Create a single partition and read some records, populating pending files in the operator.
 * - Split it into two operators.
 * - Emit the remaining records; each remaining record should be read exactly once.
 */
@Test
public void testPartitioningStateTransferFailure() throws Exception {
    LineByLineFileInputOperator oper = new LineByLineFileInputOperator();
    oper.getScanner().setFilePatternRegexp(".*partition([\\d]*)");
    oper.setDirectory(new File(testMeta.dir).getAbsolutePath());
    oper.setScanIntervalMillis(0);
    oper.setEmitBatchSize(2);
    LineByLineFileInputOperator initialState = new Kryo().copy(oper);
    // Create 4 files with 3 records each.
    Path path = new Path(new File(testMeta.dir).getAbsolutePath());
    FileContext.getLocalFSFileContext().delete(path, true);
    int file;
    for (file = 0; file < 4; file++) {
        FileUtils.write(new File(testMeta.dir, "partition00" + file), "a\nb\nc\n");
    }
    CollectorTestSink<String> queryResults = new CollectorTestSink<String>();
    @SuppressWarnings({ "unchecked", "rawtypes" }) CollectorTestSink<Object> sink = (CollectorTestSink) queryResults;
    oper.output.setSink(sink);
    int wid = 0;
    // Read some records
    oper.setup(testMeta.context);
    for (int i = 0; i < 5; i++) {
        oper.beginWindow(wid);
        oper.emitTuples();
        oper.endWindow();
        wid++;
    }
    Assert.assertEquals("Partial tuples read ", 6, sink.collectedTuples.size());
    Assert.assertEquals(1, initialState.getCurrentPartitions());
    initialState.setPartitionCount(2);
    StatsListener.Response rsp = initialState.processStats(null);
    Assert.assertEquals(true, rsp.repartitionRequired);
    // Create partitions of the operator.
    List<Partition<AbstractFileInputOperator<String>>> partitions = Lists.newArrayList();
    partitions.add(new DefaultPartition<AbstractFileInputOperator<String>>(oper));
    // Incremental capacity is controlled by the partitionCount property.
    Collection<Partition<AbstractFileInputOperator<String>>> newPartitions = initialState.definePartitions(partitions, new PartitioningContextImpl(null, 0));
    Assert.assertEquals(2, newPartitions.size());
    Assert.assertEquals(1, initialState.getCurrentPartitions());
    Map<Integer, Partition<AbstractFileInputOperator<String>>> m = Maps.newHashMap();
    for (Partition<AbstractFileInputOperator<String>> p : newPartitions) {
        m.put(m.size(), p);
    }
    initialState.partitioned(m);
    Assert.assertEquals(2, initialState.getCurrentPartitions());
    /* Collect all operators in a list */
    List<AbstractFileInputOperator<String>> opers = Lists.newArrayList();
    for (Partition<AbstractFileInputOperator<String>> p : newPartitions) {
        LineByLineFileInputOperator oi = (LineByLineFileInputOperator) p.getPartitionedInstance();
        oi.setup(testMeta.context);
        oi.output.setSink(sink);
        opers.add(oi);
    }
    sink.clear();
    for (int i = 0; i < 10; i++) {
        for (AbstractFileInputOperator<String> o : opers) {
            o.beginWindow(wid);
            o.emitTuples();
            o.endWindow();
        }
        wid++;
    }
    // The remaining 6 records should be read by the new partitions.
    Assert.assertEquals("Remaining tuples read ", 6, sink.collectedTuples.size());
}
Also used: Path(org.apache.hadoop.fs.Path) Partition(com.datatorrent.api.Partitioner.Partition) DefaultPartition(com.datatorrent.api.DefaultPartition) StatsListener(com.datatorrent.api.StatsListener) PartitioningContextImpl(org.apache.apex.malhar.lib.partitioner.StatelessPartitionerTest.PartitioningContextImpl) LineByLineFileInputOperator(org.apache.apex.malhar.lib.fs.LineByLineFileInputOperator) File(java.io.File) Kryo(com.esotericsoftware.kryo.Kryo) CollectorTestSink(org.apache.apex.malhar.lib.testbench.CollectorTestSink) Test(org.junit.Test)
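A detail worth calling out here: new Kryo().copy(oper) deep-copies the operator before it has read anything, so initialState is a pristine snapshot that later drives the repartitioning calls. Below is a minimal sketch of that copy semantics, with a hypothetical State class standing in for the operator's pending-file bookkeeping.

import java.util.ArrayList;
import java.util.List;

import com.esotericsoftware.kryo.Kryo;

public class KryoCopySketch {
    // Hypothetical stand-in for mutable operator state.
    static class State {
        List<String> pendingFiles = new ArrayList<>();
    }

    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        kryo.register(State.class);
        kryo.register(ArrayList.class);

        State live = new State();
        // copy() returns an independent object graph, not a shared reference.
        State snapshot = kryo.copy(live);

        live.pendingFiles.add("partition000");
        // The snapshot still reflects the state as it was at copy time.
        System.out.println(snapshot.pendingFiles.isEmpty()); // prints true
    }
}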

Example 38 with Kryo

Use of com.esotericsoftware.kryo.Kryo in project apex-malhar by apache.

In the class AbstractFileInputOperatorTest, method testPartitioningStateTransfer.

/**
 * Test for dynamic partitioning.
 * - Create 4 files with 3 records each.
 * - Create a single partition and read all records, populating pending files in the operator.
 * - Split it into two operators.
 * - Try to emit records again; expected result: no records are emitted, as all files are
 *   already processed.
 * - Create another 4 files with 3 records each.
 * - Try to emit records again; expected result: total records emitted is 4 * 3 = 12.
 */
@Test
public void testPartitioningStateTransfer() throws Exception {
    LineByLineFileInputOperator oper = new LineByLineFileInputOperator();
    oper.getScanner().setFilePatternRegexp(".*partition([\\d]*)");
    oper.setDirectory(new File(testMeta.dir).getAbsolutePath());
    oper.setScanIntervalMillis(0);
    LineByLineFileInputOperator initialState = new Kryo().copy(oper);
    // Create 4 files with 3 records each.
    Path path = new Path(new File(testMeta.dir).getAbsolutePath());
    FileContext.getLocalFSFileContext().delete(path, true);
    int file;
    for (file = 0; file < 4; file++) {
        FileUtils.write(new File(testMeta.dir, "partition00" + file), "a\nb\nc\n");
    }
    CollectorTestSink<String> queryResults = new CollectorTestSink<String>();
    @SuppressWarnings({ "unchecked", "rawtypes" }) CollectorTestSink<Object> sink = (CollectorTestSink) queryResults;
    oper.output.setSink(sink);
    int wid = 0;
    // Read all records to populate processedList in operator.
    oper.setup(testMeta.context);
    for (int i = 0; i < 10; i++) {
        oper.beginWindow(wid);
        oper.emitTuples();
        oper.endWindow();
        wid++;
    }
    Assert.assertEquals("All tuples read ", 12, sink.collectedTuples.size());
    Assert.assertEquals(1, initialState.getCurrentPartitions());
    initialState.setPartitionCount(2);
    StatsListener.Response rsp = initialState.processStats(null);
    Assert.assertEquals(true, rsp.repartitionRequired);
    // Create partitions of the operator.
    List<Partition<AbstractFileInputOperator<String>>> partitions = Lists.newArrayList();
    partitions.add(new DefaultPartition<AbstractFileInputOperator<String>>(oper));
    // Incremental capacity is controlled by the partitionCount property.
    Collection<Partition<AbstractFileInputOperator<String>>> newPartitions = initialState.definePartitions(partitions, new PartitioningContextImpl(null, 0));
    Assert.assertEquals(2, newPartitions.size());
    Assert.assertEquals(1, initialState.getCurrentPartitions());
    Map<Integer, Partition<AbstractFileInputOperator<String>>> m = Maps.newHashMap();
    for (Partition<AbstractFileInputOperator<String>> p : newPartitions) {
        m.put(m.size(), p);
    }
    initialState.partitioned(m);
    Assert.assertEquals(2, initialState.getCurrentPartitions());
    /* Collect all operators in a list */
    List<AbstractFileInputOperator<String>> opers = Lists.newArrayList();
    for (Partition<AbstractFileInputOperator<String>> p : newPartitions) {
        LineByLineFileInputOperator oi = (LineByLineFileInputOperator) p.getPartitionedInstance();
        oi.setup(testMeta.context);
        oi.output.setSink(sink);
        opers.add(oi);
    }
    sink.clear();
    for (int i = 0; i < 10; i++) {
        for (AbstractFileInputOperator<String> o : opers) {
            o.beginWindow(wid);
            o.emitTuples();
            o.endWindow();
        }
        wid++;
    }
    // No record should be read.
    Assert.assertEquals("No new tuples read ", 0, sink.collectedTuples.size());
    // Add four new files with 3 records each.
    for (; file < 8; file++) {
        FileUtils.write(new File(testMeta.dir, "partition00" + file), "a\nb\nc\n");
    }
    for (int i = 0; i < 10; i++) {
        for (AbstractFileInputOperator<String> o : opers) {
            o.beginWindow(wid);
            o.emitTuples();
            o.endWindow();
        }
        wid++;
    }
    // If each new file is processed exactly once, the number of records emitted
    // should be 12.
    Assert.assertEquals("All tuples read ", 12, sink.collectedTuples.size());
}
Also used: Path(org.apache.hadoop.fs.Path) Partition(com.datatorrent.api.Partitioner.Partition) DefaultPartition(com.datatorrent.api.DefaultPartition) StatsListener(com.datatorrent.api.StatsListener) PartitioningContextImpl(org.apache.apex.malhar.lib.partitioner.StatelessPartitionerTest.PartitioningContextImpl) LineByLineFileInputOperator(org.apache.apex.malhar.lib.fs.LineByLineFileInputOperator) File(java.io.File) Kryo(com.esotericsoftware.kryo.Kryo) CollectorTestSink(org.apache.apex.malhar.lib.testbench.CollectorTestSink) Test(org.junit.Test)
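Both partitioning tests rely on the same handshake: processStats() returns a StatsListener.Response whose repartitionRequired flag asks the platform to invoke definePartitions(). The sketch below shows the bare contract; the partition-count fields and the comparison are hypothetical illustration, not the Malhar operator's actual bookkeeping.

import com.datatorrent.api.StatsListener;

public class RepartitionSignalSketch implements StatsListener {
    // Hypothetical counters; the Malhar operator derives these from partitionCount.
    private int desiredPartitions = 2;
    private int currentPartitions = 1;

    @Override
    public Response processStats(BatchedOperatorStats stats) {
        Response rsp = new Response();
        // Setting this flag asks the engine to call definePartitions() on the operator.
        rsp.repartitionRequired = (desiredPartitions != currentPartitions);
        return rsp;
    }
}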

Example 39 with Kryo

Use of com.esotericsoftware.kryo.Kryo in project apex-malhar by apache.

In the class XmlParserTest, method testOperatorSerialization.

@Test
public void testOperatorSerialization() {
    Kryo kryo = new Kryo();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Output output = new Output(baos);
    kryo.writeObject(output, this.operator);
    output.close();
    Input input = new Input(baos.toByteArray());
    XmlParser tba1 = kryo.readObject(input, XmlParser.class);
    Assert.assertNotNull("XML parser not null", tba1);
}
Also used: Input(com.esotericsoftware.kryo.io.Input) Output(com.esotericsoftware.kryo.io.Output) ByteArrayOutputStream(java.io.ByteArrayOutputStream) Kryo(com.esotericsoftware.kryo.Kryo) Test(org.junit.Test)

Example 40 with Kryo

Use of com.esotericsoftware.kryo.Kryo in project apex-malhar by apache.

In the class POJOTimeBasedJoinOperatorTest, method testFullOuterJoinOperator.

@Test
public void testFullOuterJoinOperator() throws IOException, InterruptedException {
    Kryo kryo = new Kryo();
    POJOJoinOperator oper = new POJOJoinOperator();
    JoinStore store = new InMemoryStore(200, 200);
    oper.setLeftStore(kryo.copy(store));
    oper.setRightStore(kryo.copy(store));
    oper.setIncludeFields("ID,Name;OID,Amount");
    oper.setKeyFields("ID,CID");
    oper.outputClass = CustOrder.class;
    oper.setStrategy("outer_join");
    oper.setup(MapTimeBasedJoinOperator.context);
    CollectorTestSink<List<CustOrder>> sink = new CollectorTestSink<List<CustOrder>>();
    @SuppressWarnings({ "unchecked", "rawtypes" }) CollectorTestSink<Object> tmp = (CollectorTestSink) sink;
    oper.outputPort.setSink(tmp);
    oper.beginWindow(0);
    Customer tuple1 = new Customer(1, "Anil");
    oper.input1.process(tuple1);
    CountDownLatch latch = new CountDownLatch(1);
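    // Note: this latch is never counted down; the await(...) calls below act as
    // bounded waits that give the time-based join store time to process entries.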
    Order order = new Order(102, 3, 300);
    oper.input2.process(order);
    Order order2 = new Order(103, 7, 300);
    oper.input2.process(order2);
    oper.endWindow();
    latch.await(500, TimeUnit.MILLISECONDS);
    oper.beginWindow(1);
    Order order3 = new Order(104, 5, 300);
    oper.input2.process(order3);
    Customer tuple2 = new Customer(5, "DT");
    oper.input1.process(tuple2);
    latch.await(500, TimeUnit.MILLISECONDS);
    oper.endWindow();
    latch.await(500, TimeUnit.MILLISECONDS);
    oper.beginWindow(2);
    oper.endWindow();
    latch.await(5000, TimeUnit.MILLISECONDS);
    /* Number of tuples emitted */
    Assert.assertEquals("Number of tuple emitted ", 3, sink.collectedTuples.size());
    Iterator<List<CustOrder>> ite = sink.collectedTuples.iterator();
    List<CustOrder> emittedList = ite.next();
    CustOrder emitted = emittedList.get(0);
    Assert.assertEquals("value of ID :", tuple2.ID, emitted.ID);
    Assert.assertEquals("value of Name :", tuple2.Name, emitted.Name);
    Assert.assertEquals("value of OID: ", order3.OID, emitted.OID);
    Assert.assertEquals("value of Amount: ", order3.Amount, emitted.Amount);
    emittedList = ite.next();
    Assert.assertEquals("Joined Tuple ", "{ID=1, Name='Anil', OID=0, Amount=0}", emittedList.get(0).toString());
    emittedList = ite.next();
    Assert.assertEquals("Joined Tuple ", "{ID=0, Name='null', OID=102, Amount=300}", emittedList.get(0).toString());
    Assert.assertEquals("Joined Tuple ", "{ID=0, Name='null', OID=103, Amount=300}", emittedList.get(1).toString());
}
Also used: CountDownLatch(java.util.concurrent.CountDownLatch) List(java.util.List) Kryo(com.esotericsoftware.kryo.Kryo) CollectorTestSink(org.apache.apex.malhar.lib.testbench.CollectorTestSink) Test(org.junit.Test)
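Note the kryo.copy(store) calls at the top of this test: rather than sharing one InMemoryStore between the left and right inputs, the test clones a configured template so each side gets an independent store instance. This is the same deep-copy idiom sketched after Example 37.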

Aggregations

Kryo (com.esotericsoftware.kryo.Kryo): 94
Input (com.esotericsoftware.kryo.io.Input): 37
Output (com.esotericsoftware.kryo.io.Output): 34
Test (org.junit.Test): 26
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 21
ByteArrayInputStream (java.io.ByteArrayInputStream): 17
StdInstantiatorStrategy (org.objenesis.strategy.StdInstantiatorStrategy): 14
File (java.io.File): 10
CollectorTestSink (org.apache.apex.malhar.lib.testbench.CollectorTestSink): 10
List (java.util.List): 9
Map (java.util.Map): 8
Test (org.testng.annotations.Test): 8
ArrayList (java.util.ArrayList): 7
Path (org.apache.hadoop.fs.Path): 7
BigIntegerSerializer (com.esotericsoftware.kryo.serializers.DefaultSerializers.BigIntegerSerializer): 5
FileNotFoundException (java.io.FileNotFoundException): 5
IOException (java.io.IOException): 5
BaseTest (org.broadinstitute.hellbender.utils.test.BaseTest): 5
DefaultPartition (com.datatorrent.api.DefaultPartition): 4
CountDownLatch (java.util.concurrent.CountDownLatch): 4