
Example 76 with Tuple3

Use of org.apache.flink.api.java.tuple.Tuple3 in project flink by apache.

The class ReduceWithCombinerITCase, method testReduceOnKeyedDataset.

@Test
public void testReduceOnKeyedDataset() throws Exception {
    // set up the execution environment
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(4);
    // creates the input data and distributes it evenly among the available downstream tasks
    DataSet<Tuple3<String, Integer, Boolean>> input = createKeyedInput(env);
    List<Tuple3<String, Integer, Boolean>> actual = input.groupBy(0).reduceGroup(new KeyedCombReducer()).collect();
    String expected = "k1,6,true\nk2,4,true\n";
    compareResultAsTuples(actual, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), Test (org.junit.Test)
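The helpers createKeyedInput and KeyedCombReducer are defined elsewhere in ReduceWithCombinerITCase and are not reproduced on this page. The following is only a hedged sketch of what the combining reducer could look like, assuming the Integer field is summed per key and the Boolean field acts as a "reduce ran" flag, which is consistent with the expected output k1,6,true and k2,4,true:

import org.apache.flink.api.common.functions.GroupCombineFunction;
import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.util.Collector;

// Sketch only: the per-field semantics (sum f1, set f2 to true) are assumptions,
// not copied from the Flink sources.
public class KeyedCombReducer
        implements GroupReduceFunction<Tuple3<String, Integer, Boolean>, Tuple3<String, Integer, Boolean>>,
                   GroupCombineFunction<Tuple3<String, Integer, Boolean>, Tuple3<String, Integer, Boolean>> {

    @Override
    public void combine(Iterable<Tuple3<String, Integer, Boolean>> values,
                        Collector<Tuple3<String, Integer, Boolean>> out) {
        // pre-aggregation uses the same logic as the final reduce
        reduce(values, out);
    }

    @Override
    public void reduce(Iterable<Tuple3<String, Integer, Boolean>> values,
                       Collector<Tuple3<String, Integer, Boolean>> out) {
        String key = null;
        int sum = 0;
        for (Tuple3<String, Integer, Boolean> t : values) {
            key = t.f0;
            sum += t.f1;
        }
        // emit one result per key group; the flag marks that (combining) reduction ran
        out.collect(new Tuple3<>(key, sum, true));
    }
}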

Example 77 with Tuple3

Use of org.apache.flink.api.java.tuple.Tuple3 in project flink by apache.

The class ReduceWithCombinerITCase, method testReduceOnKeyedDatasetWithSelector.

@Test
public void testReduceOnKeyedDatasetWithSelector() throws Exception {
    // set up the execution environment
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(4);
    // creates the input data and distributes it evenly among the available downstream tasks
    DataSet<Tuple3<String, Integer, Boolean>> input = createKeyedInput(env);
    List<Tuple3<String, Integer, Boolean>> actual = input.groupBy(new KeySelectorX()).reduceGroup(new KeyedCombReducer()).collect();
    String expected = "k1,6,true\nk2,4,true\n";
    compareResultAsTuples(actual, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), Test (org.junit.Test)
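KeySelectorX is likewise defined inside the test class and not shown here. A plausible sketch, assuming it simply keys on the String field f0 (which would make this test equivalent to the groupBy(0) variant above):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple3;

// Assumed behavior: group by the String key field, mirroring groupBy(0) in the previous example.
public class KeySelectorX implements KeySelector<Tuple3<String, Integer, Boolean>, String> {
    @Override
    public String getKey(Tuple3<String, Integer, Boolean> value) {
        return value.f0;
    }
}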

Example 78 with Tuple3

Use of org.apache.flink.api.java.tuple.Tuple3 in project flink by apache.

The class PartitionITCase, method testRangePartitionByKeyFieldAndDifferentParallelism.

@Test
public void testRangePartitionByKeyFieldAndDifferentParallelism() throws Exception {
    /*
     * Test range partition by key field and different parallelism
     */
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(3);
    DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
    DataSet<Long> uniqLongs = ds.partitionByRange(1).setParallelism(4).mapPartition(new UniqueTupleLongMapper());
    List<Long> result = uniqLongs.collect();
    String expected = "1\n" + "2\n" + "3\n" + "4\n" + "5\n" + "6\n";
    compareResultAsText(result, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), Test (org.junit.Test)
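UniqueTupleLongMapper is a helper of PartitionITCase that is not part of this listing. A hedged sketch of what it presumably does: emit each distinct Long value of field f1 once per partition. Because partitionByRange(1) sends all records with an equal f1 value to the same partition, each distinct Long then appears in exactly one partition's output, which is why the expected result is the plain list 1 through 6 even though the mapper runs with parallelism 4.

import java.util.HashSet;
import java.util.Set;

import org.apache.flink.api.common.functions.MapPartitionFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.util.Collector;

// Sketch only (assumed implementation): emits every distinct f1 value seen within one partition exactly once.
public class UniqueTupleLongMapper implements MapPartitionFunction<Tuple3<Integer, Long, String>, Long> {
    @Override
    public void mapPartition(Iterable<Tuple3<Integer, Long, String>> records, Collector<Long> out) {
        Set<Long> seen = new HashSet<>();
        for (Tuple3<Integer, Long, String> t : records) {
            if (seen.add(t.f1)) {
                out.collect(t.f1);
            }
        }
    }
}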

Example 79 with Tuple3

Use of org.apache.flink.api.java.tuple.Tuple3 in project flink by apache.

The class PartitionITCase, method testHashPartitionByKeyField.

@Test
public void testHashPartitionByKeyField() throws Exception {
    /*
     * Test hash partition by key field
     */
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
    DataSet<Long> uniqLongs = ds.partitionByHash(1).mapPartition(new UniqueTupleLongMapper());
    List<Long> result = uniqLongs.collect();
    String expected = "1\n" + "2\n" + "3\n" + "4\n" + "5\n" + "6\n";
    compareResultAsText(result, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), Test (org.junit.Test)
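CollectionDataSets.get3TupleDataSet(env) is a Flink test utility not shown on this page; what matters for these partitioning tests is that its Long field f1 covers the values 1 through 6. A purely illustrative stand-in (the tuple contents below are assumptions, not the real fixture), using the env already created in the test, could be built inline:

// Hypothetical inline replacement for CollectionDataSets.get3TupleDataSet(env);
// only the spread of f1 over 1..6 matters for the expected result.
DataSet<Tuple3<Integer, Long, String>> ds = env.fromElements(
        new Tuple3<>(1, 1L, "Hi"),
        new Tuple3<>(2, 2L, "Hello"),
        new Tuple3<>(3, 2L, "Hello world"),
        new Tuple3<>(4, 3L, "How are you?"),
        new Tuple3<>(7, 4L, "Comment#1"),
        new Tuple3<>(11, 5L, "Comment#5"),
        new Tuple3<>(16, 6L, "Comment#10"));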

Example 80 with Tuple3

Use of org.apache.flink.api.java.tuple.Tuple3 in project flink by apache.

The class PartitionITCase, method testRangePartitionByKeySelector.

@Test
public void testRangePartitionByKeySelector() throws Exception {
    /*
     * Test range partition by key selector
     */
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
    DataSet<Long> uniqLongs = ds.partitionByRange(new KeySelector1()).mapPartition(new UniqueTupleLongMapper());
    List<Long> result = uniqLongs.collect();
    String expected = "1\n" + "2\n" + "3\n" + "4\n" + "5\n" + "6\n";
    compareResultAsText(result, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), Test (org.junit.Test)
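KeySelector1 is again a helper inside PartitionITCase that is not shown here. For the expected result to hold it has to key on the Long field f1, so that range partitioning keeps equal Long values in the same partition; a hedged sketch under that assumption:

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple3;

// Assumption: the selector extracts f1, making partitionByRange(new KeySelector1())
// behave like partitionByRange(1) in the earlier example.
public class KeySelector1 implements KeySelector<Tuple3<Integer, Long, String>, Long> {
    @Override
    public Long getKey(Tuple3<Integer, Long, String> value) {
        return value.f1;
    }
}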

Aggregations

Classes most frequently used together with Tuple3 across the indexed examples (usage counts):

Tuple3 (org.apache.flink.api.java.tuple.Tuple3): 559
Test (org.junit.Test): 506
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment): 415
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 182
Plan (org.apache.flink.api.common.Plan): 89
Tuple5 (org.apache.flink.api.java.tuple.Tuple5): 74
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment): 63
OptimizedPlan (org.apache.flink.optimizer.plan.OptimizedPlan): 55
SinkPlanNode (org.apache.flink.optimizer.plan.SinkPlanNode): 53
OneInputTransformation (org.apache.flink.streaming.api.transformations.OneInputTransformation): 43
TimeWindow (org.apache.flink.streaming.api.windowing.windows.TimeWindow): 43
DualInputPlanNode (org.apache.flink.optimizer.plan.DualInputPlanNode): 38
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 37
IOException (java.io.IOException): 32
ArrayList (java.util.ArrayList): 31
Configuration (org.apache.flink.configuration.Configuration): 29
EventTimeTrigger (org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger): 27
FieldSet (org.apache.flink.api.common.operators.util.FieldSet): 24
TypeHint (org.apache.flink.api.common.typeinfo.TypeHint): 24
Tuple1 (org.apache.flink.api.java.tuple.Tuple1): 21