
Example 51 with ScalarWriter

Use of org.apache.drill.exec.vector.accessor.ScalarWriter in project drill by apache.

From the class TestResultSetLoaderOverflow, method testSizeLimitOnArray.

/**
 * Test a row with a single array column which overflows. Verifies
 * that all the fiddly bits about offset vectors and so on work
 * correctly. Run this test (the simplest case) if you change anything
 * about the array handling code.
 */
@Test
public void testSizeLimitOnArray() {
    TupleMetadata schema = new SchemaBuilder().addArray("s", MinorType.VARCHAR).buildSchema();
    ResultSetOptions options = new ResultSetOptionBuilder().rowCountLimit(ValueVector.MAX_ROW_COUNT).readerSchema(schema).build();
    ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(), options);
    RowSetLoader rootWriter = rsLoader.writer();
    // Fill the batch with rows, each holding a single array of 13 values. Tack
    // on a suffix to each value so we can be sure the proper data is written
    // and moved to the overflow batch.
    rsLoader.startBatch();
    byte[] value = new byte[473];
    Arrays.fill(value, (byte) 'X');
    String strValue = new String(value, Charsets.UTF_8);
    int valuesPerArray = 13;
    int count = 0;
    {
        int rowSize = 0;
        int totalSize = 0;
        while (rootWriter.start()) {
            totalSize += rowSize;
            rowSize = 0;
            ScalarWriter array = rootWriter.array(0).scalar();
            for (int i = 0; i < valuesPerArray; i++) {
                String cellValue = strValue + (count + 1) + "." + i;
                array.setString(cellValue);
                rowSize += cellValue.length();
            }
            rootWriter.save();
            count++;
        }
        // count includes the overflow row; the harvested batch excludes it.
        int expectedCount = count - 1;
        // Size without overflow row should fit in the vector, size
        // with overflow should not.
        assertTrue(totalSize <= ValueVector.MAX_BUFFER_SIZE);
        assertTrue(totalSize + rowSize > ValueVector.MAX_BUFFER_SIZE);
        // Result should exclude the overflow row. Last row
        // should hold the last full array.
        VectorContainer container = rsLoader.harvest();
        BatchValidator.validate(container);
        RowSet result = fixture.wrap(container);
        assertEquals(expectedCount, result.rowCount());
        RowSetReader reader = result.reader();
        reader.setPosition(expectedCount - 1);
        ArrayReader arrayReader = reader.array(0);
        ScalarReader strReader = arrayReader.scalar();
        assertEquals(valuesPerArray, arrayReader.size());
        for (int i = 0; i < valuesPerArray; i++) {
            assertTrue(arrayReader.next());
            String cellValue = strValue + (count - 1) + "." + i;
            assertEquals(cellValue, strReader.getString());
        }
        result.clear();
    }
    // Next batch should start with the overflow row.
    // The only row in this next batch should be the whole
    // array being written at the time of overflow.
    {
        rsLoader.startBatch();
        assertEquals(1, rootWriter.rowCount());
        assertEquals(count, rsLoader.totalRowCount());
        VectorContainer container = rsLoader.harvest();
        BatchValidator.validate(container);
        RowSet result = fixture.wrap(container);
        assertEquals(1, result.rowCount());
        RowSetReader reader = result.reader();
        reader.next();
        ArrayReader arrayReader = reader.array(0);
        ScalarReader strReader = arrayReader.scalar();
        assertEquals(valuesPerArray, arrayReader.size());
        for (int i = 0; i < valuesPerArray; i++) {
            assertTrue(arrayReader.next());
            String cellValue = strValue + count + "." + i;
            assertEquals(cellValue, strReader.getString());
        }
        result.clear();
    }
    rsLoader.close();
}
Also used : RowSet(org.apache.drill.exec.physical.rowSet.RowSet) ResultSetOptions(org.apache.drill.exec.physical.resultSet.impl.ResultSetLoaderImpl.ResultSetOptions) VectorContainer(org.apache.drill.exec.record.VectorContainer) ScalarReader(org.apache.drill.exec.vector.accessor.ScalarReader) ArrayReader(org.apache.drill.exec.vector.accessor.ArrayReader) ResultSetLoader(org.apache.drill.exec.physical.resultSet.ResultSetLoader) TupleMetadata(org.apache.drill.exec.record.metadata.TupleMetadata) SchemaBuilder(org.apache.drill.exec.record.metadata.SchemaBuilder) RowSetLoader(org.apache.drill.exec.physical.resultSet.RowSetLoader) RowSetReader(org.apache.drill.exec.physical.rowSet.RowSetReader) ScalarWriter(org.apache.drill.exec.vector.accessor.ScalarWriter) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
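
Distilled from the test above, the overflow protocol reduces to a short loop: write rows until the writer refuses to start a new one, harvest the full batch, then start the next batch, which begins with the overflow row. The following is a minimal sketch rather than a complete test; it assumes the same fixture, schema, and imports used above.

// Sketch of the overflow protocol (assumes `fixture` and the
// single-array `schema` from the test above).
ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(),
    new ResultSetOptionBuilder().readerSchema(schema).build());
RowSetLoader rootWriter = rsLoader.writer();
rsLoader.startBatch();
// start() returns false once the batch is full, i.e. after a row overflows.
while (rootWriter.start()) {
    rootWriter.array(0).scalar().setString("value");
    rootWriter.save();
}
// The harvested batch excludes the overflow row...
RowSet first = fixture.wrap(rsLoader.harvest());
first.clear();
// ...which becomes the first row of the next batch.
rsLoader.startBatch();
assertEquals(1, rootWriter.rowCount());
rsLoader.close();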

Example 52 with ScalarWriter

Use of org.apache.drill.exec.vector.accessor.ScalarWriter in project drill by apache.

From the class TestResultSetLoaderProtocol, method testOverwriteRow.

/**
 * The writer protocol allows a client to write to a row any number of times
 * before invoking {@code save()}. In this case, each new value simply
 * overwrites the previous value. Here, we test the most basic case: a simple,
 * flat tuple with no arrays. We use a very large Varchar that would, if
 * overwrite were not working, cause vector overflow.
 * <p>
 * The ability to overwrite rows is seldom needed except in one future use
 * case: writing a row, then applying a filter "in-place" to discard unwanted
 * rows, without having to send the row downstream.
 * <p>
 * Because of this use case, specific rules apply when discarding a row or
 * overwriting values:
 * <ul>
 * <li>Values can be written once per row. Fixed-width columns actually
 * tolerate multiple writes, but, because of the way variable-width columns
 * work, multiple writes to a variable-width column cause undefined
 * results.</li>
 * <li>To overwrite a row, call {@code start()} without calling
 * {@code save()} on the previous row. Doing so discards the data for the
 * previous row and starts a new row in place of the old one.</li>
 * </ul>
 * Note that there is no explicit method to discard a row. Instead,
 * the rule is that a row is not saved until {@code save()} is called.
 */
@Test
public void testOverwriteRow() {
    TupleMetadata schema = new SchemaBuilder().add("a", MinorType.INT).add("b", MinorType.VARCHAR).buildSchema();
    ResultSetLoaderImpl.ResultSetOptions options = new ResultSetOptionBuilder().readerSchema(schema).rowCountLimit(ValueVector.MAX_ROW_COUNT).build();
    ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(), options);
    RowSetLoader rootWriter = rsLoader.writer();
    // Can't use the shortcut to populate rows when doing overwrites.
    ScalarWriter aWriter = rootWriter.scalar("a");
    ScalarWriter bWriter = rootWriter.scalar("b");
    // Write 100,000 rows, overwriting 99% of them. This will cause vector
    // overflow and data corruption if overwrite does not work; but will happily
    // produce the correct result if everything works as it should.
    byte[] value = new byte[512];
    Arrays.fill(value, (byte) 'X');
    int count = 0;
    rsLoader.startBatch();
    while (count < 100_000) {
        rootWriter.start();
        count++;
        aWriter.setInt(count);
        bWriter.setBytes(value, value.length);
        if (count % 100 == 0) {
            rootWriter.save();
        }
    }
    // Verify using a reader.
    RowSet result = fixture.wrap(rsLoader.harvest());
    assertEquals(count / 100, result.rowCount());
    RowSetReader reader = result.reader();
    int rowId = 1;
    while (reader.next()) {
        assertEquals(rowId * 100, reader.scalar("a").getInt());
        assertTrue(Arrays.equals(value, reader.scalar("b").getBytes()));
        rowId++;
    }
    result.clear();
    rsLoader.close();
}
Also used : ResultSetLoader(org.apache.drill.exec.physical.resultSet.ResultSetLoader) TupleMetadata(org.apache.drill.exec.record.metadata.TupleMetadata) SchemaBuilder(org.apache.drill.exec.record.metadata.SchemaBuilder) SingleRowSet(org.apache.drill.exec.physical.rowSet.RowSet.SingleRowSet) RowSet(org.apache.drill.exec.physical.rowSet.RowSet) RowSetLoader(org.apache.drill.exec.physical.resultSet.RowSetLoader) RowSetReader(org.apache.drill.exec.physical.rowSet.RowSetReader) ScalarWriter(org.apache.drill.exec.vector.accessor.ScalarWriter) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
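
The same protocol yields a simple in-place filter: write every row, but call save() only for the rows that pass. Below is a minimal sketch of that pattern, assuming the loader and column writers declared in the test above; keepRow() is a hypothetical predicate, not a Drill API.

// In-place filtering via selective save() (sketch; keepRow() is a
// hypothetical predicate, aWriter/bWriter as declared above).
rsLoader.startBatch();
for (int i = 0; i < 1_000; i++) {
    rootWriter.start();        // restarts on top of any unsaved previous row
    aWriter.setInt(i);
    bWriter.setString("row " + i);
    if (keepRow(i)) {
        rootWriter.save();     // only saved rows reach the harvested batch
    }
}
RowSet result = fixture.wrap(rsLoader.harvest());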

Example 53 with ScalarWriter

Use of org.apache.drill.exec.vector.accessor.ScalarWriter in project drill by apache.

From the class TestScalarAccessors, method testAppendWithArray.

/**
 * Test the ability to append bytes to a VarChar column. This should also
 * work for Var16Char, but that type is not yet supported in Drill.
 */
@Test
public void testAppendWithArray() {
    TupleMetadata schema = new SchemaBuilder().addArray("col", MinorType.VARCHAR).buildSchema();
    DirectRowSet rs = DirectRowSet.fromSchema(fixture.allocator(), schema);
    RowSetWriter writer = rs.writer(100);
    ArrayWriter arrayWriter = writer.array("col");
    ScalarWriter colWriter = arrayWriter.scalar();
    byte[] first = "abc".getBytes();
    byte[] second = "12345".getBytes();
    for (int i = 0; i < 3; i++) {
        colWriter.setBytes(first, first.length);
        colWriter.appendBytes(second, second.length);
        arrayWriter.save();
        colWriter.setBytes(second, second.length);
        colWriter.appendBytes(first, first.length);
        arrayWriter.save();
        colWriter.setBytes(first, first.length);
        colWriter.appendBytes(second, second.length);
        arrayWriter.save();
        writer.save();
    }
    RowSet actual = writer.done();
    RowSet expected = new RowSetBuilder(fixture.allocator(), schema).addSingleCol(strArray("abc12345", "12345abc", "abc12345")).addSingleCol(strArray("abc12345", "12345abc", "abc12345")).addSingleCol(strArray("abc12345", "12345abc", "abc12345")).build();
    RowSetUtilities.verify(expected, actual);
}
Also used : TupleMetadata(org.apache.drill.exec.record.metadata.TupleMetadata) SchemaBuilder(org.apache.drill.exec.record.metadata.SchemaBuilder) SingleRowSet(org.apache.drill.exec.physical.rowSet.RowSet.SingleRowSet) ArrayWriter(org.apache.drill.exec.vector.accessor.ArrayWriter) ScalarWriter(org.apache.drill.exec.vector.accessor.ScalarWriter) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
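
The append pattern itself is not specific to arrays: setBytes() establishes a value and each subsequent appendBytes() extends it in place, so a long VARCHAR can be assembled from chunks before the value is committed. A minimal sketch, reusing the colWriter and arrayWriter declared above; the chunks array is assumed input.

// Assemble one VARCHAR value from several chunks (sketch; `chunks` is
// assumed input, writers as declared in the test above).
byte[][] chunks = { "abc".getBytes(), "12345".getBytes() };
colWriter.setBytes(chunks[0], chunks[0].length);   // first chunk starts the value
for (int i = 1; i < chunks.length; i++) {
    colWriter.appendBytes(chunks[i], chunks[i].length);
}
arrayWriter.save();   // commits "abc12345" as a single array element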

Example 54 with ScalarWriter

Use of org.apache.drill.exec.vector.accessor.ScalarWriter in project drill by axbaretto.

From the class TestResultSetLoaderMaps, method testBasics.

@Test
public void testBasics() {
    TupleMetadata schema = new SchemaBuilder().add("a", MinorType.INT).addMap("m").add("c", MinorType.INT).add("d", MinorType.VARCHAR).resumeSchema().add("e", MinorType.VARCHAR).buildSchema();
    ResultSetLoaderImpl.ResultSetOptions options = new OptionBuilder().setSchema(schema).build();
    ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(), options);
    RowSetLoader rootWriter = rsLoader.writer();
    // Verify structure and schema
    assertEquals(5, rsLoader.schemaVersion());
    TupleMetadata actualSchema = rootWriter.schema();
    assertEquals(3, actualSchema.size());
    assertTrue(actualSchema.metadata(1).isMap());
    assertEquals(2, actualSchema.metadata("m").mapSchema().size());
    assertEquals(2, actualSchema.column("m").getChildren().size());
    rsLoader.startBatch();
    // Write a row the way that clients will do.
    ScalarWriter aWriter = rootWriter.scalar("a");
    TupleWriter mWriter = rootWriter.tuple("m");
    ScalarWriter cWriter = mWriter.scalar("c");
    ScalarWriter dWriter = mWriter.scalar("d");
    ScalarWriter eWriter = rootWriter.scalar("e");
    rootWriter.start();
    aWriter.setInt(10);
    cWriter.setInt(110);
    dWriter.setString("fred");
    eWriter.setString("pebbles");
    rootWriter.save();
    // Adding a duplicate column name to the map must be rejected.
    try {
        mWriter.addColumn(SchemaBuilder.columnSchema("c", MinorType.INT, DataMode.OPTIONAL));
        fail();
    } catch (IllegalArgumentException e) {
    // Expected
    }
    // Write another using the test-time conveniences
    rootWriter.addRow(20, objArray(210, "barney"), "bam-bam");
    // Harvest the batch
    RowSet actual = fixture.wrap(rsLoader.harvest());
    assertEquals(5, rsLoader.schemaVersion());
    assertEquals(2, actual.rowCount());
    // Validate data
    SingleRowSet expected = fixture.rowSetBuilder(schema).addRow(10, objArray(110, "fred"), "pebbles").addRow(20, objArray(210, "barney"), "bam-bam").build();
    new RowSetComparison(expected).verifyAndClearAll(actual);
    rsLoader.close();
}
Also used : SingleRowSet(org.apache.drill.test.rowSet.RowSet.SingleRowSet) SingleRowSet(org.apache.drill.test.rowSet.RowSet.SingleRowSet) RowSet(org.apache.drill.test.rowSet.RowSet) RowSetComparison(org.apache.drill.test.rowSet.RowSetComparison) ResultSetLoader(org.apache.drill.exec.physical.rowSet.ResultSetLoader) TupleWriter(org.apache.drill.exec.vector.accessor.TupleWriter) TupleMetadata(org.apache.drill.exec.record.metadata.TupleMetadata) SchemaBuilder(org.apache.drill.test.rowSet.schema.SchemaBuilder) RowSetLoader(org.apache.drill.exec.physical.rowSet.RowSetLoader) ScalarWriter(org.apache.drill.exec.vector.accessor.ScalarWriter) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
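
For clients, the essential pattern here is the two-step path to map members: fetch the map's TupleWriter once, then obtain each member's ScalarWriter from it, either up front or per use. A minimal sketch of writing one more row, assuming the loader and schema set up as in the test above.

// Writing one row with a map column (sketch; loader and schema as above).
rootWriter.start();
rootWriter.scalar("a").setInt(30);
TupleWriter m = rootWriter.tuple("m");
m.scalar("c").setInt(310);          // map members are addressed through
m.scalar("d").setString("dino");    // the map's own TupleWriter
rootWriter.save();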

Example 55 with ScalarWriter

Use of org.apache.drill.exec.vector.accessor.ScalarWriter in project drill by axbaretto.

From the class TestResultSetLoaderMaps, method testMapOverflowWithNewColumn.

/**
 * Test the case in which a new column is added during the overflow row. Unlike
 * the top-level schema case, internally we must create a copy of the map, and
 * move vectors across only when the result is to include the schema version
 * of the target column. For overflow, the new column does not appear in the
 * first batch; it first appears in the second batch, the one that contains
 * the overflow row during which the column was added.
 */
@Test
public void testMapOverflowWithNewColumn() {
    TupleMetadata schema = new SchemaBuilder().add("a", MinorType.INT).addMap("m").add("b", MinorType.INT).add("c", MinorType.VARCHAR).resumeSchema().buildSchema();
    ResultSetLoaderImpl.ResultSetOptions options = new OptionBuilder().setSchema(schema).setRowCountLimit(ValueVector.MAX_ROW_COUNT).build();
    ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(), options);
    assertEquals(4, rsLoader.schemaVersion());
    RowSetLoader rootWriter = rsLoader.writer();
    // Can't use the shortcut to populate rows when doing a schema
    // change.
    ScalarWriter aWriter = rootWriter.scalar("a");
    TupleWriter mWriter = rootWriter.tuple("m");
    ScalarWriter bWriter = mWriter.scalar("b");
    ScalarWriter cWriter = mWriter.scalar("c");
    byte[] value = new byte[512];
    Arrays.fill(value, (byte) 'X');
    int count = 0;
    rsLoader.startBatch();
    while (!rootWriter.isFull()) {
        rootWriter.start();
        aWriter.setInt(count);
        bWriter.setInt(count * 10);
        cWriter.setBytes(value, value.length);
        if (rootWriter.isFull()) {
            // Overflow just occurred. Add another column.
            mWriter.addColumn(SchemaBuilder.columnSchema("d", MinorType.INT, DataMode.OPTIONAL));
            mWriter.scalar("d").setInt(count * 100);
        }
        rootWriter.save();
        count++;
    }
    // Result set should include the original columns, but not d.
    RowSet result = fixture.wrap(rsLoader.harvest());
    assertEquals(4, rsLoader.schemaVersion());
    assertTrue(schema.isEquivalent(result.schema()));
    BatchSchema expectedSchema = new BatchSchema(SelectionVectorMode.NONE, schema.toFieldList());
    assertTrue(expectedSchema.isEquivalent(result.batchSchema()));
    // Use a reader to validate row-by-row. Too large to create an expected
    // result set.
    RowSetReader reader = result.reader();
    TupleReader mapReader = reader.tuple("m");
    int rowId = 0;
    while (reader.next()) {
        assertEquals(rowId, reader.scalar("a").getInt());
        assertEquals(rowId * 10, mapReader.scalar("b").getInt());
        assertTrue(Arrays.equals(value, mapReader.scalar("c").getBytes()));
        rowId++;
    }
    result.clear();
    // Next batch should start with the overflow row
    rsLoader.startBatch();
    assertEquals(1, rootWriter.rowCount());
    result = fixture.wrap(rsLoader.harvest());
    assertEquals(1, result.rowCount());
    reader = result.reader();
    mapReader = reader.tuple("m");
    while (reader.next()) {
        assertEquals(rowId, reader.scalar("a").getInt());
        assertEquals(rowId * 10, mapReader.scalar("b").getInt());
        assertTrue(Arrays.equals(value, mapReader.scalar("c").getBytes()));
        assertEquals(rowId * 100, mapReader.scalar("d").getInt());
    }
    result.clear();
    rsLoader.close();
}
Also used : TupleReader(org.apache.drill.exec.vector.accessor.TupleReader) SingleRowSet(org.apache.drill.test.rowSet.RowSet.SingleRowSet) RowSet(org.apache.drill.test.rowSet.RowSet) ResultSetLoader(org.apache.drill.exec.physical.rowSet.ResultSetLoader) BatchSchema(org.apache.drill.exec.record.BatchSchema) TupleWriter(org.apache.drill.exec.vector.accessor.TupleWriter) TupleMetadata(org.apache.drill.exec.record.metadata.TupleMetadata) SchemaBuilder(org.apache.drill.test.rowSet.schema.SchemaBuilder) RowSetLoader(org.apache.drill.exec.physical.rowSet.RowSetLoader) RowSetReader(org.apache.drill.test.rowSet.RowSetReader) ScalarWriter(org.apache.drill.exec.vector.accessor.ScalarWriter) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
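
The schema-evolution step at the heart of this test is a single call: addColumn() on the map's TupleWriter. A column added while the overflow row is in flight is carried into the next batch rather than into the batch about to be harvested. A minimal sketch of just that step, assuming the mWriter declared above; the column name "d" and the value are illustrative.

// Add a nullable INT member to map "m" mid-batch (sketch; mWriter as
// declared above). If this runs while an overflow row is pending, the
// new column first appears in the next harvested batch.
mWriter.addColumn(SchemaBuilder.columnSchema("d", MinorType.INT,
    DataMode.OPTIONAL));
mWriter.scalar("d").setInt(42);    // writable immediately after adding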

Aggregations

ScalarWriter (org.apache.drill.exec.vector.accessor.ScalarWriter): 120 uses
TupleMetadata (org.apache.drill.exec.record.metadata.TupleMetadata): 69 uses
SubOperatorTest (org.apache.drill.test.SubOperatorTest): 68 uses
Test (org.junit.Test): 68 uses
SchemaBuilder (org.apache.drill.exec.record.metadata.SchemaBuilder): 51 uses
SingleRowSet (org.apache.drill.exec.physical.rowSet.RowSet.SingleRowSet): 44 uses
ScalarReader (org.apache.drill.exec.vector.accessor.ScalarReader): 31 uses
ArrayWriter (org.apache.drill.exec.vector.accessor.ArrayWriter): 26 uses
RowSetLoader (org.apache.drill.exec.physical.resultSet.RowSetLoader): 25 uses
ResultSetLoader (org.apache.drill.exec.physical.resultSet.ResultSetLoader): 24 uses
TupleWriter (org.apache.drill.exec.vector.accessor.TupleWriter): 23 uses
ArrayReader (org.apache.drill.exec.vector.accessor.ArrayReader): 22 uses
RowSet (org.apache.drill.exec.physical.rowSet.RowSet): 21 uses
ExtendableRowSet (org.apache.drill.exec.physical.rowSet.RowSet.ExtendableRowSet): 19 uses
SchemaBuilder (org.apache.drill.test.rowSet.schema.SchemaBuilder): 18 uses
ColumnMetadata (org.apache.drill.exec.record.metadata.ColumnMetadata): 17 uses
TupleReader (org.apache.drill.exec.vector.accessor.TupleReader): 17 uses
SingleRowSet (org.apache.drill.test.rowSet.RowSet.SingleRowSet): 14 uses
RowSetReader (org.apache.drill.test.rowSet.RowSetReader): 14 uses
ResultSetLoader (org.apache.drill.exec.physical.rowSet.ResultSetLoader): 13 uses