
Example 1 with Accessor

Use of org.apache.drill.exec.vector.ValueVector.Accessor in project drill by apache.

From the class WebUserConnection, method sendData:

@Override
public void sendData(RpcOutcomeListener<Ack> listener, QueryWritableBatch result) {
    // Check whether there is any data. There can be overflow here, but DrillBuf doesn't support allocating with
    // a byte count given as a long, so we just preserve the earlier behavior and log a debug message for that case.
    final int dataByteCount = (int) result.getByteCount();
    if (dataByteCount <= 0) {
        if (logger.isDebugEnabled()) {
            logger.debug("Either no data received in this batch or there is BufferOverflow in dataByteCount: {}", dataByteCount);
        }
        listener.success(Acks.OK, null);
        return;
    }
    // If here that means there is some data for sure. Create a ByteBuf with all the data in it.
    final int rows = result.getHeader().getRowCount();
    final BufferAllocator allocator = webSessionResources.getAllocator();
    final DrillBuf bufferWithData = allocator.buffer(dataByteCount);
    try {
        final ByteBuf[] resultDataBuffers = result.getBuffers();
        for (final ByteBuf buffer : resultDataBuffers) {
            bufferWithData.writeBytes(buffer);
            buffer.release();
        }
        final RecordBatchLoader loader = new RecordBatchLoader(allocator);
        try {
            loader.load(result.getHeader().getDef(), bufferWithData);
            // Note: load(...) no longer throws SchemaChangeException, so check/clean the catch clause below.
            for (int i = 0; i < loader.getSchema().getFieldCount(); ++i) {
                columns.add(loader.getSchema().getColumn(i).getPath());
            }
            for (int i = 0; i < rows; ++i) {
                final Map<String, String> record = Maps.newHashMap();
                for (VectorWrapper<?> vw : loader) {
                    final String field = vw.getValueVector().getMetadata().getNamePart().getName();
                    final Accessor accessor = vw.getValueVector().getAccessor();
                    final Object value = i < accessor.getValueCount() ? accessor.getObject(i) : null;
                    final String display = value == null ? null : value.toString();
                    record.put(field, display);
                }
                results.add(record);
            }
        } finally {
            loader.clear();
        }
    } catch (Exception e) {
        exception = UserException.systemError(e).build(logger);
    } finally {
        // Notify the listener with Acks.OK in both the error and success cases because the data was sent successfully from the Drillbit.
        bufferWithData.release();
        listener.success(Acks.OK, null);
    }
}
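
The per-row extraction above is the core Accessor pattern: resolve each column's name from the vector metadata, then read row values through getAccessor().getObject(i). Below is a minimal sketch of that loop factored into a standalone helper; the method name batchToMaps and the call to loader.getRecordCount() are assumptions for illustration, not part of the original code, and the batch is assumed to have been populated via loader.load(def, buffer) as shown above.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.drill.exec.record.RecordBatchLoader;
import org.apache.drill.exec.record.VectorWrapper;
import org.apache.drill.exec.vector.ValueVector.Accessor;

// Hypothetical helper: turn an already-loaded RecordBatchLoader into a list of row maps.
static List<Map<String, String>> batchToMaps(RecordBatchLoader loader) {
    final List<Map<String, String>> rows = new ArrayList<>();
    for (int i = 0; i < loader.getRecordCount(); ++i) {
        final Map<String, String> record = new HashMap<>();
        for (VectorWrapper<?> vw : loader) {
            // Column name comes from the vector metadata, as in sendData above.
            final String field = vw.getValueVector().getMetadata().getNamePart().getName();
            // The Accessor is the read side of a ValueVector: getObject(i) returns row i as a boxed Object.
            final Accessor accessor = vw.getValueVector().getAccessor();
            final Object value = i < accessor.getValueCount() ? accessor.getObject(i) : null;
            record.put(field, value == null ? null : value.toString());
        }
        rows.add(record);
    }
    return rows;
}

The guard i < accessor.getValueCount() mirrors the original loop and avoids reading past the end of a vector whose value count is smaller than the row count reported in the batch header.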
Also used : RecordBatchLoader(org.apache.drill.exec.record.RecordBatchLoader) ByteBuf(io.netty.buffer.ByteBuf) Accessor(org.apache.drill.exec.vector.ValueVector.Accessor) UserException(org.apache.drill.common.exceptions.UserException) BufferAllocator(org.apache.drill.exec.memory.BufferAllocator) DrillBuf(io.netty.buffer.DrillBuf)

Aggregations

ByteBuf (io.netty.buffer.ByteBuf) 1
DrillBuf (io.netty.buffer.DrillBuf) 1
UserException (org.apache.drill.common.exceptions.UserException) 1
BufferAllocator (org.apache.drill.exec.memory.BufferAllocator) 1
RecordBatchLoader (org.apache.drill.exec.record.RecordBatchLoader) 1
Accessor (org.apache.drill.exec.vector.ValueVector.Accessor) 1