
Example 1 with ChunkReaderContext

Use of com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext in project pinot by linkedin.

From class RawIndexCreatorTest, method readValueFromIndex:

/**
   * Helper method to read the value for the given row.
   *
   * @param rawIndexReader Index reader
   * @param dataType Data type of value to be read
   * @param row Row to read
   * @return Value read from index
   */
private Object readValueFromIndex(FixedByteChunkSingleValueReader rawIndexReader, FieldSpec.DataType dataType, int row) {
    Object actual;
    ChunkReaderContext context = rawIndexReader.createContext();
    switch(dataType) {
        case INT:
            actual = rawIndexReader.getInt(row, context);
            break;
        case LONG:
            actual = rawIndexReader.getLong(row, context);
            break;
        case FLOAT:
            actual = rawIndexReader.getFloat(row, context);
            break;
        case DOUBLE:
            actual = rawIndexReader.getDouble(row, context);
            break;
        default:
            throw new IllegalArgumentException("Illegal data type for fixed width raw index reader: " + dataType);
    }
    return actual;
}
Also used : ChunkReaderContext(com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext)

Example 2 with ChunkReaderContext

Use of com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext in project pinot by linkedin.

From class FixedByteChunkSingleValueReaderWriteTest, method testInt:

@Test
public void testInt() throws Exception {
    int[] expected = new int[NUM_VALUES];
    for (int i = 0; i < NUM_VALUES; i++) {
        expected[i] = _random.nextInt();
    }
    File outFile = new File(TEST_FILE);
    FileUtils.deleteQuietly(outFile);
    ChunkCompressor compressor = ChunkCompressorFactory.getCompressor("snappy");
    FixedByteChunkSingleValueWriter writer = new FixedByteChunkSingleValueWriter(outFile, compressor, NUM_VALUES, NUM_DOCS_PER_CHUNK, V1Constants.Numbers.INTEGER_SIZE);
    for (int i = 0; i < NUM_VALUES; i++) {
        writer.setInt(i, expected[i]);
    }
    writer.close();
    PinotDataBuffer pinotDataBuffer = PinotDataBuffer.fromFile(outFile, ReadMode.mmap, FileChannel.MapMode.READ_ONLY, getClass().getName());
    ChunkDecompressor uncompressor = ChunkCompressorFactory.getDecompressor("snappy");
    FixedByteChunkSingleValueReader reader = new FixedByteChunkSingleValueReader(pinotDataBuffer, uncompressor);
    ChunkReaderContext context = reader.createContext();
    for (int i = 0; i < NUM_VALUES; i++) {
        int actual = reader.getInt(i, context);
        Assert.assertEquals(actual, expected[i]);
    }
    reader.close();
    FileUtils.deleteQuietly(outFile);
}
Also used : ChunkCompressor(com.linkedin.pinot.core.io.compression.ChunkCompressor) PinotDataBuffer(com.linkedin.pinot.core.segment.memory.PinotDataBuffer) ChunkDecompressor(com.linkedin.pinot.core.io.compression.ChunkDecompressor) FixedByteChunkSingleValueReader(com.linkedin.pinot.core.io.reader.impl.v1.FixedByteChunkSingleValueReader) File(java.io.File) FixedByteChunkSingleValueWriter(com.linkedin.pinot.core.io.writer.impl.v1.FixedByteChunkSingleValueWriter) ChunkReaderContext(com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext) Test(org.testng.annotations.Test)

Example 3 with ChunkReaderContext

Use of com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext in project pinot by linkedin.

From class FixedByteChunkSingleValueReaderWriteTest, method testLong:

@Test
public void testLong() throws Exception {
    long[] expected = new long[NUM_VALUES];
    for (int i = 0; i < NUM_VALUES; i++) {
        expected[i] = _random.nextLong();
    }
    File outFile = new File(TEST_FILE);
    FileUtils.deleteQuietly(outFile);
    ChunkCompressor compressor = ChunkCompressorFactory.getCompressor("snappy");
    FixedByteChunkSingleValueWriter writer = new FixedByteChunkSingleValueWriter(outFile, compressor, NUM_VALUES, NUM_DOCS_PER_CHUNK, V1Constants.Numbers.LONG_SIZE);
    for (int i = 0; i < NUM_VALUES; i++) {
        writer.setLong(i, expected[i]);
    }
    writer.close();
    PinotDataBuffer pinotDataBuffer = PinotDataBuffer.fromFile(outFile, ReadMode.mmap, FileChannel.MapMode.READ_ONLY, getClass().getName());
    ChunkDecompressor uncompressor = ChunkCompressorFactory.getDecompressor("snappy");
    FixedByteChunkSingleValueReader reader = new FixedByteChunkSingleValueReader(pinotDataBuffer, uncompressor);
    ChunkReaderContext context = reader.createContext();
    for (int i = 0; i < NUM_VALUES; i++) {
        long actual = reader.getLong(i, context);
        Assert.assertEquals(actual, expected[i]);
    }
    reader.close();
    FileUtils.deleteQuietly(outFile);
}
Also used : ChunkCompressor(com.linkedin.pinot.core.io.compression.ChunkCompressor) PinotDataBuffer(com.linkedin.pinot.core.segment.memory.PinotDataBuffer) ChunkDecompressor(com.linkedin.pinot.core.io.compression.ChunkDecompressor) FixedByteChunkSingleValueReader(com.linkedin.pinot.core.io.reader.impl.v1.FixedByteChunkSingleValueReader) File(java.io.File) FixedByteChunkSingleValueWriter(com.linkedin.pinot.core.io.writer.impl.v1.FixedByteChunkSingleValueWriter) ChunkReaderContext(com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext) Test(org.testng.annotations.Test)

Example 4 with ChunkReaderContext

Use of com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext in project pinot by linkedin.

From class FixedByteChunkSingleValueReaderWriteTest, method testFloat:

@Test
public void testFloat() throws Exception {
    float[] expected = new float[NUM_VALUES];
    for (int i = 0; i < NUM_VALUES; i++) {
        expected[i] = _random.nextFloat();
    }
    File outFile = new File(TEST_FILE);
    FileUtils.deleteQuietly(outFile);
    ChunkCompressor compressor = ChunkCompressorFactory.getCompressor("snappy");
    FixedByteChunkSingleValueWriter writer = new FixedByteChunkSingleValueWriter(outFile, compressor, NUM_VALUES, NUM_DOCS_PER_CHUNK, V1Constants.Numbers.FLOAT_SIZE);
    for (int i = 0; i < NUM_VALUES; i++) {
        writer.setFloat(i, expected[i]);
    }
    writer.close();
    PinotDataBuffer pinotDataBuffer = PinotDataBuffer.fromFile(outFile, ReadMode.mmap, FileChannel.MapMode.READ_ONLY, getClass().getName());
    ChunkDecompressor uncompressor = ChunkCompressorFactory.getDecompressor("snappy");
    FixedByteChunkSingleValueReader reader = new FixedByteChunkSingleValueReader(pinotDataBuffer, uncompressor);
    ChunkReaderContext context = reader.createContext();
    for (int i = 0; i < NUM_VALUES; i++) {
        float actual = reader.getFloat(i, context);
        Assert.assertEquals(actual, expected[i]);
    }
    reader.close();
    FileUtils.deleteQuietly(outFile);
}
Also used : ChunkCompressor(com.linkedin.pinot.core.io.compression.ChunkCompressor) PinotDataBuffer(com.linkedin.pinot.core.segment.memory.PinotDataBuffer) ChunkDecompressor(com.linkedin.pinot.core.io.compression.ChunkDecompressor) FixedByteChunkSingleValueReader(com.linkedin.pinot.core.io.reader.impl.v1.FixedByteChunkSingleValueReader) File(java.io.File) FixedByteChunkSingleValueWriter(com.linkedin.pinot.core.io.writer.impl.v1.FixedByteChunkSingleValueWriter) ChunkReaderContext(com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext) Test(org.testng.annotations.Test)

Example 5 with ChunkReaderContext

Use of com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext in project pinot by linkedin.

From class FixedByteChunkSingleValueReaderWriteTest, method testDouble:

@Test
public void testDouble() throws Exception {
    double[] expected = new double[NUM_VALUES];
    for (int i = 0; i < NUM_VALUES; i++) {
        expected[i] = _random.nextDouble();
    }
    File outFile = new File(TEST_FILE);
    FileUtils.deleteQuietly(outFile);
    ChunkCompressor compressor = ChunkCompressorFactory.getCompressor("snappy");
    FixedByteChunkSingleValueWriter writer = new FixedByteChunkSingleValueWriter(outFile, compressor, NUM_VALUES, NUM_DOCS_PER_CHUNK, V1Constants.Numbers.DOUBLE_SIZE);
    for (int i = 0; i < NUM_VALUES; i++) {
        writer.setDouble(i, expected[i]);
    }
    writer.close();
    PinotDataBuffer pinotDataBuffer = PinotDataBuffer.fromFile(outFile, ReadMode.mmap, FileChannel.MapMode.READ_ONLY, getClass().getName());
    ChunkDecompressor uncompressor = ChunkCompressorFactory.getDecompressor("snappy");
    FixedByteChunkSingleValueReader reader = new FixedByteChunkSingleValueReader(pinotDataBuffer, uncompressor);
    ChunkReaderContext context = reader.createContext();
    for (int i = 0; i < NUM_VALUES; i++) {
        double actual = reader.getDouble(i, context);
        Assert.assertEquals(actual, expected[i]);
    }
    reader.close();
    FileUtils.deleteQuietly(outFile);
}
Also used : ChunkCompressor(com.linkedin.pinot.core.io.compression.ChunkCompressor) PinotDataBuffer(com.linkedin.pinot.core.segment.memory.PinotDataBuffer) ChunkDecompressor(com.linkedin.pinot.core.io.compression.ChunkDecompressor) FixedByteChunkSingleValueReader(com.linkedin.pinot.core.io.reader.impl.v1.FixedByteChunkSingleValueReader) File(java.io.File) FixedByteChunkSingleValueWriter(com.linkedin.pinot.core.io.writer.impl.v1.FixedByteChunkSingleValueWriter) ChunkReaderContext(com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext) Test(org.testng.annotations.Test)

Aggregations

ChunkReaderContext (com.linkedin.pinot.core.io.reader.impl.ChunkReaderContext): 7 uses
ChunkDecompressor (com.linkedin.pinot.core.io.compression.ChunkDecompressor): 6 uses
PinotDataBuffer (com.linkedin.pinot.core.segment.memory.PinotDataBuffer): 6 uses
Test (org.testng.annotations.Test): 6 uses
ChunkCompressor (com.linkedin.pinot.core.io.compression.ChunkCompressor): 5 uses
File (java.io.File): 5 uses
FixedByteChunkSingleValueReader (com.linkedin.pinot.core.io.reader.impl.v1.FixedByteChunkSingleValueReader): 4 uses
FixedByteChunkSingleValueWriter (com.linkedin.pinot.core.io.writer.impl.v1.FixedByteChunkSingleValueWriter): 4 uses
VarByteChunkSingleValueReader (com.linkedin.pinot.core.io.reader.impl.v1.VarByteChunkSingleValueReader): 2 uses
GenericRow (com.linkedin.pinot.core.data.GenericRow): 1 use
VarByteChunkSingleValueWriter (com.linkedin.pinot.core.io.writer.impl.v1.VarByteChunkSingleValueWriter): 1 use
Random (java.util.Random): 1 use
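The examples above all follow one pattern: the reader itself is stateless, and each caller obtains a ChunkReaderContext via createContext() that caches the most recently decoded chunk, so sequential reads within a chunk avoid repeated decoding. The following is a minimal, self-contained sketch of that pattern; the names FixedByteChunkStore and ChunkContext are illustrative, not Pinot's actual API, and compression is omitted for clarity:

```java
import java.nio.ByteBuffer;

/**
 * A minimal sketch of the chunked fixed-byte read pattern exercised above.
 * ChunkContext plays the role of Pinot's ChunkReaderContext: it caches the
 * most recently accessed chunk so that sequential reads within the same
 * chunk skip the chunk-lookup (in Pinot, decompression) step.
 */
public class FixedByteChunkStore {

    private static final int INT_SIZE = Integer.BYTES;

    private final ByteBuffer data; // all chunks laid out back to back
    private final int docsPerChunk;

    public FixedByteChunkStore(int[] values, int docsPerChunk) {
        this.docsPerChunk = docsPerChunk;
        this.data = ByteBuffer.allocate(values.length * INT_SIZE);
        for (int value : values) {
            data.putInt(value); // "write" phase: fixed-width encode each value
        }
    }

    /** Per-caller scratch state, analogous to ChunkReaderContext. */
    public static final class ChunkContext {
        private int chunkId = -1;       // id of the chunk currently cached
        private ByteBuffer chunkBuffer; // view of that chunk's bytes
    }

    public ChunkContext createContext() {
        return new ChunkContext();
    }

    public int getInt(int row, ChunkContext context) {
        int chunkId = row / docsPerChunk;
        if (context.chunkId != chunkId) {
            // Cache miss: slice out the requested chunk. A real reader
            // would decompress the chunk into the context's buffer here.
            int start = chunkId * docsPerChunk * INT_SIZE;
            int length = Math.min(docsPerChunk * INT_SIZE, data.capacity() - start);
            ByteBuffer dup = data.duplicate();
            dup.position(start);
            dup.limit(start + length);
            context.chunkBuffer = dup.slice();
            context.chunkId = chunkId;
        }
        // Absolute read within the cached chunk.
        return context.chunkBuffer.getInt((row % docsPerChunk) * INT_SIZE);
    }
}
```

As in the tests above, a caller creates one context per reading thread and passes it to every get call; because the context, not the reader, holds the mutable chunk cache, a single reader can safely serve many threads.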