
Example 11 with Decompressor

Use of org.apache.hadoop.io.compress.Decompressor in project hadoop by apache.

The class TestZlibCompressorDecompressor, method testZlibCompressorDecompressorWithConfiguration.

@Test
public void testZlibCompressorDecompressorWithConfiguration() {
    Configuration conf = new Configuration();
    if (ZlibFactory.isNativeZlibLoaded(conf)) {
        byte[] rawData;
        int tryNumber = 5;
        int BYTE_SIZE = 10 * 1024;
        Compressor zlibCompressor = ZlibFactory.getZlibCompressor(conf);
        Decompressor zlibDecompressor = ZlibFactory.getZlibDecompressor(conf);
        rawData = generate(BYTE_SIZE);
        try {
            for (int i = 0; i < tryNumber; i++) {
                compressDecompressZlib(rawData, (ZlibCompressor) zlibCompressor,
                    (ZlibDecompressor) zlibDecompressor);
            }
            zlibCompressor.reinit(conf);
        } catch (Exception ex) {
            fail("testZlibCompressorDecompressorWithConfiguration ex error " + ex);
        }
    } else {
        assertTrue("ZlibFactory is using native libs against request", ZlibFactory.isNativeZlibLoaded(conf));
    }
}
Also used: ZlibDirectDecompressor (org.apache.hadoop.io.compress.zlib.ZlibDecompressor.ZlibDirectDecompressor), Decompressor (org.apache.hadoop.io.compress.Decompressor), Configuration (org.apache.hadoop.conf.Configuration), Compressor (org.apache.hadoop.io.compress.Compressor), IOException (java.io.IOException), Test (org.junit.Test)
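The test above needs Hadoop's native zlib bindings loaded (hence the `isNativeZlibLoaded` guard). The same compress-then-decompress round trip can be sketched with the JDK's built-in `Deflater`/`Inflater`, which expose the same zlib algorithm without native Hadoop libraries. This is an illustrative analogue, not the test's `compressDecompressZlib` helper; the class and method names are invented for the sketch.

```java
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
    // Compress rawData with zlib, then decompress it, returning the
    // recovered bytes. A correct codec must reproduce the input exactly.
    public static byte[] roundTrip(byte[] rawData) throws Exception {
        Deflater deflater = new Deflater();
        deflater.setInput(rawData);
        deflater.finish();
        // Worst-case zlib expansion is tiny, so 2x + 64 is a safe one-shot buffer.
        byte[] compressed = new byte[rawData.length * 2 + 64];
        int compressedLen = deflater.deflate(compressed);
        deflater.end();

        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] restored = new byte[rawData.length];
        int restoredLen = inflater.inflate(restored);
        inflater.end();
        return Arrays.copyOf(restored, restoredLen);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "ten kilobytes of generated data, in miniature".getBytes("UTF-8");
        System.out.println(Arrays.equals(roundTrip(data), data));
    }
}
```

The Hadoop test additionally calls `reinit(conf)` afterwards to verify the compressor can be reconfigured and reused; `Deflater` has the analogous `reset()`.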

Example 12 with Decompressor

Use of org.apache.hadoop.io.compress.Decompressor in project hadoop by apache.

The class TestZStandardCompressorDecompressor, method testCompressionCompressesCorrectly.

@Test
public void testCompressionCompressesCorrectly() throws Exception {
    int uncompressedSize = (int) FileUtils.sizeOf(uncompressedFile);
    byte[] bytes = FileUtils.readFileToByteArray(uncompressedFile);
    assertEquals(uncompressedSize, bytes.length);
    Configuration conf = new Configuration();
    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Compressor compressor = codec.createCompressor();
    CompressionOutputStream outputStream = codec.createOutputStream(baos, compressor);
    for (byte aByte : bytes) {
        outputStream.write(aByte);
    }
    outputStream.finish();
    outputStream.close();
    assertEquals(uncompressedSize, compressor.getBytesRead());
    assertTrue(compressor.finished());
    // just make sure we can decompress the file
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
    Decompressor decompressor = codec.createDecompressor();
    CompressionInputStream inputStream = codec.createInputStream(bais, decompressor);
    byte[] buffer = new byte[100];
    int n = buffer.length;
    while ((n = inputStream.read(buffer, 0, n)) != -1) {
        byteArrayOutputStream.write(buffer, 0, n);
    }
    assertArrayEquals(bytes, byteArrayOutputStream.toByteArray());
}
Also used: CompressionOutputStream (org.apache.hadoop.io.compress.CompressionOutputStream), Decompressor (org.apache.hadoop.io.compress.Decompressor), Configuration (org.apache.hadoop.conf.Configuration), ByteArrayInputStream (java.io.ByteArrayInputStream), CompressionInputStream (org.apache.hadoop.io.compress.CompressionInputStream), Compressor (org.apache.hadoop.io.compress.Compressor), ZStandardCodec (org.apache.hadoop.io.compress.ZStandardCodec), ByteArrayOutputStream (java.io.ByteArrayOutputStream), Test (org.junit.Test)
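The test above follows the standard codec-stream pattern: write every byte through a `CompressionOutputStream`, call `finish()`, then drain a `CompressionInputStream` with a fixed-size buffer. A minimal sketch of that same pattern using the JDK's GZIP streams (chosen so the sketch runs without native libraries; it is an analogue of the pattern, not the `ZStandardCodec` API, and the class name is invented):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class StreamRoundTrip {
    public static byte[] roundTrip(byte[] bytes) throws IOException {
        // Compress: write through the compressing stream; close() flushes the
        // final block, playing the role of finish() + close() in the test.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream out = new GZIPOutputStream(baos);
        out.write(bytes);
        out.close();

        // Decompress: drain with a fixed-size buffer, mirroring the
        // while ((n = read(...)) != -1) loop in the test.
        ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
        GZIPInputStream in = new GZIPInputStream(bais);
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        byte[] buffer = new byte[100];
        int n;
        while ((n = in.read(buffer, 0, buffer.length)) != -1) {
            restored.write(buffer, 0, n);
        }
        in.close();
        return restored.toByteArray();
    }
}
```

One subtlety in the Hadoop test: it reuses `n` both as the requested length and as the bytes read, so after a short read the next call requests fewer bytes. That stays correct, since `read` never returns more than requested, but requesting `buffer.length` each time, as above, is the more common idiom.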

Example 13 with Decompressor

Use of org.apache.hadoop.io.compress.Decompressor in project hadoop by apache.

The class TestZStandardCompressorDecompressor, method testCompressingWithOneByteOutputBuffer.

@Test
public void testCompressingWithOneByteOutputBuffer() throws Exception {
    int uncompressedSize = (int) FileUtils.sizeOf(uncompressedFile);
    byte[] bytes = FileUtils.readFileToByteArray(uncompressedFile);
    assertEquals(uncompressedSize, bytes.length);
    Configuration conf = new Configuration();
    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Compressor compressor = new ZStandardCompressor(3, IO_FILE_BUFFER_SIZE_DEFAULT, 1);
    CompressionOutputStream outputStream = codec.createOutputStream(baos, compressor);
    for (byte aByte : bytes) {
        outputStream.write(aByte);
    }
    outputStream.finish();
    outputStream.close();
    assertEquals(uncompressedSize, compressor.getBytesRead());
    assertTrue(compressor.finished());
    // just make sure we can decompress the file
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
    Decompressor decompressor = codec.createDecompressor();
    CompressionInputStream inputStream = codec.createInputStream(bais, decompressor);
    byte[] buffer = new byte[100];
    int n = buffer.length;
    while ((n = inputStream.read(buffer, 0, n)) != -1) {
        byteArrayOutputStream.write(buffer, 0, n);
    }
    assertArrayEquals(bytes, byteArrayOutputStream.toByteArray());
}
Also used: CompressionOutputStream (org.apache.hadoop.io.compress.CompressionOutputStream), Decompressor (org.apache.hadoop.io.compress.Decompressor), Configuration (org.apache.hadoop.conf.Configuration), ByteArrayInputStream (java.io.ByteArrayInputStream), CompressionInputStream (org.apache.hadoop.io.compress.CompressionInputStream), Compressor (org.apache.hadoop.io.compress.Compressor), ZStandardCodec (org.apache.hadoop.io.compress.ZStandardCodec), ByteArrayOutputStream (java.io.ByteArrayOutputStream), Test (org.junit.Test)
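The point of this test is the third constructor argument: a one-byte output buffer, which forces the compressor to emit its output a byte at a time and exercises the refill path. The same edge case can be sketched with the JDK's `Deflater`, driving it with a one-byte output array (an analogue under that assumption, not the `ZStandardCompressor` internals):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class OneByteBuffer {
    // Compress with a one-byte output buffer: each deflate() call emits at
    // most one byte, so the loop runs once per compressed byte. Slow, but
    // the compressed output is identical to using a large buffer.
    public static byte[] deflateOneByteAtATime(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] one = new byte[1];
        while (!deflater.finished()) {
            int n = deflater.deflate(one);
            out.write(one, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    // Helper for verifying the round trip, given the original length.
    public static byte[] inflate(byte[] compressed, int originalLen) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] restored = new byte[originalLen];
        int n = inflater.inflate(restored);
        inflater.end();
        return Arrays.copyOf(restored, n);
    }
}
```

Tiny buffers are a classic source of off-by-one and refill bugs in codec wrappers, which is why the Hadoop test pins the buffer size at 1 rather than relying on the default.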

Example 14 with Decompressor

Use of org.apache.hadoop.io.compress.Decompressor in project asterixdb by apache.

The class CodecPool, method getDecompressor.

/**
   * Get a {@link Decompressor} for the given {@link CompressionCodec} from the
   * pool or a new one.
   * 
   * @param codec
   *          the <code>CompressionCodec</code> for which to get the
   *          <code>Decompressor</code>
   * @return <code>Decompressor</code> for the given
   *         <code>CompressionCodec</code> from the pool, or a new one
   */
public static Decompressor getDecompressor(CompressionCodec codec) {
    Decompressor decompressor = borrow(DECOMPRESSOR_POOL, codec.getDecompressorType());
    if (decompressor == null) {
        decompressor = codec.createDecompressor();
        LOG.info("Got brand-new decompressor");
    } else {
        LOG.debug("Got recycled decompressor");
    }
    return decompressor;
}
Also used: Decompressor (org.apache.hadoop.io.compress.Decompressor)
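The `borrow` helper pops an instance from a per-type pool, and `CodecPool` pairs `getDecompressor` with a `returnDecompressor` that resets and re-pools the instance. A hedged sketch of that borrow-or-create pattern with a plain concurrent queue (simplified: the real CodecPool keys its pools by decompressor class and the class and method names below are invented for the sketch):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final ConcurrentLinkedQueue<T> pool = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Borrow from the pool, or create a brand-new instance when it is empty:
    // the same null-check-then-create shape as CodecPool.getDecompressor.
    public T borrow() {
        T item = pool.poll();
        return (item != null) ? item : factory.get();
    }

    // Return an instance so later borrow() calls can recycle it. CodecPool
    // also calls reset() on the decompressor before re-pooling; a real pool
    // must clear state here or hand out dirty instances.
    public void giveBack(T item) {
        pool.offer(item);
    }
}
```

Pooling matters for codecs in particular because native compressor and decompressor instances hold direct buffers and native handles that are expensive to allocate per record.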

Aggregations

Decompressor (org.apache.hadoop.io.compress.Decompressor): 14
Test (org.junit.Test): 9
Configuration (org.apache.hadoop.conf.Configuration): 7
CompressionInputStream (org.apache.hadoop.io.compress.CompressionInputStream): 6
Compressor (org.apache.hadoop.io.compress.Compressor): 5
ZlibDirectDecompressor (org.apache.hadoop.io.compress.zlib.ZlibDecompressor.ZlibDirectDecompressor): 4
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 3
IOException (java.io.IOException): 3
InputStream (java.io.InputStream): 3
ZStandardCodec (org.apache.hadoop.io.compress.ZStandardCodec): 3
ByteArrayInputStream (java.io.ByteArrayInputStream): 2
File (java.io.File): 2
URL (java.net.URL): 2
HashSet (java.util.HashSet): 2
Path (org.apache.hadoop.fs.Path): 2
BZip2Codec (org.apache.hadoop.io.compress.BZip2Codec): 2
CompressionOutputStream (org.apache.hadoop.io.compress.CompressionOutputStream): 2
BufferedInputStream (java.io.BufferedInputStream): 1
FileInputStream (java.io.FileInputStream): 1
Configurable (org.apache.hadoop.conf.Configurable): 1