Use of java.io.BufferedInputStream in project hadoop by apache: the class TestAliyunOSSFileSystemStore, method writeRenameReadCompare.
protected void writeRenameReadCompare(Path path, long len)
    throws IOException, NoSuchAlgorithmException {
  // If len > fs.oss.multipart.upload.threshold,
  // we'll use a multipart upload copy
  MessageDigest digest = MessageDigest.getInstance("MD5");
  OutputStream out = new BufferedOutputStream(
      new DigestOutputStream(fs.create(path, false), digest));
  for (long i = 0; i < len; i++) {
    out.write('Q');
  }
  out.flush();
  out.close();
  assertTrue("Exists", fs.exists(path));

  Path copyPath = path.suffix(".copy");
  fs.rename(path, copyPath);
  assertTrue("Copy exists", fs.exists(copyPath));

  // Download file from Aliyun OSS and compare the digest against the original
  MessageDigest digest2 = MessageDigest.getInstance("MD5");
  InputStream in = new BufferedInputStream(
      new DigestInputStream(fs.open(copyPath), digest2));
  long copyLen = 0;
  while (in.read() != -1) {
    copyLen++;
  }
  in.close();

  assertEquals("Copy length matches original", len, copyLen);
  assertArrayEquals("Digests match", digest.digest(), digest2.digest());
}
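A minimal sketch of how this helper might be driven from a JUnit test in the same class, covering both the single-part and the multipart rename paths. The test name, object paths, and sizes below are illustrative assumptions, not values taken from the Hadoop test suite.

import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

// Hypothetical test method; assumes it lives in the same test class,
// where fs and writeRenameReadCompare are available.
@Test
public void testRenameBelowAndAboveMultipartThreshold()
    throws IOException, NoSuchAlgorithmException {
  // Small object: rename copies in a single request.
  writeRenameReadCompare(new Path("/test/small.bin"), 1024);
  // Large object (size is an illustrative assumption): rename takes the
  // multipart upload copy path described in the comment above.
  writeRenameReadCompare(new Path("/test/large.bin"), 6 * 1024 * 1024);
}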
Use of java.io.BufferedInputStream in project hadoop by apache: the class StreamXmlRecordReader, method init.
public final void init() throws IOException {
  LOG.info("StreamBaseRecordReader.init: " +
      " start_=" + start_ + " end_=" + end_ + " length_=" + length_ +
      " start_ > in_.getPos() =" + (start_ > in_.getPos()) +
      " " + start_ + " > " + in_.getPos());
  // Position the underlying seekable stream at the start of this split.
  if (start_ > in_.getPos()) {
    in_.seek(start_);
  }
  pos_ = start_;
  // Wrap the raw stream in a BufferedInputStream for byte-level scanning.
  bin_ = new BufferedInputStream(in_);
  seekNextRecordBoundary();
}
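The same pattern can be shown in isolation: seek the underlying seekable HDFS stream to the split start, then layer a BufferedInputStream on top for the byte-at-a-time scanning that seekNextRecordBoundary() performs. The helper below is a hypothetical standalone sketch, not part of StreamXmlRecordReader.

import java.io.BufferedInputStream;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper mirroring init(): position the raw stream at the split
// start, then add buffering for cheap single-byte reads.
static BufferedInputStream openAtSplitStart(FileSystem fs, Path file, long splitStart)
    throws IOException {
  FSDataInputStream raw = fs.open(file);
  if (splitStart > raw.getPos()) {
    raw.seek(splitStart);
  }
  // Note: once buffering is added, the raw stream's position can run ahead of
  // what has been consumed; the record reader above tracks pos_ itself.
  return new BufferedInputStream(raw);
}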
Use of java.io.BufferedInputStream in project groovy by apache: the class NioGroovyMethods, method eachByte.
/**
* Traverse through each byte of this Path
*
* @param self a Path
* @param closure a closure
* @throws java.io.IOException if an IOException occurs.
* @see org.codehaus.groovy.runtime.IOGroovyMethods#eachByte(java.io.InputStream, groovy.lang.Closure)
* @since 2.3.0
*/
public static void eachByte(Path self,
    @ClosureParams(value = SimpleType.class, options = "byte") Closure closure)
    throws IOException {
  BufferedInputStream is = newInputStream(self);
  IOGroovyMethods.eachByte(is, closure);
}
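For reference, a plain-Java sketch of roughly what this call amounts to, assuming the private newInputStream(Path) helper simply wraps Files.newInputStream in a BufferedInputStream and that IOGroovyMethods.eachByte reads one byte at a time and closes the stream when done. An IntConsumer stands in for the Groovy closure; the method name is hypothetical.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.IntConsumer;

static void eachByteSketch(Path path, IntConsumer perByte) throws IOException {
  try (BufferedInputStream is = new BufferedInputStream(Files.newInputStream(path))) {
    int b;
    while ((b = is.read()) != -1) {
      perByte.accept(b);  // called once per byte, like the Groovy closure
    }
  }
}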
Use of java.io.BufferedInputStream in project groovy by apache: the class NioGroovyMethods, method eachByte (bufferLen overload).
/**
* Traverse through the bytes of this Path, bufferLen bytes at a time.
*
* @param self a Path
* @param bufferLen the length of the buffer to use.
* @param closure a 2 parameter closure which is passed the byte[] and a number of bytes successfully read.
* @throws java.io.IOException if an IOException occurs.
* @see org.codehaus.groovy.runtime.IOGroovyMethods#eachByte(java.io.InputStream, int, groovy.lang.Closure)
* @since 2.3.0
*/
public static void eachByte(Path self, int bufferLen,
    @ClosureParams(value = FromString.class, options = "byte[],Integer") Closure closure)
    throws IOException {
  BufferedInputStream is = newInputStream(self);
  IOGroovyMethods.eachByte(is, bufferLen, closure);
}
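A corresponding sketch for the chunked overload, under the same assumptions as above: fill a byte[bufferLen] buffer from the buffered stream and hand the callback the array together with the number of bytes actually read, mirroring the (byte[], Integer) closure contract.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.ObjIntConsumer;

static void eachByteChunkedSketch(Path path, int bufferLen, ObjIntConsumer<byte[]> perChunk)
    throws IOException {
  try (BufferedInputStream is = new BufferedInputStream(Files.newInputStream(path))) {
    byte[] buffer = new byte[bufferLen];
    int read;
    while ((read = is.read(buffer, 0, bufferLen)) != -1) {
      perChunk.accept(buffer, read);  // read may be < bufferLen on the final chunk
    }
  }
}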
Use of java.io.BufferedInputStream in project hadoop by apache: the class Credentials, method readTokenStorageFile.
/**
 * Convenience method for reading a token storage file and loading its Tokens.
 * @param filename the local token storage file to read
 * @param conf the Hadoop configuration
 * @return the loaded Credentials
 * @throws IOException if the file cannot be read
 */
public static Credentials readTokenStorageFile(File filename, Configuration conf)
    throws IOException {
  DataInputStream in = null;
  Credentials credentials = new Credentials();
  try {
    in = new DataInputStream(new BufferedInputStream(new FileInputStream(filename)));
    credentials.readTokenStorageStream(in);
    return credentials;
  } catch (IOException ioe) {
    throw new IOException("Exception reading " + filename, ioe);
  } finally {
    IOUtils.cleanup(LOG, in);
  }
}
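A minimal usage sketch: load a token storage file previously written with Credentials#writeTokenStorageFile and report how many tokens it contains. The file path here is an illustrative assumption.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;

public static void main(String[] args) throws IOException {
  Configuration conf = new Configuration();
  // Path is hypothetical; point this at a real token storage file in practice.
  Credentials creds = Credentials.readTokenStorageFile(new File("/tmp/tokens.bin"), conf);
  System.out.println("Loaded " + creds.numberOfTokens() + " tokens");
}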