
Example 1 with DatanodeIDProto

Use of org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto in the Apache Hadoop project.

From the class TestPBHelper, method testConvertDatanodeID:

@Test
public void testConvertDatanodeID() {
    DatanodeID dn = DFSTestUtil.getLocalDatanodeID();
    DatanodeIDProto dnProto = PBHelperClient.convert(dn);
    DatanodeID dn2 = PBHelperClient.convert(dnProto);
    compare(dn, dn2);
}
Also used: DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID), DatanodeIDProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto), Test (org.junit.Test)
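PBHelperClient gives each domain type a pair of overloaded convert methods, one toward the generated protobuf message and one back, and the test above checks that this round trip is lossless. Below is a minimal self-contained sketch of the same round-trip idiom, using hypothetical NodeId/NodeIdProto stand-ins, since the real DatanodeID and DatanodeIDProto types require the Hadoop and protobuf jars:

```java
// Hypothetical stand-ins for DatanodeID and the generated DatanodeIDProto;
// the real classes carry more fields (ip address, hostname, several ports).
final class NodeId {
    final String ipAddr;
    final int xferPort;
    NodeId(String ipAddr, int xferPort) { this.ipAddr = ipAddr; this.xferPort = xferPort; }
    @Override public boolean equals(Object o) {
        return o instanceof NodeId && ((NodeId) o).ipAddr.equals(ipAddr)
                && ((NodeId) o).xferPort == xferPort;
    }
    @Override public int hashCode() { return ipAddr.hashCode() * 31 + xferPort; }
}

final class NodeIdProto {  // plays the role of the generated protobuf message
    final String ipAddr;
    final int xferPort;
    NodeIdProto(String ipAddr, int xferPort) { this.ipAddr = ipAddr; this.xferPort = xferPort; }
}

public class RoundTrip {
    // Mirrors PBHelperClient.convert(DatanodeID) -> DatanodeIDProto
    static NodeIdProto convert(NodeId dn) { return new NodeIdProto(dn.ipAddr, dn.xferPort); }
    // Mirrors PBHelperClient.convert(DatanodeIDProto) -> DatanodeID
    static NodeId convert(NodeIdProto p) { return new NodeId(p.ipAddr, p.xferPort); }

    public static void main(String[] args) {
        NodeId dn = new NodeId("127.0.0.1", 50010);
        NodeId dn2 = convert(convert(dn));
        // The round trip must be lossless, which is what compare(dn, dn2) asserts above.
        if (!dn.equals(dn2)) throw new AssertionError("round trip lost data");
        System.out.println("round trip ok");
    }
}
```

Overloading both directions onto the same convert name keeps call sites symmetric, which is why the test body reads as two mirrored conversions followed by a field-by-field comparison.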

Example 2 with DatanodeIDProto

Use of org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto in the Apache Hadoop project.

From the class ClientNamenodeProtocolServerSideTranslatorPB, method updatePipeline:

@Override
public UpdatePipelineResponseProto updatePipeline(RpcController controller, UpdatePipelineRequestProto req) throws ServiceException {
    try {
        List<DatanodeIDProto> newNodes = req.getNewNodesList();
        List<String> newStorageIDs = req.getStorageIDsList();
        server.updatePipeline(req.getClientName(),
            PBHelperClient.convert(req.getOldBlock()),
            PBHelperClient.convert(req.getNewBlock()),
            PBHelperClient.convert(newNodes.toArray(new DatanodeIDProto[newNodes.size()])),
            newStorageIDs.toArray(new String[newStorageIDs.size()]));
        return VOID_UPDATEPIPELINE_RESPONSE;
    } catch (IOException e) {
        throw new ServiceException(e);
    }
}
Also used: ServiceException (com.google.protobuf.ServiceException), IOException (java.io.IOException), DatanodeIDProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto)

Example 3 with DatanodeIDProto

Use of org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto in the Apache Hadoop project.

From the class DatanodeProtocolServerSideTranslatorPB, method commitBlockSynchronization:

@Override
public CommitBlockSynchronizationResponseProto commitBlockSynchronization(RpcController controller, CommitBlockSynchronizationRequestProto request) throws ServiceException {
    // "Taragets" is not a transcription error: the proto field is misspelled as
    // newTaragets in DatanodeProtocol.proto, so the generated accessor keeps the typo.
    List<DatanodeIDProto> dnprotos = request.getNewTaragetsList();
    DatanodeID[] dns = new DatanodeID[dnprotos.size()];
    for (int i = 0; i < dnprotos.size(); i++) {
        dns[i] = PBHelperClient.convert(dnprotos.get(i));
    }
    final List<String> sidprotos = request.getNewTargetStoragesList();
    final String[] storageIDs = sidprotos.toArray(new String[sidprotos.size()]);
    try {
        impl.commitBlockSynchronization(PBHelperClient.convert(request.getBlock()),
            request.getNewGenStamp(), request.getNewLength(), request.getCloseFile(),
            request.getDeleteBlock(), dns, storageIDs);
    } catch (IOException e) {
        throw new ServiceException(e);
    }
    return VOID_COMMIT_BLOCK_SYNCHRONIZATION_RESPONSE_PROTO;
}
Also used: DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID), ServiceException (com.google.protobuf.ServiceException), IOException (java.io.IOException), DatanodeIDProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto)
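Examples 2 and 3 share one translator shape: convert each DatanodeIDProto in the request to its domain type, collect the results into an array, and rethrow any IOException from the underlying server call as a ServiceException, the only checked exception a protobuf RPC handler may propagate. A self-contained sketch of that shape, with hypothetical Proto/Domain/ServiceException stand-ins in place of the real Hadoop and protobuf classes:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class TranslatorSketch {
    // Hypothetical stand-ins: the real translator works with
    // HdfsProtos.DatanodeIDProto and DatanodeID from the Hadoop jars.
    static final class Proto {
        final String id;
        Proto(String id) { this.id = id; }
    }
    static final class Domain {
        final String id;
        Domain(String id) { this.id = id; }
    }
    // Stand-in for com.google.protobuf.ServiceException.
    static final class ServiceException extends Exception {
        ServiceException(Throwable cause) { super(cause); }
    }

    static Domain convert(Proto p) { return new Domain(p.id); }

    // Same shape as commitBlockSynchronization: convert each proto element,
    // collecting the results into a domain-typed array.
    static Domain[] convertAll(List<Proto> protos) {
        Domain[] out = new Domain[protos.size()];
        for (int i = 0; i < protos.size(); i++) {
            out[i] = convert(protos.get(i));
        }
        return out;
    }

    // Same shape as the catch blocks above: rethrow the checked IOException
    // as a ServiceException so it can cross the RPC boundary.
    static void callServer(boolean fail) throws ServiceException {
        try {
            if (fail) {
                throw new IOException("simulated server failure");
            }
        } catch (IOException e) {
            throw new ServiceException(e);
        }
    }

    public static void main(String[] args) throws ServiceException {
        Domain[] dns = convertAll(Arrays.asList(new Proto("dn1"), new Proto("dn2")));
        System.out.println(dns.length + " nodes converted");
        callServer(false); // no failure, so no exception is thrown
    }
}
```

Example 2 reaches the same result through toArray plus a varargs-style batch convert, while Example 3 spells out the index loop; both end in the identical catch-and-wrap of IOException.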

Aggregations

DatanodeIDProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeIDProto): 3 uses
ServiceException (com.google.protobuf.ServiceException): 2 uses
IOException (java.io.IOException): 2 uses
DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID): 2 uses
Test (org.junit.Test): 1 use