Example 41 with TException

use of org.apache.thrift.TException in project zeppelin by apache.

the class RemoteInterpreter method interpret.

@Override
public InterpreterResult interpret(String st, InterpreterContext context) {
    if (logger.isDebugEnabled()) {
        logger.debug("st:\n{}", st);
    }
    FormType form = getFormType();
    RemoteInterpreterProcess interpreterProcess = getInterpreterProcess();
    Client client = null;
    try {
        client = interpreterProcess.getClient();
    } catch (Exception e1) {
        throw new InterpreterException(e1);
    }
    InterpreterContextRunnerPool interpreterContextRunnerPool = interpreterProcess.getInterpreterContextRunnerPool();
    List<InterpreterContextRunner> runners = context.getRunners();
    if (runners != null && runners.size() != 0) {
        // assume all runners in this InterpreterContext have the same note id
        String noteId = runners.get(0).getNoteId();
        interpreterContextRunnerPool.clear(noteId);
        interpreterContextRunnerPool.addAll(noteId, runners);
    }
    boolean broken = false;
    try {
        final GUI currentGUI = context.getGui();
        RemoteInterpreterResult remoteResult = client.interpret(sessionKey, className, st, convert(context));
        Map<String, Object> remoteConfig = (Map<String, Object>) gson.fromJson(remoteResult.getConfig(), new TypeToken<Map<String, Object>>() {
        }.getType());
        context.getConfig().clear();
        context.getConfig().putAll(remoteConfig);
        if (form == FormType.NATIVE) {
            GUI remoteGui = gson.fromJson(remoteResult.getGui(), GUI.class);
            currentGUI.clear();
            currentGUI.setParams(remoteGui.getParams());
            currentGUI.setForms(remoteGui.getForms());
        } else if (form == FormType.SIMPLE) {
            final Map<String, Input> currentForms = currentGUI.getForms();
            final Map<String, Object> currentParams = currentGUI.getParams();
            final GUI remoteGUI = gson.fromJson(remoteResult.getGui(), GUI.class);
            final Map<String, Input> remoteForms = remoteGUI.getForms();
            final Map<String, Object> remoteParams = remoteGUI.getParams();
            currentForms.putAll(remoteForms);
            currentParams.putAll(remoteParams);
        }
        InterpreterResult result = convert(remoteResult);
        return result;
    } catch (TException e) {
        broken = true;
        throw new InterpreterException(e);
    } finally {
        interpreterProcess.releaseClient(client, broken);
    }
}
Also used : TException(org.apache.thrift.TException) RemoteInterpreterResult(org.apache.zeppelin.interpreter.thrift.RemoteInterpreterResult) GUI(org.apache.zeppelin.display.GUI) AngularObject(org.apache.zeppelin.display.AngularObject) Client(org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService.Client)
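
Examples 41-43 all follow the same acquire/call/release pattern: take a Thrift Client from the interpreter process, attempt the RPC, and release the client with a broken flag so the pool can discard the connection when a TException signals a transport-level failure. Below is a minimal, self-contained sketch of that pattern; ThriftClientPool, Client, and call() are hypothetical stand-ins for Zeppelin's RemoteInterpreterProcess and its generated Thrift client, not names from the Zeppelin source.

import org.apache.thrift.TException;

// Sketch of the acquire/call/release-with-broken-flag pattern from Examples 41-43.
// ThriftClientPool and Client are hypothetical interfaces standing in for
// RemoteInterpreterProcess and RemoteInterpreterService.Client.
public class BrokenFlagTemplate {

    interface Client {
        String call(String arg) throws TException;
    }

    interface ThriftClientPool {
        Client getClient() throws Exception;

        // 'broken' tells the pool to close the connection instead of reusing it
        void releaseClient(Client client, boolean broken);
    }

    public static String invoke(ThriftClientPool pool, String arg) {
        Client client;
        try {
            client = pool.getClient();
        } catch (Exception e) {
            throw new RuntimeException("could not acquire Thrift client", e);
        }
        boolean broken = false;
        try {
            return client.call(arg);
        } catch (TException e) {
            // A TException may mean the underlying socket is dead; flag the
            // client so releaseClient() discards it rather than pooling it.
            broken = true;
            throw new RuntimeException(e);
        } finally {
            pool.releaseClient(client, broken);
        }
    }
}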

Example 42 with TException

use of org.apache.thrift.TException in project zeppelin by apache.

the class RemoteInterpreter method completion.

@Override
public List<InterpreterCompletion> completion(String buf, int cursor) {
    RemoteInterpreterProcess interpreterProcess = getInterpreterProcess();
    Client client = null;
    try {
        client = interpreterProcess.getClient();
    } catch (Exception e1) {
        throw new InterpreterException(e1);
    }
    boolean broken = false;
    try {
        List<InterpreterCompletion> completion = client.completion(sessionKey, className, buf, cursor);
        return completion;
    } catch (TException e) {
        broken = true;
        throw new InterpreterException(e);
    } finally {
        interpreterProcess.releaseClient(client, broken);
    }
}
Also used : TException(org.apache.thrift.TException) Client(org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService.Client)

Example 43 with TException

use of org.apache.thrift.TException in project zeppelin by apache.

the class RemoteInterpreter method getFormType.

@Override
public FormType getFormType() {
    init();
    if (formType != null) {
        return formType;
    }
    RemoteInterpreterProcess interpreterProcess = getInterpreterProcess();
    Client client = null;
    try {
        client = interpreterProcess.getClient();
    } catch (Exception e1) {
        throw new InterpreterException(e1);
    }
    boolean broken = false;
    try {
        formType = FormType.valueOf(client.getFormType(sessionKey, className));
        return formType;
    } catch (TException e) {
        broken = true;
        throw new InterpreterException(e);
    } finally {
        interpreterProcess.releaseClient(client, broken);
    }
}
Also used : TException(org.apache.thrift.TException) Client(org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService.Client)

Example 44 with TException

use of org.apache.thrift.TException in project hive by apache.

the class Hive method loadPartition.

/**
   * Loads a directory into a Hive table partition, altering the existing
   * content of the partition with the contents of loadPath. If the partition
   * does not exist, it is created. Files in loadPath are moved into Hive, but
   * the source directory itself is not removed.
   *
   * @param loadPath
   *          directory containing the files to load into the table
   * @param tbl
   *          the table to be loaded
   * @param partSpec
   *          defines which partition needs to be loaded
   * @param replace
   *          if true, replace the files in the partition; otherwise add files
   *          to the partition
   * @param inheritTableSpecs if true, on [re]creating the partition, take the
   *          location/inputformat/outputformat/serde details from the table spec
   * @param isSkewedStoreAsSubdir
   *          if true, the partition is stored as list-bucketing sub-directories
   * @param isSrcLocal
   *          if the source directory is on the local filesystem
   * @param isAcid true if this is an ACID operation
   * @param hasFollowingStatsTask
   *          true if a stats-gathering task will run after this load
   */
public Partition loadPartition(Path loadPath, Table tbl, Map<String, String> partSpec, boolean replace, boolean inheritTableSpecs, boolean isSkewedStoreAsSubdir, boolean isSrcLocal, boolean isAcid, boolean hasFollowingStatsTask) throws HiveException {
    Path tblDataLocationPath = tbl.getDataLocation();
    try {
        Partition oldPart = getPartition(tbl, partSpec, false);
        /**
       * Move files before creating the partition, since downstream processes
       * check for the existence of the partition in metadata before accessing
       * the data. If the partition is created before the data is moved,
       * downstream waiting processes might move forward with partial data.
       */
        Path oldPartPath = (oldPart != null) ? oldPart.getDataLocation() : null;
        Path newPartPath = null;
        if (inheritTableSpecs) {
            Path partPath = new Path(tbl.getDataLocation(), Warehouse.makePartPath(partSpec));
            newPartPath = new Path(tblDataLocationPath.toUri().getScheme(), tblDataLocationPath.toUri().getAuthority(), partPath.toUri().getPath());
            if (oldPart != null) {
                /*
           * If we are moving the partition across filesystem boundaries
           * inherit from the table properties. Otherwise (same filesystem) use the
           * original partition location.
           *
           * See: HIVE-1707 and HIVE-2117 for background
           */
                FileSystem oldPartPathFS = oldPartPath.getFileSystem(getConf());
                FileSystem loadPathFS = loadPath.getFileSystem(getConf());
                if (FileUtils.equalsFileSystem(oldPartPathFS, loadPathFS)) {
                    newPartPath = oldPartPath;
                }
            }
        } else {
            newPartPath = oldPartPath;
        }
        List<Path> newFiles = null;
        PerfLogger perfLogger = SessionState.getPerfLogger();
        perfLogger.PerfLogBegin("MoveTask", "FileMoves");
        if (replace || (oldPart == null && !isAcid)) {
            replaceFiles(tbl.getPath(), loadPath, newPartPath, oldPartPath, getConf(), isSrcLocal);
        } else {
            if (conf.getBoolVar(ConfVars.FIRE_EVENTS_FOR_DML) && !tbl.isTemporary() && oldPart != null) {
                newFiles = Collections.synchronizedList(new ArrayList<Path>());
            }
            FileSystem fs = tbl.getDataLocation().getFileSystem(conf);
            Hive.copyFiles(conf, loadPath, newPartPath, fs, isSrcLocal, isAcid, newFiles);
        }
        perfLogger.PerfLogEnd("MoveTask", "FileMoves");
        Partition newTPart = oldPart != null ? oldPart : new Partition(tbl, partSpec, newPartPath);
        alterPartitionSpecInMemory(tbl, partSpec, newTPart.getTPartition(), inheritTableSpecs, newPartPath.toString());
        validatePartition(newTPart);
        if ((null != newFiles) || replace) {
            fireInsertEvent(tbl, partSpec, newFiles);
        } else {
            LOG.debug("No new files were created, and is not a replace. Skipping generating INSERT event.");
        }
        // column stats will be inaccurate after the load, so clear them
        StatsSetupConst.clearColumnStatsState(newTPart.getParameters());
        // for list-bucketing tables, rebuild the skewed-value-to-location mappings from the sub-directory names
        if (isSkewedStoreAsSubdir) {
            org.apache.hadoop.hive.metastore.api.Partition newCreatedTpart = newTPart.getTPartition();
            SkewedInfo skewedInfo = newCreatedTpart.getSd().getSkewedInfo();
            /* Construct list bucketing location mappings from sub-directory name. */
            Map<List<String>, String> skewedColValueLocationMaps = constructListBucketingLocationMap(newPartPath, skewedInfo);
            /* Add list bucketing location mappings. */
            skewedInfo.setSkewedColValueLocationMaps(skewedColValueLocationMaps);
            newCreatedTpart.getSd().setSkewedInfo(skewedInfo);
        }
        if (!this.getConf().getBoolVar(HiveConf.ConfVars.HIVESTATSAUTOGATHER)) {
            StatsSetupConst.setBasicStatsState(newTPart.getParameters(), StatsSetupConst.FALSE);
        }
        if (oldPart == null) {
            newTPart.getTPartition().setParameters(new HashMap<String, String>());
            if (this.getConf().getBoolVar(HiveConf.ConfVars.HIVESTATSAUTOGATHER)) {
                StatsSetupConst.setBasicStatsStateForCreateTable(newTPart.getParameters(), StatsSetupConst.TRUE);
            }
            MetaStoreUtils.populateQuickStats(HiveStatsUtils.getFileStatusRecurse(newPartPath, -1, newPartPath.getFileSystem(conf)), newTPart.getParameters());
            try {
                LOG.debug("Adding new partition " + newTPart.getSpec());
                getSychronizedMSC().add_partition(newTPart.getTPartition());
            } catch (AlreadyExistsException aee) {
                // Multiple users concurrently issuing insert statements on the same partition can have
                // the side effect that a query may not see the partition at the time it is issued,
                // but then discovers the partition does exist when it tries to add it to the metastore,
                // getting an AlreadyExistsException because an earlier query just created it (race condition).
                // For example, imagine such a table is created:
                //  create table T (name char(50)) partitioned by (ds string);
                // and the following two queries are launched at the same time, from different sessions:
                //  insert into table T partition (ds) values ('Bob', 'today'); -- creates the partition 'today'
                //  insert into table T partition (ds) values ('Joe', 'today'); -- will fail with AlreadyExistsException
                // In that case, we want to retry with alterPartition.
                LOG.debug("Caught AlreadyExistsException, trying to alter partition instead");
                setStatsPropAndAlterPartition(hasFollowingStatsTask, tbl, newTPart);
            }
        } else {
            setStatsPropAndAlterPartition(hasFollowingStatsTask, tbl, newTPart);
        }
        return newTPart;
    } catch (IOException | TException e) {
        // MetaException and InvalidOperationException both extend TException, so
        // this single handler covers all four original failure modes identically.
        LOG.error(StringUtils.stringifyException(e));
        throw new HiveException(e);
    }
}
Also used : Path(org.apache.hadoop.fs.Path) TException(org.apache.thrift.TException) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) PerfLogger(org.apache.hadoop.hive.ql.log.PerfLogger) ArrayList(java.util.ArrayList) IOException(java.io.IOException) SkewedInfo(org.apache.hadoop.hive.metastore.api.SkewedInfo) FileSystem(org.apache.hadoop.fs.FileSystem) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) List(java.util.List) LinkedList(java.util.LinkedList) MetaException(org.apache.hadoop.hive.metastore.api.MetaException) HiveMetaException(org.apache.hadoop.hive.metastore.HiveMetaException)
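
For orientation, here is a hedged sketch of how loadPartition might be invoked; the table name, staging path, and partition values are illustrative assumptions, not taken from the Hive source above.

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table;

public class LoadPartitionSketch {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        Hive db = Hive.get(conf);
        // "web_logs" and the staging path are hypothetical example values.
        Table tbl = db.getTable("web_logs");
        Map<String, String> partSpec = new LinkedHashMap<>();
        partSpec.put("ds", "2017-01-01");
        Partition p = db.loadPartition(
            new Path("/tmp/staging/web_logs/ds=2017-01-01"), // loadPath
            tbl,
            partSpec,
            true,   // replace: overwrite any existing files in the partition
            true,   // inheritTableSpecs: take location/serde details from the table
            false,  // isSkewedStoreAsSubdir
            false,  // isSrcLocal: loadPath is on HDFS, not the local filesystem
            false,  // isAcid
            false); // hasFollowingStatsTask
        System.out.println("Loaded partition: " + p.getName());
    }
}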

Example 45 with TException

use of org.apache.thrift.TException in project hive by apache.

the class Hive method fireInsertEvent.

private void fireInsertEvent(Table tbl, Map<String, String> partitionSpec, List<Path> newFiles) throws HiveException {
    if (conf.getBoolVar(ConfVars.FIRE_EVENTS_FOR_DML)) {
        LOG.debug("Firing dml insert event");
        if (tbl.isTemporary()) {
            LOG.debug("Not firing dml insert event as " + tbl.getTableName() + " is temporary");
            return;
        }
        try {
            FileSystem fileSystem = tbl.getDataLocation().getFileSystem(conf);
            FireEventRequestData data = new FireEventRequestData();
            InsertEventRequestData insertData = new InsertEventRequestData();
            data.setInsertData(insertData);
            if (newFiles != null && newFiles.size() > 0) {
                for (Path p : newFiles) {
                    insertData.addToFilesAdded(p.toString());
                    FileChecksum cksum = fileSystem.getFileChecksum(p);
                    // File checksum is not implemented for local filesystem (RawLocalFileSystem)
                    if (cksum != null) {
                        String checksumString = StringUtils.byteToHexString(cksum.getBytes(), 0, cksum.getLength());
                        insertData.addToFilesAddedChecksum(checksumString);
                    } else {
                        // Add an empty checksum string for filesystems that don't generate one
                        insertData.addToFilesAddedChecksum("");
                    }
                }
            } else {
                insertData.setFilesAdded(new ArrayList<String>());
            }
            FireEventRequest rqst = new FireEventRequest(true, data);
            rqst.setDbName(tbl.getDbName());
            rqst.setTableName(tbl.getTableName());
            if (partitionSpec != null && partitionSpec.size() > 0) {
                List<String> partVals = new ArrayList<String>(partitionSpec.size());
                for (FieldSchema fs : tbl.getPartitionKeys()) {
                    partVals.add(partitionSpec.get(fs.getName()));
                }
                rqst.setPartitionVals(partVals);
            }
            getMSC().fireListenerEvent(rqst);
        } catch (IOException | TException e) {
            throw new HiveException(e);
        }
    }
}
Also used : Path(org.apache.hadoop.fs.Path) TException(org.apache.thrift.TException) FieldSchema(org.apache.hadoop.hive.metastore.api.FieldSchema) ArrayList(java.util.ArrayList) IOException(java.io.IOException) FileChecksum(org.apache.hadoop.fs.FileChecksum) FileSystem(org.apache.hadoop.fs.FileSystem) FireEventRequestData(org.apache.hadoop.hive.metastore.api.FireEventRequestData) FireEventRequest(org.apache.hadoop.hive.metastore.api.FireEventRequest) InsertEventRequestData(org.apache.hadoop.hive.metastore.api.InsertEventRequestData)
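
The checksum branch above is the subtle part: HDFS supplies a FileChecksum, RawLocalFileSystem returns null, and the null case must still append a placeholder so the files-added and checksum lists stay index-aligned. A small self-contained sketch of just that conversion, assuming only the Hadoop client API:

import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.StringUtils;

public class ChecksumSketch {
    // Mirrors the checksum handling in fireInsertEvent: hex-encode the checksum
    // bytes when present, otherwise return "" so list positions stay aligned.
    static String checksumOrEmpty(FileSystem fs, Path p) throws IOException {
        FileChecksum cksum = fs.getFileChecksum(p); // null on local filesystems
        return cksum == null
            ? ""
            : StringUtils.byteToHexString(cksum.getBytes(), 0, cksum.getLength());
    }
}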

Aggregations

TException (org.apache.thrift.TException): 381
IOException (java.io.IOException): 164
MetaException (org.apache.hadoop.hive.metastore.api.MetaException): 57
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException): 48
ArrayList (java.util.ArrayList): 42
HashMap (java.util.HashMap): 40
Table (org.apache.hadoop.hive.metastore.api.Table): 38
Map (java.util.Map): 30
TBinaryProtocol (org.apache.thrift.protocol.TBinaryProtocol): 29
AuthorizationException (org.apache.storm.generated.AuthorizationException): 27
Test (org.junit.Test): 26
List (java.util.List): 25
InvalidObjectException (org.apache.hadoop.hive.metastore.api.InvalidObjectException): 24
UnknownHostException (java.net.UnknownHostException): 23
TProtocol (org.apache.thrift.protocol.TProtocol): 23
FileNotFoundException (java.io.FileNotFoundException): 22
HiveSQLException (org.apache.hive.service.cli.HiveSQLException): 22
InvalidMetaException (com.netflix.metacat.common.server.connectors.exception.InvalidMetaException): 21
LoginException (javax.security.auth.login.LoginException): 21
ConnectorException (com.netflix.metacat.common.server.connectors.exception.ConnectorException): 20