
Example 6 with SolrException

Use of org.apache.solr.common.SolrException in project lucene-solr by apache.

From the class UIMAUpdateRequestProcessorFactory, the getInstance method:

@Override
public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp, UpdateRequestProcessor next) {
    SolrUIMAConfiguration configuration = new SolrUIMAConfigurationReader(args).readSolrUIMAConfiguration();
    synchronized (this) {
        if (ae == null && pool == null) {
            AEProvider aeProvider = AEProviderFactory.getInstance().getAEProvider(req.getCore().getName(), configuration.getAePath(), configuration.getRuntimeParameters());
            try {
                ae = aeProvider.getAE();
                pool = new JCasPool(10, ae);
            } catch (ResourceInitializationException e) {
                throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
            }
        }
    }
    return new UIMAUpdateRequestProcessor(next, req.getCore().getName(), configuration, ae, pool);
}
Also used: AEProvider (org.apache.lucene.analysis.uima.ae.AEProvider), ResourceInitializationException (org.apache.uima.resource.ResourceInitializationException), JCasPool (org.apache.uima.util.JCasPool), SolrException (org.apache.solr.common.SolrException)
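The factory above lazily initializes shared resources inside a synchronized block and converts the checked ResourceInitializationException into an unchecked SolrException. The same pattern can be sketched in plain Java; the Resource and InitException types below are hypothetical stand-ins for the UIMA AnalysisEngine/JCasPool and ResourceInitializationException, not Solr or UIMA APIs:

```java
// Sketch of lazy, synchronized, one-time initialization that wraps a
// checked initialization failure in an unchecked exception, mirroring
// UIMAUpdateRequestProcessorFactory.getInstance. All types here are
// illustrative stand-ins.
class LazyResourceFactory {
    static class InitException extends Exception {
        InitException(String msg) { super(msg); }
    }

    static class Resource {
        final String name;
        Resource(String name) { this.name = name; }
    }

    private Resource resource; // shared, created at most once

    Resource getInstance(boolean failInit) {
        synchronized (this) {
            if (resource == null) {
                try {
                    if (failInit) throw new InitException("cannot initialize");
                    resource = new Resource("shared");
                } catch (InitException e) {
                    // mirrors: throw new SolrException(ErrorCode.SERVER_ERROR, e)
                    throw new RuntimeException(e);
                }
            }
        }
        return resource;
    }
}
```

Every call after the first returns the same shared instance, and an initialization failure surfaces to the caller as a runtime exception, just as the factory surfaces a SERVER_ERROR.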

Example 7 with SolrException

Use of org.apache.solr.common.SolrException in project lucene-solr by apache.

From the class ApiBag, the getCommandOperations method:

public static List<CommandOperation> getCommandOperations(Reader reader, Map<String, JsonSchemaValidator> validators, boolean validate) {
    List<CommandOperation> parsedCommands = null;
    try {
        parsedCommands = CommandOperation.parse(reader);
    } catch (IOException e) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
    }
    if (validators == null || !validate) {
        // no validation possible because we do not have a spec
        return parsedCommands;
    }
    List<CommandOperation> commandsCopy = CommandOperation.clone(parsedCommands);
    for (CommandOperation cmd : commandsCopy) {
        JsonSchemaValidator validator = validators.get(cmd.name);
        if (validator == null) {
            cmd.addError(formatString("Unknown operation ''{0}'' available ops are ''{1}''", cmd.name, validators.keySet()));
            continue;
        } else {
            List<String> errs = validator.validateJson(cmd.getCommandData());
            if (errs != null)
                for (String err : errs) cmd.addError(err);
        }
    }
    List<Map> errs = CommandOperation.captureErrors(commandsCopy);
    if (!errs.isEmpty()) {
        throw new ExceptionWithErrObject(SolrException.ErrorCode.BAD_REQUEST, "Error in command payload", errs);
    }
    return commandsCopy;
}
Also used: CommandOperation (org.apache.solr.common.util.CommandOperation), IOException (java.io.IOException), StrUtils.formatString (org.apache.solr.common.util.StrUtils.formatString), HashMap (java.util.HashMap), Map (java.util.Map), ValidatingJsonMap (org.apache.solr.common.util.ValidatingJsonMap), ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap), SolrException (org.apache.solr.common.SolrException), JsonSchemaValidator (org.apache.solr.util.JsonSchemaValidator)
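The key idea in getCommandOperations is to validate every command, accumulate errors per command, and fail the whole request only at the end, so the caller sees all problems at once rather than just the first. A simplified sketch with plain Java; Command and the validator functions are stand-ins for Solr's CommandOperation and JsonSchemaValidator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of the ApiBag pattern: look up a validator per command name,
// collect errors on each command, and throw once at the end if any
// command failed. Types are illustrative stand-ins, not Solr APIs.
class CommandValidation {
    static class Command {
        final String name;
        final List<String> errors = new ArrayList<>();
        Command(String name) { this.name = name; }
    }

    static List<Command> validate(List<Command> cmds,
                                  Map<String, Function<Command, List<String>>> validators) {
        for (Command cmd : cmds) {
            Function<Command, List<String>> v = validators.get(cmd.name);
            if (v == null) {
                // unknown operation: record the error and keep going
                cmd.errors.add("Unknown operation '" + cmd.name
                        + "', available ops are " + validators.keySet());
                continue;
            }
            List<String> errs = v.apply(cmd);
            if (errs != null) cmd.errors.addAll(errs);
        }
        // capture errors across all commands; fail the batch only if any exist
        List<String> all = new ArrayList<>();
        for (Command cmd : cmds) all.addAll(cmd.errors);
        if (!all.isEmpty()) {
            // mirrors: throw new ExceptionWithErrObject(BAD_REQUEST, ...)
            throw new IllegalArgumentException("Error in command payload: " + all);
        }
        return cmds;
    }
}
```

A validator returning null (or an empty list) means the command passed, matching how validateJson is treated in the original.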

Example 8 with SolrException

Use of org.apache.solr.common.SolrException in project lucene-solr by apache.

From the class SolrCore, the getSearcher method:

/**
   * Get a {@link SolrIndexSearcher} or start the process of creating a new one.
   * <p>
   * The registered searcher is the default searcher used to service queries.
   * A searcher will normally be registered after all of the warming
   * and event handlers (newSearcher or firstSearcher events) have run.
   * In the case where there is no registered searcher, the newly created searcher will
   * be registered before running the event handlers (a slow searcher is better than no searcher).
   *
   * <p>
   * These searchers contain read-only IndexReaders. To access a non read-only IndexReader,
   * see newSearcher(String name, boolean readOnly).
   *
   * <p>
   * If <tt>forceNew==true</tt> then a new searcher will be opened and registered regardless
   * of whether there is already a registered searcher or other searchers in the process
   * of being created.
   * <p>
   * If <tt>forceNew==false</tt> then:<ul>
   *   <li>If a searcher is already registered, that searcher will be returned</li>
   *   <li>If no searcher is currently registered, but at least one is in the process of being created, then
   * this call will block until the first searcher is registered</li>
   *   <li>If no searcher is currently registered, and none are in the process of being created, a new
   * searcher will be created.</li>
   * </ul>
   * <p>
   * If <tt>returnSearcher==true</tt> then a {@link RefCounted}&lt;{@link SolrIndexSearcher}&gt; will be returned with
   * the reference count incremented.  It <b>must</b> be decremented when no longer needed.
   * <p>
   * If <tt>waitSearcher!=null</tt> and a new {@link SolrIndexSearcher} was created,
   * then it is filled in with a Future that will return after the searcher is registered.  The Future may be set to
   * <tt>null</tt> in which case the SolrIndexSearcher created has already been registered at the time
   * this method returned.
   * <p>
   * @param forceNew             if true, force the open of a new index searcher regardless if there is already one open.
   * @param returnSearcher       if true, returns a {@link SolrIndexSearcher} holder with the refcount already incremented.
   * @param waitSearcher         if non-null, will be filled in with a {@link Future} that will return after the new searcher is registered.
   * @param updateHandlerReopens if true, the UpdateHandler will be used when reopening a {@link SolrIndexSearcher}.
   */
public RefCounted<SolrIndexSearcher> getSearcher(boolean forceNew, boolean returnSearcher, final Future[] waitSearcher, boolean updateHandlerReopens) {
    synchronized (searcherLock) {
        for (; ; ) {
            // see if we can return the current searcher
            if (_searcher != null && !forceNew) {
                if (returnSearcher) {
                    _searcher.incref();
                    return _searcher;
                } else {
                    return null;
                }
            }
            // check to see if we can wait for someone else's searcher to be set
            if (onDeckSearchers > 0 && !forceNew && _searcher == null) {
                try {
                    searcherLock.wait();
                } catch (InterruptedException e) {
                    log.info(SolrException.toStr(e));
                }
            }
            // check again: see if we can return right now
            if (_searcher != null && !forceNew) {
                if (returnSearcher) {
                    _searcher.incref();
                    return _searcher;
                } else {
                    return null;
                }
            }
            // At this point, we know we need to open a new searcher...
            // first: increment count to signal other threads that we are
            //        opening a new searcher.
            onDeckSearchers++;
            newSearcherCounter.inc();
            if (onDeckSearchers < 1) {
                // should never happen... just a sanity check
                log.error(logid + "ERROR!!! onDeckSearchers is " + onDeckSearchers);
                // reset
                onDeckSearchers = 1;
            } else if (onDeckSearchers > maxWarmingSearchers) {
                onDeckSearchers--;
                newSearcherMaxReachedCounter.inc();
                try {
                    searcherLock.wait();
                } catch (InterruptedException e) {
                    log.info(SolrException.toStr(e));
                }
                // go back to the top of the loop and retry
                continue;
            } else if (onDeckSearchers > 1) {
                log.warn(logid + "PERFORMANCE WARNING: Overlapping onDeckSearchers=" + onDeckSearchers);
            }
            // I can now exit the loop and proceed to open a searcher
            break;
        }
    }
    // a signal to decrement onDeckSearchers if something goes wrong.
    final boolean[] decrementOnDeckCount = new boolean[] { true };
    // searcher we are autowarming from
    RefCounted<SolrIndexSearcher> currSearcherHolder = null;
    RefCounted<SolrIndexSearcher> searchHolder = null;
    boolean success = false;
    openSearcherLock.lock();
    Timer.Context timerContext = newSearcherTimer.time();
    try {
        searchHolder = openNewSearcher(updateHandlerReopens, false);
        // increment it again if we are going to return it to the caller.
        if (returnSearcher) {
            searchHolder.incref();
        }
        final RefCounted<SolrIndexSearcher> newSearchHolder = searchHolder;
        final SolrIndexSearcher newSearcher = newSearchHolder.get();
        boolean alreadyRegistered = false;
        synchronized (searcherLock) {
            if (_searcher == null) {
                // want to register this one before warming is complete instead of waiting.
                if (solrConfig.useColdSearcher) {
                    registerSearcher(newSearchHolder);
                    decrementOnDeckCount[0] = false;
                    alreadyRegistered = true;
                }
            } else {
                // get a reference to the current searcher for purposes of autowarming.
                currSearcherHolder = _searcher;
                currSearcherHolder.incref();
            }
        }
        final SolrIndexSearcher currSearcher = currSearcherHolder == null ? null : currSearcherHolder.get();
        Future future = null;
        // if the underlying searcher has not changed, no warming is needed
        if (newSearcher != currSearcher) {
            // should this go before the other event handlers or after?
            if (currSearcher != null) {
                future = searcherExecutor.submit(() -> {
                    Timer.Context warmupContext = newSearcherWarmupTimer.time();
                    try {
                        newSearcher.warm(currSearcher);
                    } catch (Throwable e) {
                        SolrException.log(log, e);
                        if (e instanceof Error) {
                            throw (Error) e;
                        }
                    } finally {
                        warmupContext.close();
                    }
                    return null;
                });
            }
            if (currSearcher == null) {
                future = searcherExecutor.submit(() -> {
                    try {
                        for (SolrEventListener listener : firstSearcherListeners) {
                            listener.newSearcher(newSearcher, null);
                        }
                    } catch (Throwable e) {
                        SolrException.log(log, null, e);
                        if (e instanceof Error) {
                            throw (Error) e;
                        }
                    }
                    return null;
                });
            }
            if (currSearcher != null) {
                future = searcherExecutor.submit(() -> {
                    try {
                        for (SolrEventListener listener : newSearcherListeners) {
                            listener.newSearcher(newSearcher, currSearcher);
                        }
                    } catch (Throwable e) {
                        SolrException.log(log, null, e);
                        if (e instanceof Error) {
                            throw (Error) e;
                        }
                    }
                    return null;
                });
            }
        }
        // WARNING: this code assumes a single threaded executor (that all tasks
        // queued will finish first).
        final RefCounted<SolrIndexSearcher> currSearcherHolderF = currSearcherHolder;
        if (!alreadyRegistered) {
            future = searcherExecutor.submit(() -> {
                try {
                    // registerSearcher will decrement onDeckSearchers and
                    // do a notify, even if it fails.
                    registerSearcher(newSearchHolder);
                } catch (Throwable e) {
                    SolrException.log(log, e);
                    if (e instanceof Error) {
                        throw (Error) e;
                    }
                } finally {
                    // for warming...
                    if (currSearcherHolderF != null)
                        currSearcherHolderF.decref();
                }
                return null;
            });
        }
        if (waitSearcher != null) {
            waitSearcher[0] = future;
        }
        success = true;
        // callers may wait on the waitSearcher future returned.
        return returnSearcher ? newSearchHolder : null;
    } catch (Exception e) {
        if (e instanceof SolrException)
            throw (SolrException) e;
        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
    } finally {
        timerContext.close();
        if (!success) {
            newSearcherOtherErrorsCounter.inc();
            synchronized (searcherLock) {
                onDeckSearchers--;
                if (onDeckSearchers < 0) {
                    // sanity check... should never happen
                    log.error(logid + "ERROR!!! onDeckSearchers after decrement=" + onDeckSearchers);
                    // try and recover
                    onDeckSearchers = 0;
                }
                // if we failed, we need to wake up at least one waiter to continue the process
                searcherLock.notify();
            }
            if (currSearcherHolder != null) {
                currSearcherHolder.decref();
            }
            if (searchHolder != null) {
                // decrement 1 for _searcher (searchHolder will never become _searcher now)
                searchHolder.decref();
                if (returnSearcher) {
                    // decrement 1 because we won't be returning the searcher to the user
                    searchHolder.decref();
                }
            }
        }
        // we want to do this after we decrement onDeckSearchers so another thread
        // doesn't increment first and throw a false warning.
        openSearcherLock.unlock();
    }
}
Also used: IOContext (org.apache.lucene.store.IOContext), MDCLoggingContext (org.apache.solr.logging.MDCLoggingContext), DirContext (org.apache.solr.core.DirectoryFactory.DirContext), LeafReaderContext (org.apache.lucene.index.LeafReaderContext), SolrIndexSearcher (org.apache.solr.search.SolrIndexSearcher), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), IOException (java.io.IOException), NoSuchFileException (java.nio.file.NoSuchFileException), SolrException (org.apache.solr.common.SolrException), FileNotFoundException (java.io.FileNotFoundException), KeeperException (org.apache.zookeeper.KeeperException), Timer (com.codahale.metrics.Timer), Future (java.util.concurrent.Future)
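The concurrency core of getSearcher is the onDeckSearchers counter guarded by searcherLock: a thread increments it to announce an in-flight searcher, waits when the warming limit is reached, and waiters are notified when a searcher registers or the attempt fails. That bounded-in-flight idea can be sketched on its own with a plain Java monitor; the class and method names below are hypothetical, and this deliberately omits the registration, refcounting, and warming logic:

```java
// Minimal sketch of the onDeckSearchers idea from SolrCore.getSearcher:
// a counter guarded by a monitor bounds how many warming tasks may be in
// flight; extra threads wait for a slot, and finishers always notify so
// waiters wake up even when a task fails.
class WarmingGate {
    private final Object lock = new Object();
    private final int maxInFlight; // analogous to maxWarmingSearchers
    private int inFlight = 0;      // analogous to onDeckSearchers

    WarmingGate(int maxInFlight) { this.maxInFlight = maxInFlight; }

    void acquire() throws InterruptedException {
        synchronized (lock) {
            while (inFlight >= maxInFlight) {
                lock.wait(); // parked until release() notifies
            }
            inFlight++;
        }
    }

    void release() {
        synchronized (lock) {
            inFlight--;
            if (inFlight < 0) inFlight = 0; // sanity reset, as in SolrCore
            lock.notifyAll(); // wake waiters even if the task failed
        }
    }

    int inFlight() {
        synchronized (lock) { return inFlight; }
    }
}
```

Callers should pair acquire() with release() in a try/finally, which is exactly why getSearcher decrements onDeckSearchers and calls notify() in its finally block when anything goes wrong.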

Example 9 with SolrException

Use of org.apache.solr.common.SolrException in project lucene-solr by apache.

From the class SolrCore, the writeNewIndexProps method:

/**
   * Write the index.properties file with the new index sub directory name
   * @param dir a data directory (containing an index.properties file)
   * @param tmpFileName the file name to write the new index.properties to
   * @param tmpIdxDirName new index directory name
   */
private static void writeNewIndexProps(Directory dir, String tmpFileName, String tmpIdxDirName) {
    if (tmpFileName == null) {
        tmpFileName = IndexFetcher.INDEX_PROPERTIES;
    }
    final Properties p = new Properties();
    // Read existing properties
    try {
        final IndexInput input = dir.openInput(IndexFetcher.INDEX_PROPERTIES, DirectoryFactory.IOCONTEXT_NO_CACHE);
        final InputStream is = new PropertiesInputStream(input);
        try {
            p.load(new InputStreamReader(is, StandardCharsets.UTF_8));
        } catch (Exception e) {
            log.error("Unable to load " + IndexFetcher.INDEX_PROPERTIES, e);
        } finally {
            IOUtils.closeQuietly(is);
        }
    } catch (IOException e) {
    // ignore; file does not exist
    }
    p.put("index", tmpIdxDirName);
    // Write new properties
    Writer os = null;
    try {
        IndexOutput out = dir.createOutput(tmpFileName, DirectoryFactory.IOCONTEXT_NO_CACHE);
        os = new OutputStreamWriter(new PropertiesOutputStream(out), StandardCharsets.UTF_8);
        p.store(os, IndexFetcher.INDEX_PROPERTIES);
        dir.sync(Collections.singleton(tmpFileName));
    } catch (Exception e) {
        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Unable to write " + IndexFetcher.INDEX_PROPERTIES, e);
    } finally {
        IOUtils.closeQuietly(os);
    }
}
Also used: InputStreamReader (java.io.InputStreamReader), PropertiesInputStream (org.apache.solr.util.PropertiesInputStream), InputStream (java.io.InputStream), IndexInput (org.apache.lucene.store.IndexInput), IndexOutput (org.apache.lucene.store.IndexOutput), OutputStreamWriter (java.io.OutputStreamWriter), IOException (java.io.IOException), Properties (java.util.Properties), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), NoSuchFileException (java.nio.file.NoSuchFileException), SolrException (org.apache.solr.common.SolrException), FileNotFoundException (java.io.FileNotFoundException), KeeperException (org.apache.zookeeper.KeeperException), PHPSerializedResponseWriter (org.apache.solr.response.PHPSerializedResponseWriter), XMLResponseWriter (org.apache.solr.response.XMLResponseWriter), SolrIndexWriter (org.apache.solr.update.SolrIndexWriter), SmileResponseWriter (org.apache.solr.response.SmileResponseWriter), GeoJSONResponseWriter (org.apache.solr.response.GeoJSONResponseWriter), BinaryResponseWriter (org.apache.solr.response.BinaryResponseWriter), PythonResponseWriter (org.apache.solr.response.PythonResponseWriter), JSONResponseWriter (org.apache.solr.response.JSONResponseWriter), SchemaXmlResponseWriter (org.apache.solr.response.SchemaXmlResponseWriter), IndexWriter (org.apache.lucene.index.IndexWriter), Writer (java.io.Writer), PHPResponseWriter (org.apache.solr.response.PHPResponseWriter), QueryResponseWriter (org.apache.solr.response.QueryResponseWriter), GraphMLResponseWriter (org.apache.solr.response.GraphMLResponseWriter), RubyResponseWriter (org.apache.solr.response.RubyResponseWriter), CSVResponseWriter (org.apache.solr.response.CSVResponseWriter), RawResponseWriter (org.apache.solr.response.RawResponseWriter), PropertiesOutputStream (org.apache.solr.util.PropertiesOutputStream)
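writeNewIndexProps is a read-modify-write cycle on a properties file: load the existing entries if the file exists (a missing file is not an error), update the "index" key, and write the file back. With java.nio.file in place of Lucene's Directory/IndexInput/IndexOutput abstraction, the same cycle looks roughly like this; the file and directory names are examples:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch of the read-modify-write pattern in SolrCore.writeNewIndexProps,
// using plain JDK file I/O instead of Lucene's Directory API. The
// property key "index" matches the snippet above; paths are illustrative.
class IndexPropsWriter {
    static void writeNewIndexProps(Path dir, String fileName, String newIndexDir)
            throws IOException {
        Path propsFile = dir.resolve(fileName);
        Properties p = new Properties();
        // Read existing properties; ignore a missing file, as the original does.
        if (Files.exists(propsFile)) {
            try (Reader r = Files.newBufferedReader(propsFile, StandardCharsets.UTF_8)) {
                p.load(r);
            }
        }
        p.put("index", newIndexDir);
        // Write the updated properties back out.
        try (Writer w = Files.newBufferedWriter(propsFile, StandardCharsets.UTF_8)) {
            p.store(w, "index.properties");
        }
    }
}
```

The original additionally syncs the directory (dir.sync(...)) so the rename of the active index directory survives a crash; plain Files.newBufferedWriter gives no such durability guarantee.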

Example 10 with SolrException

Use of org.apache.solr.common.SolrException in project lucene-solr by apache.

From the class SolrResourceLoader, the persistConfLocally method:

public static void persistConfLocally(SolrResourceLoader loader, String resourceName, byte[] content) {
    // Persist locally
    File confFile = new File(loader.getConfigDir(), resourceName);
    try {
        File parentDir = confFile.getParentFile();
        if (!parentDir.isDirectory()) {
            if (!parentDir.mkdirs()) {
                final String msg = "Can't create managed schema directory " + parentDir.getAbsolutePath();
                log.error(msg);
                throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, msg);
            }
        }
        try (OutputStream out = new FileOutputStream(confFile)) {
            out.write(content);
        }
        log.info("Written conf file " + resourceName);
    } catch (IOException e) {
        final String msg = "Error persisting conf file " + resourceName;
        log.error(msg, e);
        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, msg, e);
    } finally {
        try {
            IOUtils.fsync(confFile.toPath(), false);
        } catch (IOException e) {
            final String msg = "Error syncing conf file " + resourceName;
            log.error(msg, e);
        }
    }
}
Also used: OutputStream (java.io.OutputStream), FileOutputStream (java.io.FileOutputStream), IOException (java.io.IOException), File (java.io.File), SolrException (org.apache.solr.common.SolrException)
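persistConfLocally combines three steps: create the parent directories, write the content, then fsync so the data survives a crash, wrapping any failure in a SERVER_ERROR SolrException. A JDK-only sketch of the same steps; Lucene's IOUtils.fsync is replaced by FileChannel.force, and the RuntimeException stands in for SolrException:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of SolrResourceLoader.persistConfLocally with the JDK only:
// ensure the parent directory exists, write the bytes, then force them
// to disk. RuntimeException here stands in for SolrException(SERVER_ERROR).
class ConfPersister {
    static void persistConfLocally(Path configDir, String resourceName, byte[] content) {
        Path confFile = configDir.resolve(resourceName);
        try {
            Files.createDirectories(confFile.getParent()); // like parentDir.mkdirs()
            Files.write(confFile, content);
            // fsync the file so the write survives a power loss, mirroring
            // IOUtils.fsync(confFile.toPath(), false) in the original
            try (FileChannel ch = FileChannel.open(confFile, StandardOpenOption.WRITE)) {
                ch.force(true);
            }
        } catch (IOException e) {
            throw new RuntimeException("Error persisting conf file " + resourceName, e);
        }
    }
}
```

Unlike the original, this version also fails if the fsync itself fails; the Solr code only logs a sync error because the write has already succeeded at that point.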

Aggregations

SolrException (org.apache.solr.common.SolrException): 617 usages
IOException (java.io.IOException): 172 usages
ArrayList (java.util.ArrayList): 100 usages
ModifiableSolrParams (org.apache.solr.common.params.ModifiableSolrParams): 80 usages
NamedList (org.apache.solr.common.util.NamedList): 79 usages
HashMap (java.util.HashMap): 75 usages
Map (java.util.Map): 70 usages
SolrParams (org.apache.solr.common.params.SolrParams): 64 usages
KeeperException (org.apache.zookeeper.KeeperException): 60 usages
Test (org.junit.Test): 55 usages
Replica (org.apache.solr.common.cloud.Replica): 48 usages
Slice (org.apache.solr.common.cloud.Slice): 45 usages
DocCollection (org.apache.solr.common.cloud.DocCollection): 41 usages
SolrInputDocument (org.apache.solr.common.SolrInputDocument): 39 usages
SchemaField (org.apache.solr.schema.SchemaField): 39 usages
List (java.util.List): 38 usages
SimpleOrderedMap (org.apache.solr.common.util.SimpleOrderedMap): 38 usages
SolrServerException (org.apache.solr.client.solrj.SolrServerException): 37 usages
SolrQueryRequest (org.apache.solr.request.SolrQueryRequest): 34 usages
SolrCore (org.apache.solr.core.SolrCore): 33 usages