use of org.antlr.v4.runtime.atn.ParserATNSimulator in project antlr4 by tunnelvisionlabs.
the class GrammarParserInterpreter method getAllPossibleParseTrees.
/**
* Given ambiguous parse information, return the list of ambiguous parse trees.
* An ambiguity occurs when a specific token sequence can be recognized
* in more than one way by the grammar. These ambiguities are detected only
* at decision points.
*
* The list of trees includes the actual interpretation (that for
* the minimum alternative number) and all ambiguous alternatives.
* The actual interpretation is always first.
*
* This method reuses the same physical input token stream used to
* detect the ambiguity by the original parser in the first place.
* This method resets/seeks within but does not alter originalParser.
*
* The trees are rooted at the node whose start..stop token indices
* include the start and stop indices of this ambiguity event. That is,
* the trees returned will always include the complete ambiguous subphrase
* identified by the ambiguity event. The subtrees returned will
* also always contain the node associated with the overridden decision.
*
* Be aware that this method does NOT notify error or parse listeners as
* it would trigger duplicate or otherwise unwanted events.
*
* This uses a temporary ParserATNSimulator and a ParserInterpreter
* so we don't mess up any statistics, event lists, etc...
* The parse tree constructed while identifying/making ambiguityInfo is
* not affected by this method as it creates a new parser interp to
* get the ambiguous interpretations.
*
* Nodes in the returned ambig trees are independent of the original parse
* tree (constructed while identifying/creating ambiguityInfo).
*
* @since 4.5.1
*
* @param g The grammar from which to derive alternative
* numbers and alternative labels.
*
* @param originalParser The parser used to create ambiguityInfo; it
* is not modified by this routine and can be either
* a generated or interpreted parser. Its token
* stream *is* reset/seek()'d.
* @param tokens A stream of tokens to use with the temporary parser.
* This will often be just the token stream within the
* original parser but here it is for flexibility.
*
* @param decision Which decision to try different alternatives for.
*
* @param alts The set of alternatives to try while re-parsing.
*
* @param startIndex The index of the first token of the ambiguous
* input or other input of interest.
*
* @param stopIndex The index of the last token of the ambiguous input.
* The start and stop indexes are used primarily to
* identify how much of the resulting parse tree
* to return.
*
* @param startRuleIndex The start rule for the entire grammar, not
* the ambiguous decision. We re-parse the entire input
* and so we need the original start rule.
*
* @return The list of all possible interpretations of
* the input for the decision in ambiguityInfo.
* The actual interpretation chosen by the parser
* is always given first because this method
* retests the input in alternative order and
* ANTLR always resolves ambiguities by choosing
* the first alternative that matches the input.
* The subtree returned will always include the node
* associated with the overridden decision.
*
* @throws RecognitionException Throws upon syntax error while matching
* ambig input.
*/
public static List<ParserRuleContext> getAllPossibleParseTrees(Grammar g, Parser originalParser, TokenStream tokens, int decision, BitSet alts, int startIndex, int stopIndex, int startRuleIndex) throws RecognitionException {
    List<ParserRuleContext> trees = new ArrayList<ParserRuleContext>();
    // Create a new parser interpreter to parse the ambiguous subphrase
    ParserInterpreter parser = deriveTempParserInterpreter(g, originalParser, tokens);
    if (stopIndex >= (tokens.size() - 1)) {
        // if we are pointing at EOF token
        // EOF is not in tree, so must be 1 less than last non-EOF token
        stopIndex = tokens.size() - 2;
    }
    // get ambig trees
    int alt = alts.nextSetBit(0);
    while (alt >= 0) {
        // re-parse entire input for all ambiguous alternatives
        // (don't have to do first as it's been parsed, but do again for simplicity
        // using this temp parser.)
        parser.reset();
        parser.addDecisionOverride(decision, startIndex, alt);
        ParserRuleContext t = parser.parse(startRuleIndex);
        GrammarInterpreterRuleContext ambigSubTree = (GrammarInterpreterRuleContext) Trees.getRootOfSubtreeEnclosingRegion(t, startIndex, stopIndex);
        // Use higher of overridden decision tree or tree enclosing all tokens
        if (Trees.isAncestorOf(parser.getOverrideDecisionRoot(), ambigSubTree)) {
            ambigSubTree = (GrammarInterpreterRuleContext) parser.getOverrideDecisionRoot();
        }
        trees.add(ambigSubTree);
        alt = alts.nextSetBit(alt + 1);
    }
    return trees;
}
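A typical call site for this helper is an error listener that reacts to ambiguity reports during a normal parse. The following is only a sketch against the mainline ANTLR 4 runtime listener API (the tunnelvisionlabs fork's listener types differ slightly); the listener class, the way the tool-side Grammar and the start rule index reach it, and the decision to print the trees are assumptions, not part of the project code above.
import java.util.BitSet;
import java.util.List;
import org.antlr.v4.runtime.BaseErrorListener;
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.RecognitionException;
import org.antlr.v4.runtime.atn.ATNConfigSet;
import org.antlr.v4.runtime.dfa.DFA;
import org.antlr.v4.tool.Grammar;
import org.antlr.v4.tool.GrammarParserInterpreter;

// Hypothetical listener that re-parses every reported ambiguity.
class AmbiguityDumper extends BaseErrorListener {

    private final Grammar g;          // tool-side Grammar for the same parser (assumed available)
    private final int startRuleIndex; // start rule of the whole grammar, e.g. 0

    AmbiguityDumper(Grammar g, int startRuleIndex) {
        this.g = g;
        this.startRuleIndex = startRuleIndex;
    }

    @Override
    public void reportAmbiguity(Parser recognizer, DFA dfa, int startIndex, int stopIndex,
                                boolean exact, BitSet ambigAlts, ATNConfigSet configs) {
        if (ambigAlts == null) {
            return; // the runtime may pass null here; skipped for simplicity in this sketch
        }
        try {
            // One tree per ambiguous alternative; the parser's actual choice comes first.
            List<ParserRuleContext> trees = GrammarParserInterpreter.getAllPossibleParseTrees(
                    g, recognizer, recognizer.getTokenStream(), dfa.decision,
                    ambigAlts, startIndex, stopIndex, startRuleIndex);
            for (ParserRuleContext t : trees) {
                System.out.println(t.toStringTree(recognizer));
            }
        } catch (RecognitionException e) {
            // re-parse of the ambiguous input failed under some alternative; ignore here
        }
    }
}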
use of org.antlr.v4.runtime.atn.ParserATNSimulator in project presto by prestodb.
the class AntlrATNCacheFields method configureParser.
@SuppressWarnings("ObjectEquality")
public void configureParser(Parser parser) {
    requireNonNull(parser, "parser is null");
    // Intentional identity equals comparison
    checkArgument(atn == parser.getATN(), "Parser ATN mismatch: expected %s, found %s", atn, parser.getATN());
    parser.setInterpreter(new ParserATNSimulator(parser, atn, decisionToDFA, predictionContextCache));
}
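For context, the fields referenced above (atn, decisionToDFA, predictionContextCache) belong to a per-grammar cache holder shared across parser instances, so DFA states built during one parse are reused by later parses instead of being rebuilt. The class below is a plausible reconstruction for illustration, not Presto's exact code; only the ParserATNSimulator wiring mirrors the method above.
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.atn.ATN;
import org.antlr.v4.runtime.atn.ParserATNSimulator;
import org.antlr.v4.runtime.atn.PredictionContextCache;
import org.antlr.v4.runtime.dfa.DFA;

// Sketch of a shared-cache holder: one instance per grammar, reused by every parser.
public final class AntlrCaches {

    private final ATN atn;
    private final DFA[] decisionToDFA;
    private final PredictionContextCache predictionContextCache = new PredictionContextCache();

    public AntlrCaches(ATN atn) {
        this.atn = atn;
        // One DFA per decision, all starting empty and filled in lazily as input is parsed.
        this.decisionToDFA = new DFA[atn.getNumberOfDecisions()];
        for (int i = 0; i < decisionToDFA.length; i++) {
            decisionToDFA[i] = new DFA(atn.getDecisionState(i), i);
        }
    }

    // Same wiring as configureParser above: give the parser a simulator backed by the shared caches.
    public void apply(Parser parser) {
        parser.setInterpreter(new ParserATNSimulator(parser, atn, decisionToDFA, predictionContextCache));
    }
}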
use of org.antlr.v4.runtime.atn.ParserATNSimulator in project antlr4 by tunnelvisionlabs.
the class Parser method setProfile.
/**
* @since 4.3
*/
public void setProfile(boolean profile) {
    ParserATNSimulator interp = getInterpreter();
    if (profile) {
        if (!(interp instanceof ProfilingATNSimulator)) {
            setInterpreter(new ProfilingATNSimulator(this));
        }
    } else if (interp instanceof ProfilingATNSimulator) {
        setInterpreter(new ParserATNSimulator(this, getATN()));
    }
    getInterpreter().setPredictionMode(interp.getPredictionMode());
}
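As a usage sketch (the generated MyLexer/MyParser classes, the expr start rule, and the sample input are placeholders; CharStreams.fromString is the 4.7+ factory), profiling is switched on before parsing and the per-decision statistics are read back afterwards through getParseInfo():
// Hypothetical generated classes; setProfile/getParseInfo are the real runtime calls.
MyLexer lexer = new MyLexer(CharStreams.fromString("a + b * c"));
MyParser parser = new MyParser(new CommonTokenStream(lexer));
parser.setProfile(true);   // installs a ProfilingATNSimulator
parser.expr();             // run some start rule
for (DecisionInfo di : parser.getParseInfo().getDecisionInfo()) {
    if (di.invocations > 0) {
        System.out.println("decision " + di.decision
                + ": " + di.invocations + " invocations, "
                + di.timeInPrediction + " ns in prediction");
    }
}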
use of org.antlr.v4.runtime.atn.ParserATNSimulator in project antlr4 by tunnelvisionlabs.
the class TestATNParserPrediction method checkPredictedAlt.
/**
* First check that the ATN predicts the right alt.
* Then check adaptive prediction.
*/
public void checkPredictedAlt(LexerGrammar lg, Grammar g, int decision, String inputString, int expectedAlt) {
    Tool.internalOption_ShowATNConfigsInDFA = true;
    ATN lexatn = createATN(lg, true);
    LexerATNSimulator lexInterp = new LexerATNSimulator(lexatn);
    IntegerList types = getTokenTypesViaATN(inputString, lexInterp);
    System.out.println(types);
    semanticProcess(lg);
    g.importVocab(lg);
    semanticProcess(g);
    ParserATNFactory f = new ParserATNFactory(g);
    ATN atn = f.createATN();
    DOTGenerator dot = new DOTGenerator(g);
    Rule r = g.getRule("a");
    if (r != null)
        System.out.println(dot.getDOT(atn.ruleToStartState[r.index]));
    r = g.getRule("b");
    if (r != null)
        System.out.println(dot.getDOT(atn.ruleToStartState[r.index]));
    r = g.getRule("e");
    if (r != null)
        System.out.println(dot.getDOT(atn.ruleToStartState[r.index]));
    r = g.getRule("ifstat");
    if (r != null)
        System.out.println(dot.getDOT(atn.ruleToStartState[r.index]));
    r = g.getRule("block");
    if (r != null)
        System.out.println(dot.getDOT(atn.ruleToStartState[r.index]));
    // Check ATN prediction
    // ParserATNSimulator<Token> interp = new ParserATNSimulator<Token>(atn);
    TokenStream input = new IntTokenStream(types);
    ParserInterpreterForTesting interp = new ParserInterpreterForTesting(g, input);
    DecisionState startState = atn.decisionToState.get(decision);
    DFA dfa = new DFA(startState, decision);
    int alt = interp.adaptivePredict(input, decision, ParserRuleContext.emptyContext());
    System.out.println(dot.getDOT(dfa, false));
    assertEquals(expectedAlt, alt);
    // Check adaptive prediction
    input.seek(0);
    alt = interp.adaptivePredict(input, decision, null);
    assertEquals(expectedAlt, alt);
    // run 2x; first time creates DFA in atn
    input.seek(0);
    alt = interp.adaptivePredict(input, decision, null);
    assertEquals(expectedAlt, alt);
}
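A test that drives this helper looks roughly like the following; the tiny lexer/parser grammars and the expected alternatives are illustrative values in the spirit of the real tests in this class, not copied from it.
// Illustrative use of checkPredictedAlt with a two-alternative decision.
LexerGrammar lg = new LexerGrammar(
    "lexer grammar L;\n" +
    "A : 'a' ;\n" +
    "B : 'b' ;\n");
Grammar g = new Grammar(
    "parser grammar T;\n" +
    "a : A | B ;", lg);
// Decision 0 is the A-vs-B choice in rule 'a'.
checkPredictedAlt(lg, g, 0, "a", 1); // input "a" must predict alternative 1
checkPredictedAlt(lg, g, 0, "b", 2); // input "b" must predict alternative 2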
use of org.antlr.v4.runtime.atn.ParserATNSimulator in project titan.EclipsePlug-ins by eclipse.
the class TTCN3Analyzer method parse.
/**
* Parse TTCN-3 file using ANTLR v4
* @param aReader reader of the file to parse (cannot be null; this method closes aReader)
* @param aFileLength file length
* @param aEclipseFile Eclipse dependent resource file
*/
private void parse(final Reader aReader, final int aFileLength, final IFile aEclipseFile) {
    CharStream charStream = new UnbufferedCharStream(aReader);
    Ttcn3Lexer lexer = new Ttcn3Lexer(charStream);
    lexer.setCommentTodo(true);
    lexer.setTokenFactory(new CommonTokenFactory(true));
    lexer.initRootInterval(aFileLength);
    TitanListener lexerListener = new TitanListener();
    // remove ConsoleErrorListener
    lexer.removeErrorListeners();
    lexer.addErrorListener(lexerListener);
    // 1. Previously it was UnbufferedTokenStream(lexer), but it was changed to BufferedTokenStream, because UnbufferedTokenStream seems to be unusable. It is an ANTLR 4 bug.
    // Read this: https://groups.google.com/forum/#!topic/antlr-discussion/gsAu-6d3pKU
    // pr_PatternChunk[StringBuilder builder, boolean[] uni]:
    // $builder.append($v.text); <-- exception is thrown here: java.lang.UnsupportedOperationException: interval 85..85 not in token buffer window: 86..341
    // 2. Changed from BufferedTokenStream to CommonTokenStream, otherwise tokens with "-> channel(HIDDEN)" are not filtered out in lexer.
    final CommonTokenStream tokenStream = new CommonTokenStream(lexer);
    Ttcn3Parser parser = new Ttcn3Parser(tokenStream);
    ParserUtilities.setBuildParseTree(parser);
    PreprocessedTokenStream preprocessor = null;
    if (aEclipseFile != null && GlobalParser.TTCNPP_EXTENSION.equals(aEclipseFile.getFileExtension())) {
        lexer.setTTCNPP();
        preprocessor = new PreprocessedTokenStream(lexer);
        preprocessor.setActualFile(aEclipseFile);
        if (aEclipseFile.getProject() != null) {
            preprocessor.setMacros(PreprocessorSymbolsOptionsData.getTTCN3PreprocessorDefines(aEclipseFile.getProject()));
        }
        parser = new Ttcn3Parser(preprocessor);
        ParserUtilities.setBuildParseTree(parser);
        preprocessor.setActualLexer(lexer);
        preprocessor.setParser(parser);
    }
    if (aEclipseFile != null) {
        lexer.setActualFile(aEclipseFile);
        parser.setActualFile(aEclipseFile);
        parser.setProject(aEclipseFile.getProject());
    }
    // remove ConsoleErrorListener
    parser.removeErrorListeners();
    TitanListener parserListener = new TitanListener();
    parser.addErrorListener(parserListener);
    // This is added because of the following ANTLR 4 bug:
    // Memory Leak in PredictionContextCache #499
    // https://github.com/antlr/antlr4/issues/499
    DFA[] decisionToDFA = parser.getInterpreter().decisionToDFA;
    parser.setInterpreter(new ParserATNSimulator(parser, parser.getATN(), decisionToDFA, new PredictionContextCache()));
    // try SLL mode
    try {
        parser.getInterpreter().setPredictionMode(PredictionMode.SLL);
        final ParseTree root = parser.pr_TTCN3File();
        ParserUtilities.logParseTree(root, parser);
        warnings = parser.getWarnings();
        mErrorsStored = lexerListener.getErrorsStored();
        mErrorsStored.addAll(parserListener.getErrorsStored());
    } catch (RecognitionException e) {
        // quit
    }
    if (!warnings.isEmpty() || !mErrorsStored.isEmpty()) {
        // SLL mode might have failed, try LL mode
        try {
            CharStream charStream2 = new UnbufferedCharStream(aReader);
            lexer.setInputStream(charStream2);
            // lexer.reset();
            parser.reset();
            parserListener.reset();
            parser.getInterpreter().setPredictionMode(PredictionMode.LL);
            final ParseTree root = parser.pr_TTCN3File();
            ParserUtilities.logParseTree(root, parser);
            warnings = parser.getWarnings();
            mErrorsStored = lexerListener.getErrorsStored();
            mErrorsStored.addAll(parserListener.getErrorsStored());
        } catch (RecognitionException e) {
        }
    }
    unsupportedConstructs = parser.getUnsupportedConstructs();
    rootInterval = lexer.getRootInterval();
    actualTtc3Module = parser.getModule();
    if (preprocessor != null) {
        // if the file was preprocessed
        mErrorsStored.addAll(preprocessor.getErrorStorage());
        warnings.addAll(preprocessor.getWarnings());
        unsupportedConstructs.addAll(preprocessor.getUnsupportedConstructs());
        if (actualTtc3Module != null) {
            actualTtc3Module.setIncludedFiles(preprocessor.getIncludedFiles());
            actualTtc3Module.setInactiveCodeLocations(preprocessor.getInactiveCodeLocations());
        }
    }
    try {
        aReader.close();
    } catch (IOException e) {
    }
}
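The SLL-then-LL retry above is a project-specific variant of ANTLR's standard two-stage strategy. For comparison, the canonical form looks roughly like this; XLexer, XParser, startRule and source are placeholders, and the bail-out error strategy (rather than counting reported errors and warnings) is what distinguishes it from the TTCN-3 code above.
// Canonical two-stage parse: fast SLL with bail-out first, full LL only on failure.
CommonTokenStream tokens = new CommonTokenStream(new XLexer(CharStreams.fromString(source)));
XParser parser = new XParser(tokens);
parser.getInterpreter().setPredictionMode(PredictionMode.SLL);
parser.removeErrorListeners();                   // keep the first pass silent
parser.setErrorHandler(new BailErrorStrategy()); // throw instead of recovering
ParseTree tree;
try {
    tree = parser.startRule();                   // stage 1: SLL
} catch (ParseCancellationException ex) {
    tokens.seek(0);                              // rewind the shared token stream
    parser.reset();
    parser.addErrorListener(ConsoleErrorListener.INSTANCE);
    parser.setErrorHandler(new DefaultErrorStrategy());
    parser.getInterpreter().setPredictionMode(PredictionMode.LL);
    tree = parser.startRule();                   // stage 2: full LL, reports real errors
}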