use of org.antlr.v4.runtime.CharStream in project antlr4 by antlr.
the class BaseNodeTest method getTokenTypes.
public List<String> getTokenTypes(LexerGrammar lg, ATN atn, CharStream input) {
    LexerATNSimulator interp = new LexerATNSimulator(atn,
            new DFA[] { new DFA(atn.modeToStartState.get(Lexer.DEFAULT_MODE)) }, null);
    List<String> tokenTypes = new ArrayList<String>();
    int ttype;
    boolean hitEOF = false;
    do {
        // Once the lookahead that reached EOF has been matched, emit a final
        // "EOF" marker and stop.
        if (hitEOF) {
            tokenTypes.add("EOF");
            break;
        }
        int t = input.LA(1);
        ttype = interp.match(input, Lexer.DEFAULT_MODE);
        if (ttype == Token.EOF) {
            tokenTypes.add("EOF");
        } else {
            tokenTypes.add(lg.typeToTokenList.get(ttype));
        }
        // Remember whether the lookahead we just matched was EOF so the loop
        // terminates even when the last rule consumed the end of the input.
        if (t == IntStream.EOF) {
            hitEOF = true;
        }
    } while (ttype != Token.EOF);
    return tokenTypes;
}
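A minimal sketch of how this helper might be invoked; the grammar literal, the input text, and the assumption that the lexer ATN has already been built for the grammar are illustrative, not taken from the project above.

// Hypothetical setup: LexerGrammar's constructor declares a checked
// RecognitionException, and reading lg.atn assumes the ATN was built.
LexerGrammar lg = new LexerGrammar("lexer grammar L;\nA : 'a';\nB : 'b';");
ATN atn = lg.atn; // assumes the tool has already constructed the lexer ATN
CharStream input = CharStreams.fromString("abab");
List<String> types = getTokenTypes(lg, atn, input);
// expected shape of the result: [A, B, A, B, EOF]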
use of org.antlr.v4.runtime.CharStream in project scheduler by btrplace.
the class SpecScanner method getTokens.
private static CommonTokenStream getTokens(String source) throws IOException {
    CharStream is = CharStreams.fromReader(new StringReader(source));
    CstrSpecLexer lexer = new CstrSpecLexer(is);
    lexer.removeErrorListeners();
    lexer.addErrorListener(new DiagnosticErrorListener());
    return new CommonTokenStream(lexer);
}
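For context, a hedged sketch of consuming the returned stream; the source string is a placeholder, not real btrplace spec syntax.

CommonTokenStream tokens = getTokens("..."); // placeholder source text
tokens.fill(); // drain the lexer so getTokens() returns every token
for (Token t : tokens.getTokens()) {
    System.out.println(t.getType() + " -> " + t.getText());
}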
use of org.antlr.v4.runtime.CharStream in project nikita-noark5-core by HiOA-ABI.
the class OdataTest method testOdata.
@RequestMapping(method = RequestMethod.GET, value = "arkivstruktur/{\\w*}")
public ResponseEntity<String> testOdata(final UriComponentsBuilder uriBuilder, HttpServletRequest request, final HttpServletResponse response) throws Exception {
    String uqueryString = request.getQueryString();
    String decoded = URLDecoder.decode(uqueryString, UTF_8);
    StringBuffer originalRequest = request.getRequestURL();
    originalRequest.append("?").append(decoded);
    // Standard ANTLR pipeline: CharStream -> lexer -> token stream -> parser.
    CharStream stream = CharStreams.fromString(originalRequest.toString());
    ODataLexer lexer = new ODataLexer(stream);
    CommonTokenStream tokens = new CommonTokenStream(lexer);
    ODataParser parser = new ODataParser(tokens);
    ParseTree tree = parser.odataURL();
    ParseTreeWalker walker = new ParseTreeWalker();
    // Walk the parse tree and build the equivalent HQL statement.
    NikitaODataToHQLWalker hqlWalker = new NikitaODataToHQLWalker();
    walker.walk(hqlWalker, tree);
    Session session = entityManager.unwrap(org.hibernate.Session.class);
    Query query = hqlWalker.getHqlStatment(session);
    String queryString = query.getQueryString();
    System.out.println(queryString);
    List<NoarkEntity> list = query.getResultList();
    return ResponseEntity.status(HttpStatus.CREATED).body(list.toString());
}
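One hedged hardening of this pipeline, not present in the project code: a bail strategy makes a malformed OData URL fail fast instead of being silently auto-recovered by ANTLR's default error handling, letting the controller answer with a 400.

// Hypothetical variant: BailErrorStrategy wraps syntax errors in a
// ParseCancellationException the controller can translate to 400.
parser.setErrorHandler(new BailErrorStrategy());
try {
    ParseTree tree = parser.odataURL();
    // ... walk the tree as above ...
} catch (ParseCancellationException e) {
    return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("invalid OData query");
}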
use of org.antlr.v4.runtime.CharStream in project grakn by graknlabs.
the class QueryParserImpl method parseList.
/**
* @param reader a reader representing several queries
* @return a list of queries
*/
@Override
public <T extends Query<?>> Stream<T> parseList(Reader reader) {
    UnbufferedCharStream charStream = new UnbufferedCharStream(reader);
    GraqlErrorListener errorListener = GraqlErrorListener.withoutQueryString();
    GraqlLexer lexer = createLexer(charStream, errorListener);
    /*
     * We tell the lexer to copy the text into each generated token.
     * Normally when calling `Token#getText`, it will look into the underlying `TokenStream` and call
     * `TokenStream#size` to check it is in bounds. However, `UnbufferedTokenStream#size` is not supported
     * (because then it would have to read the entire input). To avoid this issue, we set this flag, which
     * copies the text into each `Token` so that `Token#getText` just looks up the copied text field.
     */
    lexer.setTokenFactory(new CommonTokenFactory(true));
    // Use an unbuffered token stream so we can handle extremely large input strings
    UnbufferedTokenStream tokenStream = new UnbufferedTokenStream(ChannelTokenSource.of(lexer));
    GraqlParser parser = createParser(tokenStream, errorListener);
    /*
     * The "bail" error strategy prevents us reading all the way to the end of the input, e.g.
     *
     *     match $x isa person; insert $x has name "Bob"; match $x isa movie; get;
     *                                                    ^
     *
     * When ANTLR reaches the indicated `match`, it considers two possibilities:
     *   1. this is the end of the query
     *   2. the user has made a mistake; maybe they accidentally pasted the `match` here
     * Because of case 2, ANTLR will parse beyond the `match` in order to produce a more helpful error
     * message. This causes memory issues for very large queries, so we use the simpler "bail" strategy,
     * which stops immediately when it hits `match`.
     */
    parser.setErrorHandler(new BailErrorStrategy());
    // This is a lazy iterator that will only consume a single query at a time, without parsing any further.
    // This means it can parse arbitrarily long streams of queries in constant memory!
    Iterable<T> queryIterator = () -> new AbstractIterator<T>() {
        @Nullable
        @Override
        protected T computeNext() {
            int latestToken = tokenStream.LA(1);
            if (latestToken == Token.EOF) {
                endOfData();
                return null;
            } else {
                // When we next run it, it will start where it left off in the stream
                return (T) QUERY.parse(parser, errorListener);
            }
        }
    };
    return StreamSupport.stream(queryIterator.spliterator(), false);
}
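A hedged usage sketch; `queryParser`, `execute`, and the file name are stand-ins for whatever parser instance and query executor the caller has, not names from the project above.

// Hypothetical caller: each query is parsed only when the stream pulls it,
// so even a very large file is processed in constant memory.
try (Reader reader = Files.newBufferedReader(Paths.get("queries.gql"))) {
    queryParser.parseList(reader).forEach(query -> execute(query));
}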
use of org.antlr.v4.runtime.CharStream in project JsoupXpath by zhegexiaohuozi.
the class JXDocument method selN.
public List<JXNode> selN(String xpath) throws XpathSyntaxErrorException {
    List<JXNode> finalRes = new LinkedList<>();
    try {
        // Parse the XPath expression with the generated ANTLR lexer/parser.
        CharStream input = CharStreams.fromString(xpath);
        XpathLexer lexer = new XpathLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        XpathParser parser = new XpathParser(tokens);
        parser.setErrorHandler(new DoFailOnErrorHandler());
        ParseTree tree = parser.main();
        // Evaluate the parse tree against the wrapped Jsoup elements; the
        // result is either a set of elements or a list of text values.
        XpathProcessor processor = new XpathProcessor(elements);
        XValue calRes = processor.visit(tree);
        if (calRes.isElements()) {
            for (Element el : calRes.asElements()) {
                finalRes.add(JXNode.e(el));
            }
        } else if (calRes.isList()) {
            for (String str : calRes.asList()) {
                finalRes.add(JXNode.t(str));
            }
        }
    } catch (Exception e) {
        String msg = "Please check the syntax of your xpath expr, ";
        throw new XpathSyntaxErrorException(msg + ExceptionUtils.getRootCauseMessage(e), e);
    }
    return finalRes;
}
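A hedged usage sketch; the factory method name and the HTML are assumptions about the JsoupXpath API, not taken from the snippet above.

// Hypothetical caller; recent JsoupXpath releases expose JXDocument.create(String).
JXDocument doc = JXDocument.create("<html><body><a href='/home'>home</a></body></html>");
for (JXNode node : doc.selN("//a/@href")) {
    System.out.println(node); // prints the matched attribute value
}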