Use of edu.uci.ics.texera.web.TexeraWebException in project textdb by TextDB.
The class SystemResource, method getOperatorMetadata.
@GET
@Path("/operator-metadata")
public OperatorMetadata getOperatorMetadata() {
    try {
        // Read the pre-generated JSON schema of every registered operator predicate.
        List<JsonNode> operators = new ArrayList<>();
        for (Class<? extends PredicateBase> predicateClass : JsonSchemaHelper.operatorTypeMap.keySet()) {
            JsonNode schemaNode = objectMapper.readTree(Files.readAllBytes(JsonSchemaHelper.getJsonSchemaPath(predicateClass)));
            operators.add(schemaNode);
        }
        OperatorMetadata operatorMetadata = new OperatorMetadata(operators, OperatorGroupConstants.OperatorGroupOrderList);
        return operatorMetadata;
    } catch (Exception e) {
        throw new TexeraWebException(e);
    }
}
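For context, the following is a minimal client-side sketch (not part of the textdb codebase) of how this endpoint might be called; the base URL and the "operators" field name are assumptions for illustration only.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OperatorMetadataClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical local deployment URL; adjust to the actual server path.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/api/resources/operator-metadata"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The body is expected to contain the operator JSON schemas plus the group order list;
        // the "operators" field name is an assumption about how OperatorMetadata serializes.
        JsonNode metadata = new ObjectMapper().readTree(response.body());
        System.out.println("operators returned: " + metadata.path("operators").size());
    }
}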
Use of edu.uci.ics.texera.web.TexeraWebException in project textdb by TextDB.
The class KeywordDictionaryResource, method uploadManualDictionary.
@POST
@Path("/upload-manual-dict")
public GenericWebResponse uploadManualDictionary(@Session HttpSession session, UserManualDictionary userManualDictionary) {
    if (userManualDictionary == null || !userManualDictionary.isValid()) {
        throw new TexeraWebException("Error occurred in user manual dictionary");
    }
    UInteger userID = UserResource.getUser(session).getUserID();
    // Default to a comma separator when the user does not provide one.
    if (userManualDictionary.separator.isEmpty()) {
        userManualDictionary.separator = ",";
    }
    List<String> itemArray = convertStringToList(userManualDictionary.content, userManualDictionary.separator);
    byte[] contentByteArray = convertListToByteArray(itemArray);
    int count = insertDictionaryToDataBase(userManualDictionary.name, contentByteArray, userManualDictionary.description, userID);
    throwErrorWhenNotOne("Error occurred while inserting dictionary to database", count);
    return GenericWebResponse.generateSuccessResponse();
}
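The helpers convertStringToList and convertListToByteArray are not shown on this page. The following standalone sketch illustrates one plausible behavior, assuming the dictionary content is split on the user-supplied separator and stored as comma-joined UTF-8 bytes; it is an illustration, not the project's actual implementation.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class DictionaryHelperSketch {
    // Split the raw dictionary text on the user-supplied separator, trimming blanks.
    static List<String> convertStringToList(String content, String separator) {
        return Arrays.stream(content.split(Pattern.quote(separator)))
                .map(String::trim)
                .filter(item -> !item.isEmpty())
                .collect(Collectors.toList());
    }

    // Join the entries back into a single comma-separated byte array for storage.
    static byte[] convertListToByteArray(List<String> items) {
        return String.join(",", items).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        List<String> items = convertStringToList("texera; textdb ; query", ";");
        System.out.println(items);                                 // [texera, textdb, query]
        System.out.println(convertListToByteArray(items).length);  // 19
    }
}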
Use of edu.uci.ics.texera.web.TexeraWebException in project textdb by TextDB.
The class KeywordDictionaryResource, method readFileContent.
private String readFileContent(InputStream fileStream) {
    StringBuilder fileContents = new StringBuilder();
    String line;
    try (BufferedReader br = new BufferedReader(new InputStreamReader(fileStream))) {
        // readLine() strips line separators, so lines are concatenated without them.
        while ((line = br.readLine()) != null) {
            fileContents.append(line);
        }
    } catch (IOException e) {
        throw new TexeraWebException(e);
    }
    return fileContents.toString();
}
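Because BufferedReader.readLine() strips line separators, the method above concatenates lines without them. A small self-contained sketch demonstrating that behavior with an in-memory stream (names here are illustrative):

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadFileContentSketch {
    // Mirrors the resource method above: lines are appended without their separators.
    static String readFileContent(InputStream fileStream) throws IOException {
        StringBuilder fileContents = new StringBuilder();
        String line;
        try (BufferedReader br = new BufferedReader(new InputStreamReader(fileStream, StandardCharsets.UTF_8))) {
            while ((line = br.readLine()) != null) {
                fileContents.append(line);
            }
        }
        return fileContents.toString();
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("alpha\nbeta\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readFileContent(in)); // prints "alphabeta" -- newlines are discarded
    }
}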
Use of edu.uci.ics.texera.web.TexeraWebException in project textdb by TextDB.
The class QueryPlanResource, method suggestAutocompleteSchema.
/**
 * This is the edu.uci.ics.texera.web.request handler that suggests schema autocomplete
 * options (available attribute names) for the operators of a logical query plan.
 * @param logicalPlanJson, the json representation of the logical plan
 * @return a JsonNode that maps each operator ID to the attribute names available from its input schema
 */
/* EG of using the /autocomplete end point (how this inline update method works):
 1. At the beginning of creating a graph, for example, when a scan source and a keyword search
 operator are initialized (dragged into the flow-chart) but not yet linked, the graph looks like this:
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: N/A  |
|  TableName: N/A   |                           | (Other inputs...) |
|                   |                           |                   |
|                   |                           |                   |
|                   |                           |                   |
|___________________|                           |___________________|
2. Then, you can feel free to link these two operators together, or go ahead and select a
table as the source first. Let's link them together first.
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: N/A  |
|  TableName: N/A   | ========================> | (Other inputs...) |
|                   |                           |                   |
|                   |                           |                   |
|                   |                           |                   |
|___________________|                           |___________________|
 3. At this moment, the Keyword Search operator still does NOT have any available options for
 its Attributes field, because it has no source yet. Therefore, we can select a table
 name as the source next (let's use table "plan" as an example here).
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: N/A  |
|  TableName: plan  | ========================> | (Other inputs...) |
|                   |                           |                   |
|                   |                           |                   |
|                   |                           |                   |
|___________________|                           |___________________|
 4. After selecting table "plan" as the source, the options list of Attributes in
 the Keyword Search operator becomes available. You should see 4 options in the list: name,
 description, logicPlan, payload. Feel free to choose whichever you need for your desired result.
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: name |
|  TableName: plan  | ========================> | (Other inputs...) |
|                   |                           |                   |
|                   |                           |                   |
|                   |                           |                   |
|___________________|                           |___________________|
 5. Essentially, whenever you connect a source (with a valid table name) to a regular search
 operator, the latter operator recognizes the metadata of its input operator (the source)
 and updates the attribute options in its drop-down list. To illustrate this, you can add
 a new (Scan) Source and pick another table different from the "plan" table we have already
 used. The graph should now look like the following:
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: name |
|  TableName: plan  | ========================> | (Other inputs...) |
|                   |                           |                   |
|                   |                           |                   |
|                   |                           |                   |
|___________________|                           |___________________|
 _______________________
|                       |
|                       |
| Source: Scan          |
| TableName: dictionary |
|                       |
|                       |
|                       |
|_______________________|
6. Then, connect "dictionary" to the Keyword Search operator. The original link between "plan"
and Keyword Search will automatically disappear.
 ___________________                             ___________________
|                   |                           |                   |
|                   |                           |  Keyword Search   |
|   Source: Scan    |                           |  Attributes: N/A  |
|  TableName: plan  |                           | (Other inputs...) |
|                   |             /============>|                   |
|                   |           //              |                   |
|                   |          //               |                   |
|___________________|         //                |___________________|
                             //
                            //
 _______________________   //
|                       | //
|                       |//
| Source: Scan          |/
| TableName: dictionary |
|                       |
|                       |
|                       |
|_______________________|
 7. After the new link is generated, the Attributes field of the Keyword Search operator will be empty again. When
 you check its drop-down list, the options are updated to dictionary's attributes, which
 are name and payload. The options from "plan" are all gone.
*/
@POST
@Path("/autocomplete")
public JsonNode suggestAutocompleteSchema(@Session HttpSession session, String logicalPlanJson) {
    try {
        UserResource.User user = UserResource.getUser(session);
        QueryContext ctx = new QueryContext();
        if (user != null) {
            ctx.setProjectOwnerID(user.userID.toString());
        }
        JsonNode logicalPlanNode = new ObjectMapper().readTree(logicalPlanJson);
        ArrayNode operators = (ArrayNode) logicalPlanNode.get(PropertyNameConstants.OPERATOR_LIST);
        ArrayNode links = (ArrayNode) logicalPlanNode.get(PropertyNameConstants.OPERATOR_LINK_LIST);
        ArrayNode validOperators = new ObjectMapper().createArrayNode();
        ArrayNode validLinks = new ObjectMapper().createArrayNode();
        ArrayNode linksEndWithInvalidDest = new ObjectMapper().createArrayNode();
        Set<String> validOperatorsId = new HashSet<>();
        getValidOperatorsAndLinks(operators, links, validOperators, validLinks, linksEndWithInvalidDest, validOperatorsId);
        ObjectNode validLogicalPlanNode = new ObjectMapper().createObjectNode();
        validLogicalPlanNode.putArray(PropertyNameConstants.OPERATOR_LIST).addAll(validOperators);
        validLogicalPlanNode.putArray(PropertyNameConstants.OPERATOR_LINK_LIST).addAll(validLinks);
        LogicalPlan logicalPlan = new ObjectMapper().treeToValue(validLogicalPlanNode, LogicalPlan.class);
        logicalPlan.setContext(ctx);
        // Get all input schema for valid operator with valid links
        Map<String, List<Schema>> inputSchema = logicalPlan.retrieveAllOperatorInputSchema();
        // Get all input schema for invalid operator with valid input operator
        for (JsonNode linkNode : linksEndWithInvalidDest) {
            String origin = linkNode.get(PropertyNameConstants.ORIGIN_OPERATOR_ID).textValue();
            String dest = linkNode.get(PropertyNameConstants.DESTINATION_OPERATOR_ID).textValue();
            Optional<Schema> schema = logicalPlan.getOperatorOutputSchema(origin, inputSchema);
            if (schema.isPresent()) {
                if (inputSchema.containsKey(dest)) {
                    inputSchema.get(dest).add(schema.get());
                } else {
                    inputSchema.put(dest, new ArrayList<>(Arrays.asList(schema.get())));
                }
            }
        }
        ObjectNode result = new ObjectMapper().createObjectNode();
        for (Map.Entry<String, List<Schema>> entry : inputSchema.entrySet()) {
            Set<String> attributes = new HashSet<>();
            for (Schema schema : entry.getValue()) {
                attributes.addAll(schema.getAttributeNames());
            }
            ArrayNode currentSchemaNode = result.putArray(entry.getKey());
            for (String attrName : attributes) {
                currentSchemaNode.add(attrName);
            }
        }
        ObjectNode response = new ObjectMapper().createObjectNode();
        response.put("code", 0);
        response.set("result", result);
        return response;
    } catch (JsonMappingException je) {
        ObjectNode response = new ObjectMapper().createObjectNode();
        response.put("code", -1);
        response.put("message", "Json Mapping Exception would not be handled for auto plan. " + je.getMessage());
        return response;
    } catch (IOException | TexeraException e) {
        if (e.getMessage().contains("does not exist in the schema:")) {
            ObjectNode response = new ObjectMapper().createObjectNode();
            response.put("code", -1);
            response.put("message", "Attribute Not Exist Exception would not be handled for auto plan. " + e.getMessage());
            return response;
        }
        throw new TexeraWebException(e.getMessage());
    }
}
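Below is a hedged sketch of how a client might build a two-operator plan (mirroring step 4 above) and call this endpoint. The endpoint URL and the JSON property names ("operators", "links", "origin", "destination", "operatorID", "operatorType", "tableName", "query") are assumptions for illustration and may differ from the actual PropertyNameConstants values.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AutocompleteRequestSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // A scan source connected to a keyword-search operator.
        // All field names below are assumed for illustration only.
        ObjectNode plan = mapper.createObjectNode();
        ArrayNode operators = plan.putArray("operators");
        operators.addObject().put("operatorID", "scan-1").put("operatorType", "ScanSource").put("tableName", "plan");
        operators.addObject().put("operatorID", "keyword-1").put("operatorType", "KeywordMatcher").put("query", "texera");
        ArrayNode links = plan.putArray("links");
        links.addObject().put("origin", "scan-1").put("destination", "keyword-1");

        // Hypothetical local deployment URL; adjust to the actual server path.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/api/queryplan/autocomplete"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(mapper.writeValueAsString(plan)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        // On success the handler returns {"code": 0, "result": {"keyword-1": ["name", "description", ...]}}.
        JsonNode result = mapper.readTree(response.body()).path("result");
        result.fieldNames().forEachRemaining(opId -> System.out.println(opId + " -> " + result.get(opId)));
    }
}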
Use of edu.uci.ics.texera.web.TexeraWebException in project textdb by TextDB.
The class QueryPlanResource, method executeQueryPlan.
/**
 * This is the edu.uci.ics.texera.web.request handler for the execution of a Query Plan.
 * @param logicalPlanJson, the json representation of the logical plan
 * @return a JsonNode response containing either the query results and a resultID (for a TupleSink plan) or a status message
 */
@POST
@Path("/execute")
// TODO: investigate how to use LogicalPlan directly
public JsonNode executeQueryPlan(String logicalPlanJson) {
    try {
        LogicalPlan logicalPlan = new ObjectMapper().readValue(logicalPlanJson, LogicalPlan.class);
        Plan plan = logicalPlan.buildQueryPlan();
        ISink sink = plan.getRoot();
        // send response back to frontend
        if (sink instanceof TupleSink) {
            TupleSink tupleSink = (TupleSink) sink;
            tupleSink.open();
            List<Tuple> results = tupleSink.collectAllTuples();
            tupleSink.close();
            // make sure result directory is created
            if (Files.notExists(resultDirectory)) {
                Files.createDirectories(resultDirectory);
            }
            // clean up old result files
            cleanupOldResults();
            // generate new UUID as the result id
            String resultID = UUID.randomUUID().toString();
            // write original json of the result into a file
            java.nio.file.Path resultFile = resultDirectory.resolve(resultID + ".json");
            Files.createFile(resultFile);
            Files.write(resultFile, new ObjectMapper().writeValueAsBytes(results));
            // put readable json of the result into response
            ArrayNode resultNode = new ObjectMapper().createArrayNode();
            for (Tuple tuple : results) {
                resultNode.add(tuple.getReadableJson());
            }
            ObjectNode response = new ObjectMapper().createObjectNode();
            response.put("code", 0);
            response.set("result", resultNode);
            response.put("resultID", resultID);
            return response;
        } else {
            // execute the plan and return success message
            Engine.getEngine().evaluate(plan);
            ObjectNode response = new ObjectMapper().createObjectNode();
            response.put("code", 1);
            response.put("message", "plan sucessfully executed");
            return response;
        }
    } catch (IOException | TexeraException e) {
        throw new TexeraWebException(e.getMessage());
    }
}
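A minimal sketch of how a caller might interpret the /execute response built above, assuming the "code"/"result"/"resultID"/"message" fields shown in the handler; the sample payload in main is fabricated for illustration.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ExecuteResponseSketch {
    // Interprets the JSON returned by the /execute handler above.
    static void handleExecuteResponse(String responseBody) throws Exception {
        JsonNode response = new ObjectMapper().readTree(responseBody);
        int code = response.get("code").asInt();
        if (code == 0) {
            // TupleSink plan: tuples are in "result" and persisted server-side under "<resultID>.json".
            System.out.println("rows: " + response.get("result").size()
                    + ", saved as: " + response.get("resultID").asText());
        } else {
            // Non-TupleSink plan: only a status message is returned.
            System.out.println(response.get("message").asText());
        }
    }

    public static void main(String[] args) throws Exception {
        // Fabricated sample response for demonstration purposes.
        handleExecuteResponse("{\"code\":0,\"result\":[{\"name\":\"texera\"}],\"resultID\":\"sample-result-id\"}");
    }
}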