Use of edu.stanford.nlp.ling.IndexedWord in project CoreNLP by stanfordnlp.
Class SemanticGraph, method toList.
/**
 * Returns a String representation of this graph as a list of typed
 * dependencies, as exemplified by the following:
 *
 * <pre>
 * nsubj(died-6, Sam-3)
 * tmod(died-6, today-9)
 * </pre>
 *
 * @return a <code>String</code> representation of this set of typed
 *         dependencies
 */
public String toList() {
  StringBuilder buf = new StringBuilder();
  for (IndexedWord root : getRoots()) {
    buf.append("root(ROOT-0, ");
    buf.append(root.toString(CoreLabel.OutputFormat.VALUE_INDEX)).append(")\n");
  }
  for (SemanticGraphEdge edge : this.edgeListSorted()) {
    buf.append(edge.getRelation().toString()).append("(");
    buf.append(edge.getSource().toString(CoreLabel.OutputFormat.VALUE_INDEX)).append(", ");
    buf.append(edge.getTarget().toString(CoreLabel.OutputFormat.VALUE_INDEX)).append(")\n");
  }
  return buf.toString();
}
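A minimal sketch of how this output might be produced from a parsed sentence, assuming a standard CoreNLP pipeline with the depparse annotator; the pipeline setup and annotation keys below are ordinary CoreNLP usage, not taken from the snippet above:

import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

public class ToListDemo {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,depparse");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    Annotation doc = new Annotation("Sam died today.");
    pipeline.annotate(doc);

    CoreMap sentence = doc.get(CoreAnnotations.SentencesAnnotation.class).get(0);
    SemanticGraph sg = sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class);

    // Each root prints as root(ROOT-0, word-index), followed by one
    // relation(governor-index, dependent-index) line per edge.
    System.out.print(sg.toList());
  }
}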
Use of edu.stanford.nlp.ling.IndexedWord in project CoreNLP by stanfordnlp.
Class SemanticGraph, method getSiblings.
/**
 * Returns the siblings of a particular node. Siblings are the other
 * children of the node's parent, where the parent is the one returned
 * by getParent.
 *
 * @return a collection of sibling nodes (not including the vertex itself);
 *         the collection is empty if the vertex has no parent
 */
public Collection<IndexedWord> getSiblings(IndexedWord vertex) {
  IndexedWord parent = this.getParent(vertex);
  if (parent != null) {
    Set<IndexedWord> result = wordMapFactory.newSet();
    result.addAll(this.getChildren(parent));
    // remove this vertex - you're not your own sibling
    result.remove(vertex);
    return result;
  } else {
    return Collections.emptySet();
  }
}
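A small illustrative sketch of calling getSiblings, assuming a SemanticGraph named sg is already in hand (for instance from the pipeline sketch above); printSiblings is a hypothetical helper name, not part of CoreNLP:

import java.util.Collection;
import edu.stanford.nlp.ling.IndexedWord;
import edu.stanford.nlp.semgraph.SemanticGraph;

class SiblingDemo {
  // Hypothetical helper: prints the siblings of every vertex in the graph.
  static void printSiblings(SemanticGraph sg) {
    for (IndexedWord vertex : sg.vertexSet()) {
      Collection<IndexedWord> siblings = sg.getSiblings(vertex);
      // Root nodes have no parent, so getSiblings returns an empty set for them.
      System.out.println(vertex.word() + " -> " + siblings);
    }
  }
}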
Use of edu.stanford.nlp.ling.IndexedWord in project CoreNLP by stanfordnlp.
Class SemanticGraph, method getPathToRoot.
/**
 * Helper function for the public function with the same name.
 * <br>
 * Builds up the list backwards.
 */
private List<IndexedWord> getPathToRoot(IndexedWord vertex, List<IndexedWord> used) {
  used.add(vertex);
  // TODO: Apparently the order of the nodes in the path to the root
  // makes a difference for the RTE system. Look into this some more
  List<IndexedWord> parents = getParentList(vertex);
  // Set<IndexedWord> parents = wordMapFactory.newSet();
  // parents.addAll(getParents(vertex));
  parents.removeAll(used);
  if (roots.contains(vertex) || (parents.isEmpty())) {
    used.remove(used.size() - 1);
    if (roots.contains(vertex))
      return Generics.newArrayList();
    else
      // no path found
      return null;
  }
  for (IndexedWord parent : parents) {
    List<IndexedWord> path = getPathToRoot(parent, used);
    if (path != null) {
      path.add(parent);
      used.remove(used.size() - 1);
      return path;
    }
  }
  used.remove(used.size() - 1);
  return null;
}
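The public method of the same name delegates to this helper, as the Javadoc notes. A sketch of calling it, assuming an existing SemanticGraph; printPaths is a hypothetical helper name:

import java.util.List;
import edu.stanford.nlp.ling.IndexedWord;
import edu.stanford.nlp.semgraph.SemanticGraph;

class PathDemo {
  // Hypothetical helper: prints each vertex together with its path of
  // ancestors up to a root, or null when no such path exists.
  static void printPaths(SemanticGraph sg) {
    for (IndexedWord vertex : sg.vertexSet()) {
      List<IndexedWord> path = sg.getPathToRoot(vertex);
      System.out.println(vertex.word() + " -> " + path);
    }
  }
}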
Use of edu.stanford.nlp.ling.IndexedWord in project CoreNLP by stanfordnlp.
Class SemanticGraph, method yieldSpan.
/**
 * Returns the span of the subtree yield of this node. That is, the span of all the nodes under it.
 * In the case of projective graphs, the words in this span are also the yield of the constituent rooted
 * at this node.
 *
 * @param word The word acting as the root of the constituent we are finding.
 * @return A span, represented as a pair of integers. The span is zero indexed. The begin is inclusive and the end is exclusive.
 */
public Pair<Integer, Integer> yieldSpan(IndexedWord word) {
  int min = Integer.MAX_VALUE;
  int max = Integer.MIN_VALUE;
  Stack<IndexedWord> fringe = new Stack<>();
  fringe.push(word);
  while (!fringe.isEmpty()) {
    IndexedWord parent = fringe.pop();
    min = Math.min(min, parent.index() - 1);
    max = Math.max(max, parent.index());
    for (SemanticGraphEdge edge : outgoingEdgeIterable(parent)) {
      if (!edge.isExtra()) {
        fringe.push(edge.getDependent());
      }
    }
  }
  return Pair.makePair(min, max);
}
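An illustrative sketch of reading the spans back out, assuming an existing SemanticGraph; printSpans is a hypothetical helper name:

import edu.stanford.nlp.ling.IndexedWord;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.Pair;

class YieldSpanDemo {
  // Hypothetical helper: prints the zero-indexed, end-exclusive token span
  // covered by the subtree rooted at each vertex.
  static void printSpans(SemanticGraph sg) {
    for (IndexedWord word : sg.vertexSet()) {
      Pair<Integer, Integer> span = sg.yieldSpan(word);
      System.out.println(word.word() + " covers tokens [" + span.first() + ", " + span.second() + ")");
    }
  }
}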
Use of edu.stanford.nlp.ling.IndexedWord in project CoreNLP by stanfordnlp.
Class SemanticGraphFactory, method makeFromGraphs.
/**
 * Given a list of graphs, constructs a new graph combined from the
 * collection of graphs. Original vertices are used, edges are
 * copied. Graphs are ordered by the sentence index and index of
 * the original vertices. Intent is to create a "mega graph"
 * similar to the graphs used in the RTE problem.
 * <br>
 * This method only works if the indexed words have different
 * sentence ids, as otherwise the maps used will confuse several of
 * the IndexedWords.
 */
public static SemanticGraph makeFromGraphs(Collection<SemanticGraph> sgList) {
  SemanticGraph sg = new SemanticGraph();
  Collection<IndexedWord> newRoots = Generics.newHashSet();
  for (SemanticGraph currSg : sgList) {
    newRoots.addAll(currSg.getRoots());
    for (IndexedWord currVertex : currSg.vertexSet()) {
      sg.addVertex(currVertex);
    }
    for (SemanticGraphEdge currEdge : currSg.edgeIterable()) {
      sg.addEdge(currEdge.getGovernor(), currEdge.getDependent(),
                 currEdge.getRelation(), currEdge.getWeight(), currEdge.isExtra());
    }
  }
  sg.setRoots(newRoots);
  return sg;
}
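A hedged sketch of one way this might be used: collect the per-sentence basic dependency graphs of an annotated document and merge them into one "mega graph". The combine helper and the choice of BasicDependenciesAnnotation are illustrative assumptions, not part of SemanticGraphFactory:

import java.util.ArrayList;
import java.util.List;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.semgraph.SemanticGraphFactory;
import edu.stanford.nlp.util.CoreMap;

class MegaGraphDemo {
  // Hypothetical helper: merges the per-sentence basic-dependency graphs of an
  // annotated document. This relies on IndexedWords from different sentences
  // carrying different sentence indices, as the Javadoc above requires.
  static SemanticGraph combine(Annotation doc) {
    List<SemanticGraph> graphs = new ArrayList<>();
    for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
      graphs.add(sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class));
    }
    return SemanticGraphFactory.makeFromGraphs(graphs);
  }
}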