Asked 2011-02-13

I want to parse a list of sentences with the Stanford NLP parser. My list is an ArrayList — how can I parse the whole list with LexicalizedParser?

For each sentence, I want to obtain a parse of this form:

Tree parse = (Tree) lp1.apply(sentence); 

Answers


Actually, the Stanford NLP documentation provides samples of how to parse sentences.

You can find the documentation here


Also see the ParserDemo example that ships with the parser. You can call apply() directly on a String that is a sentence. – 2012-01-15 16:38:45


Although one can dig into the documentation, I am providing the code here on SO, especially since links move and/or die. This particular answer uses the whole pipeline. If you are not interested in the whole pipeline, I will provide an alternative answer in a second.

The example below is the complete way of using the Stanford pipeline. If you are not interested in coreference resolution, remove dcoref from the third line of code. So, in the example below, if you just feed the pipeline a body of text (the text variable), the pipeline will split the text into sentences for you (the ssplit annotator). Only have one sentence? Well, that is fine; you can feed that in as the text variable.

// creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
    Properties props = new Properties(); 
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref"); 
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props); 

    // read some text in the text variable 
    String text = ... // Add your text here! 

    // create an empty Annotation just with the given text 
    Annotation document = new Annotation(text); 

    // run all Annotators on this text 
    pipeline.annotate(document); 

    // these are all the sentences in this document 
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types 
    List<CoreMap> sentences = document.get(SentencesAnnotation.class); 

    for(CoreMap sentence: sentences) { 
     // traversing the words in the current sentence 
     // a CoreLabel is a CoreMap with additional token-specific methods 
     for (CoreLabel token: sentence.get(TokensAnnotation.class)) { 
     // this is the text of the token 
     String word = token.get(TextAnnotation.class); 
     // this is the POS tag of the token 
     String pos = token.get(PartOfSpeechAnnotation.class); 
     // this is the NER label of the token 
     String ne = token.get(NamedEntityTagAnnotation.class);  
     } 

     // this is the parse tree of the current sentence 
     Tree tree = sentence.get(TreeAnnotation.class); 

     // this is the Stanford dependency graph of the current sentence 
     SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class); 
    } 

    // This is the coreference link graph 
    // Each chain stores a set of mentions that link to each other, 
    // along with a method for getting the most representative mention 
    // Both sentence and token offsets start at 1! 
    Map<Integer, CorefChain> graph = 
     document.get(CorefChainAnnotation.class); 
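Since the original question starts from an ArrayList of sentences, one straightforward option with the pipeline is to join the list into a single text body and let the ssplit annotator split it back into sentences, collecting one parse tree per sentence. A minimal sketch (assumes the CoreNLP jars and models are on the classpath; the class names match the sample above, and the joined-text approach is my own suggestion, not from the original answer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

public class ParseSentenceList {
    public static void main(String[] args) {
        List<String> mySentences = new ArrayList<String>();
        mySentences.add("It is a fine day today.");
        mySentences.add("The parser builds one tree per sentence.");

        // parse trees only: no NER or coreference annotators needed
        Properties props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma, parse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // join the list into one document; ssplit re-splits it into sentences
        Annotation document = new Annotation(String.join(" ", mySentences));
        pipeline.annotate(document);

        List<Tree> trees = new ArrayList<Tree>();
        for (CoreMap sentence : document.get(SentencesAnnotation.class)) {
            trees.add(sentence.get(TreeAnnotation.class));
        }
    }
}
```

If the list entries are not guaranteed to be single well-formed sentences, annotating each entry as its own Annotation avoids ssplit re-segmenting them differently.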

So, as promised, if you don't want to access the full Stanford pipeline (though I believe that is the recommended approach), you can work with the LexicalizedParser class directly. In that case, you would download the latest version of the Stanford Parser (whereas the other approach uses the CoreNLP tools). Make sure that, in addition to the parser jar, you have the model file for the parser you want. Example code:

LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz"); 
String sentence = "It is a fine day today"; 
Tree parse = lp.parse(sentence); 

Note that this works with version 3.3.1 of the parser.
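To answer the original question directly with this class: just loop over the ArrayList and parse each entry. A minimal sketch (assumes the parser jar and the englishPCFG.ser.gz model file are available; the sentence strings are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.Tree;

public class ParseList {
    public static void main(String[] args) {
        // assumption: englishPCFG.ser.gz is in the working directory
        LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");

        List<String> sentences = new ArrayList<String>();
        sentences.add("It is a fine day today.");
        sentences.add("How do I parse a list of sentences?");

        // one parse tree per sentence, in the same order as the input list
        List<Tree> parses = new ArrayList<Tree>();
        for (String sentence : sentences) {
            parses.add(lp.parse(sentence));
        }
    }
}
```

The parser loads its model once up front, so reusing one LexicalizedParser instance across the loop is much cheaper than reloading it per sentence.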