2015-04-04
3

StringTokenizer is used to tokenize a tagged string in Java. The string was tagged with Stanford's part-of-speech MaxentTagger. Substrings of each tagged token are then taken to display the POS tag and the word, iterating token by token.

Here is the text before tagging:

Man has always had this notion that brave deeds are manifest in physical actions. While it is not entirely erroneous, there doesn't lie the singular path to valor. From of old, it is a sign of strength to fight back a wild animal. It is understandable if fought in defense; however, to go the extra mile and instigate an animal and fight it is the lowest degree of civilization man can exhibit. More so, in this age of reasoning and knowledge. Tradition may call it, but adhering blindly to it is idiocy, be it the famed Jallikattu in Tamil Nadu (the Indian equivalent to the Spanish Bullfighting) or the cock-fights. Pelting stones at a dog and relishing it howl in pain is dreadful. If one only gave as much as a trickle of thought and conscience the issue would surface as deplorable in every aspect. Animals play a part along with us in our ecosystem. And, some animals are dearer: the stray dogs that guard our street, the intelligent crow, the beast of burden and the everyday animals of pasture. Literature has voiced in its own way: In The Lord of the Rings the fellowship treated Bill Ferny's pony with utmost care; in Harry Potter when they didn't heed Hermione's advice on the treatment of house elves they learned the hard way that it caused their own undoing; and Jack London, writes all about animals. Indeed, Kindness to animals is a virtue.

Here is the POS-tagged text:

Man_NN has_VBZ always_RB had_VBN this_DT notion_NN that_IN brave_VBP deeds_NNS are_VBP manifest_JJ in_IN physical_JJ actions_NNS ._. While_IN it_PRP is_VBZ not_RB entirely_RB erroneous_JJ ,_, there_EX does_VBZ n't_RB lie_VB the_DT singular_JJ path_NN to_TO valor_NN ._. From_IN of_IN old_JJ ,_, it_PRP is_VBZ a_DT sign_NN of_IN strength_NN to_TO fight_VB back_RP a_DT wild_JJ animal_NN ._. It_PRP is_VBZ understandable_JJ if_IN fought_VBN in_IN defense_NN ;_: however_RB ,_, to_TO go_VB the_DT extra_JJ mile_NN and_CC instigate_VB an_DT animal_NN and_CC fight_VB it_PRP is_VBZ the_DT lowest_JJS degree_NN of_IN civilization_NN man_NN can_MD exhibit_VB ._. More_RBR so_RB ,_, in_IN this_DT age_NN of_IN reasoning_NN and_CC knowledge_NN ._. Tradition_NN may_MD call_VB it_PRP ,_, but_CC adhering_JJ blindly_RB to_TO it_PRP is_VBZ idiocy_NN ,_, be_VB it_PRP the_DT famed_JJ Jallikattu_NNP in_IN Tamil_NNP Nadu_NNP -LRB-_-LRB- The_DT Indian_JJ equivalent_NN to_TO the_DT Spanish_JJ Bullfighting_NN -RRB-_-RRB- or_CC the_DT cock-fights_NNS ._. Pelting_VBG stones_NNS at_IN a_DT dog_NN and_CC relishing_VBG it_PRP howl_NN in_IN pain_NN is_VBZ dreadful_JJ ._. If_IN one_CD only_RB gave_VBD as_RB much_JJ as_IN a_DT trickle_VB of_IN thought_NN and_CC conscience_NN the_DT issue_NN would_MD surface_VB as_IN deplorable_JJ in_IN every_DT aspect_NN ._. Animals_NNS play_VBP a_DT part_NN along_IN with_IN us_PRP in_IN our_PRP$ ecosystem_NN ._. And_CC ,_, some_DT animals_NNS are_VBP dearer_RBR :_: the_DT stray_JJ dogs_NNS that_WDT guard_VBP our_PRP$ street_NN ,_, the_DT intelligent_JJ crow_NN ,_, the_DT beast_NN of_IN burden_NN and_CC the_DT everyday_JJ animals_NNS of_IN pasture_NN ._. Literature_NN has_VBZ voiced_VBN in_IN its_PRP$ own_JJ way_NN :_: In_IN The_DT Lord_NN of_IN the_DT Rings_NNP the_DT fellowship_NN treated_VBN Bill_NNP Ferny_NNP 's_POS pony_NN with_IN utmost_JJ care_NN ;_: in_IN Harry_NNP Potter_NNP when_WRB they_PRP did_VBD n't_RB heed_VB Hermione_NNP 's_POS advice_NN on_IN the_DT treatment_NN of_IN house_NN elves_NNS they_PRP learned_VBD the_DT hard_JJ way_NN that_IN it_PRP caused_VBD their_PRP$ own_JJ undoing_NN ;_: and_CC Jack_NNP London_NNP ,_, writes_VBZ all_DT about_IN animals_NNS ._. Indeed_RB ,_, Kindness_NN to_TO animals_NNS is_VBZ a_DT virtue_NN ._.

Here is the code that seeks to obtain the substrings above:

// Requires java.io.*, java.util.*, and the Stanford classes:
// edu.stanford.nlp.tagger.maxent.MaxentTagger, edu.stanford.nlp.process.DocumentPreprocessor,
// edu.stanford.nlp.ling.HasWord, edu.stanford.nlp.ling.Sentence
String line; 
StringBuilder sb=new StringBuilder(); 
try(FileInputStream input = new FileInputStream("E:\\D.txt")) 
    { 
    int data = input.read(); 
    while(data != -1) 
     { 
     sb.append((char)data); 
     data = input.read(); 
     } 
    } 
catch(IOException e)//read() throws IOException, so FileNotFoundException alone does not compile 
{ 
    System.err.println("IO Exception : " + e.getMessage()); 
} 
line=sb.toString(); 
String line1=line;//Copy for Tagger 
line+=" T";  
List<String> sentenceList = new ArrayList<String>();//TAGGED DOCUMENT 
MaxentTagger tagger = new MaxentTagger("E:\\Installations\\Java\\Tagger\\english-left3words-distsim.tagger"); 
String tagged = tagger.tagString(line1); 
File file = new File("A.txt"); 
BufferedWriter output = new BufferedWriter(new FileWriter(file)); 
output.write(tagged); 
output.close(); 
DocumentPreprocessor dp = new DocumentPreprocessor("C:\\Users\\Admin\\workspace\\Project\\A.txt"); 
int largest=50; 
int m=0; 
StringTokenizer st1; 
for (List<HasWord> sentence : dp) 
{ 
    String sentenceString = Sentence.listToString(sentence); 
    sentenceList.add(sentenceString.toString()); 
} 
String[][] Gloss=new String[sentenceList.size()][largest]; 
String[] Adj=new String[largest]; 
String[] Adv=new String[largest]; 
String[] Noun=new String[largest]; 
String[] Verb=new String[largest]; 
int adj=0,adv=0,noun=0,verb=0; 
for(int i=0;i<sentenceList.size();i++) 
{ 
    st1= new StringTokenizer(sentenceList.get(i)," ,(){}[]/.;:&?!"); 
    m=0;//Count for Gloss 2nd dimension 
    //GETTING THE POS's COMPARTMENTALISED 
    while(st1.hasMoreTokens()) 
    { 
     String token=st1.nextToken(); 
     if(token.length()>1)//TO SKIP PAST TOKENS FOR PUNCTUATION MARKS 
     { 
     System.out.println(token); 
     String s=token.substring(token.lastIndexOf("_")+1,token.length()); 
     System.out.println(s); 
     if(s.equals("JJ")||s.equals("JJR")||s.equals("JJS")) 
     { 
      Adj[adj]=token.substring(0,token.lastIndexOf("_")); 
      System.out.println(Adj[adj]); 
      adj++; 
     } 
     if(s.equals("NN")||s.equals("NNS")) 
     { 
      Noun[noun]=token.substring(0, token.lastIndexOf("_")); 
      System.out.println(Noun[noun]); 
      noun++; 
     } 
     if(s.equals("RB")||s.equals("RBR")||s.equals("RBS")) 
     { 
      Adv[adv]=token.substring(0,token.lastIndexOf("_")); 
      System.out.println(Adv[adv]); 
      adv++; 
     } 
     if(s.equals("VB")||s.equals("VBD")||s.equals("VBG")||s.equals("VBN")||s.equals("VBP")||s.equals("VBZ")) 
     { 
      Verb[verb]=token.substring(0,token.lastIndexOf("_")); 
      System.out.println(Verb[verb]); 
      verb++; 
     } 
     } 
    } 
    i++;//TO SKIP PAST THE LINES WHERE AN EXTRA UNDERSCORE OCCURS FOR FULLSTOP 
} 

D.txt contains the plain text.

As for the problem:

Every word gets tokenized at the spaces, except 'n't_RB', which gets tokenized separately as n't and RB.
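For reference, StringTokenizer itself splits only on the delimiter characters handed to it, so with the delimiter set used in the code above a tagged token such as 'n't_RB' should stay whole (neither the apostrophe nor the underscore is a delimiter). A minimal sketch (class name is mine):

```java
import java.util.StringTokenizer;

public class TokenDemo {
    public static void main(String[] args) {
        // Same delimiter set as the question's code: space plus punctuation
        StringTokenizer st = new StringTokenizer("does_VBZ n't_RB lie_VB", " ,(){}[]/.;:&?!");
        while (st.hasMoreTokens()) {
            // "n't_RB" contains no delimiter character, so it is emitted as one token
            System.out.println(st.nextToken());
        }
    }
}
```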

This is what the output looks like:

Man_NN 
NN 
Man 
has_VBZ 
VBZ 
has 
always_RB 
RB 
always 
had_VBN 
VBN 
had 
this_DT 
DT 
notion_NN 
NN 
notion 
that_IN 
IN 
brave_VBP 
VBP 
brave 
deeds_NNS 
NNS 
deeds 
are_VBP 
VBP 
are 
manifest_JJ 
JJ 
manifest 
in_IN 
IN 
physical_JJ 
JJ 
physical 
actions_NNS 
NNS 
actions 
While_IN 
IN 
it_PRP 
PRP 
is_VBZ 
VBZ 
is 
not_RB 
RB 
not 
entirely_RB 
RB 
entirely 
erroneous_JJ 
JJ 
erroneous 
there_EX 
EX 
does_VBZ 
VBZ 
does 
n't 
n't 
RB 
RB 

But if I run the tokenizer on just 'there_EX does_VBZ n't_RB lie_VB', then 'n't_RB' gets tokenized together. When I run the program I get a StringIndexOutOfBoundsException, which is understandable since there is no '_' in 'n't' or 'RB'. Could anyone take a look at it? Thanks.
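(For what it's worth, the exception itself comes from the substring(0, lastIndexOf("_")) calls in the code above: once a tag-only token such as "RB" arrives, lastIndexOf returns -1 and substring(0, -1) throws. A minimal sketch of just that failure mode, class name mine:)

```java
public class SubstringDemo {
    public static void main(String[] args) {
        String token = "RB"; // a tag-only token, as produced after the unexpected split
        int idx = token.lastIndexOf("_"); // -1: no underscore present
        try {
            String word = token.substring(0, idx); // substring(0, -1) is out of bounds
            System.out.println(word);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("threw StringIndexOutOfBoundsException");
        }
    }
}
```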

+0

What are you trying to ask? – Rahul 2015-04-04 09:59:26

+0

The question is why only 'n't_RB' gets split into n't and RB, while every other word stays tokenized together with its underscore? – 2015-04-04 10:07:41

+0

Because of the line if(token.length() > 1) //TO SKIP PAST TOKENS FOR PUNCTUATION MARKS – Rahul 2015-04-04 10:15:50

Answers

1

The DocumentPreprocessor documentation says:

Note: if a null argument is used, the file is assumed to already be tokenized and DocumentPreprocessor performs no tokenization.

Since the document you load from the file was already tokenized in the first step of your program, you should do:

DocumentPreprocessor dp = new DocumentPreprocessor("./data/stanford-nlp/A.txt"); 
dp.setTokenizerFactory(null); 

Then it outputs the words correctly, e.g.

... 
did_VBD 
VBD 
did 
n't_RB 
RB 
n't 
heed_VB 
VB 
heed 
Hermione_NNP 
NNP 
's_POS 
POS 
... 
+0

Thank you so much. I can't quite fathom what motivates you guys to answer random people's doubts :) – 2015-04-04 13:38:23

+0

The challenge, perhaps ;) – 2015-04-04 14:04:13

+0

Now another problem has surfaced. DocumentPreprocessor does more than just split the sentences. – 2015-04-05 18:38:04

0

I would try String.split() instead of StringTokenizer:

String str = "Man_NN has_VBZ always_RB had_VBN this_DT notion_NN that_IN brave_VBP deeds_NNS are_VBP manifest_JJ in_IN physical_JJ actions_NNS ._. While_IN it_PRP is_VBZ not_RB entirely_RB erroneous_JJ ,_, there_EX does_VBZ n't_RB lie_VB the_DT singular_JJ path_NN to_TO valor_NN ._. From_IN of_IN old_JJ ,_, it_PRP is_VBZ a_DT sign_NN of_IN strength_NN to_TO fight_VB back_RP a_DT wild_JJ animal_NN ._. It_PRP is_VBZ understandable_JJ if_IN fought_VBN in_IN defense_NN ;_: however_RB ,_, to_TO go_VB the_DT extra_JJ mile_NN and_CC instigate_VB an_DT animal_NN and_CC fight_VB it_PRP is_VBZ the_DT lowest_JJS degree_NN of_IN civilization_NN man_NN can_MD exhibit_VB ._. More_RBR so_RB ,_, in_IN this_DT age_NN of_IN reasoning_NN and_CC knowledge_NN ._. Tradition_NN may_MD call_VB it_PRP ,_, but_CC adhering_JJ blindly_RB to_TO it_PRP is_VBZ idiocy_NN ,_, be_VB it_PRP the_DT famed_JJ Jallikattu_NNP in_IN Tamil_NNP Nadu_NNP -LRB-_-LRB- The_DT Indian_JJ equivalent_NN to_TO the_DT Spanish_JJ Bullfighting_NN -RRB-_-RRB- or_CC the_DT cock-fights_NNS ._. Pelting_VBG stones_NNS at_IN a_DT dog_NN and_CC relishing_VBG it_PRP howl_NN in_IN pain_NN is_VBZ dreadful_JJ ._. If_IN one_CD only_RB gave_VBD as_RB much_JJ as_IN a_DT trickle_VB of_IN thought_NN and_CC conscience_NN the_DT issue_NN would_MD surface_VB as_IN deplorable_JJ in_IN every_DT aspect_NN ._. Animals_NNS play_VBP a_DT part_NN along_IN with_IN us_PRP in_IN our_PRP$ ecosystem_NN ._. And_CC ,_, some_DT animals_NNS are_VBP dearer_RBR :_: the_DT stray_JJ dogs_NNS that_WDT guard_VBP our_PRP$ street_NN ,_, the_DT intelligent_JJ crow_NN ,_, the_DT beast_NN of_IN burden_NN and_CC the_DT everyday_JJ animals_NNS of_IN pasture_NN ._. 
Literature_NN has_VBZ voiced_VBN in_IN its_PRP$ own_JJ way_NN :_: In_IN The_DT Lord_NN of_IN the_DT Rings_NNP the_DT fellowship_NN treated_VBN Bill_NNP Ferny_NNP 's_POS pony_NN with_IN utmost_JJ care_NN ;_: in_IN Harry_NNP Potter_NNP when_WRB they_PRP did_VBD n't_RB heed_VB Hermione_NNP 's_POS advice_NN on_IN the_DT treatment_NN of_IN house_NN elves_NNS they_PRP learned_VBD the_DT hard_JJ way_NN that_IN it_PRP caused_VBD their_PRP$ own_JJ undoing_NN ;_: and_CC Jack_NNP London_NNP ,_, writes_VBZ all_DT about_IN animals_NNS ._. Indeed_RB ,_, Kindness_NN to_TO animals_NNS is_VBZ a_DT virtue_NN ._. "; 

for(String word : str.split("\\s")){ 

    if(word.split("_").length==2){ 

     String filteredWord = word.split("_")[0]; 
     String wordType  = word.split("_")[1]; 

     System.out.println(word+" = "+filteredWord+ " - "+wordType); 

    } 

} 

With output that looks like:

Man_NN = Man - NN 
has_VBZ = has - VBZ 
always_RB = always - RB 
had_VBN = had - VBN 
this_DT = this - DT 
notion_NN = notion - NN 
that_IN = that - IN 
brave_VBP = brave - VBP 
deeds_NNS = deeds - NNS 
are_VBP = are - VBP 
manifest_JJ = manifest - JJ 
in_IN = in - IN 
physical_JJ = physical - JJ 
actions_NNS = actions - NNS 
...... 

As for why only 'n't_RB' is getting split into n't and RB:

StringTokenizer stk = new StringTokenizer("n't_RB","_"); 

while(stk.hasMoreTokens()){ 
    System.out.println(stk.nextToken()); 
} 

This will split it correctly:

n't 
RB 
+0

Thanks, but why does 'manifest_JJ' get tokenized whole as manifest_JJ while 'n't_RB' gets split into n't and RB? This is what confuses me. – 2015-04-04 10:10:38

+0

String.split doesn't solve the problem. As can be inferred from the output, every word stays whole like 'manifest_JJ', so why does n't_RB get split into n't and RB? – 2015-04-04 12:13:37

1

The method lastIndexOf returns -1 when it finds no match. The exception you receive is caused by the substring call you make when lastIndexOf cannot find the given character in the string.

I think what you can do is check that the index differs from -1 before using it. With that check you can avoid that annoying error. Unfortunately, without the whole input text it is really hard to work out which strings do not contain the particular character you specified.
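A minimal sketch of that check, assuming the same token layout as in the question (class name mine):

```java
public class SafeSplit {
    public static void main(String[] args) {
        String[] tokens = { "manifest_JJ", "RB" }; // "RB" has no underscore
        for (String token : tokens) {
            int idx = token.lastIndexOf("_");
            if (idx != -1) { // only split tokens that actually contain an underscore
                System.out.println(token.substring(0, idx) + " -> " + token.substring(idx + 1));
            } else {
                System.out.println("skipping malformed token: " + token);
            }
        }
    }
}
```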

For completeness, I think you also need to fix the way you collect all the POS elements. In my opinion the String matrices are error-prone (you need to figure out how to manage the indices) and rather inefficient for this kind of task.

Perhaps you could use a Multimap to associate with each POS type all the elements belonging to it. I think that would let you manage everything much better.
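A sketch of that idea using a plain JDK Map of lists (Guava's Multimap would look similar); the class name and sample tokens are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PosGroups {
    public static void main(String[] args) {
        // One list of words per POS tag, replacing the fixed-size
        // Adj/Adv/Noun/Verb arrays and their manual counters
        Map<String, List<String>> byPos = new HashMap<>();
        String[] tokens = { "Man_NN", "has_VBZ", "always_RB", "manifest_JJ" };
        for (String token : tokens) {
            int idx = token.lastIndexOf('_');
            if (idx == -1) continue; // skip tokens without a tag
            String word = token.substring(0, idx);
            String tag = token.substring(idx + 1);
            byPos.computeIfAbsent(tag, k -> new ArrayList<>()).add(word);
        }
        System.out.println(byPos);
    }
}
```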

+0

Thanks, I'll look into your suggestions. I've also posted the full text. I do understand the exception. The only thing I can't understand is why n't_RB gets split at the underscore, unlike the other elements, which are split at the word boundaries. – 2015-04-04 10:17:48