2013-03-28

The code is below. Why is the exception only thrown when execution reaches my `throw;`?

catch (WebException ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       return csFiles; 
      } 
      catch (Exception ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       throw; 
      } 

The exception is raised at the `throw;`. The exception message is: NullReferenceException: Object reference not set to an instance of an object.

System.NullReferenceException was unhandled by user code 
    HResult=-2147467261 
    Message=Object reference not set to an instance of an object. 
    Source=GatherLinks 
    StackTrace: 
     at GatherLinks.TimeOut.getHtmlDocumentWebClient(String url, Boolean useProxy, String proxyIp, Int32 proxyPort, String usename, String password) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\TimeOut.cs:line 55 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.BackgroundWebCrawling.secondryBackGroundWorker_DoWork(Object sender, DoWorkEventArgs e) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\BackgroundWebCrawling.cs:line 82 
     at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) 
     at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) 
    InnerException: 

This is the try code inside the webCrawler function:

public List<string> webCrawler(string mainUrl, int levels) 
     { 

      busy.WaitOne(); 


      HtmlWeb hw = new HtmlWeb(); 
      List<string> webSites; 
      List<string> csFiles = new List<string>(); 

      csFiles.Add("temp string to know that something is happening in level = " + levels.ToString()); 
      csFiles.Add("current site name in this level is : " + mainUrl); 

      try 
      { 
       HtmlAgilityPack.HtmlDocument doc = TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", ""); 
       done = true; 

       Object[] temp_arr = new Object[8]; 
       temp_arr[0] = csFiles; 
       temp_arr[1] = mainUrl; 
       temp_arr[2] = levels; 
       temp_arr[3] = currentCrawlingSite; 
       temp_arr[4] = sitesToCrawl; 
       temp_arr[5] = done; 
       temp_arr[6] = wccfg.failedUrls; 
       temp_arr[7] = failed; 

       OnProgressEvent(temp_arr); 


       currentCrawlingSite.Add(mainUrl); 
       webSites = getLinks(doc); 
       removeDupes(webSites); 
       removeDuplicates(webSites, currentCrawlingSite); 
       removeDuplicates(webSites, sitesToCrawl); 
       if (wccfg.removeext == true) 
       { 
        for (int i = 0; i < webSites.Count; i++) 
        { 
         webSites.Remove(removeExternals(webSites,mainUrl,wccfg.localy)); 
        } 
       } 
       if (wccfg.downloadcontent == true) 
       { 
        retwebcontent.retrieveImages(mainUrl); 
       } 

       if (levels > 0) 
        sitesToCrawl.AddRange(webSites); 



       if (levels == 0) 
       { 
        return csFiles; 
       } 
       else 
       { 


        for (int i = 0; i < webSites.Count(); i++) 
        { 


         if (wccfg.toCancel == true) 
         { 
          return new List<string>(); 
         } 
         string t = webSites[i]; 
         if ((t.StartsWith("http://") == true) || (t.StartsWith("https://") == true)) 
         { 
          csFiles.AddRange(webCrawler(t, levels - 1)); 
         } 

        } 
        return csFiles; 
       } 



      } 

      catch (WebException ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       return csFiles; 
      } 
      catch (Exception ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       throw; 
      } 
     } 

This is how I'm using wccfg at the top of the class:

private System.Threading.ManualResetEvent busy; 
     WebcrawlerConfiguration wccfg; 
     List<string> currentCrawlingSite; 
     List<string> sitesToCrawl; 
     RetrieveWebContent retwebcontent; 
     public event EventHandler<WebCrawlerProgressEventHandler> ProgressEvent; 
     public bool done; 
     public bool failed; 

     public WebCrawler(WebcrawlerConfiguration webcralwercfg) 
     { 
      failed = false; 
      done = false; 
      currentCrawlingSite = new List<string>(); 
      sitesToCrawl = new List<string>(); 
      busy = new System.Threading.ManualResetEvent(true); 
      wccfg = webcralwercfg; 
     } 

Please include the 'try' portion of this code. – Inisheer


Are you sure `wccfg` is not null in all of the `catch` blocks? –


Do you think that might be the exception *you caught*? What else were you expecting *throw* to do? –

Answers

3

You are getting the NullReferenceException because you have not initialized something that you are using inside your try block.

The code then enters your catch (Exception ex) block, which increments the counter, sets failed = true, and then rethrows the NullReferenceException.
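To see the rethrow behavior in isolation, here is a minimal standalone sketch (not the question's code): `throw;` rethrows the same exception object and preserves the original stack trace, whereas `throw ex;` would reset the trace to the rethrow site.

```csharp
using System;

class RethrowDemo
{
    static void Boom()
    {
        string s = null;
        Console.WriteLine(s.Length);   // NullReferenceException originates here
    }

    static void Main()
    {
        try
        {
            try
            {
                Boom();
            }
            catch (Exception)
            {
                // "throw;" rethrows the SAME exception object, so the
                // stack trace still points at Boom().
                // ("throw ex;" would instead make the trace start here.)
                throw;
            }
        }
        catch (Exception ex)
        {
            // The trace names Boom() as the true origin.
            Console.WriteLine(ex.StackTrace);
        }
    }
}
```

This is why the debugger appears to stop "at" the `throw;` even though the failure happened earlier inside the try block.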

1

Something is failing inside this call:

TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", "") 

The reason the debugger stops at your throw statement is that you caught the original exception, hiding it from the debugger. Set your debugger to "break on first-chance exceptions"; then you will see where the exception really originates and be able to inspect your variables, etc.
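If you cannot change the debugger settings, a programmatic stand-in (assuming .NET 4.0 or later) is the `AppDomain.CurrentDomain.FirstChanceException` event, which fires the moment any exception is raised, before any catch block runs:

```csharp
using System;
using System.Runtime.ExceptionServices;

class FirstChanceDemo
{
    static void Main()
    {
        // Fires for EVERY exception at the point it is raised, before
        // any catch handler runs -- similar in spirit to the debugger's
        // "break on first-chance exceptions" setting.
        AppDomain.CurrentDomain.FirstChanceException +=
            (object sender, FirstChanceExceptionEventArgs e) =>
                Console.WriteLine("First chance: " + e.Exception.Message);

        try
        {
            string s = null;
            Console.WriteLine(s.Length);   // raises NullReferenceException
        }
        catch (NullReferenceException)
        {
            // By the time we get here, the handler above has already
            // logged the exception at its true origin.
        }
    }
}
```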

It is generally a good idea to `#if` away any catch-all exception handlers during debugging, because they swallow a lot of important error information. For what you are doing, a try/finally would probably be better anyway.
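A sketch of that suggestion, using stand-ins for the question's fields (`failed`, `failedUrls`, `busy` and the crawl body are hypothetical here): the catch-all is compiled only into release builds, and the cleanup moves into a `finally` so it runs no matter how the method exits.

```csharp
using System;
using System.Net;
using System.Threading;

class CrawlSketch
{
    // Hypothetical stand-ins for the question's fields.
    static bool failed;
    static int failedUrls;
    static ManualResetEvent busy = new ManualResetEvent(true);

    static void Crawl(string url)
    {
        busy.WaitOne();
        try
        {
            // ... fetch and parse url ...
        }
        catch (WebException)
        {
            // Expected network failure: record it and carry on.
            failed = true;
            failedUrls++;
        }
#if !DEBUG
        catch (Exception)
        {
            // Catch-all compiled in only for release builds, so while
            // debugging the debugger breaks at the exception's true source.
            failed = true;
            failedUrls++;
            throw;
        }
#endif
        finally
        {
            // Runs whether the crawl succeeded, recorded a failure, or threw.
            busy.Set();
        }
    }

    static void Main() { Crawl("http://example.com/"); }
}
```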
