Note: this example is a very basic way of getting links, so it would need tweaking to be more robust. :)
I don't know how useful this code will be, but hopefully it gives you an idea of a direction to go in (just copy and paste it into R; it's a self-contained example once the RCurl and XML packages are installed):
library(RCurl)
library(XML)

get.links.on.page <- function(u) {
  doc   <- getURL(u)
  html  <- htmlTreeParse(doc, useInternalNodes = TRUE)
  # grab every <a> node in the body that has an href attribute
  nodes <- getNodeSet(html, "//html//body//a[@href]")
  urls  <- sapply(nodes, xmlGetAttr, "href")
  urls  <- sort(urls)
  return(urls)
}
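As a side note (my addition, not part of the original answer), the XML package also lets you collapse the node lookup and attribute extraction into a single call with xpathSApply(); a compact sketch of the same function:

get.links.on.page.compact <- function(u) {
  doc  <- getURL(u)
  html <- htmlTreeParse(doc, useInternalNodes = TRUE)
  # xpathSApply() runs the XPath query and applies xmlGetAttr to every match
  urls <- xpathSApply(html, "//a[@href]", xmlGetAttr, "href")
  sort(urls)
}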
# a naive way of doing it. Python has 'urlparse', which is supposed to be rather good at this
get.root.domain <- function(u) {
  root <- unlist(strsplit(u, "/"))[3]
  return(root)
}
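If you want something a bit less fragile than splitting on "/", a small regex gets you roughly what Python's urlparse would give for the host part (just a sketch; get.root.domain2 is my own name, not part of the answer above):

get.root.domain2 <- function(u) {
  # strip the scheme (e.g. "http://"), then drop everything after the first "/"
  sub("/.*$", "", sub("^[a-zA-Z]+://", "", u))
}
# e.g. get.root.domain2("http://www.r-bloggers.com/blogs-list/") gives "www.r-bloggers.com"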
# a naive method to filter out duplicate, invalid and self-referencing urls.
filter.links <- function(seed, urls) {
  urls <- unique(urls)
  # keep only absolute links that start with "http"
  urls <- urls[grepl("^http", urls)]
  seed.root <- get.root.domain(seed)
  # drop links back to the seed's own domain; the guard avoids urls[-integer(0)],
  # which would wipe out everything when there are no self-references
  self.refs <- grep(seed.root, urls, fixed = TRUE)
  if (length(self.refs) > 0) urls <- urls[-self.refs]
  return(urls)
}
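For example (the literal vector below is made up purely to show what the filter keeps and drops):

filter.links("http://www.r-bloggers.com/blogs-list/",
             c("http://example.org/post",
               "http://www.r-bloggers.com/about/",  # same domain as the seed, dropped
               "/relative/link",                    # not an absolute http link, dropped
               "http://example.org/post"))          # duplicate, dropped
# returns "http://example.org/post"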
# pass each url to this function
main.fn <- function(seed) {
  raw.urls <- get.links.on.page(seed)
  filtered.urls <- filter.links(seed, raw.urls)
  return(filtered.urls)
}
### example ###
seed <- "http://www.r-bloggers.com/blogs-list/"
urls <- main.fn(seed)
# crawl first 3 links and get urls for each, put in a list
x <- lapply(as.list(urls[1:3]), main.fn)
names(x) <- urls[1:3]
x
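If you crawl more than a handful of links, some pages will inevitably fail to download or parse; wrapping the call in tryCatch() keeps one bad page from stopping the whole loop (a sketch, not part of the original answer):

x <- lapply(as.list(urls[1:3]), function(u) {
  # return NA for any page that errors out instead of aborting the whole lapply
  tryCatch(main.fn(u), error = function(e) NA)
})
names(x) <- urls[1:3]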
If you copy and paste it into R and then look at x, I think it will make sense.
Either way, good luck mate! Tony Breyal
Thanks a lot Drew, I'll work on it (hopefully I can get it done in time). – 2010-07-12 06:21:57