OK, I spent far too much time on this... Note that the last (commented-out) line is the ggplot2 version, so you can compare the two.
#loess and error curves almost just like ggplot2
op <- par(las=1, mar = c(3,3,1,1))
n <- 30
x <- sort(rnorm(n)) #(varying density in predictor)
x <- x + abs(min(x))
x <- x/max(x)*2*pi
y <- sin(x)+rnorm(n) #(curvy)
m <- loess(y~x)
xx <- seq(min(x), max(x), (max(x)-min(x))/1000) #increase density of values to predict over to increase quality of curve
f <- predict(m, xx, se = TRUE)
ci <- f$se.fit * qt(0.975, f$df) # half-width of the pointwise 95% confidence interval
cih <- f$fit + ci
cil <- f$fit - ci
plot(x,y, ylim = c(min(cil,y), max(cih,y)), cex.axis = 0.85, xlab = '', ylab = '', type = 'n')
title(xlab = 'x', ylab = 'y',line = 2)
grid(col = 'gray')
points(x,y, pch = 19, cex = 0.65)
lines(xx, f$fit, col = 'blue', lwd = 1.2)
xx <- c(xx, rev(xx)) # outline of the confidence band: out along the lower bound, back along the upper
yy <- c(cil, rev(cih))
polygon(xx, yy, col = rgb(0.1, 0.1, 0.1, 0.25), border = NA)
par(op)
#qplot(x,y, geom = 'point') + stat_smooth()
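For comparison, here is a minimal sketch of how you would actually run that ggplot2 version, assuming the ggplot2 package is installed; stat_smooth() picks a loess fit with a 95% confidence band by default for a dataset this small:

library(ggplot2)
# same x and y as above; the shaded band is the default 95% confidence interval
qplot(x, y, geom = 'point') + stat_smooth()
# equivalent with the standard interface:
# ggplot(data.frame(x = x, y = y), aes(x, y)) + geom_point() + geom_smooth(method = 'loess')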
Impressive - you just took the "winning answer" mark away from your other answer. I hope you at least like it :) (I'm well aware of your first sentence; I do quite a lot with R...) – 2010-07-05 19:23:29