I want to use a CNN in R (with the mxnet package) to try to predict a scalar output based on images, in my case a waiting time. In other words, I want CNN image recognition in R's mxnet to output a scalar number instead of a class.
However, when I do this I get the same output for every input (it predicts the same number, probably just the mean of all the targets). How can I get it to predict the scalar output properly?
My images have already been preprocessed by greyscaling them, converting them into the pixel format below, and scaling them to 28 x 28 (I have also tried other sizes, with no effect).
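For context, that preprocessing is a straightforward greyscale-and-resize step. A minimal sketch, assuming the imager package and a hypothetical character vector image_files of image paths (neither appears in the code below):

library(imager)
# Greyscale each image, resize to 28 x 28, and flatten to 784 pixel intensities
pixels <- t(sapply(image_files, function(f) {
  im <- grayscale(load.image(f))   # read the image and drop colour channels
  im <- resize(im, 28, 28)         # rescale to 28 x 28
  as.vector(im)                    # 784 values, typically in [0, 1]
}))
df <- data.frame(pixels)           # one row per image, 784 pixel columns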
I am essentially using the images to predict waiting times, which is why my train_y is the current waiting time in seconds. With this approach, when train_y is the raw waiting time in seconds, the algorithm just predicts the same number for every image.
However, when I scale train_y to [0, 1] by dividing by a guessed maximum (20000), the CNN does output different numbers, but when I multiply those predictions back by 20000 I get negative values, and the numbers are so skewed that the model performs poorly. The negative values make no sense in particular: all of my train_y values are positive, and since I am dealing with time a negative value is impossible.
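To be concrete, the scaling I tried looks like this (the 20000 maximum is just a guess on my part):

max_wait <- 20000                      # guessed maximum waiting time in seconds
train_y_scaled <- train_y / max_wait   # scale targets into [0, 1]
# ... train on train_y_scaled instead of train_y ...
pred_seconds <- pred * max_wait        # rescale predictions back to seconds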
I have also tried learning rates of 0.05, 0.01, 0.001, 0.0001, 0.00001 and so on down to 2e-8, with no effect on the model, and I have played with the initializer as well.
I also changed the momentum from 0.9 to 0.95, again with no effect.
Here is my reproducible code:
library(caret)   # for createDataPartition()
## Reproducible fake data: 784 pixel columns (28 x 28) plus a waiting-time column
set.seed(0)
df <- data.frame(replicate(784, runif(7538)))
df$waittime <- 1000 * runif(7538)
## 90/10 train/test split on the waiting-time column
training_index <- createDataPartition(df$waittime, p = .9, times = 1)
training_index <- unlist(training_index)
train_set <- df[training_index, ]
dim(train_set)
test_set <- df[-training_index, ]
dim(test_set)
## Fix train and test datasets: reshape pixels into the 28 x 28 x 1 x N arrays mxnet expects
train_data <- data.matrix(train_set)
train_x <- t(train_data[, -785])   # one column of 784 pixels per image
train_y <- train_data[, 785]       # waiting time in seconds
train_array <- train_x
dim(train_array) <- c(28, 28, 1, ncol(train_array))
test_data <- data.matrix(test_set)
test_x <- t(test_data[, -785])
test_y <- test_data[, 785]
test_array <- test_x
dim(test_array) <- c(28, 28, 1, ncol(test_x))
library(mxnet)
## Model
mx_data <- mx.symbol.Variable('data')
## 1st convolutional layer 5x5 kernel and 20 filters.
conv_1 <- mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20)
tanh_1 <- mx.symbol.Activation(data = conv_1, act_type = "tanh")
pool_1 <- mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(2, 2), stride = c(2,2))
## 2nd convolutional layer 5x5 kernel and 50 filters.
conv_2 <- mx.symbol.Convolution(data = pool_1, kernel = c(5,5), num_filter = 50)
tanh_2 <- mx.symbol.Activation(data = conv_2, act_type = "tanh")
pool_2 <- mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(2, 2), stride = c(2, 2))
## 1st fully connected layer
flat <- mx.symbol.Flatten(data = pool_2)
fcl_1 <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
tanh_3 <- mx.symbol.Activation(data = fcl_1, act_type = "tanh")
## 2nd fully connected layer
fcl_2 <- mx.symbol.FullyConnected(data = tanh_3, num_hidden = 1)
## Output
#NN_model <- mx.symbol.SoftmaxOutput(data = fcl_2)
label <- mx.symbol.Variable("label")
#NN_model <- mx.symbol.MakeLoss(mx.symbol.square(mx.symbol.Reshape(fcl_2, shape = 0) - label))
NN_model <- mx.symbol.LinearRegressionOutput(fcl_2)
#Didn't work well, predicted same number continuously regardless of image
## Train on samples
model <- mx.model.FeedForward.create(NN_model, X = train_array, y = train_y,
# ctx = device,
num.round = 30,
array.batch.size = 100,
# initializer=mx.init.uniform(0.002),
initializer = mx.init.Xavier(factor_type = "in", magnitude = 2.34),
learning.rate = 0.00001,
momentum = 0.9,
wd = 0.00001,
eval.metric = mx.metric.rmse)
# epoch.end.callback = mx.callback.log.train.metric(100)
pred <- predict(model, test_array)
# gives the same numeric output for every image
# or, when train_y is scaled to [0, 1], gives very poor predictions including negative numbers
Deep learning models work better when the data are centered on the mean (i.e. shifted so that the mean is 0). Try preprocessing your training data, both the image inputs and the regression output, into standard/z-score values. – j314erre
That didn't work, but thank you for the suggestion. I think there is something wrong with my syntax or data preparation, so let me know if anyone spots it. I'm still not sure why it isn't working. – Ic3MaN911
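For reference, a minimal sketch of the z-score preprocessing suggested in the comment above, applied to the train/test arrays from the question using training-set statistics only (this just illustrates the suggestion, not a confirmed fix):

# Standardize pixels and targets with training-set statistics (z-scores)
px_mean <- mean(train_array)
px_sd   <- sd(as.vector(train_array))
train_array_z <- (train_array - px_mean) / px_sd
test_array_z  <- (test_array  - px_mean) / px_sd

y_mean <- mean(train_y)
y_sd   <- sd(train_y)
train_y_z <- (train_y - y_mean) / y_sd

# After training on train_array_z / train_y_z, map predictions back to seconds:
# pred_seconds <- pred_z * y_sd + y_mean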