I've done this with YUV frames captured from a CCD camera. Unfortunately, there are a number of different YUV formats. I believe the one Apple uses for the GL_YCBCR_422_APPLE texture format is technically 2VUY422. To convert an image from the YUV422 frames generated by an IIDC FireWire camera into 2VUY422, I've used the following:
void yuv422_2vuy422(const unsigned char *theYUVFrame, unsigned char *the422Frame, const unsigned int width, const unsigned int height)
{
    int i = 0, j = 0;
    unsigned int numPixels = width * height;
    unsigned int totalNumberOfPasses = numPixels * 2;
    register unsigned int y0, y1, y2, y3, u0, u2, v0, v2;

    while (i < (totalNumberOfPasses))
    {
        u0 = theYUVFrame[i++]-128;
        y0 = theYUVFrame[i++];
        v0 = theYUVFrame[i++]-128;
        y1 = theYUVFrame[i++];
        u2 = theYUVFrame[i++]-128;
        y2 = theYUVFrame[i++];
        v2 = theYUVFrame[i++]-128;
        y3 = theYUVFrame[i++];

        // U0 Y0 V0 Y1 U2 Y2 V2 Y3
        // Remap the values to 2VUY (YUYS?) (Y422) colorspace for OpenGL
        // Y0 U Y1 V Y2 U Y3 V

        // IIDC cameras are full-range y=[0..255], u,v=[-127..+127], where display is "video range" (y=[16..240], u,v=[16..236])
        the422Frame[j++] = ((y0 * 240) / 255 + 16);
        the422Frame[j++] = ((u0 * 236) / 255 + 128);
        the422Frame[j++] = ((y1 * 240) / 255 + 16);
        the422Frame[j++] = ((v0 * 236) / 255 + 128);
        the422Frame[j++] = ((y2 * 240) / 255 + 16);
        the422Frame[j++] = ((u2 * 236) / 255 + 128);
        the422Frame[j++] = ((y3 * 240) / 255 + 16);
        the422Frame[j++] = ((v2 * 236) / 255 + 128);
    }
}
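As a minimal usage sketch (capturedYUVFrame and the other names here are assumptions for illustration, not part of the original code), the 2VUY output buffer needs width * height * 2 bytes, the same size as the packed YUV422 input:

// Hypothetical usage: convert one captured IIDC frame into the buffer that
// will later be handed to OpenGL as a client-storage texture.
unsigned char *videoTexture = (unsigned char *)malloc(videoImageWidth * videoImageHeight * 2);
yuv422_2vuy422(capturedYUVFrame, videoTexture, videoImageWidth, videoImageHeight);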
For efficient display of these YUV video frames, you may wish to use Apple's client storage extension, which you can set up with something like the following:
// Use a rectangle texture whose storage stays in client (application) memory
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);

glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, videoImageWidth * videoImageHeight * 2, videoTexture);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

// Nearest-neighbour sampling and edge clamping for the video texture
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);

// Create the texture from the 2VUY buffer; OpenGL reads the texels out of videoTexture itself
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageWidth, videoImageHeight, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
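One design note worth adding (this teardown code is my own sketch, not from the original answer): with client storage enabled, OpenGL keeps reading texels straight out of your videoTexture buffer rather than keeping its own copy, so the buffer has to stay allocated for as long as the texture exists. Roughly:

// Hypothetical teardown order: delete the texture before freeing the
// client-storage buffer it points into.
GLuint videoTextureName = 1;            // the texture name bound above
glDeleteTextures(1, &videoTextureName);
free(videoTexture);                     // only safe to free after the texture is gone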
This lets you quickly swap out the data stored in your client-side video texture before each frame is drawn to the screen.
To draw, you could then use code like the following:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_TEXTURE_2D);

glViewport(0, 0, [self frame].size.width, [self frame].size.height);

// Set up a pixel-aligned orthographic projection matching the view bounds
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
NSRect bounds = NSRectFromCGRect([self bounds]);
glOrtho((GLfloat)NSMinX(bounds), (GLfloat)NSMaxX(bounds), (GLfloat)NSMinY(bounds), (GLfloat)NSMaxY(bounds), -1.0, 1.0);

// Upload the latest frame into the existing client-storage texture
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 1);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, videoImageWidth, videoImageHeight, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);

glMatrixMode(GL_TEXTURE);
glLoadIdentity();

// Rectangle textures use non-normalized (pixel) texture coordinates, hence the
// videoImageWidth/videoImageHeight values below; the t coordinate is flipped
// relative to the quad so the first row of image data lands at the top of the view.
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(0.0f, videoImageHeight);

    glTexCoord2f(0.0f, videoImageHeight);
    glVertex2f(0.0f, 0.0f);

    glTexCoord2f(videoImageWidth, videoImageHeight);
    glVertex2f(videoImageWidth, 0.0f);

    glTexCoord2f(videoImageWidth, 0.0f);
    glVertex2f(videoImageWidth, videoImageHeight);
glEnd();
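To tie the capture and drawing steps together, here is a rough per-frame sketch (the callback name cameraDidCaptureFrame: is hypothetical; it assumes this code lives in the same NSOpenGLView subclass as the drawing code above):

// Hypothetical per-frame flow: new camera frame -> 2VUY buffer -> redraw.
- (void)cameraDidCaptureFrame:(const unsigned char *)rawYUVFrame
{
    yuv422_2vuy422(rawYUVFrame, videoTexture, videoImageWidth, videoImageHeight);
    [self setNeedsDisplay:YES];   // -drawRect: then runs the drawing code above
}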
Thanks a lot for the information, Adam, I really appreciate it. Now could you tell me how to pull a raw data file off the hard disk? I'm familiar with NSOpenPanel, and I can use it to get the path to the data file, but how do I use that file path to load the YUV file into the application? – ReachConnection 2009-07-03 20:28:56
Assuming the file contains nothing but pixels, just use NSData or NSFileHandle and read the whole thing in. If the file contains metadata (such as size information), you'll have to interpret that metadata using standard C pointer and/or struct operations. – 2009-07-04 01:20:50
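As an illustration of that suggestion (a minimal sketch, assuming the file holds exactly videoImageWidth * videoImageHeight * 2 bytes of raw packed YUV422 pixels, and that yuvPath is the path returned by the NSOpenPanel):

// Hypothetical loader: read the raw pixel file into memory and convert it
// into the 2VUY buffer used for the OpenGL texture.
NSData *yuvData = [NSData dataWithContentsOfFile:yuvPath];
if ([yuvData length] >= videoImageWidth * videoImageHeight * 2)
{
    yuv422_2vuy422((const unsigned char *)[yuvData bytes], videoTexture, videoImageWidth, videoImageHeight);
}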