The reason you can't find a specific description is that there are many ways to do it.
Let's start with Wikipedia: https://en.wikipedia.org/wiki/Chroma_subsampling#4:2:2
4:4:4:
Each of the three Y'CbCr components has the same sample rate, thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post-production.
and
4:2:2:
The two chroma components are sampled at half the sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
Note: The terms YCbCr and YUV are used interchangeably.
https://en.wikipedia.org/wiki/YCbCr
Y'CbCr is often confused with the YUV color space, and typically the terms YCbCr and YUV are used interchangeably, leading to some confusion; when referring to signals in video or digital form, the term "YUV" mostly means "Y'CbCr".
Memory ordering of the data:
Again there are multiple formats.
The Intel IPP documentation defines two main categories: "Pixel-Order Image Formats" and "Planar Image Formats".
There is good documentation here: https://software.intel.com/en-us/node/503876
Refer to http://www.fourcc.org/yuv.php#NV12 for the YUV pixel arrangement formats.
Refer to http://scc.ustc.edu.cn/zlsc/sugon/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_image_downsampling.htm#ch6_image_downsampling for a description of down-sampling.
Let's assume "Pixel-Order" format:
YUV 4:4:4 data order: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2
YUV 4:2:2 data order: Y0 U0 Y1 V0 Y2 U1 Y3 V1
Each element is a single byte, and Y0 is the lowest byte in memory.
Conversion algorithm:
"Naive sub-sampling":
"Throw away" every second U/V component:
Take U0, and throw away U1; take V0, and throw away V1...
Source: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2
Destination: Y0 U0 Y1 V0 Y2 U2 Y3 V2
I don't recommend it, since it causes aliasing artifacts.
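For illustration, here is a minimal sketch of the naive decimation for a single pixel-ordered row (the function name DecimateRowYUV444ToYUV422 is mine, not from any library; image_width is assumed to be a multiple of 2):

```c
#include <assert.h>

//Naive 4:4:4 -> 4:2:2 decimation of a single pixel-ordered row:
//keep U0 and V0 of each pixel pair, and throw away U1 and V1.
//image_width must be a multiple of 2.
static void DecimateRowYUV444ToYUV422(const unsigned char src[],
                                      int image_width,
                                      unsigned char dst[])
{
    int x;
    for (x = 0; x < image_width; x += 2)
    {
        dst[x*2]   = src[x*3];      //Y0 (kept)
        dst[x*2+1] = src[x*3+1];    //U0 (U1 is thrown away)
        dst[x*2+2] = src[x*3+3];    //Y1 (kept)
        dst[x*2+3] = src[x*3+2];    //V0 (V1 is thrown away)
    }
}
```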
Average each U/V pair:
Take destination U0 equal to source (U0+U1)/2, and the same for V0...
Source: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2
Destination: Y0 (U0+U1)/2 Y1 (V0+V1)/2 Y2 (U2+U3)/2 Y3 (V2+V3)/2
Use another interpolation method for down-sampling U and V (e.g. cubic interpolation).
Usually you won't be able to see any difference compared to the simple average.
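As a sketch of one slightly smoother alternative, a [1 2 1]/4 tent filter applied to a single chroma channel (the function name, the filter phase centered on even samples, and the edge clamping are my assumptions, not part of the original answer):

```c
#include <assert.h>

//Down-sample one chroma channel (U or V) of a row by a factor of 2 using a
//[1 2 1]/4 tent filter centered on the even samples, clamping at the edges.
//n (number of source samples) is assumed to be even.
static void SubsampleChromaTent(const unsigned char c[], int n, unsigned char out[])
{
    int x;
    for (x = 0; x < n; x += 2)
    {
        int left  = (x > 0)   ? c[x-1] : c[x];  //Clamp at the left edge.
        int right = (x+1 < n) ? c[x+1] : c[x];  //Clamp at the right edge.
        out[x/2] = (unsigned char)((left + 2*c[x] + right + 2) >> 2);   //Weighted average with rounding.
    }
}
```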
C implementation:
The question is not tagged as C, but I think the following C implementation may be helpful.
The following code converts pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2 by averaging each U/V pair:
//Convert single row I0 from pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2.
//Save the result in J0.
//I0 size in bytes is image_width*3
//J0 size in bytes is image_width*2
static void ConvertRowYUV444ToYUV422(const unsigned char I0[],
                                     const int image_width,
                                     unsigned char J0[])
{
    int x;

    //Process two Y,U,V triples per iteration:
    for (x = 0; x < image_width; x += 2)
    {
        //Load source elements
        unsigned char y0 = I0[x*3];                 //Load source Y element
        unsigned int u0 = (unsigned int)I0[x*3+1];  //Load source U element (and convert from uint8 to uint32).
        unsigned int v0 = (unsigned int)I0[x*3+2];  //Load source V element (and convert from uint8 to uint32).

        //Load next source elements
        unsigned char y1 = I0[x*3+3];               //Load source Y element
        unsigned int u1 = (unsigned int)I0[x*3+4];  //Load source U element (and convert from uint8 to uint32).
        unsigned int v1 = (unsigned int)I0[x*3+5];  //Load source V element (and convert from uint8 to uint32).

        //Calculate destination U and V elements.
        //Use shift right by 1 for dividing by 2.
        //Add 1 before shifting - round operation instead of floor operation.
        unsigned int u01 = (u0 + u1 + 1) >> 1;      //Destination U element equals average of two source U elements.
        unsigned int v01 = (v0 + v1 + 1) >> 1;      //Destination V element equals average of two source V elements.

        J0[x*2]   = y0;                 //Store Y element (unmodified).
        J0[x*2+1] = (unsigned char)u01; //Store destination U element (and cast uint32 to uint8).
        J0[x*2+2] = y1;                 //Store Y element (unmodified).
        J0[x*2+3] = (unsigned char)v01; //Store destination V element (and cast uint32 to uint8).
    }
}
//Convert image I from pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2.
//I - Input image in pixel-order data YUV 4:4:4 format.
//image_width - Number of columns of image I.
//image_height - Number of rows of image I.
//J - Destination "image" in pixel-order data YUV 4:2:2 format.
//Note: The term "YUV" refers to "Y'CbCr".
//I is pixel ordered YUV 4:4:4 format (size in bytes is image_width*image_height*3):
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//
//J is pixel ordered YUV 4:2:2 format (size in bytes is image_width*image_height*2):
//YUYVYUYV
//YUYVYUYV
//YUYVYUYV
//YUYVYUYV
//
//Conversion algorithm:
//Each element of destination U is average of 2 original U horizontal elements
//Each element of destination V is average of 2 original V horizontal elements
//
//Limitations:
//1. image_width must be a multiple of 2.
//2. I and J must be two separate arrays (in place computation is not supported).
static void ConvertYUV444ToYUV422(const unsigned char I[],
                                  const int image_width,
                                  const int image_height,
                                  unsigned char J[])
{
    //I0 points to the current source row.
    const unsigned char *I0;    //I0 -> YUVYUVYUV...

    //J0 points to the current destination row.
    unsigned char *J0;          //J0 -> YUYVYUYV...

    int y;  //Row index

    //Process a single row per iteration.
    for (y = 0; y < image_height; y++)
    {
        I0 = &I[y*image_width*3];   //Input row width is image_width*3 bytes (each pixel is Y,U,V).
        J0 = &J[y*image_width*2];   //Output row width is image_width*2 bytes (each two pixels are Y,U,Y,V).

        //Process a single source row into a single destination row.
        ConvertRowYUV444ToYUV422(I0, image_width, J0);
    }
}
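A minimal, self-contained usage sketch (the compact Yuv444To422 below re-inlines the same per-row averaging so the snippet compiles on its own; the 2x2 test values are arbitrary):

```c
#include <assert.h>

//Compact re-implementation of the same 4:4:4 -> 4:2:2 averaging conversion
//(equivalent to ConvertYUV444ToYUV422 above, re-inlined here so this
//snippet compiles on its own). w must be a multiple of 2.
static void Yuv444To422(const unsigned char *I, int w, int h, unsigned char *J)
{
    int x, y;
    for (y = 0; y < h; y++)
    {
        const unsigned char *I0 = &I[y*w*3];    //Source row (Y,U,V per pixel).
        unsigned char *J0 = &J[y*w*2];          //Destination row (Y,U,Y,V per pixel pair).
        for (x = 0; x < w; x += 2)
        {
            J0[x*2]   = I0[x*3];                                            //Y0 (unmodified)
            J0[x*2+1] = (unsigned char)((I0[x*3+1] + I0[x*3+4] + 1) >> 1);  //U = rounded average of U0,U1
            J0[x*2+2] = I0[x*3+3];                                          //Y1 (unmodified)
            J0[x*2+3] = (unsigned char)((I0[x*3+2] + I0[x*3+5] + 1) >> 1);  //V = rounded average of V0,V1
        }
    }
}
```

The destination buffer must hold w*h*2 bytes, versus w*h*3 for the source (which is where the one-third bandwidth reduction comes from).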
What a great answer! All the sites repeat/restate the same things and never get to the point. Thanks for collecting the information and stating it clearly. The pixel-order image formats table (from one of your links) is almost the answer to my question - it illustrates the re-sampled YUV sequences - I hadn't found it, although I did find the sequences [here](https://www.cs.cf.ac.uk/Dave/Multimedia/node196.html). I'll try it and let you know. To be correct, in the averaging destination did you mean _Y3 (V2+V3)/2_? – Nazar
It works! Now on to implementing the averaging method. – Nazar
According to the link you posted, it looks like the sub-sampling formula they use is: dstU1 = 0.5*U2 + U3 + 0.5*U4. – Rotem