While trying to measure the RTT (round-trip time) between a UDP client and server, I ran into a very counterintuitive result. With a packet size of 20 bytes the RTT is 4.0 ms, but when I increase the packet size to 15000 bytes the RTT drops to 2.8 ms. Why does this happen? Shouldn't the RTT increase as the packet size grows?
Below is the code for the UDP server. I run it as: java RTTServer 8080
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class RTTServer {
    final static int BUFSIZE = 1024;

    public static void main(String[] args) {
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket receivePacket = new DatagramPacket(bufferReceive, BUFSIZE);
        for (;;) {
            // Note: the socket is opened and closed on every iteration.
            try (DatagramSocket aSocket = new DatagramSocket(Integer.parseInt(args[0]))) {
                aSocket.receive(receivePacket);
                // Echo the datagram back to the sender.
                DatagramPacket sendPacket = new DatagramPacket(
                        receivePacket.getData(), receivePacket.getLength(),
                        receivePacket.getAddress(), receivePacket.getPort());
                aSocket.send(sendPacket);
            } catch (Exception e) {
                System.out.println("Socket: " + e.getMessage());
            }
        }
    }
}
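One pitfall worth noting in both programs: they reuse a single DatagramPacket across receive() calls, and receive() shrinks the packet's length to the size of the last datagram, so any later, larger datagram gets silently truncated unless the length is reset with setLength(). A minimal loopback sketch demonstrating this (the class name PacketLengthDemo is mine, not from the original code):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class PacketLengthDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket sender = new DatagramSocket();
             DatagramSocket receiver = new DatagramSocket(0)) {
            InetAddress local = InetAddress.getLoopbackAddress();
            int port = receiver.getLocalPort();

            DatagramPacket in = new DatagramPacket(new byte[1024], 1024);

            // First datagram: 20 bytes.
            sender.send(new DatagramPacket(new byte[20], 20, local, port));
            receiver.receive(in);
            System.out.println(in.getLength()); // 20 — length shrank to the payload size

            // Second datagram: 100 bytes, but without resetting the length
            // the packet now accepts at most 20 bytes.
            sender.send(new DatagramPacket(new byte[100], 100, local, port));
            receiver.receive(in);
            System.out.println(in.getLength()); // still 20 — truncated

            // Resetting the length restores the full buffer capacity.
            in.setLength(1024);
            sender.send(new DatagramPacket(new byte[100], 100, local, port));
            receiver.receive(in);
            System.out.println(in.getLength()); // 100
        }
    }
}
```

Calling setLength(BUFSIZE) before each receive() avoids the truncation.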
Below is the code for the UDP client. I run it as: java RTTClient 192.168.1.20 8080 15000
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RTTClient {
    final static int BUFSIZE = 1024;
    final static int COUNT = 1000;

    public static void main(String[] args) throws UnknownHostException {
        InetAddress aHost = InetAddress.getByName(args[0]);
        // Note: this is the bytes of the string args[2], not args[2] bytes of data.
        byte[] dataArray = args[2].getBytes();
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket requestPacket = new DatagramPacket(
                dataArray, dataArray.length, aHost, Integer.parseInt(args[1]));
        DatagramPacket responsePacket = new DatagramPacket(bufferReceive, BUFSIZE);
        long rtts = 0;
        for (int i = 0; i < COUNT; i++) {
            // A new socket is created for every request.
            try (DatagramSocket aSocket = new DatagramSocket()) {
                long start = System.currentTimeMillis(); // millisecond resolution
                aSocket.send(requestPacket);
                aSocket.receive(responsePacket);
                System.out.println(i);
                rtts += System.currentTimeMillis() - start;
            } catch (Exception e) {
                System.out.println("Socket: " + e.getMessage());
            }
        }
        System.out.println("RTT = " + (double) rtts / (double) COUNT);
    }
}
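One thing worth double-checking before comparing "packet sizes": the client builds its payload with args[2].getBytes(), so running java RTTClient 192.168.1.20 8080 15000 sends the five characters of the string "15000", not 15000 bytes. A quick check (the second array shows one way to build a payload that is actually N bytes long, as an assumption of what was intended):

```java
public class PayloadSizeCheck {
    public static void main(String[] args) {
        // What the client actually sends: the bytes of the string "15000".
        byte[] asTyped = "15000".getBytes();
        System.out.println(asTyped.length); // 5

        // A payload that is genuinely 15000 bytes long:
        byte[] intended = new byte[Integer.parseInt("15000")];
        System.out.println(intended.length); // 15000
    }
}
```

So with both command lines the client sends tiny datagrams (2 bytes for "20", 5 bytes for "15000"), and the measured difference cannot be attributed to payload size alone.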