I have a query I've been working on, and I can't piece together the last part. I'd like the query below to return, for the given StartTime range, one total count of the [Meta_ID] column per hour, where the total counts each [Meta_ID] only once within that hour, with no double counting. Thanks in advance for any suggestions! How do I count unique column values by hourly datetime range?
SELECT [Detail_ID]
      ,[Meta_ID] AS PlayerID
      ,p.FirstName
      ,p.LastName
      ,u.FirstName AS Host
      ,[StartTime]
      ,CAST(StartTime AS date) AS ForDate
      ,DATEPART(hour, StartTime) AS OnHour
      ,COUNT(*) AS Totals
FROM [SDIDW].[dbo].[CDS_StatDetail] WITH (NOLOCK)
JOIN CDS_Player p ON p.Player_ID = Meta_ID
JOIN CDS_User u ON u.User_ID = p.HostUser_ID
--JOIN dbo.PlayerDAP d ON d.PlayerId = Meta_ID
WHERE StartTime >= '2017-06-04 00:00:00.000'
  AND StartTime < '2017-06-11'  -- exclusive upper bound; '23:59:59.999' rounds up with datetime
  AND StatType LIKE '%SLOT%'
  AND Meta_ID IN (10, 111, 112, 126, 127, 147, 155, 189, 234, 237, 271, 273, 287, 321, 404)
GROUP BY CAST(StartTime AS date)
        ,DATEPART(hour, StartTime)
        ,[Detail_ID]
        ,[Meta_ID]
        ,p.FirstName
        ,p.LastName
        ,u.FirstName
        ,[StartTime]
This is what I'm getting so far. I've removed the name columns from this table.
+-----------+----------+-----------------+----------+--------+--------+
| Detail_ID | PlayerID | StartTime | ForDate | OnHour | Totals |
+-----------+----------+-----------------+----------+--------+--------+
| 209381040 | 1115 | 6/4/17 12:08 AM | 6/4/2017 | 0 | 1 |
| 209381317 | 1115 | 6/4/17 12:15 AM | 6/4/2017 | 0 | 1 |
| 209381453 | 492 | 6/4/17 12:10 AM | 6/4/2017 | 0 | 1 |
| 209381800 | 1891 | 6/4/17 12:36 AM | 6/4/2017 | 0 | 1 |
| 209381805 | 1200 | 6/4/17 12:37 AM | 6/4/2017 | 0 | 1 |
| 209382181 | 1200 | 6/4/17 12:48 AM | 6/4/2017 | 0 | 1 |
| 209382753 | 1069 | 6/4/17 12:13 AM | 6/4/2017 | 0 | 1 |
| 209382581 | 1200 | 6/4/17 1:02 AM | 6/4/2017 | 1 | 1 |
| 209383570 | 1069 | 6/4/17 1:10 AM | 6/4/2017 | 1 | 1 |
| 209383752 | 1069 | 6/4/17 1:47 AM | 6/4/2017 | 1 | 1 |
| 209386313 | 126 | 6/4/17 5:10 AM | 6/4/2017 | 5 | 1 |
| 209386352 | 126 | 6/4/17 5:22 AM | 6/4/2017 | 5 | 1 |
+-----------+----------+-----------------+----------+--------+--------+
But what I'd like to get is the same rows with extra columns added on the right, like below. I'm trying to get a unique count of Meta_ID for each hour.
+-----------+----------+-----------------+----------+--------+--------+------+--------+
| Detail_ID | PlayerID | StartTime | ForDate | OnHour | Totals | Hour | Hosted |
+-----------+----------+-----------------+----------+--------+--------+------+--------+
| 209381040 | 1115 | 6/4/17 12:08 AM | 6/4/2017 | 0 | 1 | 0 | 5 |
| 209381317 | 1115 | 6/4/17 12:15 AM | 6/4/2017 | 0 | 1 | 1 | 2 |
| 209381453 | 492 | 6/4/17 12:10 AM | 6/4/2017 | 0 | 1 | 5 | 1 |
| 209381800 | 1891 | 6/4/17 12:36 AM | 6/4/2017 | 0 | 1 | | |
| 209381805 | 1200 | 6/4/17 12:37 AM | 6/4/2017 | 0 | 1 | | |
| 209382181 | 1200 | 6/4/17 12:48 AM | 6/4/2017 | 0 | 1 | | |
| 209382753 | 1069 | 6/4/17 12:13 AM | 6/4/2017 | 0 | 1 | | |
| 209382581 | 1200 | 6/4/17 1:02 AM | 6/4/2017 | 1 | 1 | | |
| 209383570 | 1069 | 6/4/17 1:10 AM | 6/4/2017 | 1 | 1 | | |
| 209383752 | 1069 | 6/4/17 1:47 AM | 6/4/2017 | 1 | 1 | | |
| 209386313 | 126 | 6/4/17 5:10 AM | 6/4/2017 | 5 | 1 | | |
| 209386352 | 126 | 6/4/17 5:22 AM | 6/4/2017 | 5 | 1 | | |
+-----------+----------+-----------------+----------+--------+--------+------+--------+
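A sketch of one approach (untested; table and column names are taken from the query above). Because the current query groups by [Detail_ID] and [StartTime], every group is a single row, so COUNT(*) is always 1. Window functions can keep the detail rows while counting across the hour. SQL Server does not support COUNT(DISTINCT ...) OVER (...), but the same distinct count per partition can be derived by adding a DENSE_RANK in each direction:

```sql
-- Sketch: distinct Meta_ID count per (date, hour) on every detail row.
-- DENSE_RANK ascending + DENSE_RANK descending over the same partition
-- always sums to (number of distinct Meta_IDs) + 1, hence the "- 1".
SELECT [Detail_ID]
      ,[Meta_ID] AS PlayerID
      ,[StartTime]
      ,CAST(StartTime AS date) AS ForDate
      ,DATEPART(hour, StartTime) AS OnHour
      ,DENSE_RANK() OVER (PARTITION BY CAST(StartTime AS date), DATEPART(hour, StartTime)
                          ORDER BY Meta_ID ASC)
     + DENSE_RANK() OVER (PARTITION BY CAST(StartTime AS date), DATEPART(hour, StartTime)
                          ORDER BY Meta_ID DESC)
     - 1 AS Hosted   -- distinct Meta_IDs seen in this hour
FROM [SDIDW].[dbo].[CDS_StatDetail]
WHERE StartTime >= '2017-06-04' AND StartTime < '2017-06-11'
  AND StatType LIKE '%SLOT%'
```

If a separate summary row per hour is wanted instead of a column on every detail row, a plain `GROUP BY CAST(StartTime AS date), DATEPART(hour, StartTime)` with `COUNT(DISTINCT Meta_ID)` would also work.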
Sample data (as DDL + DML) and the desired results would improve your chances of getting an accurate answer. –
@ZoharPeled I've added some more detail. Thanks for the suggestion. I'm new to this, so still learning. Took me a while to figure out how to format the tables lol. – nmoore