In this article I look at the storage size of the time data type in SQL Server.
In particular, I look at the following:
- Microsoft’s documentation
- Data stored in a variable
  - Length in bytes using DATALENGTH()
  - Length in bytes using DATALENGTH() after converting to varbinary
- Data stored in a database
  - Length in bytes using COL_LENGTH()
  - Length in bytes using DBCC PAGE()
Microsoft’s Documentation
Microsoft’s official documentation on the time data type indicates that its storage size is between 3 and 5 bytes, depending on the precision being used.
This data type allows for user-defined precision. You can use time(n) to specify the precision, where n is a scale between 0 and 7.
Here’s the data that Microsoft presents for the time data type:
Specified scale | Result (precision, scale) | Column length (bytes) | Fractional seconds precision |
---|---|---|---|
time | (16,7) | 5 | 7 |
time(0) | (8,0) | 3 | 0-2 |
time(1) | (10,1) | 3 | 0-2 |
time(2) | (11,2) | 3 | 0-2 |
time(3) | (12,3) | 4 | 3-4 |
time(4) | (13,4) | 4 | 3-4 |
time(5) | (14,5) | 5 | 5-7 |
time(6) | (15,6) | 5 | 5-7 |
time(7) | (16,7) | 5 | 5-7 |
For the purposes of this article, I’m mainly interested in the Column length (bytes) column. This tells us how many bytes are used to store this data type in a database.
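The figures in the Column length (bytes) column follow from how many bytes it takes to hold the largest possible count of 10^-n-second units in a day. Here’s a quick Python sketch that reproduces them (the units-since-midnight interpretation is my own reading of the storage format, which the page-level examination later in this article supports; it isn’t spelled out in the table):

```python
# Bytes needed to store the number of time units since midnight
# at scale n, where one unit is 10^-n seconds (n = 0..7).
def time_storage_bytes(scale: int) -> int:
    max_ticks = 24 * 60 * 60 * 10**scale - 1  # last representable instant before midnight
    return (max_ticks.bit_length() + 7) // 8  # whole bytes needed for that count

for n in range(8):
    print(f"time({n}) -> {time_storage_bytes(n)} bytes")
```

Running this yields 3, 3, 3, 4, 4, 5, 5, 5 bytes for scales 0 through 7, matching the table.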
From a user’s perspective, the time data type works the same way as the time portion of datetime2. It has a user-defined fractional seconds precision, and it accepts a scale of 0 through 7.
The rest of this article runs through various examples where I return the storage size of time values in different contexts.
Data Stored in a Variable
First, I’ll store a time value in a variable and check its storage size. Then I’ll convert that value to varbinary and check it again.
Length in Bytes using DATALENGTH
Here’s what happens if we use the DATALENGTH() function to return the number of bytes used for a time(7) value:
DECLARE @t time(7);
SET @t = '10:15:30.1234567';
SELECT @t AS 'Value',
       DATALENGTH(@t) AS 'Length in Bytes';
Result:

+------------------+-------------------+
| Value            | Length in Bytes   |
|------------------+-------------------|
| 10:15:30.1234567 | 5                 |
+------------------+-------------------+
The value in this example has the maximum scale of 7 (because I declare the variable as time(7)), and it returns a length of 5 bytes.
This is to be expected, as it matches the storage size outlined in Microsoft’s table.
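As a quick sanity check on that figure, the value can be viewed as a count of 100-nanosecond units since midnight, and that count fits in exactly 5 bytes. A small Python sketch (the unit-count interpretation is my own; it’s borne out by the page data later in the article):

```python
# '10:15:30.1234567' as a count of 100-nanosecond (10^-7 second) units since midnight
ticks = (10 * 3600 + 15 * 60 + 30) * 10**7 + 1234567

print(ticks)                          # 369301234567
print((ticks.bit_length() + 7) // 8)  # 5 -- the count fits in 5 bytes
```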
However, if we convert the value to varbinary we get a different result.
Length in Bytes after Converting to ‘varbinary’
Some developers like to convert time or datetime2 variables to varbinary because it’s more representative of how SQL Server stores the value in the database. While this is partially true, the result is not exactly the same as the stored value (more on that below).
Here’s what happens if we convert our time value to varbinary:
DECLARE @t time(7);
SET @t = '10:15:30.1234567';
SELECT CONVERT(VARBINARY(16), @t) AS 'Value',
       DATALENGTH(CONVERT(VARBINARY(16), @t)) AS 'Length in Bytes';
Result:

+----------------+-------------------+
| Value          | Length in Bytes   |
|----------------+-------------------|
| 0x0787A311FC55 | 6                 |
+----------------+-------------------+
In this case we get 6 bytes. Our value now uses 1 more byte than what’s stated in the documentation.
That’s because it needs an extra byte to store the precision.
This is a hexadecimal representation of the time value. The actual time value (and its precision) is everything after the 0x. Each pair of hex characters is a byte. There are 6 pairs, and therefore 6 bytes. This is confirmed when we use DATALENGTH() to return the length in bytes.
In this example we can see that the first byte is 07. This represents the precision (I used a scale of 7, so that’s what’s displayed here).
If I change the scale, we can see that the first byte changes to match the scale:
DECLARE @t time(3);
SET @t = '10:15:30.1234567';
SELECT CONVERT(VARBINARY(16), @t) AS 'Value',
       DATALENGTH(CONVERT(VARBINARY(16), @t)) AS 'Length in Bytes';
Result:

+--------------+-------------------+
| Value        | Length in Bytes   |
|--------------+-------------------|
| 0x034B823302 | 5                 |
+--------------+-------------------+
We can also see that the length is reduced accordingly. But again, it’s one more byte than what the documentation says it should use.
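Putting the two examples together, the varbinary layout appears to be one precision byte followed by a little-endian count of 10^-scale-second units since midnight. The following Python sketch decodes both values under that assumption (my own reconstruction of the format; Microsoft doesn’t document this layout for time):

```python
def decode_time_varbinary(hex_str: str) -> str:
    """Decode a varbinary dump of a SQL Server time value (assumed layout:
    precision byte + little-endian count of 10^-scale second units)."""
    data = bytes.fromhex(hex_str.removeprefix("0x"))
    scale = data[0]                               # first byte stores the precision
    ticks = int.from_bytes(data[1:], "little")    # remaining bytes: units since midnight
    seconds, frac = divmod(ticks, 10**scale)
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    time_str = f"{h:02d}:{m:02d}:{s:02d}"
    return time_str + (f".{frac:0{scale}d}" if scale else "")

print(decode_time_varbinary("0x0787A311FC55"))  # 10:15:30.1234567
print(decode_time_varbinary("0x034B823302"))    # 10:15:30.123
```

Both varbinary values from the examples above round-trip back to the original times, which is consistent with this reading of the bytes.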
Although Microsoft’s documentation for time doesn’t explicitly mention this, the documentation for datetime2 does state the following:
The first byte of a datetime2 value stores the precision of the value, which means the actual storage required for a datetime2 value is the storage size indicated in the table above plus 1 additional byte to store the precision. This makes the maximum size of a datetime2 value 9 bytes – 1 byte stores precision plus 8 bytes for data storage at maximum precision.
And the datetime2 data type works in exactly the same way with respect to the above examples. In other words, it only reports the extra byte when it’s converted to varbinary.
So the extra byte mentioned in the Microsoft documentation also appears to apply to time.
However, the actual storage size of your time values will depend on where the data is stored.
Data Stored in a Database
When a database column has a type of time, its precision is specified at the column level – not at the data level. In other words, it’s specified once for the whole column. This makes sense, because when you define a column as time(7), you know that all rows will be time(7). No need to use up precious bytes restating that fact on each row.
When you examine a time value as it’s stored in SQL Server, you’ll see that it’s the same as the varbinary result, but without the precision byte.
Below are examples that show how time values are stored in SQL Server.
In these examples, I create a database containing a table with various time(n) columns, and then use COL_LENGTH() to return each column’s length, in bytes. I then insert values into those columns, before using DBCC PAGE to check the storage size that each time value takes up on the data page.
Create a database:
CREATE DATABASE Test;
Create a table:
USE Test;
CREATE TABLE TimeTest (
  t0 time(0),
  t1 time(1),
  t2 time(2),
  t3 time(3),
  t4 time(4),
  t5 time(5),
  t6 time(6),
  t7 time(7)
  );
In this case I create eight columns – one for each user-defined scale that we can use with time(n).
Now we can check the storage size of each column.
Length in Bytes using COL_LENGTH()
Use COL_LENGTH() to check the length (in bytes) of each column:
SELECT
  COL_LENGTH('TimeTest', 't0') AS 't0',
  COL_LENGTH('TimeTest', 't1') AS 't1',
  COL_LENGTH('TimeTest', 't2') AS 't2',
  COL_LENGTH('TimeTest', 't3') AS 't3',
  COL_LENGTH('TimeTest', 't4') AS 't4',
  COL_LENGTH('TimeTest', 't5') AS 't5',
  COL_LENGTH('TimeTest', 't6') AS 't6',
  COL_LENGTH('TimeTest', 't7') AS 't7';
Result:
+------+------+------+------+------+------+------+------+
| t0   | t1   | t2   | t3   | t4   | t5   | t6   | t7   |
|------+------+------+------+------+------+------+------|
| 3    | 3    | 3    | 4    | 4    | 5    | 5    | 5    |
+------+------+------+------+------+------+------+------+
So once again, we get the result stated in the documentation. This is to be expected, because the documentation explicitly says “Column length (bytes)”, which is exactly what we’re measuring here.
Remember, this is before we insert any data. The columns themselves determine the precision (and therefore storage size) of any data that is inserted – not the other way around.
Use DBCC PAGE to Check the Stored Data
Now let’s insert data, then use DBCC PAGE to find the actual storage size of the data that we store in each column.
Insert data:
DECLARE @t time(7) = '10:15:30.1234567';
INSERT INTO TimeTest ( t0, t1, t2, t3, t4, t5, t6, t7 )
SELECT @t, @t, @t, @t, @t, @t, @t, @t;
Now select the data (just to check it):
SELECT * FROM TimeTest;
Result (using vertical output):
t0 | 10:15:30
t1 | 10:15:30.1000000
t2 | 10:15:30.1200000
t3 | 10:15:30.1230000
t4 | 10:15:30.1235000
t5 | 10:15:30.1234600
t6 | 10:15:30.1234570
t7 | 10:15:30.1234567
As expected, the values use the precision that’s been previously specified at the column level.
Note that my system displays trailing zeros. Yours may or may not do so. Regardless, this doesn’t affect the actual precision or accuracy.
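The per-column values above can be reproduced by rounding the original value to each column’s scale. Here’s a rough Python check (I’m assuming round-half-up behaviour, which matches the results shown; trailing zeros are omitted, unlike in my system’s display):

```python
from decimal import Decimal, ROUND_HALF_UP

# 10:15:30.1234567 expressed as seconds since midnight
FULL_SECONDS = Decimal("36930.1234567")

def rounded_time(scale: int) -> str:
    """Round the full-precision value to the given scale (half-up),
    formatted as hh:mm:ss[.fraction]."""
    step = Decimal(1).scaleb(-scale)  # one unit at this scale: 10^-scale seconds
    rounded = FULL_SECONDS.quantize(step, rounding=ROUND_HALF_UP)
    whole = int(rounded)
    h, rem = divmod(whole, 3600)
    m, s = divmod(rem, 60)
    suffix = str(rounded - whole)[1:] if scale else ""  # ".1235" etc.; empty at scale 0
    return f"{h:02d}:{m:02d}:{s:02d}{suffix}"

for n in range(8):
    print(f"time({n}) -> {rounded_time(n)}")
```

Note how time(4) produces 10:15:30.1235 (rounded up from .12345…), just as SQL Server returned above.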
Now, before we use DBCC PAGE(), we need to know which PagePID to pass to it. We can use DBCC IND() to find that.
Find the PagePID:
DBCC IND('Test', 'dbo.TimeTest', 0);
Result (using vertical output):
-[ RECORD 1 ]-------------------------
PageFID         | 1
PagePID         | 308
IAMFID          | NULL
IAMPID          | NULL
ObjectID        | 1541580530
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043236352
iam_chain_type  | In-row data
PageType        | 10
IndexLevel      | NULL
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0
-[ RECORD 2 ]-------------------------
PageFID         | 1
PagePID         | 384
IAMFID          | 1
IAMPID          | 308
ObjectID        | 1541580530
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043236352
iam_chain_type  | In-row data
PageType        | 1
IndexLevel      | 0
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0
This returns two records. We’re interested in the PageType of 1 (the 2nd record). We want the PagePID from that record. In this case the PagePID is 384.
Now we can take that PagePID and use it in the following:
DBCC TRACEON(3604, -1);
DBCC PAGE(Test, 1, 384, 3);
Right now we’re mainly interested in the following part:
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
t0 = 10:15:30

Slot 0 Column 2 Offset 0x7 Length 3 Length (physical) 3
t1 = 10:15:30.1

Slot 0 Column 3 Offset 0xa Length 3 Length (physical) 3
t2 = 10:15:30.12

Slot 0 Column 4 Offset 0xd Length 4 Length (physical) 4
t3 = 10:15:30.123

Slot 0 Column 5 Offset 0x11 Length 4 Length (physical) 4
t4 = 10:15:30.1235

Slot 0 Column 6 Offset 0x15 Length 5 Length (physical) 5
t5 = 10:15:30.12346

Slot 0 Column 7 Offset 0x1a Length 5 Length (physical) 5
t6 = 10:15:30.123457

Slot 0 Column 8 Offset 0x1f Length 5 Length (physical) 5
t7 = 10:15:30.1234567
So we get the same result as the documentation states. This would suggest that the precision isn’t being stored with the values.
We can confirm that by examining the actual data.
The actual time values are stored in this part of the page:
Memory Dump @0x0000000423ADA060

0000000000000000:   10002400 42900095 a205d459 384b8233 02f31603   ..$.B..¢.ÔY8K3.ó..
0000000000000014:   167ae51e dc00c1f6 34990887 a311fc55 080000     .zå.Ü.Áö4..£.üU...
We can extract the actual time values by stripping the row metadata from each end of the dump. Once that’s removed, the following remains:
42900095 a205d459 384b8233 02f31603 167ae51e dc00c1f6 34990887 a311fc55
These hex digits contain all our time data, but not the precision. However, they’re arranged in 4-byte chunks, so we need to rearrange the spacing to see the individual values.
Here’s the end result. I’ve placed each time value on a new line for better readability.

429000
95a205
d45938
4b823302
f3160316
7ae51edc00
c1f6349908
87a311fc55
Those are the actual hexadecimal values (minus the precision) that we would get if we converted the time value to varbinary. Like this:
SELECT
  CONVERT(VARBINARY(16), t0) AS 't0',
  CONVERT(VARBINARY(16), t1) AS 't1',
  CONVERT(VARBINARY(16), t2) AS 't2',
  CONVERT(VARBINARY(16), t3) AS 't3',
  CONVERT(VARBINARY(16), t4) AS 't4',
  CONVERT(VARBINARY(16), t5) AS 't5',
  CONVERT(VARBINARY(16), t6) AS 't6',
  CONVERT(VARBINARY(16), t7) AS 't7'
FROM TimeTest;
Result (using vertical output):
t0 | 0x00429000
t1 | 0x0195A205
t2 | 0x02D45938
t3 | 0x034B823302
t4 | 0x04F3160316
t5 | 0x057AE51EDC00
t6 | 0x06C1F6349908
t7 | 0x0787A311FC55
That query produces the same result – except that each value has been prepended with the precision.
Here’s a table that compares the actual page data to the results of the CONVERT() operation.
Page Data | CONVERT() Data |
---|---|
429000 | 00429000 |
95a205 | 0195A205 |
d45938 | 02D45938 |
4b823302 | 034B823302 |
f3160316 | 04F3160316 |
7ae51edc00 | 057AE51EDC00 |
c1f6349908 | 06C1F6349908 |
87a311fc55 | 0787A311FC55 |
So we can see that the page doesn’t store the precision, but the converted result does.
Note that I removed the 0x prefix from the converted results in the table, so that only the actual time data is displayed (along with the precision byte).
Also note that hexadecimal is case-insensitive, so the fact that one uses lowercase and the other uses uppercase is not an issue.
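We can check this byte-for-byte. The following Python sketch splits the raw page hex by the known column lengths (3, 3, 3, 4, 4, 5, 5, 5 bytes) and confirms that each chunk equals its CONVERT() counterpart once the leading precision byte is dropped (all data is copied from the outputs above):

```python
# Raw in-row time data from the page dump, concatenated (row metadata removed)
PAGE_HEX = "42900095a205d459384b823302f31603167ae51edc00c1f634990887a311fc55"

# CONVERT(VARBINARY(16), ...) results for t0..t7, 0x prefix removed
CONVERT_HEX = ["00429000", "0195A205", "02D45938", "034B823302",
               "04F3160316", "057AE51EDC00", "06C1F6349908", "0787A311FC55"]

COL_BYTES = [3, 3, 3, 4, 4, 5, 5, 5]  # storage size per scale, per Microsoft's table

def page_chunks():
    """Split the page hex into per-column chunks using the known column lengths."""
    chunks, pos = [], 0
    for width in COL_BYTES:
        chunks.append(PAGE_HEX[pos : pos + width * 2])
        pos += width * 2
    return chunks

for scale, (chunk, conv) in enumerate(zip(page_chunks(), CONVERT_HEX)):
    assert bytes.fromhex(conv)[0] == scale      # first CONVERT byte is the precision
    assert chunk.lower() == conv[2:].lower()    # the rest matches the on-page bytes
print("page data matches CONVERT output minus the precision byte")
```

Every column passes both checks, which confirms the relationship shown in the table.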
Conclusion
When converting a time value to varbinary, an extra byte is needed to store the precision. The precision is required in order to interpret the time data, because the value is stored as a count of time units, and the size of those units depends on the precision.
When stored in a database, the precision is specified once at the column level. This seems logical, as there’s no need to add the precision to each row when all rows have the same precision anyway. That would require an extra byte for each row, which would unnecessarily increase the storage requirements.