Understanding ‘datetime2’ Storage Size in SQL Server

In this article I share some observations I’ve made regarding the datetime2 data type’s storage size in SQL Server. Hopefully this clarifies some points about the actual storage size this data type uses when stored in a database.

In particular, I look at the following:

  • Microsoft’s documentation
  • Data stored in a variable
    • Length in bytes using DATALENGTH()
    • Length in bytes using DATALENGTH() after converting to varbinary
  • Data stored in a database
    • Length in bytes using COL_LENGTH()
    • Length in bytes using DBCC PAGE()

Some of those seem to contradict each other: you will see two different storage sizes for the same value, depending on where you look.

A datetime2 value can show a different storage size, depending on whether it’s stored in a database, as a datetime2 variable, or converted to varbinary.

But there is a plausible explanation for this – it depends on where the precision is being stored.

During my research on this issue, I found Ronen Ariely’s in-depth article on how datetime2 is stored in the data file very informative, and it prompted me to run some similar tests in my own development environment and present them here.

Microsoft’s Documentation

First, let’s look at what the official documentation says.

Microsoft’s documentation on the datetime2 data type states that its storage size is as follows:

6 bytes for precision less than 3.
7 bytes for precision 3 or 4.
All other precision require 8 bytes. 1

But it qualifies the above table with the following statement:

1 The first byte of a datetime2 value stores the precision of the value, which means the actual storage required for a datetime2 value is the storage size indicated in the table above plus 1 additional byte to store the precision. This makes the maximum size of a datetime2 value 9 bytes – 1 byte stores precision plus 8 bytes for data storage at maximum precision.

So given the above information, the obvious conclusion to draw would be that the table could (should?) be written as follows:

7 bytes for precision less than 3.
8 bytes for precision 3 or 4.
All other precision require 9 bytes.

That way, they wouldn’t need to qualify it with the extra information about precision.
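Under that reading, the documented sizes boil down to a few lines of code. The following is just an illustrative sketch (the function name is my own invention), based on the size table quoted above:

```python
def datetime2_storage_bytes(precision, include_precision_byte=True):
    """Storage size of a datetime2(precision) value, per Microsoft's
    documented table: 6/7/8 data bytes depending on precision, plus an
    optional 1-byte precision prefix."""
    if not 0 <= precision <= 7:
        raise ValueError("datetime2 precision must be between 0 and 7")
    data_bytes = 6 if precision < 3 else 7 if precision < 5 else 8
    return data_bytes + (1 if include_precision_byte else 0)

print(datetime2_storage_bytes(7))  # 9: 8 data bytes + 1 precision byte
```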

But it’s not quite as simple as that.

Data Stored in a Variable

First, let’s store a datetime2 value in a variable and check its storage size. Then I’ll convert that value to varbinary and check it again.

Length in Bytes using DATALENGTH

Here’s what happens if we use the DATALENGTH() function to return the number of bytes used for a datetime2(7) value:

DECLARE @d datetime2(7);
SET @d = '2025-05-21 10:15:30.1234567';
SELECT 
  @d AS 'Value',
  DATALENGTH(@d) AS 'Length in Bytes';

Result

+-----------------------------+-------------------+
| Value                       | Length in Bytes   |
|-----------------------------+-------------------|
| 2025-05-21 10:15:30.1234567 | 8                 |
+-----------------------------+-------------------+

The value in this example has the maximum scale of 7 (because I declare the variable as datetime2(7)), and it returns a length of 8 bytes.

This seems to contradict what Microsoft states about needing an extra byte to store the precision. To quote Microsoft: “This makes the maximum size of a datetime2 value 9 bytes – 1 byte stores precision plus 8 bytes for data storage at maximum precision.”

While it’s true that we seem to get 8 bytes for data storage, we appear to be missing the 1 byte used to store the precision.

However, if we convert the value to varbinary we get a different story.

Length in Bytes after Converting to ‘varbinary’

Here’s what happens if we convert our datetime2 value to varbinary:

DECLARE @d datetime2(7);
SET @d = '2025-05-21 10:15:30.1234567';
SELECT 
  CONVERT(VARBINARY(10), @d) AS 'Value',
  DATALENGTH(CONVERT(VARBINARY(10), @d)) AS 'Length in Bytes';

Result

+----------------------+-------------------+
| Value                | Length in Bytes   |
|----------------------+-------------------|
| 0x0787A311FC553F480B | 9                 |
+----------------------+-------------------+

In this case we get 9 bytes.

This is a hexadecimal representation of the datetime2 value. The actual date time value (and its precision) is everything after the 0x. Each pair of hex characters is a byte. There are 9 pairs, and therefore 9 bytes. This is confirmed when we use DATALENGTH() to return the length in bytes.

In this example we can see that the first byte is 07. This represents the precision (I used a scale of 7 and so that is what’s displayed here).

If I change the scale, we can see that the first byte changes to match the scale:

DECLARE @d datetime2(3);
SET @d = '2025-05-21 10:15:30.1234567';
SELECT 
  CONVERT(VARBINARY(10), @d) AS 'Value',
  DATALENGTH(CONVERT(VARBINARY(10), @d)) AS 'Length in Bytes';

Result

+--------------------+-------------------+
| Value              | Length in Bytes   |
|--------------------+-------------------|
| 0x034B8233023F480B | 8                 |
+--------------------+-------------------+

We can also see that the length is reduced accordingly.

So in this case our results match the Microsoft documentation perfectly – an extra byte has been added for the precision.
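Based on those bytes, we can sketch how the varbinary form decodes back to a date and time. The layout – a precision byte, then the time of day stored little-endian in units of 10^-precision seconds, then three bytes of days since 0001-01-01 – is the one Ronen Ariely describes in his article. The Python below is just an illustration (the function name is my own), and note that Python’s datetime resolves only to microseconds, so the seventh fractional digit is truncated:

```python
from datetime import datetime, timedelta

def decode_datetime2(hex_str):
    """Decode the varbinary form of a datetime2 value.

    Assumed layout: 1 precision byte, then the time of day as a
    little-endian count of 10**-precision second units, then 3 bytes
    (little-endian) of days since 0001-01-01.
    """
    raw = bytes.fromhex(hex_str.removeprefix("0x"))
    precision = raw[0]
    days = int.from_bytes(raw[-3:], "little")
    ticks = int.from_bytes(raw[1:-3], "little")
    # Python's datetime only resolves microseconds, so anything beyond
    # the sixth fractional digit is truncated here.
    micros = ticks * 10**6 // 10**precision
    return datetime(1, 1, 1) + timedelta(days=days, microseconds=micros)

print(decode_datetime2("0x034B8233023F480B"))  # 2025-05-21 10:15:30.123000
```

Feeding it the datetime2(7) value from the first example gives 2025-05-21 10:15:30.123456, with the seventh digit lost to Python’s microsecond resolution.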

Many developers assume that this is how SQL Server stores its datetime2 values in the database. However, that assumption appears to be incorrect.

Data Stored in a Database

In this example, I create a database that contains a table with various datetime2(n) columns. I then use COL_LENGTH() to return each column’s length, in bytes. After that, I insert values into it, before using DBCC PAGE to check the storage size that each datetime2 value takes up on the page file.

Create a database:

CREATE DATABASE Test;

Create a table:

USE Test;

CREATE TABLE Datetime2Test (
    d0 datetime2(0),
    d1 datetime2(1),
    d2 datetime2(2),
    d3 datetime2(3),
    d4 datetime2(4),
    d5 datetime2(5),
    d6 datetime2(6),
    d7 datetime2(7)
    );

In this case I create eight columns – one for each user-defined scale that we can use with datetime2(n).

Now we can check the storage size of each column.

Length in Bytes using COL_LENGTH()

Use COL_LENGTH() to check the length (in bytes) of each column:

SELECT 
  COL_LENGTH ( 'Datetime2Test' , 'd0' ) AS 'd0',
  COL_LENGTH ( 'Datetime2Test' , 'd1' ) AS 'd1',
  COL_LENGTH ( 'Datetime2Test' , 'd2' ) AS 'd2',
  COL_LENGTH ( 'Datetime2Test' , 'd3' ) AS 'd3',
  COL_LENGTH ( 'Datetime2Test' , 'd4' ) AS 'd4',
  COL_LENGTH ( 'Datetime2Test' , 'd5' ) AS 'd5',
  COL_LENGTH ( 'Datetime2Test' , 'd6' ) AS 'd6',
  COL_LENGTH ( 'Datetime2Test' , 'd7' ) AS 'd7';  

Result:

+------+------+------+------+------+------+------+------+
| d0   | d1   | d2   | d3   | d4   | d5   | d6   | d7   |
|------+------+------+------+------+------+------+------|
| 6    | 6    | 6    | 7    | 7    | 8    | 8    | 8    |
+------+------+------+------+------+------+------+------+

So once again, we don’t seem to get the extra byte used to store the precision.
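Those lengths are consistent with the internal layout Ronen Ariely describes: three bytes for the date, plus three to five bytes for the time depending on the precision, and no precision byte. As a hypothetical sketch of that arithmetic (the function name is my own):

```python
def page_storage_bytes(precision):
    """On-page size of a datetime2(precision) value, assuming 3-5 time
    bytes (depending on precision) plus 3 date bytes, and no stored
    precision byte."""
    time_bytes = 3 if precision < 3 else 4 if precision < 5 else 5
    return time_bytes + 3

print([page_storage_bytes(p) for p in range(8)])  # [6, 6, 6, 7, 7, 8, 8, 8]
```

The output matches the COL_LENGTH() results above exactly.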

Use DBCC PAGE to Check the Stored Data

Now let’s use DBCC PAGE to find the actual storage size of the data that we store in this table.

First, let’s insert some data:

DECLARE @d datetime2(7) = '2025-05-21 10:15:30.1234567';
INSERT INTO Datetime2Test ( d0, d1, d2, d3, d4, d5, d6, d7 )
SELECT @d, @d, @d, @d, @d, @d, @d, @d;

Now select the data (just to check it):

SELECT * FROM Datetime2Test;

Result (using vertical output):

d0 | 2025-05-21 10:15:30
d1 | 2025-05-21 10:15:30.1
d2 | 2025-05-21 10:15:30.12
d3 | 2025-05-21 10:15:30.123
d4 | 2025-05-21 10:15:30.1235
d5 | 2025-05-21 10:15:30.12346
d6 | 2025-05-21 10:15:30.123457
d7 | 2025-05-21 10:15:30.1234567

As expected, the values use the precision that’s been previously specified at the column level.

Now, before we use DBCC PAGE(), we need to know which PagePID to pass to it. We can use DBCC IND() to find that.

Find the PagePID:

DBCC IND('Test', 'dbo.Datetime2Test', 0);

Result (using vertical output):

-[ RECORD 1 ]-------------------------
PageFID         | 1
PagePID         | 306
IAMFID          | NULL
IAMPID          | NULL
ObjectID        | 1205579333
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043039744
iam_chain_type  | In-row data
PageType        | 10
IndexLevel      | NULL
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0
-[ RECORD 2 ]-------------------------
PageFID         | 1
PagePID         | 360
IAMFID          | 1
IAMPID          | 306
ObjectID        | 1205579333
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043039744
iam_chain_type  | In-row data
PageType        | 1
IndexLevel      | 0
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0

This returns two records. We’re interested in the PageType of 1 (the 2nd record). We want the PagePID from that record. In this case the PagePID is 360.

Now we can take that PagePID and use it in the following:

DBCC TRACEON(3604, -1);
DBCC PAGE(Test, 1, 360, 3);

This produces a lot of data, but we’re mainly interested in the following part:

Slot 0 Column 1 Offset 0x4 Length 6 Length (physical) 6

d0 = 2025-05-21 10:15:30            

Slot 0 Column 2 Offset 0xa Length 6 Length (physical) 6

d1 = 2025-05-21 10:15:30.1          

Slot 0 Column 3 Offset 0x10 Length 6 Length (physical) 6

d2 = 2025-05-21 10:15:30.12         

Slot 0 Column 4 Offset 0x16 Length 7 Length (physical) 7

d3 = 2025-05-21 10:15:30.123   

Slot 0 Column 5 Offset 0x1d Length 7 Length (physical) 7

d4 = 2025-05-21 10:15:30.1235       

Slot 0 Column 6 Offset 0x24 Length 8 Length (physical) 8

d5 = 2025-05-21 10:15:30.12346      

Slot 0 Column 7 Offset 0x2c Length 8 Length (physical) 8

d6 = 2025-05-21 10:15:30.123457     

Slot 0 Column 8 Offset 0x34 Length 8 Length (physical) 8

d7 = 2025-05-21 10:15:30.1234567                                   

So it appears that it doesn’t use the extra byte for precision.

But let’s examine the actual data before we reach any conclusions.

The actual data is stored in this part of the page file:

Memory Dump @0x000000041883A060

0000000000000000:   10003c00 4290003f 480b95a2 053f480b d459383f  ..<.B..?H.•¢.?H.ÔY8?
0000000000000014:   480b4b82 33023f48 0bf31603 163f480b 7ae51edc  H.K‚3.?H.ó...?H.zå.Ü
0000000000000028:   003f480b c1f63499 083f480b 87a311fc 553f480b  .?H.Áö4..?H.‡£.üU?H.
000000000000003C:   080000                                        ...

As you can see, none of that looks like the results we would get by converting the datetime2 value to varbinary. But it’s pretty close.

Here’s what it looks like if I remove everything except the column data:

4290003f 480b95a2 053f480b d459383f
480b4b82 33023f48 0bf31603 163f480b 7ae51edc
003f480b c1f63499 083f480b 87a311fc 553f480b

The remaining hex digits contain all our date and time data, but not the precision. However, we need to regroup the bytes to see the actual value for each column.

Here’s the end result. I’ve placed each date/time value on a new line for better readability.

4290003f480b 
95a2053f480b 
d459383f480b 
4b8233023f480b
f31603163f480b 
7ae51edc003f480b 
c1f63499083f480b 
87a311fc553f480b

Those are the actual hexadecimal values (minus the precision) that we would get if we converted the datetime2 value to varbinary. To be sure, let’s go right ahead and do just that:

SELECT 
  CONVERT(VARBINARY(10), d0) AS 'd0',
  CONVERT(VARBINARY(10), d1) AS 'd1',
  CONVERT(VARBINARY(10), d2) AS 'd2',
  CONVERT(VARBINARY(10), d3) AS 'd3',
  CONVERT(VARBINARY(10), d4) AS 'd4',
  CONVERT(VARBINARY(10), d5) AS 'd5',
  CONVERT(VARBINARY(10), d6) AS 'd6',
  CONVERT(VARBINARY(10), d7) AS 'd7'
FROM Datetime2Test;

Result (using vertical output):

d0 | 0x004290003F480B
d1 | 0x0195A2053F480B
d2 | 0x02D459383F480B
d3 | 0x034B8233023F480B
d4 | 0x04F31603163F480B
d5 | 0x057AE51EDC003F480B
d6 | 0x06C1F63499083F480B
d7 | 0x0787A311FC553F480B

So we get the same result – except that it has been prepended with the precision.

But to make things crystal clear, here’s a table that compares the actual page file data to the results of the CONVERT() operation.

+------------------+--------------------+
| Page File Data   | CONVERT() Data     |
|------------------+--------------------|
| 4290003f480b     | 004290003F480B     |
| 95a2053f480b     | 0195A2053F480B     |
| d459383f480b     | 02D459383F480B     |
| 4b8233023f480b   | 034B8233023F480B   |
| f31603163f480b   | 04F31603163F480B   |
| 7ae51edc003f480b | 057AE51EDC003F480B |
| c1f63499083f480b | 06C1F63499083F480B |
| 87a311fc553f480b | 0787A311FC553F480B |
+------------------+--------------------+

So we can clearly see that the page file doesn’t store the precision, but the converted result does.

Note that I removed the 0x prefix from the converted results, so that only the actual date/time data is displayed (along with the precision).

Also note that hexadecimal is case-insensitive, so the fact that one uses lowercase and the other uses uppercase is not an issue.
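As a final sanity check, the raw page values can be decoded by hand when the precision is supplied from the column definition rather than from the bytes themselves. This is an illustrative sketch (the function name is my own), again assuming the little-endian time-then-date layout described earlier, with truncation beyond Python’s microsecond resolution:

```python
from datetime import datetime, timedelta

def decode_page_datetime2(hex_str, precision):
    """Decode a datetime2 value as it appears on the page: a little-endian
    count of 10**-precision second units, then 3 bytes of days since
    0001-01-01. The precision comes from the column definition."""
    raw = bytes.fromhex(hex_str)
    days = int.from_bytes(raw[-3:], "little")
    ticks = int.from_bytes(raw[:-3], "little")
    micros = ticks * 10**6 // 10**precision  # truncates past microseconds
    return datetime(1, 1, 1) + timedelta(days=days, microseconds=micros)

print(decode_page_datetime2("4290003f480b", 0))    # d0: 2025-05-21 10:15:30
print(decode_page_datetime2("4b8233023f480b", 3))  # d3: 2025-05-21 10:15:30.123000
```

Both page values round-trip to the dates and times shown in the earlier SELECT, even though the precision never appears in the stored bytes.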

Conclusion

When converting a datetime2 value to varbinary, an extra byte is needed to store the precision. The precision is needed in order to interpret the time portion, because the time is stored as a number of time-interval units, and the size of those units depends on the precision.

When stored in a database, the precision is specified once, at the column level. This seems logical, as there is no need to store an extra byte with each row if it can be specified once for the whole column. So if you specify, say, datetime2(7) at the column level, then every single row will be datetime2(7). There’s no need to reiterate this on every row.

Ronen Ariely reached the same conclusion in his article mentioned above.

If you have a million rows with datetime2(7) values, storing the precision with each row would require 9,000,000 bytes, compared to just 8,000,001 if the precision is stored once for the whole column.
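That back-of-the-envelope figure is easy to verify:

```python
rows = 1_000_000

# Precision byte carried in every row (as in the varbinary form):
per_row = rows * 9          # 8 data bytes + 1 precision byte each

# Precision stored once for the whole column (as on the data page):
per_column = rows * 8 + 1   # 8 data bytes each + one shared byte

print(per_row, per_column, per_row - per_column)  # 9000000 8000001 999999
```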

This also strengthens datetime2‘s case when comparing it to datetime. Even when using the same number of decimal places as datetime (i.e. 3), the datetime2 data type uses less storage (at least when stored in a table with more than one row). And it does this with higher accuracy. A datetime value uses 8 bytes, whereas datetime2(3) uses 7 bytes (plus 1 “precision” byte that’s shared across all rows).