Understanding ‘datetimeoffset’ Storage Size in SQL Server

In this article I look at how the datetimeoffset data type is stored in SQL Server, and how you can get different reported storage size results, depending on what you’re doing with it.

This is similar to what I did with the datetime2 data type.

In particular, I look at the following:

  • Microsoft’s documentation
  • Data stored in a variable
    • Length in bytes using DATALENGTH()
    • Length in bytes using DATALENGTH() after converting to varbinary
  • Data stored in a database
    • Length in bytes using COL_LENGTH()
    • Length in bytes using DBCC PAGE()

Microsoft’s Documentation

Microsoft’s official documentation on the datetimeoffset data type indicates that its storage size is between 8 and 10 bytes, depending on the precision being used.

Similar to datetime2(n), you can use datetimeoffset(n) to specify the precision, where n is a scale between 0 and 7.

Here’s the data that Microsoft presents for this data type:

+-------------------+---------------------------+-----------------------+------------------------------+
| Specified scale   | Result (precision, scale) | Column length (bytes) | Fractional seconds precision |
|-------------------+---------------------------+-----------------------+------------------------------|
| datetimeoffset    | (34,7)                    | 10                    | 7                            |
| datetimeoffset(0) | (26,0)                    | 8                     | 0-2                          |
| datetimeoffset(1) | (28,1)                    | 8                     | 0-2                          |
| datetimeoffset(2) | (29,2)                    | 8                     | 0-2                          |
| datetimeoffset(3) | (30,3)                    | 9                     | 3-4                          |
| datetimeoffset(4) | (31,4)                    | 9                     | 3-4                          |
| datetimeoffset(5) | (32,5)                    | 10                    | 5-7                          |
| datetimeoffset(6) | (33,6)                    | 10                    | 5-7                          |
| datetimeoffset(7) | (34,7)                    | 10                    | 5-7                          |
+-------------------+---------------------------+-----------------------+------------------------------+

For the purposes of this article, I’m mainly interested in the Column length (bytes) column. This tells us how many bytes are used to store this data type in a database.
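As a quick sanity check on that table, the column length can be derived from the scale alone. The following Python sketch is my own illustration, not something from the documentation; it assumes the internal layout that the DBCC PAGE experiments later in this article are consistent with: 3 to 5 bytes for the time (depending on the scale), 3 bytes for the date, and 2 bytes for the UTC offset.

```python
# Sketch: reproduce the "Column length (bytes)" column from the scale alone.
# Assumed layout: scaled time integer (3-5 bytes), 3-byte date, 2-byte offset.

def datetimeoffset_column_length(scale: int) -> int:
    """Storage size in bytes of a datetimeoffset(scale) column (0 <= scale <= 7)."""
    if not 0 <= scale <= 7:
        raise ValueError("scale must be between 0 and 7")
    time_bytes = 3 if scale <= 2 else 4 if scale <= 4 else 5
    date_bytes = 3    # days since 0001-01-01
    offset_bytes = 2  # signed UTC offset in minutes
    return time_bytes + date_bytes + offset_bytes

print([datetimeoffset_column_length(n) for n in range(8)])
# [8, 8, 8, 9, 9, 10, 10, 10]
```

The computed sizes match the "Column length (bytes)" column in the table above.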

The main reason I wanted to write this article (and run the experiments below) is that the Microsoft documentation for datetimeoffset doesn’t explain that an extra byte is used to store the precision, whereas the documentation for the datetime2 data type does. The datetime2 documentation states:

The first byte of a datetime2 value stores the precision of the value, which means the actual storage required for a datetime2 value is the storage size indicated in the table above plus 1 additional byte to store the precision. This makes the maximum size of a datetime2 value 9 bytes – 1 byte stores precision plus 8 bytes for data storage at maximum precision.

But the documentation for datetimeoffset doesn’t include this text, and neither does the time documentation.

This caused me to wonder if there’s a difference between how these data types store their values. Logic told me that they should work the same, as they all have a user-defined precision, but I wanted to find out.

The short answer is that datetimeoffset does appear to work the same way as datetime2 (with regard to the extra byte), even though it’s not documented as such.

The remainder of the article runs through various examples where I return the storage size of datetimeoffset values in different contexts.

Data Stored in a Variable

Let’s store a datetimeoffset value in a variable and check its storage size. Then I’ll convert that value to varbinary and check it again.

Length in Bytes using DATALENGTH

Here’s what happens if we use the DATALENGTH() function to return the number of bytes used for a datetimeoffset(7) value:

DECLARE @d datetimeoffset(7);
SET @d = '2025-05-21 10:15:30.1234567 +07:00';
SELECT 
  @d AS 'Value',
  DATALENGTH(@d) AS 'Length in Bytes';

Result

+------------------------------------+-------------------+
| Value                              | Length in Bytes   |
|------------------------------------+-------------------|
| 2025-05-21 10:15:30.1234567 +07:00 | 10                |
+------------------------------------+-------------------+

The value in this example has the maximum scale of 7 (because I declare the variable as datetimeoffset(7)), and it returns a length of 10 bytes.

No surprises here: this is exactly the storage size that the Microsoft documentation indicates it should be.

However, if we convert the value to varbinary we get a different result.

Length in Bytes after Converting to ‘varbinary’

Some developers like to convert datetimeoffset and datetime2 variables to varbinary, because it’s more representative of how SQL Server stores it in the database. While this is partially true, the results are not exactly the same as the stored value (as you’ll see later).

Here’s what happens if we convert our datetimeoffset value to varbinary:

DECLARE @d datetimeoffset(7);
SET @d = '2025-05-21 10:15:30.1234567 +07:00';
SELECT 
  CONVERT(VARBINARY(16), @d) AS 'Value',
  DATALENGTH(CONVERT(VARBINARY(16), @d)) AS 'Length in Bytes';

Result

+--------------------------+-------------------+
| Value                    | Length in Bytes   |
|--------------------------+-------------------|
| 0x0787CBB24F1B3F480BA401 | 11                |
+--------------------------+-------------------+

In this case we get 11 bytes.

This is a hexadecimal representation of the datetimeoffset value. The actual datetimeoffset value (and its precision) is everything after the 0x. Each pair of hex characters represents one byte. There are 11 pairs, and therefore 11 bytes, which is confirmed when we use DATALENGTH() to return the length in bytes.

In this example we can see that the first byte is 07. This represents the precision (I used a scale of 7 and so that is what’s displayed here).

If I change the scale, we can see that the first byte changes to match the scale:

DECLARE @d datetimeoffset(3);
SET @d = '2025-05-21 10:15:30.1234567 +07:00';
SELECT 
  CONVERT(VARBINARY(16), @d) AS 'Value',
  DATALENGTH(CONVERT(VARBINARY(16), @d)) AS 'Length in Bytes';

Result

+------------------------+-------------------+
| Value                  | Length in Bytes   |
|------------------------+-------------------|
| 0x03CBFCB2003F480BA401 | 10                |
+------------------------+-------------------+

We can also see that the length is reduced accordingly.
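The byte layout behind these hex strings can be decoded by hand. The following Python sketch is my own illustration (not an official format specification); it assumes the varbinary form consists of the precision byte, followed by a little-endian time value in units of 10^-scale seconds, three bytes of days since 0001-01-01, and a two-byte signed UTC offset in minutes. Decoding the datetimeoffset(7) value from the first example:

```python
from datetime import date

def decode_datetimeoffset(hex_str: str):
    """Split the varbinary form of a datetimeoffset into its raw components."""
    raw = bytes.fromhex(hex_str.removeprefix("0x"))
    scale = raw[0]                                    # first byte: the precision
    time_len = 3 if scale <= 2 else 4 if scale <= 4 else 5
    ticks = int.from_bytes(raw[1:1 + time_len], "little")         # units of 10^-scale seconds
    days = int.from_bytes(raw[1 + time_len:-2], "little")         # days since 0001-01-01
    offset_min = int.from_bytes(raw[-2:], "little", signed=True)  # UTC offset in minutes
    return scale, ticks, days, offset_min

scale, ticks, days, offset_min = decode_datetimeoffset("0x0787CBB24F1B3F480BA401")
seconds, frac = divmod(ticks, 10 ** scale)
print(date.fromordinal(days + 1))  # 2025-05-21
print(f"{seconds // 3600:02d}:{seconds % 3600 // 60:02d}:{seconds % 60:02d}.{frac:0{scale}d}")
# 03:15:30.1234567
print(offset_min)  # 420 (minutes, i.e. +07:00)
```

Notice that the decoded time is 03:15:30.1234567, not 10:15:30.1234567: adding the 420-minute offset gives the local value, which suggests SQL Server stores the time portion in UTC alongside the offset.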

Data Stored in a Database

In this example, I create a database with a table containing various datetimeoffset(n) columns, and then use COL_LENGTH() to return each column’s length, in bytes. I then insert values into the columns, before using DBCC PAGE to check the storage size that each datetimeoffset value takes up on the actual data page.

Create a database:

CREATE DATABASE Test;

Create a table:

USE Test;

CREATE TABLE DatetimeoffsetTest (
    d0 datetimeoffset(0),
    d1 datetimeoffset(1),
    d2 datetimeoffset(2),
    d3 datetimeoffset(3),
    d4 datetimeoffset(4),
    d5 datetimeoffset(5),
    d6 datetimeoffset(6),
    d7 datetimeoffset(7)
    );

In this case I create eight columns – one for each user-defined scale that we can use with datetimeoffset(n).

Now we can check the storage size of each column.

Length in Bytes using COL_LENGTH()

Use COL_LENGTH() to check the length (in bytes) of each column:

SELECT 
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd0' ) AS 'd0',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd1' ) AS 'd1',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd2' ) AS 'd2',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd3' ) AS 'd3',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd4' ) AS 'd4',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd5' ) AS 'd5',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd6' ) AS 'd6',
  COL_LENGTH ( 'DatetimeoffsetTest' , 'd7' ) AS 'd7';  

Result:

+------+------+------+------+------+------+------+------+
| d0   | d1   | d2   | d3   | d4   | d5   | d6   | d7   |
|------+------+------+------+------+------+------+------|
| 8    | 8    | 8    | 9    | 9    | 10   | 10   | 10   |
+------+------+------+------+------+------+------+------+

So once again, we get the result that the documentation says we should get. This is to be expected, because the documentation explicitly states “Column length (bytes)”, which is exactly what we’re measuring here.

Use DBCC PAGE to Check the Stored Data

Now let’s use DBCC PAGE to find the actual storage size of the data that we store in this table.

First, let’s insert some data:

DECLARE @d datetimeoffset(7) = '2025-05-21 10:15:30.1234567 +07:00';
INSERT INTO DatetimeoffsetTest ( d0, d1, d2, d3, d4, d5, d6, d7 )
SELECT @d, @d, @d, @d, @d, @d, @d, @d;

Now select the data (just to check it):

SELECT * FROM DatetimeoffsetTest;

Result (using vertical output):

d0 | 2025-05-21 10:15:30.0000000 +07:00
d1 | 2025-05-21 10:15:30.1000000 +07:00
d2 | 2025-05-21 10:15:30.1200000 +07:00
d3 | 2025-05-21 10:15:30.1230000 +07:00
d4 | 2025-05-21 10:15:30.1235000 +07:00
d5 | 2025-05-21 10:15:30.1234600 +07:00
d6 | 2025-05-21 10:15:30.1234570 +07:00
d7 | 2025-05-21 10:15:30.1234567 +07:00

As expected, the values use the precision that’s been previously specified at the column level.

Note that my system displays trailing zeros. Yours may or may not do so. Regardless, this doesn’t affect the actual precision or accuracy.

Now, before we use DBCC PAGE(), we need to know which PagePID to pass to it. We can use DBCC IND() to find that.

Find the PagePID:

DBCC IND('Test', 'dbo.DatetimeoffsetTest', 0);

Result (using vertical output):

-[ RECORD 1 ]-------------------------
PageFID         | 1
PagePID         | 307
IAMFID          | NULL
IAMPID          | NULL
ObjectID        | 1525580473
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043170816
iam_chain_type  | In-row data
PageType        | 10
IndexLevel      | NULL
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0
-[ RECORD 2 ]-------------------------
PageFID         | 1
PagePID         | 376
IAMFID          | 1
IAMPID          | 307
ObjectID        | 1525580473
IndexID         | 0
PartitionNumber | 1
PartitionID     | 72057594043170816
iam_chain_type  | In-row data
PageType        | 1
IndexLevel      | 0
NextPageFID     | 0
NextPagePID     | 0
PrevPageFID     | 0
PrevPagePID     | 0

This returns two records. We’re interested in the record with a PageType of 1 (the second record), and we want the PagePID from that record. In this case the PagePID is 376.

Now we can take that PagePID and use it in the following:

DBCC TRACEON(3604, -1);
DBCC PAGE(Test, 1, 376, 3);

Right now we’re mainly interested in the following part:

Slot 0 Column 1 Offset 0x4 Length 8 Length (physical) 8

d0 = 2025-05-21 10:15:30 +07:00     

Slot 0 Column 2 Offset 0xc Length 8 Length (physical) 8

d1 = 2025-05-21 10:15:30.1 +07:00   

Slot 0 Column 3 Offset 0x14 Length 8 Length (physical) 8

d2 = 2025-05-21 10:15:30.12 +07:00  

Slot 0 Column 4 Offset 0x1c Length 9 Length (physical) 9

d3 = 2025-05-21 10:15:30.123 +07:00 

Slot 0 Column 5 Offset 0x25 Length 9 Length (physical) 9

d4 = 2025-05-21 10:15:30.1235 +07:00

Slot 0 Column 6 Offset 0x2e Length 10 Length (physical) 10

d5 = 2025-05-21 10:15:30.12346 +07:00                                    

Slot 0 Column 7 Offset 0x38 Length 10 Length (physical) 10

d6 = 2025-05-21 10:15:30.123457 +07:00                                   

Slot 0 Column 8 Offset 0x42 Length 10 Length (physical) 10

d7 = 2025-05-21 10:15:30.1234567 +07:00                                                                

So we get the same result again. Exactly as the documentation states.

While we’re here, let’s examine the data – the actual date/time values as they’re stored in SQL Server.

The actual values are stored in this part of the page dump:

Memory Dump @0x000000041951A060

0000000000000000:   10004c00 d22d003f 480ba401 35ca013f 480ba401  ..L.Ò-.?H.¤.5Ê.?H.¤.
0000000000000014:   14e6113f 480ba401 cbfcb200 3f480ba4 01f3dffd  .æ.?H.¤.Ëü².?H.¤.óßý
0000000000000028:   063f480b a4017abf ea45003f 480ba401 c17a2bbb  .?H.¤.z¿êE.?H.¤.Áz+»
000000000000003C:   023f480b a40187cb b24f1b3f 480ba401 080000    .?H.¤.‡Ë²O.?H.¤....

That still includes a few extra bytes that aren’t our data: the row header at the start (10004c00) and the column count and NULL bitmap at the end (080000). Let’s remove those, so that only our date and time values remain:

d22d003f 480ba401 35ca013f 480ba401
14e6113f 480ba401 cbfcb200 3f480ba4 01f3dffd
063f480b a4017abf ea45003f 480ba401 c17a2bbb
023f480b a40187cb b24f1b3f 480ba401

The remaining hex digits contain all our date and time data, but not the precision. However, the dump arranges them into 4-byte chunks, so we need to regroup them to get the individual values.

Here’s the end result. I’ve placed each date/time value on a new line for better readability.

d22d003f480ba401 
35ca013f480ba401
14e6113f480ba401 
cbfcb2003f480ba401
f3dffd063f480ba401
7abfea45003f480ba401 
c17a2bbb023f480ba401
87cbb24f1b3f480ba401
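That regrouping is mechanical, so it can be scripted. Here’s a small Python sketch that splits the stripped page bytes at the column lengths DBCC PAGE reported (8, 8, 8, 9, 9, 10, 10, 10 bytes), reproducing the list above:

```python
# Split the stripped page bytes into per-column values, using the column
# lengths reported by DBCC PAGE for d0 through d7.
page_hex = (
    "d22d003f480ba40135ca013f480ba401"
    "14e6113f480ba401cbfcb2003f480ba401f3dffd"
    "063f480ba4017abfea45003f480ba401c17a2bbb"
    "023f480ba40187cbb24f1b3f480ba401"
)
lengths = [8, 8, 8, 9, 9, 10, 10, 10]  # bytes per column

values, pos = [], 0
for n in lengths:
    values.append(page_hex[pos:pos + 2 * n])  # two hex characters per byte
    pos += 2 * n

for v in values:
    print(v)
```

The eight printed strings match the regrouped values listed above, one per column.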

Those are the actual hexadecimal values (minus the precision) that we would get if we converted each datetimeoffset value to varbinary. Like this:

SELECT 
  CONVERT(VARBINARY(16), d0) AS 'd0',
  CONVERT(VARBINARY(16), d1) AS 'd1',
  CONVERT(VARBINARY(16), d2) AS 'd2',
  CONVERT(VARBINARY(16), d3) AS 'd3',
  CONVERT(VARBINARY(16), d4) AS 'd4',
  CONVERT(VARBINARY(16), d5) AS 'd5',
  CONVERT(VARBINARY(16), d6) AS 'd6',
  CONVERT(VARBINARY(16), d7) AS 'd7'
FROM DatetimeoffsetTest;

Result (using vertical output):

d0 | 0x00D22D003F480BA401
d1 | 0x0135CA013F480BA401
d2 | 0x0214E6113F480BA401
d3 | 0x03CBFCB2003F480BA401
d4 | 0x04F3DFFD063F480BA401
d5 | 0x057ABFEA45003F480BA401
d6 | 0x06C17A2BBB023F480BA401
d7 | 0x0787CBB24F1B3F480BA401

So we get the same result – except that it has been prepended with the precision.

Here’s a table that compares the actual page data to the results of the CONVERT() operation.

+----------------------+------------------------+
| Page Data            | CONVERT() Data         |
|----------------------+------------------------|
| d22d003f480ba401     | 00D22D003F480BA401     |
| 35ca013f480ba401     | 0135CA013F480BA401     |
| 14e6113f480ba401     | 0214E6113F480BA401     |
| cbfcb2003f480ba401   | 03CBFCB2003F480BA401   |
| f3dffd063f480ba401   | 04F3DFFD063F480BA401   |
| 7abfea45003f480ba401 | 057ABFEA45003F480BA401 |
| c17a2bbb023f480ba401 | 06C17A2BBB023F480BA401 |
| 87cbb24f1b3f480ba401 | 0787CBB24F1B3F480BA401 |
+----------------------+------------------------+

So we can see that the data page doesn’t store the precision, but the converted result does.

Note that I removed the 0x prefix from the converted results, so that only the actual date/time data (preceded by the precision byte) is displayed.

Also note that hexadecimal is case-insensitive, so the fact that one uses lowercase and the other uses uppercase is not an issue.
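If you want to check this programmatically, here’s a small Python sketch that confirms each CONVERT() result is just the page bytes with a precision byte prepended (lower-casing one side to ignore the hex case difference):

```python
# Verify: each CONVERT() result equals the precision byte (00-07) prepended
# to the bytes found on the data page. Hex case is not significant.
page_data = [
    "d22d003f480ba401", "35ca013f480ba401", "14e6113f480ba401",
    "cbfcb2003f480ba401", "f3dffd063f480ba401", "7abfea45003f480ba401",
    "c17a2bbb023f480ba401", "87cbb24f1b3f480ba401",
]
convert_data = [
    "00D22D003F480BA401", "0135CA013F480BA401", "0214E6113F480BA401",
    "03CBFCB2003F480BA401", "04F3DFFD063F480BA401", "057ABFEA45003F480BA401",
    "06C17A2BBB023F480BA401", "0787CBB24F1B3F480BA401",
]
for scale, (page, conv) in enumerate(zip(page_data, convert_data)):
    assert f"{scale:02x}" + page == conv.lower()
print("all rows match")
```

Each column’s scale (0 through 7) is exactly the byte that CONVERT() adds in front of the page bytes.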

Conclusion

When converting a datetimeoffset value to varbinary, an extra byte is needed to store the precision. The precision is required in order to interpret the time portion, because the time is stored as an integer count of fractional-second units, and the size of those units depends on the precision.

When stored in a database, the precision is specified once at the column level. This seems logical, as there’s no need to add the precision to each row when all rows use the same precision anyway. That would require an extra byte for each row, which would unnecessarily increase the storage requirements.