How to Prevent Overflow Errors When Aggregating Data in SQL Server

When working with aggregate functions in SQL Server, it’s easy to overlook that certain datatypes have strict limits. If you’re summing values in large tables (or even just summing very large numbers), you might run into an arithmetic overflow error. This happens when the result of an aggregate exceeds the maximum value the datatype can hold.

Understanding how this works and how to prevent errors will help you write reliable queries.

Why Overflow Errors Happen

Each numeric type in SQL Server has a defined range. For example:

  • SMALLINT goes from -32,768 to 32,767.
  • INT can hold values from -2,147,483,648 to 2,147,483,647.
  • BIGINT can hold values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (roughly ±9.2 quintillion).
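
You can see these limits in action without even creating a table. Pushing a variable past its type's maximum triggers the same error you'd get from an aggregate:

DECLARE @Total INT = 2147483647;  -- INT maximum
SET @Total = @Total + 1;          -- Msg 8115: Arithmetic overflow error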

When you use an aggregate function on a numeric column, the return type depends on the column type. For example, when you use SUM() on an INT, TINYINT, or SMALLINT column, SQL Server performs the calculation using the INT datatype. If the total exceeds 2,147,483,647, you’ll get an overflow error – even if every individual value is valid.

So you can see how easy it is to run into this error, even when the database seems to accommodate the largest expected values. Factoring in the largest individual value might not be enough if you plan to run aggregate queries against the column.

Example

Here’s a script that demonstrates what I mean:

-- Drop the table if it exists
DROP TABLE IF EXISTS dbo.SalesData;
GO

-- Create a table with an INT column
CREATE TABLE dbo.SalesData
(
    Id INT IDENTITY PRIMARY KEY,
    Quantity INT NOT NULL
);
GO

-- Insert some large values
INSERT INTO dbo.SalesData (Quantity)
VALUES (2000000000), (2000000000), (2000000000);
GO

Here, I defined the Quantity column as INT. Each value I inserted is small enough to fit within that data type, so the column meets our requirements as far as storing each individual value goes. But once we start running aggregate queries against this column, we can easily run into problems.

To demonstrate this, let’s use SUM() to add up the values in the column:

SELECT SUM(Quantity) AS TotalQuantity
FROM dbo.SalesData;

Output:

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.

This error occurs because the sum of the three rows (6,000,000,000) exceeds the maximum value an INT can hold.

Solution: Cast to a Larger Type

The solution is to cast the column to a datatype large enough to hold the result. BIGINT is usually the best choice:

SELECT SUM(CAST(Quantity AS BIGINT)) AS TotalQuantity
FROM dbo.SalesData;

Result:

6000000000

With this approach, SQL Server calculates the sum using BIGINT, which safely accommodates the total.
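
If you need even more headroom than BIGINT offers (for instance, when the column itself is BIGINT and the total could exceed the BIGINT range), the same pattern works with DECIMAL. Here’s a sketch against the same table:

SELECT SUM(CAST(Quantity AS DECIMAL(38,0))) AS TotalQuantity
FROM dbo.SalesData;

DECIMAL(38,0) can hold up to 38 digits, which comfortably exceeds the BIGINT range.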

Safe Datatypes for Common Aggregates

Here’s a quick reference table for avoiding overflow in SQL Server aggregates:

Aggregate Function  | Typical Column Type | Safe Aggregate Type
--------------------|---------------------|------------------------------------------
SUM(SMALLINT)       | SMALLINT            | INT or BIGINT
SUM(INT)            | INT                 | BIGINT
SUM(BIGINT)         | BIGINT              | DECIMAL(38,0)
COUNT()             | Any                 | INT (if < 2B rows), otherwise COUNT_BIG()
AVG()               | INT, SMALLINT       | Use CAST() to BIGINT or DECIMAL
SUM(DECIMAL(p,s))   | DECIMAL             | DECIMAL with larger precision if needed

Casting to a larger type proactively ensures aggregates never exceed storage limits.
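
Note that COUNT() itself returns INT, so it can overflow on tables with more than 2,147,483,647 rows. COUNT_BIG() works the same way but returns BIGINT:

SELECT COUNT_BIG(*) AS RowTotal
FROM dbo.SalesData;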

General Tips

Here are some things you should keep in mind when designing your databases and creating queries:

  • Anticipate large sums: If your table may grow or contain large numeric values, consider using BIGINT or DECIMAL for columns you’ll aggregate.
  • Use COUNT_BIG() instead of COUNT() if counting extremely large datasets.
  • Cast explicitly: SQL Server doesn’t automatically “upgrade” datatypes in aggregate functions. Being explicit prevents surprises.
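
For example, AVG() on an INT column computes its intermediate sum as INT, so it can overflow just like SUM(). An explicit cast (shown here against the earlier dbo.SalesData table) avoids the error and also preserves the fractional part of the average:

SELECT AVG(CAST(Quantity AS DECIMAL(19,4))) AS AvgQuantity
FROM dbo.SalesData;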

Conclusion

Overflow errors in SQL Server are usually a matter of datatype limits rather than a bug. By knowing the maximum values your data might generate and casting columns when necessary, you can prevent errors and make your queries robust. Even a small table with large numbers can trigger an overflow if you aren’t careful.