How to Handle “Violation of UNIQUE KEY Constraint” Errors in SQL Server

If you’ve spent any time working with SQL Server, you’ve almost certainly run into the “Violation of UNIQUE KEY constraint” error.

It’s one of those errors that can show up in a few different ways depending on how your data flows into the database. Sometimes it’s a data quality issue. Sometimes it’s a race condition. Sometimes it’s just a gap in the logic that nobody caught until production. Whatever the cause, there are quite a few ways to handle it, some reactive, some proactive. This article walks through the main ones.

The Code That Triggers It

The error occurs when an INSERT or UPDATE attempts to write a value that already exists in a column (or combination of columns) covered by a unique constraint or unique index. Here’s a simple example:

-- Table with a unique constraint on email
CREATE TABLE Subscribers (
    SubscriberID   INT           IDENTITY PRIMARY KEY,
    Email          VARCHAR(255)  NOT NULL,
    FullName       VARCHAR(100),
    CONSTRAINT UQ_Subscribers_Email UNIQUE (Email)
);

-- First insert goes in cleanly
INSERT INTO Subscribers (Email, FullName)
VALUES ('[email protected]', 'Alex Kim');

-- Second insert with the same email blows up
INSERT INTO Subscribers (Email, FullName)
VALUES ('[email protected]', 'Alex K.');

Output:

Msg 2627, Level 14, State 1, Line 2
Violation of UNIQUE KEY constraint 'UQ_Subscribers_Email'. Cannot insert duplicate key in object 'dbo.Subscribers'. The duplicate key value is ([email protected]).
The statement has been terminated.

The constraint is doing exactly what it’s supposed to do. The question is what you do about it in your code or data pipeline.

Solutions

There’s no shortage of ways to fix this problem; the right one depends on the approach you want to take. Below are the main options.

SQL Alternatives

The most obvious way to deal with this is with SQL, so we’ll look at some SQL options first. These approaches handle the constraint violation directly in your T-SQL code. Some catch the error after it happens, others prevent it from occurring in the first place. The choice depends on your concurrency needs, performance tolerance, and how you want to handle edge cases.

Wrap the INSERT in a TRY/CATCH Block

This is the most direct approach when you can tolerate the occasional (attempted) duplicate and just want to move on without crashing the whole batch. SQL Server’s TRY/CATCH lets you catch the error and decide what to do. You could log it, skip the row, return a message, or whatever makes sense for your use case.

BEGIN TRY
    INSERT INTO Subscribers (Email, FullName)
    VALUES ('[email protected]', 'Alex K.');
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627
    BEGIN
        PRINT 'Skipped duplicate: ' + ERROR_MESSAGE();
    END
    ELSE
    BEGIN
        THROW; -- Re-raise anything unexpected
    END
END CATCH;

This example simply prints a message along with the error message, so the output looks like this:

Skipped duplicate: Violation of UNIQUE KEY constraint 'UQ_Subscribers_Email'. Cannot insert duplicate key in object 'dbo.Subscribers'. The duplicate key value is ([email protected]).

But as mentioned, do whatever makes sense for your use case.

One thing worth noting is that TRY/CATCH reacts after the fact. You’re still generating the violation. You’re just catching it gracefully. For high-volume inserts or tight loops, this can add overhead. It’s more suitable for one-off inserts or lower-frequency operations.

Check Before You Insert (IF NOT EXISTS)

This is a basic pre-check. You query for the value first, and only insert if it doesn’t exist yet:

IF NOT EXISTS (
    SELECT 1 FROM Subscribers WHERE Email = '[email protected]'
)
BEGIN
    INSERT INTO Subscribers (Email, FullName)
    VALUES ('[email protected]', 'Alex K.');
END

It’s simple and readable, but it’s not safe under concurrency: two sessions could both pass the check and then both attempt the insert, with one of them hitting the error anyway.

Use this method when you’re in a single-threaded context or when duplicates are genuinely rare and race conditions aren’t a realistic concern. For anything concurrent or high-volume, look at the other options below.

Use MERGE (Upsert)

SQL Server’s MERGE statement is the right tool when you want to insert a row if it doesn’t exist, or update it if it does. It handles both cases in a single statement, which is handy when “already exists” isn’t necessarily a problem and you just want the latest data in place.

MERGE INTO Subscribers WITH (HOLDLOCK) AS target
USING (VALUES ('[email protected]', 'Alex K.'))
      AS source (Email, FullName)
ON target.Email = source.Email
WHEN MATCHED THEN
    UPDATE SET FullName = source.FullName
WHEN NOT MATCHED THEN
    INSERT (Email, FullName)
    VALUES (source.Email, source.FullName);

MERGE can be a solid option, but it’s also one of the more finicky statements in T-SQL. It has some well-documented edge cases and quirks, especially around race conditions and multiple matches (this is why the above example uses WITH (HOLDLOCK) on the target). Use MERGE carefully, and make sure the USING source doesn’t produce duplicate join keys.

Use INSERT with a WHERE NOT EXISTS Subquery

This is a slightly more compact alternative to the IF NOT EXISTS block. It does the existence check inline and is safe for single-row inserts, although it shares the same concurrency limitations as the explicit pre-check approach.

INSERT INTO Subscribers (Email, FullName)
SELECT '[email protected]', 'Alex K.'
WHERE NOT EXISTS (
    SELECT 1 FROM Subscribers
    WHERE Email = '[email protected]'
);

Set IGNORE_DUP_KEY on the Index

If you create a unique index with the IGNORE_DUP_KEY option enabled, SQL Server will silently skip duplicate rows in an INSERT rather than rolling back the entire batch. This can be useful for bulk load scenarios where you expect some noise in the incoming data and don’t want a single duplicate to kill the whole operation.

Here’s an example of creating such an index. Note that our table already has the UQ_Subscribers_Email constraint from earlier, so we need to drop that first. Otherwise its index would still raise error 2627 and the IGNORE_DUP_KEY index would never get the chance to skip the row:

-- Remove the original constraint so the new index takes over
ALTER TABLE Subscribers DROP CONSTRAINT UQ_Subscribers_Email;

CREATE UNIQUE INDEX UX_Subscribers_Email
ON Subscribers (Email)
WITH (IGNORE_DUP_KEY = ON);

Now let’s run some code that previously caused the error:

INSERT INTO Subscribers (Email, FullName)
VALUES ('[email protected]', 'Alex K.');

Output:

Duplicate key was ignored.
(0 rows affected)

So rather than get the error, we got a message that the duplicate key was ignored.

One thing to be aware of here is that duplicates are dropped silently. If you need to know which rows were skipped, you’ll have to detect them another way (this option doesn’t log them or surface them to the caller). Also note that IGNORE_DUP_KEY only affects INSERT statements; an UPDATE that would produce a duplicate still fails with an error. It’s a blunt instrument, so use it only when silent skipping is genuinely acceptable.

Application Code Alternatives

Sometimes you might choose to deal with the issue in your application code; sometimes you might have no choice. Either way, all is not lost. The following options move some of the duplicate detection and handling into your app layer, giving you more control over the user experience and error messaging.

Validate for Duplicates Before Hitting the Database

Rather than letting the database be the first line of defense, you can check for duplicates at the application layer before issuing the INSERT. In this case you’ll query for the value ahead of time and only proceed if it’s not already there.

This is a reasonable approach for low-concurrency scenarios and keeps your database code clean. The downside is the same as the SQL pre-check: it doesn’t eliminate the race condition window in concurrent applications. If two requests come in simultaneously with the same value, both could pass the check and one will still hit the constraint. Pair it with proper error handling on the database side for a complete solution.

Catch the Specific Exception in Your Application

Most database drivers and ORMs expose error numbers or error codes, which means you can catch constraint violations specifically rather than catching all exceptions. In C# with ADO.NET, for example, error number 2627 maps to a unique constraint violation. In Python with pyodbc, you’d inspect the exception’s args. This lets you handle “duplicate detected” as a known, expected outcome rather than a crash. From there you can display a friendly message, retry with different data, redirect the user, whatever fits the workflow.

This can be a solid option when duplicates are part of normal operation (e.g., a user tries to register with an email that already exists). It keeps the database constraint as the authoritative guard while giving the application layer control over the user experience.

Design for Idempotency

If your application regularly re-sends the same data (retries, message queues, event streams, sync jobs, and so on), consider designing your write operations to be idempotent from the start. Instead of a plain INSERT, use MERGE or INSERT ... WHERE NOT EXISTS as your default write pattern, so that running the same operation twice has the same effect as running it once. This eliminates an entire class of duplicate errors rather than handling them after the fact.

This is especially relevant in distributed systems, where “at least once” delivery guarantees mean you will see duplicate messages. Building your consumers to be naturally idempotent is cleaner than trying to deduplicate at the message broker level.

Database Design Alternatives

If you’re hitting this error frequently, the root cause might be structural. These strategies address the problem at the schema level: tweaking how constraints are defined, when they apply, or how data flows through your tables in the first place.

Use a Filtered Unique Index for Soft Deletes

Unique constraints can cause problems when your table uses soft deletes (when rows are marked as deleted rather than physically removed). If a user deactivates their account (setting IsDeleted = 1) and someone new later tries to register with the same email, a regular unique constraint will block it even though the original row is logically gone.

A filtered unique index can solve this by replacing the regular unique constraint and only enforcing uniqueness among active rows:

CREATE UNIQUE INDEX UX_Subscribers_Email_Active
ON Subscribers (Email)
WHERE IsDeleted = 0;

Don’t create this index in addition to a regular unique constraint. It needs to replace it. If both exist, the regular constraint will still block all duplicates regardless of IsDeleted status.

With only the filtered index in place:

  • You can insert a new row with an email that exists in a soft-deleted row (IsDeleted = 1)
  • You cannot insert a new row with an email that exists in an active row (IsDeleted = 0)
  • You can have multiple soft-deleted rows with the same email, because they’re outside the filter

This allows email reuse after account deletion without changing your INSERT statements. Just make sure new rows default to IsDeleted = 0, either through a column default or explicit values in your inserts.

Here’s an example of how we could build the table from scratch using this technique (dropping the earlier version of the table first):

-- Start fresh (DROP TABLE IF EXISTS requires SQL Server 2016+)
DROP TABLE IF EXISTS Subscribers;

-- Table WITHOUT a unique constraint on email
CREATE TABLE Subscribers (
    SubscriberID   INT           IDENTITY PRIMARY KEY,
    Email          VARCHAR(255)  NOT NULL,
    FullName       VARCHAR(100),
    IsDeleted      BIT           NOT NULL DEFAULT 0
);

-- Filtered unique index replaces the constraint
CREATE UNIQUE INDEX UX_Subscribers_Email_Active
ON Subscribers (Email)
WHERE IsDeleted = 0;

Now we can insert some rows successfully:

-- First insert goes in cleanly
INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Alex Kim', 0);

-- Soft-deleted row with duplicate email - no error
INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Homer J', 1);

-- New active row with same email as soft-deleted row - no error
INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Homer Jay', 0);

Output:

(1 row affected)
(1 row affected)
(1 row affected)

But the following operation still fails:

INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Homer Simpson', 0);

Output:

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.Subscribers' with unique index 'UX_Subscribers_Email_Active'. The duplicate key value is ([email protected]).
The statement has been terminated.

Let’s check the table:

SELECT * FROM Subscribers;

Result:

SubscriberID  Email              FullName   IsDeleted
------------  -----------------  ---------  ---------
1             [email protected]  Alex Kim   false
2             [email protected]      Homer J    true
3             [email protected]      Homer Jay  false

We can see that [email protected] can appear multiple times, but only when the IsDeleted column is not 0.

Bear in mind that it’s still possible to add more duplicates in deleted status. For example, let’s insert the same email a couple more times with IsDeleted = 1 and select all data:

INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Homer Simpson', 1);
INSERT INTO Subscribers (Email, FullName, IsDeleted)
VALUES ('[email protected]', 'Homer Simpson', 1);

SELECT * FROM Subscribers;

Result:

SubscriberID  Email              FullName       IsDeleted
------------  -----------------  -------------  ---------
1             [email protected]  Alex Kim       false
2             [email protected]      Homer J        true
3             [email protected]      Homer Jay      false
5             [email protected]      Homer Simpson  true
6             [email protected]      Homer Simpson  true

We’ve just inserted a bunch of duplicates, but only one can be active. (Note that SubscriberID 4 is missing: the insert that failed earlier still consumed an identity value, which is normal behavior for IDENTITY columns.)

Reconsider Whether the Constraint Is Appropriate

Sometimes the right answer is to question whether the unique constraint should exist in its current form. Maybe email uniqueness should be scoped to an account or tenant rather than the whole table. Maybe what you’re treating as a unique natural key is actually not unique in all the edge cases your data now covers. Maybe a composite constraint (unique across two columns together) would be more accurate than a single-column constraint.

Before you write workaround code, it’s worth asking if the constraint is correctly modelling the real-world rule. A constraint that’s technically enforced but logically wrong will just push the confusion downstream. Revisiting the design might save you from a more complicated fix later.
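To make the tenant-scoping idea concrete, here’s a small sketch using SQLite as a self-contained stand-in; in T-SQL the equivalent would be a composite constraint such as CONSTRAINT UQ_Subscribers_Tenant_Email UNIQUE (TenantID, Email). The TenantID column is hypothetical:

```python
import sqlite3

# In-memory SQLite database as a stand-in for SQL Server. The same email
# can exist under different tenants, but not twice under the same tenant.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Subscribers (
        SubscriberID INTEGER PRIMARY KEY,
        TenantID     INTEGER NOT NULL,
        Email        TEXT NOT NULL,
        UNIQUE (TenantID, Email)
    )
""")

conn.execute("INSERT INTO Subscribers (TenantID, Email) VALUES (1, '[email protected]')")
# OK: same email, different tenant
conn.execute("INSERT INTO Subscribers (TenantID, Email) VALUES (2, '[email protected]')")

try:
    # Rejected: same email within the same tenant
    conn.execute("INSERT INTO Subscribers (TenantID, Email) VALUES (1, '[email protected]')")
except sqlite3.IntegrityError:
    print("duplicate within tenant 1 rejected")
```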

Use a Staging Table for Bulk Loads

For large data imports or ETL pipelines, it often makes sense to load data into a staging table first, with no constraints and no indexes beyond what you need for performance. You can then handle deduplication as a deliberate step before moving records into the target table: use ROW_NUMBER() or GROUP BY to identify duplicates in the staging data, pick a winner (the most recent record, for example), and only promote clean rows to the production table.

This option separates the concerns of data ingestion and data validation, makes it easy to audit what came in versus what was accepted, and avoids the performance impact of constraint violations on large batches.

Choosing the Right Approach

There’s no single best solution here. It all depends on where the duplicates are coming from, how often they occur, and what the right behavior is when they show up. For interactive user-facing features, catching the exception in application code and returning a clear message is usually the cleanest path. For batch operations and ETL, a staging table or IGNORE_DUP_KEY may be more practical. For write patterns that need to be robust by design, idempotent queries with MERGE or WHERE NOT EXISTS are worth the upfront investment.

The one thing to avoid is suppressing the error without understanding it. Unique constraints exist because duplicate data causes real problems downstream. Handle the violation deliberately, and make sure you understand why the duplicate appeared in the first place.