Achieving Thread Safety with Interlocked Operations in .NET

Lock-Free Thread Synchronization

Interlocked operations provide atomic, thread-safe updates without the overhead of locks. For simple operations they're typically faster than traditional locking mechanisms, and they never put a thread to sleep, making them ideal for high-throughput scenarios where contention is common.

The Interlocked class maps directly to CPU-level atomic instructions. When you increment a counter with Interlocked.Increment, the processor guarantees that read-modify-write happens as a single operation. No other thread can observe the variable in an intermediate state. This eliminates race conditions on shared counters, flags, and references without the complexity of lock statements.

You'll learn when to use Interlocked instead of locks, how to implement common patterns like thread-safe counters and compare-and-swap operations, and understand the performance benefits these techniques provide in concurrent applications.

Thread-Safe Counters with Interlocked.Increment

The most common use case for Interlocked is maintaining shared counters across threads. Without synchronization, incrementing a counter from multiple threads produces incorrect results because the increment operation isn't atomic—it involves reading the value, adding one, and writing back. Interlocked.Increment performs all three steps atomically.

This pattern appears frequently in metrics collection, connection pools, and request tracking, where multiple threads update the same counter simultaneously. Because no lock is ever taken, threads never block or get descheduled, keeping performance consistent even under heavy load.

CounterComparison.cs
public class RequestCounter
{
    private int _unsafeCount;
    private int _interlockedCount;
    private readonly object _lockObj = new();
    private int _lockedCount;

    // UNSAFE: Race condition possible
    public void IncrementUnsafe()
    {
        _unsafeCount++; // Three operations: read, add, write
    }

    // SAFE: Atomic operation
    public void IncrementInterlocked()
    {
        Interlocked.Increment(ref _interlockedCount);
    }

    // SAFE: But slower than Interlocked
    public void IncrementLocked()
    {
        lock (_lockObj)
        {
            _lockedCount++;
        }
    }

    public void SimulateConcurrentRequests()
    {
        const int iterations = 100_000;
        var threads = new Thread[10];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < iterations; j++)
                {
                    IncrementUnsafe();
                    IncrementInterlocked();
                    IncrementLocked();
                }
            });
            threads[i].Start();
        }

        foreach (var thread in threads)
            thread.Join();

        int expected = iterations * threads.Length;
        Console.WriteLine($"Expected:           {expected:N0}");
        Console.WriteLine($"Unsafe count:       {_unsafeCount:N0} (likely wrong)");
        Console.WriteLine($"Interlocked count:  {_interlockedCount:N0}");
        Console.WriteLine($"Locked count:       {_lockedCount:N0}");
    }
}

The unsafe increment will almost certainly show a count lower than expected due to lost updates when threads interleave operations. Both Interlocked and lock versions produce correct counts, but Interlocked avoids the overhead of acquiring and releasing locks on every operation.

Compare-and-Swap with CompareExchange

Interlocked.CompareExchange implements the compare-and-swap pattern: it compares a variable to an expected value and only updates it if they match. This atomic operation returns the original value, letting you know whether the swap succeeded. It's the foundation for building lock-free data structures and optimistic concurrency patterns.
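
The standard shape built on top of CompareExchange is the retry loop: read the current value, compute a new value from it, and attempt the swap; if another thread changed the variable in the meantime, retry against the fresh value. Here's a minimal sketch of that loop tracking a running maximum (AtomicMax is an illustrative name, not a BCL type):

AtomicMax.cs
public static class AtomicMax
{
    // Atomically raise 'location' to 'candidate' if candidate is larger.
    public static void Update(ref int location, int candidate)
    {
        int current = Volatile.Read(ref location);
        while (candidate > current)
        {
            // CompareExchange returns the value it found. If that matches
            // 'current', our write went through and we're done.
            int original = Interlocked.CompareExchange(ref location, candidate, current);
            if (original == current)
                return;
            current = original; // another thread won the race; retry against its value
        }
    }
}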

You'll use CompareExchange when updating shared state based on its current value. A classic example is lazy initialization, where multiple threads might attempt to initialize simultaneously but only one initialization should win.

LazyInitialization.cs
public class ExpensiveResource
{
    public ExpensiveResource()
    {
        Console.WriteLine("Initializing expensive resource...");
        Thread.Sleep(100); // Simulate expensive initialization
    }

    public string Id { get; } = Guid.NewGuid().ToString();
}

public class LazyResourceManager
{
    private ExpensiveResource? _resource;

    public ExpensiveResource GetResource()
    {
        // If already initialized, return immediately
        if (_resource != null)
            return _resource;

        // Create new instance
        var newResource = new ExpensiveResource();

        // Try to set it atomically - only succeeds if still null
        var original = Interlocked.CompareExchange(
            ref _resource, newResource, null);

        // If original was null, we won (our instance is now _resource)
        // If original wasn't null, another thread won (use their instance)
        return original ?? newResource;
    }
}

// Usage
var manager = new LazyResourceManager();
var tasks = Enumerable.Range(0, 5)
    .Select(_ => Task.Run(() =>
    {
        var resource = manager.GetResource();
        Console.WriteLine($"Got resource: {resource.Id}");
    }))
    .ToArray();

Task.WaitAll(tasks);

Even though five threads attempt initialization simultaneously, CompareExchange ensures only one instance is ever published: every caller receives the same winning ExpensiveResource, and threads that lose the race simply discard the instance they built. If construction is expensive or has side effects that must happen exactly once, prefer Lazy<T>, which guarantees a single construction.
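
The BCL packages this exact CompareExchange pattern as LazyInitializer.EnsureInitialized, which is worth reaching for before hand-rolling it. A sketch of the equivalent manager (an overload also accepts a Func<T> factory):

LazyResourceManagerBcl.cs
public class LazyResourceManagerBcl
{
    private ExpensiveResource? _resource;

    // EnsureInitialized uses Interlocked.CompareExchange internally; like the
    // hand-rolled version, racing threads may construct extra instances that
    // are simply discarded.
    public ExpensiveResource GetResource() =>
        LazyInitializer.EnsureInitialized(ref _resource);
}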

Atomic Value Replacement with Exchange

Interlocked.Exchange atomically replaces a variable's value and returns the previous value. Unlike CompareExchange, which only swaps when the comparand matches, Exchange always performs the replacement. This is useful for implementing flags, state switches, and single-writer scenarios where you need to know the previous value.
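
A minimal sketch of the flag pattern, a run-once guard, shows the core idea before the larger example below (RunOnce is an illustrative name):

RunOnce.cs
public class RunOnce
{
    private int _hasRun; // 0 = not yet run, 1 = already run

    public bool TryRun(Action action)
    {
        // Exchange returns the previous value, so exactly one caller
        // observes 0 and gets to run the action.
        if (Interlocked.Exchange(ref _hasRun, 1) == 1)
            return false; // another thread was first
        action();
        return true;
    }
}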

CircuitBreaker.cs
public enum CircuitState { Closed = 0, Open = 1 }

public class SimpleCircuitBreaker
{
    private int _state = (int)CircuitState.Closed;
    private int _failureCount;

    public bool TryExecute(Action operation)
    {
        // Read the state atomically: exchanging Closed for Closed never
        // changes _state, but CompareExchange still returns the current value
        var currentState = (CircuitState)Interlocked.CompareExchange(
            ref _state, (int)CircuitState.Closed, (int)CircuitState.Closed);

        if (currentState == CircuitState.Open)
        {
            Console.WriteLine("Circuit is OPEN - request rejected");
            return false;
        }

        try
        {
            operation();
            Interlocked.Exchange(ref _failureCount, 0); // Reset on success
            return true;
        }
        catch (Exception ex)
        {
            var failures = Interlocked.Increment(ref _failureCount);
            Console.WriteLine($"Failure #{failures}: {ex.Message}");

            if (failures >= 3)
            {
                var previous = (CircuitState)Interlocked.Exchange(
                    ref _state, (int)CircuitState.Open);
                Console.WriteLine($"Circuit state: {previous} → Open");
            }

            return false;
        }
    }

    public void Reset()
    {
        Interlocked.Exchange(ref _state, (int)CircuitState.Closed);
        Interlocked.Exchange(ref _failureCount, 0);
        Console.WriteLine("Circuit manually reset to Closed");
    }
}

The circuit breaker tracks failures and opens after three consecutive failures, all without locks. Exchange updates the state atomically while Increment tracks failures thread-safely. This pattern is common in resilience libraries where multiple threads call the same protected operation.
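
A quick usage sketch: three failing calls trip the breaker, and the fourth is rejected without executing (the exception message here is invented for illustration):

// Usage
var breaker = new SimpleCircuitBreaker();
for (int i = 0; i < 4; i++)
{
    breaker.TryExecute(() =>
        throw new InvalidOperationException("downstream timeout"));
}
// Calls 1-3 print "Failure #1..#3", the third also prints
// "Circuit state: Closed → Open", and call 4 prints
// "Circuit is OPEN - request rejected".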

Benchmark Lock vs Interlocked

Compare the performance difference between traditional locking and Interlocked operations. You'll see how Interlocked avoids thread blocking and provides superior throughput under contention.

Steps

  1. Scaffold project: dotnet new console -n InterlockedBench
  2. Navigate: cd InterlockedBench
  3. Install the package: dotnet add package BenchmarkDotNet
  4. Replace Program.cs with the benchmark code below
  5. Run with dotnet run -c Release
InterlockedBench.csproj
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="BenchmarkDotNet" Version="0.13.*" />
  </ItemGroup>
</Project>
Program.cs
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkRunner.Run<CounterBenchmarks>();

[MemoryDiagnoser]
public class CounterBenchmarks
{
    private int _interlockedCounter;
    private int _lockedCounter;
    private readonly object _lock = new();

    [Benchmark(Baseline = true)]
    public void InterlockedIncrement()
    {
        Interlocked.Increment(ref _interlockedCounter);
    }

    [Benchmark]
    public void LockedIncrement()
    {
        lock (_lock)
        {
            _lockedCounter++;
        }
    }

    [Benchmark]
    public int InterlockedAdd()
    {
        return Interlocked.Add(ref _interlockedCounter, 5);
    }

    [Benchmark]
    public int LockedAdd()
    {
        lock (_lock)
        {
            return _lockedCounter += 5;
        }
    }
}

What You'll See

| Method               |      Mean | Allocated |
|----------------------|----------:|----------:|
| InterlockedIncrement |   ~2.5 ns |         - |
| LockedIncrement      |  ~15.0 ns |         - |
| InterlockedAdd       |   ~3.0 ns |         - |
| LockedAdd            |  ~16.0 ns |         - |

Interlocked operations are ~5-6x faster than locks

Performance at Scale

Interlocked operations excel in high-contention scenarios because they never block. When ten threads compete for the same counter, the hardware briefly serializes the atomic instructions at the cache line, but every thread keeps making progress without being descheduled. With locks, nine threads wait while one proceeds, creating a serialization bottleneck.

The performance advantage compounds as thread count increases. Measure your specific workload with BenchmarkDotNet to quantify the benefit. For simple operations like incrementing counters, swapping references, or atomically reading 64-bit values on 32-bit platforms (Interlocked.Read), Interlocked provides dramatically better throughput than locks, with no allocation overhead.

However, don't prematurely optimize. If you're incrementing a counter once per second, the lock vs Interlocked difference is irrelevant. Use Interlocked when profiling shows contention or when you know the operation will be hit by many threads frequently—like request counters in web servers or task completion tracking in parallel processing pipelines.

Common Questions

When should I use Interlocked instead of lock?

Use Interlocked for simple operations like incrementing counters or updating flags. It's faster than lock because it doesn't block threads. Use lock when you need to protect multiple operations or complex state changes that must happen atomically together.

What's the difference between Interlocked.Increment and the ++ operator?

The ++ operator performs three separate operations: read, add, write. Multiple threads can interleave these steps, causing lost updates. Interlocked.Increment performs all three as one atomic CPU instruction, guaranteeing thread safety without locks.
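
Conceptually, the compiler expands _count++ into something like this sketch (not literal JIT output):

int temp = _count;  // 1. read
temp = temp + 1;    // 2. add
_count = temp;      // 3. write back; another thread may have written between 1 and 3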

Does Interlocked.CompareExchange work with reference types?

Yes, CompareExchange has overloads for object references. It compares reference equality and swaps atomically. This enables lock-free patterns for replacing entire objects based on their current state, useful for immutable data structures.
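
For example, a lock-free append to an immutable list (a sketch; EventLog is an illustrative name, and System.Collections.Immutable also ships ImmutableInterlocked.Update, which packages this same retry loop):

using System.Collections.Immutable;

public class EventLog
{
    private ImmutableList<string> _entries = ImmutableList<string>.Empty;

    public void Append(string entry)
    {
        ImmutableList<string> current, updated;
        do
        {
            current = _entries;           // snapshot the current list
            updated = current.Add(entry); // build a new list from the snapshot
        }
        // Publish only if no other thread swapped the list in the meantime;
        // otherwise loop and rebuild from the fresh snapshot.
        while (Interlocked.CompareExchange(ref _entries, updated, current) != current);
    }
}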

Are Interlocked operations slower than regular operations?

Interlocked operations have minimal overhead compared to unsynchronized code, typically a few nanoseconds. They're dramatically faster than contended locks: even an uncontended lock pays to acquire and release the monitor, and under contention Monitor may spin and then park the thread with a kernel-level wait. For high-throughput scenarios with contention, Interlocked provides excellent performance.

Can Interlocked prevent all race conditions?

Interlocked prevents races on individual variables but doesn't protect complex multi-step operations. If your logic requires checking and updating multiple fields atomically, use lock or higher-level primitives like SemaphoreSlim or concurrent collections.
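
For instance, keeping two related fields consistent is beyond any single Interlocked call; a sketch (names are illustrative):

public class ThroughputStats
{
    private readonly object _gate = new();
    private long _totalBytes;
    private int _requestCount;

    public void Record(long bytes)
    {
        // Both fields must change together for the average to be meaningful,
        // so a lock protects the pair as one unit.
        lock (_gate)
        {
            _totalBytes += bytes;
            _requestCount++;
        }
    }

    public double AverageBytes
    {
        get
        {
            lock (_gate)
            {
                return _requestCount == 0 ? 0 : (double)_totalBytes / _requestCount;
            }
        }
    }
}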
