Understanding Concurrency Control in DBMS with Examples

January 23, 2026 7 Min read

Key Points:

  • Concurrency control manages multiple transactions running at the same time.
  • It prevents conflicts and keeps data consistent.
  • Locking and timestamp protocols are the main control methods.
  • It ensures safe, reliable, and correct database transactions.

Imagine a large database, say a bank's. Thousands of people try to access it at the same time; without coordination, their transactions could conflict, interfere with one another, and show incorrect account balances.

To resolve this, databases use concurrency control, which keeps modern applications such as banking apps, ticket booking platforms, and e-commerce websites running smoothly even when thousands of users work on the same database at the same time.

This is why database concurrency control is not just a theoretical concept in textbooks; it is a real-world necessity for every multi-user environment.

This blog explains

  • what concurrency control is,
  • why it is needed,
  • how it works,
  • key principles,
  • popular concurrency control methods in DBMS and more.

Let’s dive in.

What is Concurrency Control in DBMS?

Concurrency control in DBMS refers to the set of techniques and mechanisms that manage simultaneous execution of transactions in a database without causing data inconsistency.

When multiple users access the same data concurrently, each user’s transaction must behave as if it is executing in isolation, even though internally, many are being processed in parallel.

In simpler words, concurrency control ensures that:

  • multiple transactions can run at the same time
  • data remains accurate and consistent
  • one transaction does not negatively impact another

Concurrency in DBMS is closely linked with transaction properties, especially the ACID properties: Atomicity, Consistency, Isolation, and Durability. Among these, isolation is central to database concurrency control because it regulates how the intermediate states of one transaction are hidden from others.

Why do we need Concurrency Control in DBMS?

The need for concurrency control in DBMS arises because databases are shared systems. In real environments, thousands or even millions of transactions happen simultaneously. Without proper control, these concurrent transactions can cause severe anomalies. Concurrency problems in DBMS typically include lost updates, dirty reads, inconsistent reads, and phantom reads.

Below is a simple example for better understanding.

Example illustrating the need for concurrency control

Imagine two users accessing the same bank account balance.

Scenario | Details
--- | ---
Initial Balance | Shared bank account balance = INR 10,000
User A Transaction | Withdraws INR 2,000
User B Transaction | Deposits INR 1,000
Result Without Concurrency Control | Final balance may become INR 8,000 or INR 11,000 instead of the correct balance of INR 9,000

In this case, because both transactions read the same old value at the same time, one update may overwrite the other. This leads to incorrect balances, which is unacceptable in financial systems.
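
To see the lost update in action, here is a minimal Python sketch (an illustrative simulation using plain threads, not any particular DBMS): both transactions read the same balance, and whichever writes last silently overwrites the other update.

```python
import threading
import time

# Illustrative simulation of a lost update; not tied to any real DBMS.
balance = 10_000  # shared bank account balance (INR)

def transaction(amount):
    """Read-modify-write with no concurrency control (unsafe)."""
    global balance
    read_value = balance           # both transactions may read 10,000
    time.sleep(0.01)               # widen the race window for the demo
    balance = read_value + amount  # one write silently overwrites the other

# User A withdraws INR 2,000; User B deposits INR 1,000 at the same time
a = threading.Thread(target=transaction, args=(-2_000,))
b = threading.Thread(target=transaction, args=(1_000,))
a.start(); b.start(); a.join(); b.join()

print(balance)  # typically 8,000 or 11,000 instead of the correct 9,000
```

Wrapping the read-modify-write step in a lock, or letting the DBMS serialize the two updates, restores the correct final balance of INR 9,000.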

Problems in concurrency control

Some of the major concurrency problems in DBMS include:

  • Lost update problem where one transaction overwrites another transaction’s result
  • Dirty read problem where a transaction reads data written by another uncommitted transaction
  • Inconsistent read problem where a transaction reads different values of the same data during execution
  • Uncommitted dependency where failure of one transaction corrupts results of others

These issues clearly show why concurrency management in DBMS is necessary. Without it, databases become unreliable, error-prone, and unsafe for mission-critical applications.
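
As one more concrete illustration, the dirty read problem can be sketched in a few lines (a toy, in-memory simulation; a real DBMS with proper isolation would hide the uncommitted change):

```python
# Toy illustration of a dirty read: T2 reads T1's uncommitted write,
# then T1 rolls back, leaving T2 with a value that never officially existed.
committed_balance = 10_000
uncommitted = {}                    # T1's private, not-yet-committed changes

# T1 writes but has not committed yet
uncommitted["balance"] = committed_balance - 2_000        # 8,000

# Without isolation, T2 reads the uncommitted value (dirty read)
t2_sees = uncommitted.get("balance", committed_balance)   # 8,000

# T1 rolls back: its change is discarded
uncommitted.clear()

print(t2_sees)            # 8,000 -- based on data that was never committed
print(committed_balance)  # 10,000 -- the real balance
```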

Principles of Concurrency Control

Several fundamental principles govern concurrency control in database management systems. The main objective is to maintain consistency and isolation while allowing maximum parallelism.

The main principle is based on serializability, which states that even though transactions execute concurrently, the final result must be equivalent to some serial execution of those transactions.

Another important principle is conflict control. When two transactions attempt to access the same data item, especially for writing, their execution must be ordered or regulated so that no conflict arises. This is often managed through locking techniques in DBMS and timestamp-based concurrency control in DBMS.
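
To make the serializability and conflict ideas concrete, the sketch below (an illustrative Python helper, not a production algorithm) builds a precedence graph from a schedule of operations and reports whether the schedule is conflict-serializable, i.e. whether that graph has no cycle:

```python
# Each operation is (transaction_id, action, data_item), e.g. ("T1", "R", "X").
def is_conflict_serializable(schedule):
    # Precedence-graph edge Ti -> Tj if an operation of Ti conflicts with a
    # later operation of Tj: same item, different transactions, at least one write.
    edges = set()
    for i, (t1, a1, x1) in enumerate(schedule):
        for t2, a2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "W" in (a1, a2):
                edges.add((t1, t2))

    # Conflict-serializable iff the precedence graph is acyclic (DFS cycle check).
    nodes = {t for t, _, _ in schedule}
    visited, on_stack = set(), set()

    def has_cycle(node):
        visited.add(node)
        on_stack.add(node)
        for u, v in edges:
            if u == node and (v in on_stack or (v not in visited and has_cycle(v))):
                return True
        on_stack.discard(node)
        return False

    return not any(has_cycle(n) for n in nodes if n not in visited)

# R1(X) R2(X) W1(X) W2(X): edges T1->T2 and T2->T1 form a cycle -> not serializable
print(is_conflict_serializable([("T1", "R", "X"), ("T2", "R", "X"),
                                ("T1", "W", "X"), ("T2", "W", "X")]))  # False
# R1(X) W1(X) R2(X) W2(X): only T1->T2, no cycle -> serializable
print(is_conflict_serializable([("T1", "R", "X"), ("T1", "W", "X"),
                                ("T2", "R", "X"), ("T2", "W", "X")]))  # True
```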

Recovery and durability also play a role. If a concurrent transaction fails, the system must be able to undo or redo operations without affecting other correctly executed transactions. All these principles together ensure correctness, integrity, and reliability.

How Does Database Concurrency Control Work?

Database concurrency control works through control protocols and algorithms that schedule transactions intelligently. The aim is to allow transactions to run in parallel wherever possible, but delay or block operations where conflicts may arise.

The DBMS checks:

  • whether transactions are independent
  • whether their operations conflict
  • whether serializability can be preserved

The scheduler inside the DBMS is responsible for deciding the execution order of transactions. It ensures that concurrency in DBMS does not violate isolation or consistency.

Two major approaches are commonly used: locking-based protocols and timestamp-based protocols.

Two-Phase Locking Protocol in DBMS

One of the most widely used concurrency control methods in DBMS is based on locks. Locking techniques in DBMS prevent multiple transactions from accessing the same data item in conflicting ways. There are typically two basic types of locks: shared lock for reading and exclusive lock for writing.

The most common locking discipline is the two-phase locking protocol in DBMS (2PL). According to this protocol, each transaction must go through two phases:

  • Growing phase: the transaction acquires all the locks it needs and cannot release any lock
  • Shrinking phase: the transaction releases locks and cannot acquire any new ones

This ensures serializability by preventing cyclic dependencies. However, two-phase locking can also lead to deadlocks, where two or more transactions wait for each other indefinitely. To solve this, deadlock detection and prevention techniques* are used.

*Deadlock detection and prevention techniques either monitor transactions to identify circular waits and resolve them by aborting or rolling back one of the transactions, or they impose rules on resource allocation and transaction ordering so that circular waits never arise.
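
A minimal sketch of the two phases, assuming one lock per data item and a single process (hypothetical helper names; real systems also distinguish shared and exclusive locks and run across many sessions):

```python
import threading

class TwoPhaseTransaction:
    """Toy transaction that enforces the two phases of 2PL.

    Assumes one threading.Lock per data item (a hypothetical, simplified setup);
    a real DBMS also distinguishes shared (read) and exclusive (write) locks.
    """
    def __init__(self, locks_by_item):
        self._locks = locks_by_item
        self._held = []
        self._shrinking = False   # once True, no new locks may be acquired

    def lock(self, item):
        # Growing phase: acquire locks, blocking if another transaction holds one.
        if self._shrinking:
            raise RuntimeError("2PL violated: cannot acquire a lock in the shrinking phase")
        self._locks[item].acquire()
        self._held.append(item)

    def commit(self):
        # Shrinking phase: release everything; nothing new may be acquired after this.
        self._shrinking = True
        for item in reversed(self._held):
            self._locks[item].release()
        self._held.clear()

# Usage: a transfer that touches accounts X and Y under 2PL
locks = {"X": threading.Lock(), "Y": threading.Lock()}
txn = TwoPhaseTransaction(locks)
txn.lock("X")
txn.lock("Y")      # growing phase: all locks acquired before any is released
# ... read and write X and Y here ...
txn.commit()       # shrinking phase: all locks released together
```

If two such transactions lock X and Y in opposite orders, each ends up waiting for a lock the other holds, which is exactly the deadlock case the footnote above describes.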

Timestamp-based Protocol in DBMS

Another major technique is the timestamp-based protocol in DBMS. In this method, every transaction is assigned a unique timestamp when it starts. The timestamp defines the order of execution logically.

The DBMS then ensures that operations are executed according to timestamp order, not the physical order in which they arrive. If an older transaction tries to read or update a data item that a younger transaction has already read or written, the operation arrives too late in timestamp order, and the system may force a rollback depending on the specific protocol used.

This approach avoids deadlocks completely because no waiting cycle is formed. However, it can result in frequent rollbacks when conflicts occur, particularly in high-contention systems.

There are different variations, such as basic timestamp protocol, multiversion timestamp protocol, and Thomas’ write rule, each balancing performance and correctness differently.
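
A minimal sketch of basic timestamp ordering, assuming each data item tracks the largest read and write timestamps that have touched it (an illustrative, in-memory model only):

```python
class TimestampOrderingError(Exception):
    """Raised when an operation arrives too late; the transaction must roll back."""

class Item:
    """Toy data item for illustration; not a real DBMS structure."""
    def __init__(self, value):
        self.value = value
        self.read_ts = 0    # largest timestamp that has read this item
        self.write_ts = 0   # largest timestamp that has written this item

def read(item, ts):
    if ts < item.write_ts:                       # a younger transaction already wrote it
        raise TimestampOrderingError("reader arrives too late")
    item.read_ts = max(item.read_ts, ts)
    return item.value

def write(item, ts, value):
    if ts < item.read_ts or ts < item.write_ts:  # would invalidate a younger read/write
        raise TimestampOrderingError("writer arrives too late")
    item.write_ts = ts
    item.value = value

# T1 (timestamp 1) and T2 (timestamp 2) touch the same item X
x = Item(100)
print(read(x, 2))          # T2 reads X first, so X.read_ts becomes 2
try:
    write(x, 1, 50)        # T1's write arrives after T2's read -> rejected
except TimestampOrderingError:
    print("T1 is rolled back")
```

Under Thomas' write rule, a write that conflicts only with a newer write (and not with a newer read) would simply be ignored instead of rolled back, while the multiversion variant keeps several versions of each item so that reads are never rejected.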

Distributed Concurrency Control in DBMS

When databases are distributed across multiple locations or network nodes, concurrency control becomes even more complex. Distributed concurrency control in DBMS deals with transactions executed over multiple interconnected databases, often in cloud and large enterprise systems.

The key challenges include network latency, node failures, message delays, and synchronization issues. Algorithms such as distributed two-phase locking, distributed timestamp ordering, and quorum-based protocols are widely used. Modern distributed databases, such as NoSQL systems and NewSQL engines, also rely heavily on distributed concurrency techniques to ensure global consistency.
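
To give one concrete flavour, quorum-based protocols require read and write quorums to overlap. The sketch below (an illustrative check only, not any specific product's algorithm) verifies the classic quorum conditions for a replicated data item:

```python
def quorum_is_consistent(n, r, w):
    """Illustrative check of the classic quorum conditions for n replicas.

    r + w > n : every read quorum overlaps every write quorum, so a read
                always sees at least one copy of the latest committed write.
    2 * w > n : two conflicting writes cannot both gather a quorum at once.
    """
    return (r + w > n) and (2 * w > n)

print(quorum_is_consistent(n=5, r=3, w=3))  # True: quorums overlap
print(quorum_is_consistent(n=5, r=2, w=2))  # False: a read may miss the latest write
```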

Advantages and Disadvantages of Concurrency Control

Concurrency management in DBMS offers several benefits but also introduces certain overheads. A brief discussion helps understand its practical impact.

Advantages

  • Concurrency control prevents data inconsistency, enabling multiple users to work safely on the same database.
  • It improves system utilization and throughput by allowing parallel execution instead of forcing serial processing.
  • It protects integrity constraints and maintains ACID properties.
  • It ensures user isolation so that intermediate, uncommitted changes of one transaction are not visible to others.
  • Overall, it makes large-scale multiuser systems efficient and reliable.

Disadvantages

  • Concurrency control adds system complexity and processing overhead.
  • Locking mechanisms may cause deadlocks and blocking delays.
  • Timestamp-based approaches may cause high rollback rates.
  • Implementation and maintenance require complex algorithms and careful transaction design.
  • Performance tuning also becomes challenging, especially in distributed environments.

However, despite these disadvantages, concurrency control remains indispensable because the alternative would be incorrect databases and unreliable applications.

Conclusion

Concurrency control in DBMS is the backbone of reliable multiuser database systems. As applications scale and more users interact simultaneously, concurrency in database management systems becomes unavoidable.

Whether implemented through locking, timestamps, or hybrids, the ultimate goal of database concurrency control is the same: to allow safe parallel transaction execution while guaranteeing consistency.

Written by Mehlika Bathla
