CAP theorem for distributed systems explained

Horizontal scaling of software systems has become necessary in recent years, due to the global nature of computing and the ever-increasing performance demands on applications. It is no longer sufficient to run a single server with a single database in a single data center to handle today’s scalability needs. Distributed systems that span multiple data centers and regions have been in place at companies like Google since the late 1990s.

Unfortunately, the performance benefits that horizontal scaling provides come at a cost: complexity. Distributed systems introduce many more factors into the performance equation than existed before. Copies of data records can diverge across clients/nodes in different locations, single points of failure destroy system uptime, and intermittent network issues crop up at the worst possible time.

In the late 1990’s the CAP theorem was introduced by Eric Brewer to justify the need to explore a wider design space than the traditional RDBMS systems that were focused solely on ACID properties. The original definition of the CAP theorem, when it was first formalized in 1999, was:

Any networked distributed system can have at most two of three desirable properties:

  • Consistency (C): equivalent to having a single up-to-date copy of the data
  • High Availability (A): of that data (for updates), and
  • Tolerance to network partitions (P).

Simply put, the CAP theorem demonstrates that a distributed system cannot guarantee C, A, and P simultaneously; rather, trade-offs must be made at a point in time to achieve the level of performance and availability required for a specific task. For example, if consistency and availability (CA) are important, you can’t partition the data. If consistency is not that important, then you can partition the data for high availability (AP). If you have to partition the data and consistency is important (CP), then availability suffers. In the past decade, a vast range of new systems have emerged, as well as much debate on the relative merits of consistency and availability. The new breed of NoSQL solutions exposes these trade-offs so you can choose your tolerance for the properties you need.
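To make the trade-off concrete, here is a minimal Python sketch (illustrative only, not any particular database’s API) of how the same write is handled under a CP policy versus an AP policy while one replica is unreachable:

    class Node:
        def __init__(self, name):
            self.name = name
            self.data = {}

    def write_cp(all_nodes, reachable, key, value):
        """CP-style write: refuse the update unless every replica can be updated."""
        if len(reachable) < len(all_nodes):
            raise RuntimeError("partition detected: write rejected to stay consistent")
        for node in reachable:
            node.data[key] = value

    def write_ap(reachable, key, value):
        """AP-style write: accept the update on whatever replicas are reachable;
        the unreachable ones are reconciled after the partition heals."""
        for node in reachable:
            node.data[key] = value

    nodes = [Node("a"), Node("b"), Node("c")]
    reachable = nodes[:2]                      # node "c" is cut off by a partition

    write_ap(reachable, "cart", ["book"])      # succeeds, but "c" is now stale
    try:
        write_cp(nodes, reachable, "cart", ["book", "pen"])
    except RuntimeError as err:
        print(err)                             # CP gives up availability instead

Neither choice is wrong; they simply favor different properties while the partition lasts.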

[Figure: the CAP triangle showing Consistency, Availability and Partition tolerance (scalability-cap-theorem1.png)]

Note: no distributed system can simply choose to live without partition tolerance; if it never has to tolerate a partition, it is not really distributed. That is why putting SQL (a single-node, CA system) in this triangle may lead to confusion.

The 2 of 3 formulation was always misleading because it tended to oversimplify the tensions among properties.  CAP prohibits only a tiny part of the design space: perfect availability and consistency in the presence of partitions, which are rare.

Why the “2 of 3” formulation is misleading

The three attributes (consistency, availability and partition tolerance) are not binary. In practical large-scale distributed systems spanning data centers, all of them are in fact continuous variables. First, partitions are rare, so there is little reason to forfeit C or A when the system is not partitioned. Second, the choice between C and A can occur many times within the same system and at very fine granularity; not only can subsystems make different choices, but the choice can change according to the operation or even the specific data or user involved.
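As a sketch of that fine granularity, the snippet below lets each operation pick its own point on the C/A spectrum. The consistency levels and the read_balance/read_feed helpers are made-up names for illustration, not a real client library:

    # Illustrative consistency levels: wait for a majority of replicas (strong,
    # less available) versus answer from any single replica (available, maybe stale).
    STRONG = "quorum"
    EVENTUAL = "one"

    def replicated_read(key, consistency):
        # Placeholder for a read against a replicated store at the given level.
        print(f"reading {key!r} at consistency level {consistency!r}")

    def read_balance(account_id):
        # An account balance must never look stale: this operation favours C.
        return replicated_read(f"balance:{account_id}", consistency=STRONG)

    def read_feed(user_id):
        # A slightly stale activity feed is harmless: this operation favours A.
        return replicated_read(f"feed:{user_id}", consistency=EVENTUAL)

The point is simply that C versus A is decided per call, not once for the whole system.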

Because partitions are rare, CAP should allow perfect C and A most of the time; but when partitions are present or perceived, a strategy that detects partitions and explicitly accounts for them is in order. This strategy has three steps: detect the partition, enter an explicit partition mode that may limit some operations, and initiate a recovery process to restore consistency and compensate for mistakes made during the partition.
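Below is a minimal sketch of that three-step strategy, assuming a heartbeat timeout for detection and an operation log for recovery; the details are illustrative rather than a prescribed design:

    import time

    class PartitionAwareNode:
        HEARTBEAT_TIMEOUT = 2.0          # seconds without a peer heartbeat

        def __init__(self):
            self.mode = "normal"
            self.last_heartbeat = time.monotonic()
            self.partition_log = []      # operations accepted while partitioned

        def on_heartbeat(self):
            self.last_heartbeat = time.monotonic()
            if self.mode == "partitioned":
                self.recover()

        def detect_partition(self):
            # Step 1: detect the partition (here, a missed heartbeat).
            if time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT:
                self.mode = "partitioned"

        def execute(self, operation):
            self.detect_partition()
            if self.mode == "partitioned":
                # Step 2: explicit partition mode; limit risky operations and
                # record the rest so they can be reconciled later.
                if operation.get("risky"):
                    raise RuntimeError("operation unavailable during partition")
                self.partition_log.append(operation)
            # ...apply the operation locally...

        def recover(self):
            # Step 3: partition recovery; merge the logged operations with the
            # other side's history and compensate for any mistakes, then resume.
            for operation in self.partition_log:
                pass                     # merge/compensation logic would go here
            self.partition_log.clear()
            self.mode = "normal"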

It’s really just A vs C

  1. Availability is achieved by replicating the data across different machines
  2. Consistency is achieved by updating several nodes before allowing further reads
  3. Total partitioning, meaning failure of part of the system, is rare. However, we can treat the delay (the latency) of propagating an update between nodes as a temporary partition, and it then forces a temporary decision between A and C:
    1. On systems that allow reads before all the nodes have been updated, we get high Availability
    2. On systems that lock all the nodes before allowing reads, we get Consistency
And since this decision is temporary, existing only for the duration of the delay, what we are really contrasting is Availability against Consistency, as the sketch below illustrates.
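Here is a minimal sketch of that temporary choice, assuming a store whose write reaches one replica immediately and the others only after a propagation delay; the class and method names are made up for illustration:

    class ReplicatedValue:
        def __init__(self, n=3):
            self.replicas = [(0, None)] * n        # (version, value) per replica

        def write(self, value):
            """The write reaches one replica immediately; the rest lag behind."""
            version = max(v for v, _ in self.replicas) + 1
            self.replicas[0] = (version, value)
            return version

        def propagate(self):
            """Replication catches up: the delay (temporary partition) is over."""
            newest = max(self.replicas)
            self.replicas = [newest] * len(self.replicas)

        def read_available(self, i):
            """A-style read: answer from replica i right away, possibly stale."""
            return self.replicas[i][1]

        def read_consistent(self):
            """C-style read: refuse to answer until all replicas agree."""
            if len({v for v, _ in self.replicas}) > 1:
                raise RuntimeError("replicas diverge: read blocked until propagation")
            return self.replicas[0][1]

    store = ReplicatedValue()
    store.write("new")
    print(store.read_available(2))    # None: stale, but immediately available (A)
    try:
        store.read_consistent()       # raises: consistency chosen over availability (C)
    except RuntimeError as err:
        print(err)
    store.propagate()
    print(store.read_consistent())    # "new": once the delay passes, C and A coexist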
References:

http://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed

https://dzone.com/articles/better-explaining-cap-theorem

 
