C in CAP != C in ACID

Alex Feinberg explains it again:

Just to expand on this, the “C” in CAP corresponds (roughly) to the “A” and “I” in ACID. Atomicity across multiple nodes requires consensus. According to the FLP impossibility result (CAP is a very elegant and intuitive re-statement of FLP), consensus is impossible in a network that may drop or arbitrarily delay packets. The serializable isolation level requires that operations be totally ordered; total ordering across multiple nodes requires solving the “atomic multicast” problem, which is a special case of the general consensus problem.
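
The total-ordering point is easy to see with two non-commuting operations: if replicas are free to apply them in different orders, their states diverge, which is exactly what a serializable multi-node system must prevent, and why the nodes have to agree on a single order. A toy illustration (all names invented for the example):

```python
# Why serializability needs a total order: two replicas applying the same
# pair of non-commuting operations in different orders end up in different
# states. Everything here is made up purely for illustration.

def apply_ops(initial, ops):
    value = initial
    for op in ops:
        value = op(value)
    return value

deposit_10 = lambda balance: balance + 10   # commutes with other deposits...
double_it  = lambda balance: balance * 2    # ...but not with this one

replica_a = apply_ops(100, [deposit_10, double_it])  # (100 + 10) * 2 = 220
replica_b = apply_ops(100, [double_it, deposit_10])  # (100 * 2) + 10 = 210

# Without an agreed-upon total order, the replicas silently diverge.
print(replica_a, replica_b)  # 220 210
```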

In practice, you can achieve consensus across multiple nodes with a reasonable amount of fault tolerance if you are willing to accept high (as in, hundreds of milliseconds) latency bounds. That’s a loss of availability that’s not acceptable to many applications.
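
For a feel of where those latency bounds come from: a write that must be acknowledged by a majority of replicas cannot complete before the ack from the slowest member of the fastest majority arrives, so a geo-distributed quorum pays a WAN round trip on every commit. A minimal sketch, with replica names and round-trip times that are pure assumptions:

```python
# Hypothetical sketch: commit latency of a majority-quorum write.
# The replica set and RTTs below are invented; the point is only that the
# commit waits for the k-th fastest acknowledgement, where k is a majority.

REPLICA_RTT_MS = {
    "us-east": 2,
    "us-west": 70,
    "eu-west": 90,
    "ap-south": 140,
    "ap-east": 160,
}

def quorum_commit_latency(rtt_ms):
    """Latency of a write that must be acknowledged by a majority of replicas."""
    acks_needed = len(rtt_ms) // 2 + 1
    # The write commits as soon as the acks_needed-th fastest ack arrives.
    return sorted(rtt_ms.values())[acks_needed - 1]

if __name__ == "__main__":
    print(f"majority quorum commits in ~{quorum_commit_latency(REPLICA_RTT_MS)} ms")
    # With the assumed RTTs above, the commit waits for the third-fastest
    # replica (~90 ms): within the "hundreds of milliseconds" range quoted
    # above, and far above a single-node, in-memory write.
```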

This means that you can’t build a low-latency multi-master system that achieves the “A” and “I” guarantees. Thus, distributed systems that want a stronger form of consistency typically choose master-slave designs (with “floating masters” for fault tolerance); Megastore from Google is a notable exception, at the cost of roughly 140ms write latency. In these systems, availability is lost for a short period when the master fails. BigTable (or HBase) is an example of this (grand simplification follows): when the tablet server (RegionServer in HBase) responsible for a specific token range fails, availability is lost until other nodes take over the “master-less” range.
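
A hedged sketch of the “floating master” pattern described above: writes for a range are accepted only by the current lease holder, and when that node dies the range is unavailable (not inconsistent) until another node claims the lease. The names, the lease TTL, and the trivial “election” below are assumptions for illustration, not HBase’s or BigTable’s actual mechanism:

```python
from dataclasses import dataclass, field
from typing import Optional

LEASE_TTL = 3.0  # seconds a master lease stays valid without renewal (assumed)

@dataclass
class Cluster:
    nodes: list
    master: Optional[str] = None
    lease_expiry: float = 0.0
    data: dict = field(default_factory=dict)

    def elect_if_needed(self, now):
        # Grossly simplified "election": the first live node claims the lease.
        if self.master is None or now > self.lease_expiry:
            self.master = self.nodes[0] if self.nodes else None
            self.lease_expiry = now + LEASE_TTL

    def write(self, key, value, now):
        # No valid lease holder means the range is unavailable, not inconsistent.
        if self.master is None or now > self.lease_expiry:
            return False
        self.data[key] = value
        return True

if __name__ == "__main__":
    cluster = Cluster(nodes=["node-a", "node-b"])
    cluster.elect_if_needed(now=0.0)
    print(cluster.write("row1", "v1", now=1.0))  # True: node-a holds the lease
    cluster.nodes.remove("node-a")               # the master crashes
    print(cluster.write("row1", "v2", now=5.0))  # False: no master, briefly unavailable
    cluster.elect_if_needed(now=5.0)             # failover: node-b takes over
    print(cluster.write("row1", "v2", now=5.5))  # True again after the takeover
```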

These are not binary “on/off” switches: see Yahoo’s PNUTS for a great “middle of the road” system. The paper has an intuitive example explaining the various consistency models.
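
The “middle of the road” in PNUTS is per-record timeline consistency: each record has a single master replica that orders its writes, other replicas lag behind, and readers choose how stale a version they can tolerate (the paper describes calls along the lines of read-any, read-critical, and read-latest). The sketch below models that idea; the class and method names are assumptions, not Sherpa’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    versions: list = field(default_factory=list)  # append-only timeline of (version, value)

    def write(self, value):
        version = len(self.versions) + 1
        self.versions.append((version, value))
        return version

    def read_latest(self):
        """Strongest read: the most recent committed version (may require the remote master)."""
        return self.versions[-1]

    def read_any(self, replica_lag=1):
        """Weakest read: whatever a possibly stale replica has (modeled as `replica_lag` behind)."""
        index = max(0, len(self.versions) - 1 - replica_lag)
        return self.versions[index]

    def read_critical(self, min_version):
        """Middle ground: any version at least as new as one the caller has already seen."""
        for version, value in self.versions:
            if version >= min_version:
                return (version, value)
        return self.versions[-1]

if __name__ == "__main__":
    rec = Record()
    v1 = rec.write("draft")
    v2 = rec.write("published")
    print(rec.read_any())         # may return the stale ("draft") version
    print(rec.read_critical(v1))  # never older than what the caller already saw
    print(rec.read_latest())      # always ("published")
```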

Note: in a partitioned system, the scope of consistency guarantees (that is, any consistency guarantees, eventual or not) is typically limited to, at best, a single partition of a “table group”/“entity group” (in Microsoft Azure Cloud SQL Server and Google Megastore, respectively), a single partition of a table (as in typical sharded MySQL setups), a single row in a table (BigTable), or a single document in a document-oriented store. Atomic and isolated cross-row transactions are impractical on commodity hardware (and are limited even in systems that mandate InfiniBand interconnects and high-performance SSDs).
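
To make the scope limitation concrete: a check-and-mutate confined to one row can be made atomic with nothing more than a per-row lock on the node that owns the row, whereas an update spanning two rows (potentially on two partitions) would need coordination across nodes or a distributed transaction protocol. A toy, single-process sketch with invented names:

```python
import threading

class RowStore:
    """Illustrative only: each row gets its own lock, so single-row operations
    are trivially atomic, while cross-row updates would need to coordinate
    multiple locks (and, in a partitioned store, multiple machines)."""

    def __init__(self):
        self._rows = {}    # row_key -> dict of columns
        self._locks = {}   # row_key -> per-row lock
        self._table_lock = threading.Lock()

    def _lock_for(self, row_key):
        with self._table_lock:
            return self._locks.setdefault(row_key, threading.Lock())

    def check_and_put(self, row_key, column, expected, new_value):
        """Atomic within a single row: compare one column, then update it."""
        with self._lock_for(row_key):
            row = self._rows.setdefault(row_key, {})
            if row.get(column) != expected:
                return False
            row[column] = new_value
            return True

if __name__ == "__main__":
    store = RowStore()
    store.check_and_put("user:1", "balance", None, 100)
    ok = store.check_and_put("user:1", "balance", 100, 40)  # single-row: safe
    print(ok, store._rows["user:1"])
    # Moving money from user:1 to user:2 would require holding both row locks
    # (or a cross-partition transaction), which is exactly the guarantee the
    # systems discussed above decline to offer on commodity hardware.
```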

Alex and Sergio Bossa have previously had an interesting conversation on the topic of consistency from the ACID and CAP perspectives.

Original title and link: C in CAP != C in ACID (NoSQL databases © myNoSQL)