Distributed Web Services


Introduction:

In designing distributed web services, it's desirable to achieve three properties:

Consistency
Availability
Partition-tolerance



fig.1: the trade-off between Consistency, Availability, and Partition-tolerance.

In a highly distributed network, it's crucial to achieve a certain amount of fault tolerance: if some nodes crash for some reason (network failure, hardware failure, etc.), the system should still perform as expected.

The CAP theorem states that it is impossible to provide all three of these properties at once; a trade-off must be made, as illustrated in fig.1.

Asynchronous Network Model:

The first result is that, in the asynchronous network model, it is impossible to implement a read/write object that guarantees both availability and atomic consistency.


Execution with lost messages:

Assume there is an algorithm that guarantees all three criteria, running on a network divided into two disjoint sets of nodes {N1, N2}, each holding a copy of an atomic object whose initial value is v0.

Consider an execution in which a write of a new value v1 is committed on N1, but, due to lost messages (a communication failure) between N1 and N2, the object is never updated on N2. A subsequent read then has two possible outcomes: if it executes on N1 it returns v1, while if it executes on N2 it returns v0. Consistency is violated, which contradicts the assumption, so no such algorithm can exist in this network environment.
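As a rough illustration (a minimal sketch, not from the original argument), here is a Python simulation of two replicas where the replication message between them is lost, so reads disagree depending on which node answers:

```python
# Minimal sketch (illustrative only): two in-memory replicas of one object.
# A write commits on N1, but the replication message to N2 is "lost",
# so subsequent reads disagree depending on which node serves them.

class Node:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def write(self, value):
        self.value = value

    def read(self):
        return self.value


n1, n2 = Node("N1", "v0"), Node("N2", "v0")

n1.write("v1")                 # write committed on N1
replication_delivered = False  # simulate the lost message
if replication_delivered:
    n2.write("v1")

print(n1.read())  # v1
print(n2.read())  # v0  -> atomic consistency is violated
```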

Execution with guaranteed message delivery:

Now assume there is an algorithm that guarantees the three criteria on a network of N nodes, and that, even when messages between the nodes are lost or delayed, every operation terminates and the system always becomes atomically consistent eventually.

Consider an execution in which a value is updated by a write operation on one node. The algorithm will eventually make all copies consistent, but there is a finite point in time at which a read operation executed on another node returns a non-atomic (stale) response. The execution is therefore not atomic, so no such algorithm exists.


Approaches in the Asynchronous Network:

The design has to guarantee only two of the three properties:

Atomic and partition tolerance:

If availability is not required, it is acceptable to ignore some requests.
A simple centralized database algorithm meets these requirements:

A single node is dedicated to storing the values. When any other node receives a request, it forwards it to the dedicated node; the dedicated node handles it and sends back an acknowledgment, and once the forwarding node receives that acknowledgment it responds to the client.
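A minimal sketch of this idea, with hypothetical class names: every request is forwarded to a single dedicated node, and requests are simply ignored while that node is unreachable, sacrificing availability but preserving atomicity under partitions:

```python
# Minimal sketch (illustrative only, assumed names): all requests are forwarded
# to one dedicated node; during a partition the request is ignored, so the
# system stays atomic and partition-tolerant but not available.

class DedicatedNode:
    def __init__(self):
        self.store = {}

    def handle(self, op, key, value=None):
        if op == "write":
            self.store[key] = value
        return self.store.get(key)


class ForwardingNode:
    def __init__(self, dedicated, connected=True):
        self.dedicated = dedicated
        self.connected = connected  # False simulates a partition

    def request(self, op, key, value=None):
        if not self.connected:
            return None  # request ignored: not available, but never inconsistent
        return self.dedicated.handle(op, key, value)


central = DedicatedNode()
node = ForwardingNode(central)
node.request("write", "x", 1)
print(node.request("read", "x"))  # 1
```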

Atomic, Available:

If partition tolerance is not required, it is possible to provide atomic, available data; the same kind of centralized algorithm can guarantee these requirements.

Availability with partition:

If atomicity is not required, the nodes can respond to every request, even if that means returning inconsistent values.

Partially Synchronous Network Model:

We will assume a partially synchronous system in which every node has a clock and all clocks increase at the same rate; however, the clocks are not synchronized with each other. Each clock therefore behaves as a local timer that indicates how much time has passed and can be used to decide when periodic functions should execute.
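For illustration, a minimal sketch of such a local timer (the names and interval are assumptions, not from the original text):

```python
# Minimal sketch (illustrative only): each node owns a local timer that is not
# synchronized with other nodes, but can still measure elapsed time and
# trigger periodic work after a chosen interval.

import time

GOSSIP_INTERVAL = 5.0  # assumed period for some periodic task

class LocalTimer:
    def __init__(self):
        self.start = time.monotonic()  # local clock, unrelated to other nodes

    def elapsed(self):
        return time.monotonic() - self.start

    def reset(self):
        self.start = time.monotonic()

def maybe_run_periodic_task(timer):
    # run the periodic function once the local timer says enough time has passed
    if timer.elapsed() >= GOSSIP_INTERVAL:
        timer.reset()
        return True
    return False
```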

However, it is still impossible to fulfill the three requirements when messages can be lost.

The execution scenario follows the same methodology as in the asynchronous case and will still return inconsistent results.

Beating the CAP theorem:

Problem Evaluation:

The simplest way to answer a query is to run a function across the whole dataset. The problem is that the running time of such a function will not meet latency constraints; it is not reasonable to wait seconds for a response. Even if we assume there is a function that meets the latency requirement, the trade-off between consistency and availability still exists, but the complexity imposed by the CAP theorem can be avoided by using immutable data and computing queries from the raw data.

If consistency is chosen, the system will not always be available, and it won't be possible to read or write for some periods of time.

If availability is chosen, the system is eventually consistent, but without the usual complexity of eventual consistency. The key is that the data is immutable: there are no updates, because each piece of data is treated as a fact that was true at some point in time. This avoids a lot of complexity, such as read-repair in eventually consistent systems, and by computing queries from scratch over the raw data, the complexity caused by the CAP theorem is beaten. The question is how to achieve this in practice, and this is where batch processing comes in.
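A minimal sketch of this idea, assuming a simple in-memory list as the append-only dataset: data is stored as timestamped facts, nothing is updated in place, and a query is just a function recomputed over all the raw data:

```python
# Minimal sketch (illustrative only): data is stored as immutable, timestamped
# facts; nothing is ever updated in place, and a query is a function
# recomputed over all of the raw data.

import time

facts = []  # append-only master dataset

def record_fact(entity, attribute, value):
    facts.append({"entity": entity, "attribute": attribute,
                  "value": value, "timestamp": time.time()})

def current_value(entity, attribute):
    """Query = function(all data): latest fact wins, nothing is overwritten."""
    matching = [f for f in facts
                if f["entity"] == entity and f["attribute"] == attribute]
    return max(matching, key=lambda f: f["timestamp"])["value"] if matching else None

record_fact("user:1", "location", "Cairo")
record_fact("user:1", "location", "Alexandria")  # a new fact, not an update
print(current_value("user:1", "location"))       # Alexandria
```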


Batch computations:

For simplicity, let's assume it is okay for query results to be a few hours out of date. Since we compute queries from scratch and want them to be served fast, it is reasonable to precompute the results; whenever there is new data, we simply recompute everything.

Implementation-wise:

To build such a system, we need to store a large, constantly growing dataset and to run functions over that dataset in a scalable way.

For that we have Hadoop. It is often said that Hadoop is only good for unstructured data, but this is not true when it is combined with serialization frameworks such as Thrift or Protocol Buffers.

Hadoop consists of two essential parts: a distributed file system (HDFS) and a batch processing framework (MapReduce).

Immutable data is stored in flat files in HDFS as a sequence of records; the only supported operation is appending new records, which matches the needs of a large, constantly growing dataset.

Precomputation is handled by MapReduce. The only remaining piece is to store the results in a way that can be accessed quickly. ElephantDB is a good fit here, as it is specialized in serving key/value data: it supports random reads but not random writes, which keeps its complexity low.
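As a rough illustration (plain Python standing in for the real Hadoop/MapReduce APIs), here is a batch recomputation in the MapReduce style over the append-only dataset, producing a key/value view that could then be exported to a read-only store such as ElephantDB:

```python
# Minimal sketch (illustrative only, not real Hadoop APIs): a batch
# recomputation in the MapReduce style over the append-only master dataset.
# The output is a precomputed key/value view (e.g. pageviews per URL).

from collections import defaultdict

master_dataset = [  # immutable records, appended over time
    {"url": "/home", "user": "a"},
    {"url": "/home", "user": "b"},
    {"url": "/about", "user": "a"},
]

def map_phase(record):
    # emit (key, value) pairs for each input record
    yield record["url"], 1

def reduce_phase(key, values):
    # combine all values emitted for the same key
    return key, sum(values)

grouped = defaultdict(list)
for record in master_dataset:
    for key, value in map_phase(record):
        grouped[key].append(value)

batch_view = dict(reduce_phase(k, vs) for k, vs in grouped.items())
print(batch_view)  # {'/home': 2, '/about': 1}
```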

Fault Tolerance:

CAP-theorem-wise, the system meets the requirements: it is eventually consistent, since writes take a few hours to be added and computed into the views.

Fault-tolerance-wise, faults in the system can be caused by one of two things: a buggy function or faulty data. For a buggy function, all that is needed is to fix the bug, push the new version of the function, and recompute everything from scratch. For faulty data, it is possible to delete the wrong records and recompute everything; because the data is immutable, bad data never overrides good data.


Expanding the system with a real-time layer:

The only thing left is to process the data from the few hours that the batch layer is behind. Time-wise, processing a few hours of data is much easier, so it is possible to run a real-time view in parallel with the batch view and merge the two results to get the final outcome.

In the real-time layer, read/write databases such as Cassandra are used, and the layer relies on incremental algorithms to update the state in those databases.

The real-time analog of batch computation is Storm: it runs infinite computations over streams of data and gives strong guarantees that the data will be processed.
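A minimal sketch of how the two layers fit together (plain Python, not the actual Storm or Cassandra APIs): the real-time view is updated incrementally per stream event, and a query merges it with the precomputed batch view:

```python
# Minimal sketch (illustrative only): the real-time layer incrementally
# updates a mutable view for recent data, and a query merges the precomputed
# batch view with that real-time view.

batch_view = {"/home": 2, "/about": 1}   # recomputed every few hours
realtime_view = {}                        # covers only the most recent hours

def process_stream_event(record):
    # incremental update, in the spirit of a stream topology writing to a database
    realtime_view[record["url"]] = realtime_view.get(record["url"], 0) + 1

def pageviews(url):
    # final result = batch view + real-time view
    return batch_view.get(url, 0) + realtime_view.get(url, 0)

process_stream_event({"url": "/home", "user": "c"})
print(pageviews("/home"))  # 3
```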


Final System Analysis:

It may seem that the system still suffers from the complexities of real-time and incremental algorithms, but the real-time layer is only responsible for a few hours of data. Everything computed by the real-time layer is eventually overridden by the batch view, since after those few hours the same data is processed by the batch layer and becomes its responsibility.

If mistakes happen, they do not permanently corrupt the data, so isolating the whole complexity in the real-time layer makes a huge difference and increases the system's robustness.

The real-time layer does not affect human fault tolerance either, since the core of the system is still an append-only store of immutable data, so any mistake can be recovered from.

Garbage Collection:

The system is built upon the idea of immutable data, but what happens if the data grows so quickly that it becomes impractical to store all of that replicated data?

This can be solved with a garbage collection function that runs over the master dataset and returns a filtered version of it. The function can be customized to get rid of low-value data or to filter by any desired criteria, as in the sketch below.
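A minimal sketch, assuming a retention-by-age policy as the filtering criterion (the policy and names are illustrative, not from the original text):

```python
# Minimal sketch (illustrative only): "garbage collection" over the immutable
# master dataset is just a batch job that reads every fact and writes out a
# filtered copy, according to whatever retention policy is desired.

import time

ONE_YEAR = 365 * 24 * 3600  # assumed retention policy: keep one year of facts

def garbage_collect(master_dataset, now=None):
    now = now or time.time()
    # keep only recent (or otherwise valuable) facts; the predicate is customizable
    return [fact for fact in master_dataset
            if now - fact["timestamp"] < ONE_YEAR]

# usage: the filtered copy becomes the new master dataset
# master_dataset = garbage_collect(master_dataset)
```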



Youssef Akl
Apr 26, 2022
