🧩Web 3.0 Infrastructure
With the digitization of our everyday lives, more and more data is being produced. IoT applications play the central role in this flood of data, with use cases ranging from smart homes, industrial IoT, healthcare, and smart cities to agriculture, energy management, and autonomous driving. Mobile networks are also evolving towards 5G and beyond, generating huge amounts of data near the antenna sites. These applications have significantly changed our lives, reducing air pollution, improving traffic flows, and saving valuable resources such as electricity.
According to IDC, the Global Datasphere will grow to 175 Zettabytes by 2025, while more than 40% of the world's population is expected to use 5G and beyond (6G) services.
Given finite computing and networking resources, the current application execution model clearly cannot meet the stringent requirements of these new emerging applications. Until now, applications have followed a monolithic approach: monolithic applications are large, self-contained software systems deployed as a single unit. They are built on a single technology stack and are therefore typically designed to run in the cloud.
Cloud resources, which are assumed to be of infinite capacity, are concentrated in a limited number of locations around the world, interconnected through high-capacity optical links (fig. 1). Data must be transferred to these locations for processing, and the long distances it has to travel add propagation latency to the overall execution delay.
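As a rough illustration of that propagation cost, here is a minimal sketch that estimates one-way fibre delay from distance alone, assuming a signal speed of roughly 200,000 km/s in optical fibre; the 100 km "edge" and 5,000 km "cloud" distances are hypothetical example values, not measurements.

```python
# Rough, illustrative estimate of one-way propagation delay over optical fibre.
# Assumes ~200,000 km/s signal speed in fibre (about 2/3 of c); distances are
# hypothetical examples, and real-world latency also includes routing, queuing,
# and processing delays that this sketch ignores.

FIBRE_SPEED_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given fibre distance."""
    return distance_km / FIBRE_SPEED_KM_PER_S * 1000

for label, distance_km in [("nearby edge site", 100), ("distant cloud region", 5_000)]:
    print(f"{label:>20}: {distance_km:>5} km -> {propagation_delay_ms(distance_km):.2f} ms one-way")
```

Even this simplified estimate shows an order-of-magnitude gap (about 0.5 ms versus 25 ms one-way), before any routing or queuing delays are counted.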
The cloud revolutionized how businesses and individuals provision and use computing resources. Cloud computing uses remote servers to store and process data. Companies host and maintain massive data centers and HPC clusters that provide the security, storage capacity, and computing power to support the cloud infrastructure. In the cloud computing model, computing and networking resources are allocated dynamically to meet users' application demands, which fluctuate over time. However, while this works well for current application models, it fails to efficiently address the stringent requirements of next-generation applications.
The traditional cloud model cannot fulfil the requirements of the new emerging applications.
Moreover, it has been shown that only a fraction of the data transferred to the cloud is actually needed there. Given the forecasts above, if all the data generated at the network's periphery had to be transferred to the cloud, networking resources would become the bottleneck, limiting the capabilities of cloud computing.
For this reason, the hierarchical computing infrastructure model has emerged, with computing and storage resources placed across the different networking domains (fig. 2).
Edge computing has emerged as a new paradigm for processing and analyzing data at the edge, the network's periphery. With its advent, the application execution model is transforming into a distributed one. Processing data at the edge brings several benefits: it reduces latency, required bandwidth, deployment and equipment costs, power consumption, and memory footprint, while increasing security and data protection. Mini data centers, CPUs, and GPUs are the main building blocks of the edge, with ASICs and FPGAs also present.
Edge computing is particularly relevant for the growing number of connected devices and applications that require real-time data processing and analysis, such as IoT devices, autonomous vehicles, and augmented/virtual reality applications. By processing and analyzing data closer to its source, edge computing enables these devices and applications to operate with lower latency and greater efficiency, as the sketch below illustrates.
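To make the bandwidth argument concrete, here is a minimal sketch of edge-side pre-processing, assuming a hypothetical setup in which an edge node aggregates raw sensor readings into per-window summaries and forwards only those summaries to the cloud; the window size and the upload function are illustrative placeholders, not part of any specific edge framework.

```python
# Minimal sketch of edge-side pre-processing (illustrative, not a specific framework).
# An edge node aggregates raw sensor readings into compact per-window summaries,
# so only the summaries travel upstream to the cloud instead of every raw sample.

from statistics import mean

WINDOW_SIZE = 60  # hypothetical: one summary per 60 raw readings

def summarize(window: list[float]) -> dict:
    """Collapse a window of raw readings into a small summary record."""
    return {"count": len(window), "min": min(window), "max": max(window), "avg": mean(window)}

def send_to_cloud(summary: dict) -> None:
    """Placeholder for the actual upload (e.g. MQTT or HTTP); here it just prints."""
    print("uploading summary:", summary)

def edge_loop(readings: list[float]) -> None:
    """Process raw readings locally and forward only per-window summaries."""
    window: list[float] = []
    for value in readings:
        window.append(value)
        if len(window) == WINDOW_SIZE:
            send_to_cloud(summarize(window))
            window.clear()

if __name__ == "__main__":
    # Simulated raw sensor stream: 300 readings collapse into just 5 uploads.
    edge_loop([20.0 + (i % 7) * 0.1 for i in range(300)])
```

In this toy scenario, 300 raw readings are reduced to 5 summary records before anything leaves the edge, which is the kind of data reduction that keeps the network from becoming the bottleneck described above.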
At Callisto Network, we are working to take advantage of the vast possibilities that Web 3.0 will offer. Enriching the network with new capabilities will enable the permissionless, trusted execution of new emerging applications. To that end, we will release a series of articles on how we plan to position the network in this direction.