
Basics of Distributed Systems


Utilizing massively distributed computing facilities to rent out computing services is not a new concept. It dates back to the early 1950s, when mainframe computers first appeared. Since then, the technology has steadily advanced, and this evolution created the favourable conditions under which cloud computing could be realized. Along the way, the concept of utility computing emerged, in which computing resources are provided on demand, much like a utility such as electricity.

As we trace this history, we briefly review five key technologies that were crucial to the development of cloud computing: distributed systems, virtualization, Web 2.0, service-oriented computing, and utility computing.



Mainframe Computing


Mainframes were large, powerful machines used primarily by big organizations and government agencies for complex calculations and data processing. Over time, they became more accessible and were adopted across a wide range of industries.

They played a crucial role in data processing, transaction processing, and supporting applications in fields such as finance, manufacturing, and research. The rise of client-server computing and the emergence of more affordable computing options led to a decline in the prominence of mainframes. However, they remained essential for critical enterprise applications that required high reliability and processing power.

Mainframes continue to be used by large enterprises for their unmatched reliability, security, and processing capabilities. They have evolved to integrate modern technologies, allowing them to coexist with distributed systems and cloud computing environments.




Cluster Computing


Cluster computing involves connecting multiple computers to work together as a single system. It's used to enhance processing power and is particularly important for high-performance computing. 

In the 1980s, cluster computing began to gain traction in research and academic settings, where clusters of computers were used to solve complex scientific and engineering problems. These clusters were often built from off-the-shelf hardware.

With the advent of Linux and open-source software, building and managing clusters became more accessible. High-performance computing (HPC) clusters were widely adopted in fields like scientific research, simulations, and financial modeling. 

Later, cluster computing evolved to support broader applications beyond research. It found use in industries such as data analytics, machine learning, and even web services. Cloud providers also offered cluster-like services, enabling users to build and manage clusters in the cloud.
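
To make the idea concrete, here is a minimal sketch of the scatter-gather pattern that clusters rely on: a large job is split into independent chunks, the chunks are handed to several workers, and the partial results are gathered back. It is only an illustration, not real cluster software; it uses Python's standard concurrent.futures module, and worker processes on a single machine stand in for the separate nodes of an actual cluster. The helper names partial_sum and split are made up for this example.

# Toy scatter-gather sketch: local processes stand in for cluster nodes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Work done by one 'node': sum the squares of its chunk."""
    return sum(x * x for x in chunk)

def split(data, parts):
    """Divide the input into roughly equal chunks, one per worker."""
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, parts=4)

    # Scatter the chunks to worker processes, then gather the partial results.
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = pool.map(partial_sum, chunks)

    print(sum(partials))  # same answer as computing it on one machine

The same pattern scales up when the workers live on different machines and communicate over a network, which is exactly what cluster schedulers and frameworks such as MPI automate.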


Grid Computing


Grid computing focuses on creating a distributed network of resources to solve computational problems that require substantial processing power or storage capacity. 

In the late 1990s, the concept of grid computing emerged as a way to share and utilize computing resources across organizations and locations. It was used for large-scale scientific projects that required significant computational resources. 

Grid computing faced challenges related to standardization, security, and management. Despite these challenges, grid-based infrastructure was used in fields such as particle physics, climate modeling, and drug discovery. 

While the term "grid computing" has become less common, the underlying concepts have evolved into cloud computing, where resources are abstracted and shared across the internet. Many of the grid's principles, such as resource sharing and scalability, are present in modern cloud platforms.


Overall, mainframe computing, cluster computing, and grid computing have all contributed to the advancement of technology and computing capabilities, each catering to specific needs and use cases across different time periods.


With the growth of the internet, the idea of providing computing services remotely gained traction. 

Virtualization technologies, which allow multiple virtual machines to run on a single physical machine, have become a crucial component of cloud computing. Companies like VMware played a significant role in advancing virtualization.

Amazon Web Services, launched in 2006, is often credited with popularizing the modern cloud computing model. AWS offered scalable infrastructure services, storage solutions, and computing power as a service. This marked the beginning of the Infrastructure as a Service (IaaS) model.

The cloud computing landscape expanded rapidly with the introduction of various services and deployment models. 

Platform as a Service (PaaS) offerings like Google App Engine and Heroku allowed developers to build and deploy applications without managing the underlying infrastructure. 

Software as a Service (SaaS) applications, such as Salesforce and Google Workspace (formerly G Suite), became popular for providing software over the Internet.



Happy Exploring!
