HISTORY OF VIRTUALIZATION

While computer technology continues to march forward with smaller, sleeker, more powerful machines, in many ways how we use computers has come full circle. Today's computers are obviously far more powerful than those of the 1950s, but the way we use them harks back to the early days of computing. Let's take a look at the history of virtualization to see how it all began, and at some of the ways the world of virtualization has evolved.



Mainframes
Anyone who has been around computers for a while will quickly recognize that the concept of running a client’s session on a server and then displaying the results on the client machine describes the way mainframes work.

While modern mainframes are certainly speedy, they are more notable for their redundant internal engineering, which delivers high reliability and security, and for their backward compatibility with legacy applications. It's not uncommon for a mainframe to run continuously for years without incident, even while upgrades and repairs are performed. Software upgrades are nondisruptive because one system can take over another's workload while the latter is being updated.



The Evolution of the Mainframe
Between the late 1950s and the 1970s, several manufacturers built the first mainframes. This group was known at one point as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric, and RCA.

IBM was the dominant player in the field with its 700/7000 series and, later, its System/360, which continued to evolve into the current zSeries/z9 mainframes. In the 1960s, mainframes tended not to have an interactive interface. They accepted punch cards, paper tape, and magnetic tape, operating solely in batch mode to support back-office functions such as customer billing.

By the 1970s, mainframes had acquired interactive user interfaces and were used as timesharing computers, able to support thousands of users simultaneously, mostly via a terminal interface. These days, most mainframes have phased out terminal access, and end users access the mainframe through a web user interface.

In the early 1980s, shrinking demand and stiff competition drove many companies out of the mainframe arena. Companies also realized the benefits of client-server solutions, and as a result mainframe sales fell while server sales boomed. By the early 1990s, it seemed that the mainframe was going the way of the dinosaur, but in the late 1990s organizations found new uses for their existing mainframes. The growth of e-business increased the number of back-end transactions processed by the mainframe, as well as the size and throughput of its databases.



Operation
Mainframes are able to host multiple operating systems, operating not as a single computer but as a number of virtual machines, which are called partitions in the mainframe world. In this capacity, a single mainframe can replace hundreds of smaller servers. While mainframes were the first to function this way, regular servers are now being used the same way.
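To make the idea of partitioning concrete, here is a minimal Python sketch of a host whose fixed pool of CPUs and memory is carved into isolated partitions. All of the names here (Host, Partition, create_partition) are invented for illustration; real hypervisors such as Hyper-V and z/VM enforce this in firmware and kernel code, not in application scripts.

    from dataclasses import dataclass, field

    @dataclass
    class Partition:
        """One virtual machine: a dedicated slice of the host's resources."""
        name: str
        cpus: int
        memory_gb: int

    @dataclass
    class Host:
        """A physical machine whose resources are divided among partitions."""
        total_cpus: int
        total_memory_gb: int
        partitions: list = field(default_factory=list)

        def create_partition(self, name, cpus, memory_gb):
            used_cpus = sum(p.cpus for p in self.partitions)
            used_mem = sum(p.memory_gb for p in self.partitions)
            # Refuse to overcommit: each partition gets its own slice.
            if (used_cpus + cpus > self.total_cpus
                    or used_mem + memory_gb > self.total_memory_gb):
                raise ValueError("insufficient free resources on host")
            partition = Partition(name, cpus, memory_gb)
            self.partitions.append(partition)
            return partition

    # One large host standing in for many small servers.
    host = Host(total_cpus=64, total_memory_gb=512)
    host.create_partition("web", cpus=8, memory_gb=64)
    host.create_partition("db", cpus=16, memory_gb=128)
    print([p.name for p in host.partitions])  # ['web', 'db']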

Mainframes are designed to handle very high-volume input and output (I/O). Since the mid-1960s, mainframe designs have incorporated subsidiary computers, called channels or peripheral processors, which manage the I/O devices and leave the CPU free to manage high-speed memory. Compared to a PC, a mainframe has thousands of times as much storage.
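The division of labor between the CPU and its channel processors can be sketched with a rough software analogy: slow device I/O is handed off to dedicated workers so the main line of execution never stalls waiting on a device. The following is a loose, hypothetical Python illustration of that idea, not actual mainframe channel programming; slow_device_read and its latency are made up.

    from concurrent.futures import ThreadPoolExecutor
    import time

    def slow_device_read(device_id):
        """Stand-in for an I/O request delegated to a channel processor."""
        time.sleep(0.1)  # simulated device latency
        return f"data from device {device_id}"

    # The worker pool plays the role of the channels: it owns device I/O.
    with ThreadPoolExecutor(max_workers=4) as channels:
        futures = [channels.submit(slow_device_read, d) for d in range(4)]

        # Meanwhile the "CPU" keeps doing useful in-memory work...
        total = sum(i * i for i in range(1_000_000))

        # ...and collects device data only once the channels finish.
        results = [f.result() for f in futures]

    print(total, results)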

As bulletproof as they sound, mainframes do have disadvantages. Their primary issue is that they are centralized. This isn't a problem when everything is housed under one roof, but as an organization becomes more geographically dispersed, it becomes harder to justify the cost of placing a mainframe at each location.

Source of Information: Microsoft Virtualization with Hyper-V
