Distributed Computing System

In our distributed computing system:

A "Node" is a Network-User* Interface (NUI) that provides network access to the WWW*. This node maybe as simple and economical as a "JavaTerm", which has a decent processor, limited memory/cache, I/O devices and optional pheripherals such as CD ROM, hard disk, an input device which handles portable storage etc.. A node could also be a terminal, such as a UNIX workstation, PC or Mac with network capabilities*. Their processing storage and local applications may differ, but their operations should be mostly dependent on their network bandwidth (which network service providers, such as PacTel, MCI provide) and the pipe of the servers (end-service providers).

A "Server" is a computer that provides services interactively. Services include providing executables (e.g. we may remotely load Word and run it in our network interface), database or search engine (e.g Component library of TI), banks, stock broker firms or any entity that handles and processes requests.

A "Site" is a network destination that provides non-interactive information. For example, most people/organization's home page nowadays which contains visual display only and does not accept/require user input is merely a site.

What differentiates a Server from a Site is that a server is interactive ("active") while a site is "inactive."
Serena (aka wleung) argues that the above two could/should be grouped together and called sites, while another definition of Server should be formulated.
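
As a rough illustration of this distinction (and of the view that a server could be seen as a special kind of site), here is a minimal sketch in Java; the interface and class names are hypothetical and not part of any actual design:

    import java.util.HashMap;
    import java.util.Map;

    // A Site only hands out fixed content; a Server also accepts and processes
    // requests.  Modeling Server as an extension of Site reflects the view that
    // every server is, at minimum, also a site.
    interface Site {
        String fetchPage(String path);           // non-interactive: same answer for everyone
    }

    interface Server extends Site {
        String handleRequest(String user, String request);  // interactive: processes input
    }

    class HomePage implements Site {
        private final Map<String, String> pages = new HashMap<>();
        HomePage() { pages.put("/", "Welcome to my home page"); }
        public String fetchPage(String path) {
            return pages.getOrDefault(path, "404 Not Found");
        }
    }

    class BrokerServer implements Server {
        public String fetchPage(String path) { return "Quote screen"; }
        public String handleRequest(String user, String request) {
            // e.g. request = "BUY 100 TI" -- the server changes state and replies.
            return "Order accepted for " + user + ": " + request;
        }
    }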

During the last group meeting (10/12), Professor Newton mentioned that there could/should be something between a node and a server. This intermediary could be:
1) State Manager
2) Memory (network main memory)
3) Temporary mirror site (proposed by Susan)

A State Manager manages things that don't fit into the cache; it could be handled by a central "Service Provider"* which interacts with other servers/sites. However, this would present a major security problem: who is to believe that a "Service Provider" would protect clients' data from internal and external access? (Maybe digital signatures would be required to access and retrieve encrypted data, or maybe encryption could be done at the clients or over the network.) There would also be a durability problem. What happens when a State Manager goes down? If we keep mirror images, then consistency and security problems arise, and this all leads us to the ultimate debate of how distributed systems should be architected.
As for network main memory and mirror sites, administration problems immediately come to mind. How can they be administered and monitored, and by whom? How can data security be provided for such a virtual object?
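
One way to address the security concern above is the parenthetical suggestion that encryption be done at the client, so that a "Service Provider" only ever stores ciphertext. A minimal sketch in Java using the standard javax.crypto API; the class, variable and key-size choices here are hypothetical, not a prescribed design:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    class ClientSideEncryption {
        public static void main(String[] args) throws Exception {
            // The client generates and keeps its own key; the State Manager never sees it.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey clientKey = keyGen.generateKey();

            byte[] state = "user preferences and session state".getBytes("UTF-8");

            // Encrypt locally, then ship only the opaque blob to the State Manager.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, clientKey);
            byte[] blob = cipher.doFinal(state);
            // stateManager.store("user42", blob);   // hypothetical call: provider stores ciphertext only

            // On retrieval, only the client holding the key can decrypt.
            cipher.init(Cipher.DECRYPT_MODE, clientKey);
            byte[] restored = cipher.doFinal(blob);
            System.out.println(new String(restored, "UTF-8"));
        }
    }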

My argument is that none of these intermediate objects should exist, i.e. nodes should interact directly with servers (the present model of the WWW). At today's price and technology curve, pocket-sized DRAM or hard disks at an acceptable price, performance and capacity (>= 500 MB) are imminent. One might argue that 500 MB is not a lot of storage. That is because, by today's standards, people store executables on their hard disks; in the future, all people will need locally is the personal documents (e.g. word-processing files, databases, spreadsheets, etc.) that they regularly edit, as executables will be run off the Net. As for large audio and video files and graphically intense operations such as CAD or games, they should stay at their respective servers, where adequate bandwidth and special transmission mechanisms are provided.

State management in this case is done on local storage (cache or hard disk) and/or at the server. Consistency concerns are reduced at the expense of a higher response time for applications (updates need to go all the way to the server instead of only to an intermediate node).
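
A minimal sketch of this arrangement in Java, where the node keeps a local cache and writes every update straight through to the server. The in-memory "server" map below merely stands in for a network call, and all names are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    class WriteThroughState {
        private final Map<String, String> localCache = new HashMap<>();
        private final Map<String, String> server = new HashMap<>();  // stand-in for the remote server

        // Updates pay the full round trip to the server (higher response time) ...
        void update(String key, String value) {
            localCache.put(key, value);
            server.put(key, value);               // in practice, a network call
        }

        // ... but reads are usually local, and the server copy is always current,
        // so no separate consistency protocol with an intermediate node is needed.
        String read(String key) {
            String v = localCache.get(key);
            return (v != null) ? v : server.get(key);
        }
    }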

The Future

Microsoft's dominance of local processing will be displaced by major database and database-tools companies (e.g. Oracle, Informix), together with software vendors that develop network-based applications that run at the servers, aimed at providing high throughput, scalability, etc.
Hardware vendors such as Cisco and Bay Networks will also be a force in helping clients design and implement the appropriate network/WAN strategies.

Footnotes *
1) A User may be a human being, a process or another computer.
2) WWW may include or be a part of the Information Superhighway.
3) If "Everything" (from mail to Word, Quicken) is run within a network interface, would CPU processing power and speed be relevant in the future, or this will be a hardware issue that primarily interests "Server" side of the operations. Primary end-user concern would be network bandwidth and display capabilities.
4) "Service Provider" could be network services providers such as PacTel or software vendors such as Oracle.





Modified: November 7, 1995
Feedback: Francis Chan (fchan@ic.eecs.berkeley.edu)