Architecture of Plan 9 systems and grids

Working in a distributed environment

The modular and flexible architecture of Plan 9 allows much greater freedom in how functionality is distributed and organized. The power of modern computing hardware, the relatively low resource demands of Plan 9, virtualization and quasi-virtualization, and the diversity of implementations and interfaces mean that it is possible to create a distributed Plan 9 environment that operates within and alongside other operating systems and integrates with them.

One consequence is that the number of possible configurations - mappings of functions to hardware and implementations - increases greatly. How should a distributed system be organized and administered so as to reap the rewards of functionality and reliability while minimizing costs such as complexity?

Enumeration of elements

Data and file storage

This is a large enough topic that it subdivides into:

  1. Venti data storage service (a minimal setup is sketched after this list), which in turn includes
    • Ventis backing fossils and providing the /dump service
    • Ventis providing general data storage to local and/or remote clients
  2. Fossil file systems, standalone and venti-backed.
  3. Host OS filesystems, if any of the numerous non-native implementations are being used.
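
As a minimal sketch of the first two items, assuming a native Plan 9 machine with dedicated isect, arenas, and fossil partitions on /dev/sdC0 (the partition names, memory sizes, and ports are placeholders, not a prescription):

    # /sys/lib/venti.conf
    index main
    isect /dev/sdC0/isect
    arenas /dev/sdC0/arenas
    mem 64m
    bcmem 128m
    icmem 192m
    addr tcp!*!17034
    httpaddr tcp!*!8000

    # one-time formatting of the venti partitions, then start the server
    venti/fmtisect isect /dev/sdC0/isect
    venti/fmtarenas arenas0 /dev/sdC0/arenas
    venti/fmtindex /sys/lib/venti.conf
    venti/venti -c /sys/lib/venti.conf

    # (re)initialize a venti-backed fossil from its last archived root score
    fossil/flfmt -v `{fossil/last /dev/sdC0/fossil} /dev/sdC0/fossil

Host OS filesystems enter the picture through the non-native implementations discussed below; plan9port's venti, for example, can keep its arenas and index in ordinary files.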

CPU servers

One or more CPU servers are generally the core of a Plan 9 environment. It is not necessary for a CPU server to actually be 'remote' - very good results can be obtained from a CPU server running in a VM and accepting connections from a terminal running on the same machine.
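
For example, assuming the CPU server is a local VM named cpuvm whose cpu and auth ports are reachable from the host (the host name and user are placeholders), a session can be opened from the host OS with drawterm, or from another Plan 9 system with cpu(1):

    # from the host OS, or any machine with drawterm installed
    drawterm -a cpuvm -c cpuvm -u glenda

    # from a Plan 9 terminal
    cpu -h cpuvm -u glenda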

Integration and bridging tools

Assuming that the user is integrating Plan 9 with other operating systems, how does data move between them? Some implementations, such as Inferno and 9vx, provide integration automatically by operating within a given subsection of the host OS filesystem; other methods are based on network connections or ported tools.
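
As one sketch of the network-based approach, the host OS can mount a Plan 9 file server directly over 9P, either with plan9port's 9pfuse or with the Linux kernel's 9p client; the server name and mount points below are assumptions:

    # plan9port, on the host OS
    9pfuse 'tcp!fsserv!564' $HOME/n/plan9

    # Linux, using the in-kernel v9fs client
    mount -t 9p -o trans=tcp,port=564,version=9p2000 fsserv /mnt/plan9

In the other direction, u9fs can export a host OS directory tree to Plan 9 over the same protocol.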

Networks: local vs. wide area, and private vs. public

How is access to resources from within the LAN different from remote access? What resources, if any, are imported from external networks, and what, if any, are exported? The use of virtual machines extends the network 'internally'. Questions of intended use are directly tied to security and reliability. Secure separation of public from private resources can be accomplished in many ways, such as sandboxed VMs.
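
In Plan 9 terms, crossing these boundaries is a matter of exportfs on one side and import (or srv and mount) on the other, over whatever transport the network allows. A sketch, with placeholder host names:

    # exporter on a private LAN: serve raw 9P on the standard 9fs port (no auth)
    aux/listen1 -t 'tcp!*!564' /bin/exportfs -r /

    # consumer on the same LAN: dial it, post the connection in /srv, mount it
    srv tcp!fsserv!564 fsserv /n/fsserv

    # import a single resource from a CPU server running the standard services
    import gridcpu /net /n/gridnet

The unauthenticated listener above is appropriate only behind a firewall or on a VM-internal network; public-facing exports should go through the normal authenticated services.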

Service registries and indexes

The original Plan 9 mechanism of /lib/ndb/local handles some indexing of systems, but for a dynamic grid of diverse and changing services, other mechanisms may be needed. The Inferno registry system provides a model and functionality that is borrowed by the g/toolkit of scripts for tracking 9p services.
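
A typical /lib/ndb/local fragment for a small grid might look like the following; every name and address is a placeholder:

    authdom=mygrid auth=authserv

    ipnet=mygrid ip=192.168.1.0 ipmask=255.255.255.0
        ipgw=192.168.1.1
        dns=192.168.1.1
        auth=authserv
        fs=fsserv
        cpu=cpuserv

    sys=cpuserv dom=cpuserv.mygrid ip=192.168.1.10
    sys=fsserv dom=fsserv.mygrid ip=192.168.1.11
    sys=authserv dom=authserv.mygrid ip=192.168.1.12

Entries like these answer 'which machine provides x' for a fixed set of machines; the registry approach answers the same question for services that come and go at runtime.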

Terminals and other interfaces

The final components are the user-facing interfaces. The goal is for the user to have efficient and transparent access to everything - in Plan 9 terms, for all services to be available for binding into the user's namespace. Fortunately, the design of Plan 9 makes this easy, once the components have been assembled!
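
For instance, a terminal can attach a remote file server and splice part of it into the local namespace with a couple of commands; the host and path names here are placeholders:

    # dial the file server and mount it at /n/fsserv
    9fs fsserv

    # make a remote directory appear as part of the local tree
    bind -a /n/fsserv/usr/glenda/lib /usr/glenda/lib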

The 9gridchan.org local grid as test implementation

The system in place at 9gridchan.org provides a Plan 9 based distributed environment for the local user, and a subset of this environment for public users. It is built from a heterogeneous variety of Plan 9 components. We have recently (spring and summer of 2009) greatly expanded the variety and flexibility of configurations we have been using. Our previous configuration information is now mostly irrelevant. The information below is still somewhat correct, but now represents only a 'slice' of the larger testing infrastructure. Things are currently in a state of flux as we work out a semi-stable configuration to act as a public demo based on the latest version of the preconfigured grid node image.

This configuration is obviously more complex than a single machine running no virtualization - but it is not complex to use. The Plan 9 design goal of transparent, network-agnostic usage means that the diverse locations and implementation methods are mostly invisible to the user. Subjectively, the operator's environment is a unified hyper-OS, with the Plan 9 layer joining separate machines and abstracting away the underlying hardware.

The same principles used to construct this system can be applied to smaller (or larger) environments. A decently powerful desktop could run a functionally distributed system entirely internally, with a VM CPU server, a plan9port venti, and a Drawterm or 9vx terminal. Of course, some of the advantages (such as failure protection through component separation and selective redundancy) are lost when using only one machine. In larger systems, the role of registry and indexing services becomes more important.
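
One possible arrangement of that single-desktop case, assuming a Plan 9 CPU server disk image running under qemu with its cpu and auth ports forwarded to the host, and a plan9port venti keeping its arenas in ordinary files (all file names, ports, and the user name are assumptions):

    # host OS: plan9port venti, with arenas and index stored in regular files
    venti/venti -c $HOME/lib/venti.conf

    # host OS: the CPU server VM, forwarding cpu (17010) and auth (567) to localhost
    qemu-system-i386 -m 512 -hda 9cpu.img \
        -net nic -net user,hostfwd=tcp::17010-:17010,hostfwd=tcp::567-:567

    # host OS: open a terminal into the VM
    drawterm -a 127.0.0.1 -c 127.0.0.1 -u glenda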

9gridchan.org is committed to open collaboration with the Plan 9/Inferno community using Plan 9/Inferno tools. We invite the use of our public resources for collaborative coding, testing, and other such purposes.