Architecture of Plan 9 systems and grids
Working in a distributed environment
The modular and flexible architecture of Plan 9 allows much greater freedom in how functionality is distributed and organized. The power of modern computing hardware, the relatively low resource demands of Plan 9, virtualization and quasi-virtualization, and the diversity of implementations and interfaces mean that it is possible to create a distributed Plan 9 environment that operates within, alongside, and integrates with other operating systems.
One of the consequences is that the number of possible configurations - mappings of functions to hardware and implementations - increases greatly. How should a distributed system be organized and administered so as to reap the rewards of functionality and reliability while minimizing costs such as complexity?
Enumeration of elements
Data and file storage. This topic is large enough that it subdivides into:
- Venti data storage services, which in turn include:
  - ventis backing fossils and providing the /dump service
  - ventis providing general data storage to local and/or remote clients
- Fossil file systems, standalone and venti-backed
- Host OS filesystems, if any of the numerous non-native implementations are being used
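As a sketch of how these storage pieces connect on a native install (device paths, port, and console path are illustrative; see venti.conf(6) and fossilcons(8) for the real details):

```
# venti.conf: where this venti keeps its index, index sections, and arenas
index main
isect /dev/sdC0/isect
arenas /dev/sdC0/arenas
addr tcp!*!17034

# at the fossil console, take an archival snapshot into the venti;
# the snapshot then appears under /dump
con -l /srv/fscons
snap -a
```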
One or more cpu servers are generally the core of a Plan 9 environment. It is not necessary for a cpu server to be physically 'remote' - very good results can be obtained from a cpu server running in a VM and accepting connections from a terminal running on the same machine.
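For instance (ports, image name, and user are illustrative), a cpu server in a qemu VM with its cpu port redirected to the host can be reached from a terminal or drawterm on the same physical machine:

```
# on the host: run the VM, redirecting the cpu port (17010) to localhost
qemu -hda 9cpu.img -redir tcp:17010::17010

# from a Plan 9 terminal on the same machine:
cpu -h tcp!localhost!17010

# or from the host OS itself:
drawterm -a localhost -c localhost -u glenda
```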
Integration and bridging tools
Assuming that the user is integrating Plan 9 with other operating systems, how does data move between them? Some implementations, such as Inferno and 9vx, provide integration automatically by operating within a given subsection of the host OS filesystem; other methods are based on network connections or ported tools.
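As one concrete sketch of network-based bridging (addresses and mount points are examples), plan9port's 9pfuse can mount a Plan 9 file server on a Linux host, while srv and mount bring a 9p service exported from a host OS (via u9fs, for example) into a Plan 9 namespace:

```
# Linux host with plan9port and FUSE: mount a remote Plan 9 fileserver
9pfuse 'tcp!fs.example!564' /mnt/plan9

# Plan 9 side: dial a 9p service, post it in /srv, and mount it
srv tcp!host.example!564 host /n/host
```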
Networks: local vs. wide area, and private vs. public
How is access to resources from within the LAN different from remote access? What resources, if any, are imported from external networks, and what, if any, are exported? The use of virtual machines extends the network 'internally'. Questions of intended use are directly integrated with security and reliability. Secure separation of public from private resources can be accomplished in many ways, such as sandboxed VMs.
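One way to keep the public/private split, sketched with an invented subtree and port: export only a chosen tree on the public side, while importing private resources normally over the internal network:

```
# serve only /usr/web to anyone who dials this port
aux/listen1 tcp!*!5640 /bin/exportfs -r /usr/web

# pull a private machine's root into the local namespace
import privserver / /n/privserver
```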
Service registries and indexes
The original Plan 9 mechanism of /lib/ndb/local handles some indexing of systems, but for a dynamic grid of diverse and changing services, other mechanisms may be needed. The Inferno registry system provides a model and functionality that is borrowed by the g/toolkit scripts for tracking 9p services.
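A static ndb record remains the simplest index for long-lived systems. A hypothetical entry for a cpu server might look like this (name and address invented; see ndb(6) for the format):

```
# /lib/ndb/local
sys=omni
	dom=omni.9gridchan.org
	ip=192.168.0.10
```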
Terminals and other interfaces
The final component is the user-facing interfaces. The goal is for the user to have efficient and transparent access to everything - in Plan 9 terms, for all services to be available for binding into the user's namespace. Fortunately, the design of Plan 9 makes this easy, once the components have been assembled!
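Once services are reachable, assembling the user-facing view is just namespace operations. For example (service name and address invented):

```
# dial a grid fileserver, post it in /srv, and mount it
srv tcp!fs.grid.example!564 gridfs /n/gridfs

# fold its binaries into the local /bin via a union bind
bind -a /n/gridfs/386/bin /bin
```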
The 9gridchan.org local grid as test implementation
The system in place at 9gridchan.org provides a Plan 9 based distributed environment for the local user, and a subset of this environment for public users. It is built from a heterogeneous variety of Plan 9 components. We have recently (spring and summer of 2009) greatly expanded the variety and flexibility of configurations we have been using. Our previous configuration information is now mostly irrelevant. The information below is still somewhat correct, but now represents only a 'slice' of the larger testing infrastructure. Things are currently in a state of flux as we work out a semi-stable configuration to act as a public demo based on the latest version of the preconfigured grid node image.
- The public CPU server (Omni) runs a Venti-backed fossil with the venti configured for local-only access. Plan9port provides an open Venti at venti.9gridchan.org for archiving and file sharing via .vac files.
- The Venti provides one mechanism for sharing files to linux host OSes via the plan9port vac tools. Drawterm provides the other primary method.
- The hostowner-in-residence's physical terminal hosts one or more qemu VM cpu servers, usually accessed with Drawterm. Connections to the native Plan 9 machines are made via cpu from the local VMs.
- An Inferno instance provides the main registry. Plan 9 nodes connect to it using the g/toolkit to acquire and announce services. In addition to direct 9p services, the registry also provides a good place to publish .vac files for use with a venti.
- 9gridchan.org provides open public services, but its resources are also used as part of a private computing environment. The administrator's terminal and machine do not generally export any public services, but can still draw on the full resources of the local grid. Other nodes are configured to suit their role - the public Venti provides an open venti service but only local access to its terminal, for instance. Separation of functions between nodes makes it easy to configure public/private distinctions.
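The .vac-based sharing mentioned above works because a vac archive is a small file naming a score in a venti; only the pointer file need be passed around. A sketch using plan9port's vac tools and the public venti address given earlier (filenames illustrative):

```
# archive a directory into the public venti (plan9port, from a Linux host)
venti=tcp!venti.9gridchan.org!17034 vac -f notes.vac notes

# hand notes.vac to someone else; they unpack it from the same venti
venti=tcp!venti.9gridchan.org!17034 unvac notes.vac
```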
This configuration is obviously more complex than a single machine running no virtualization - but it is not complex to use. The Plan 9 design goal of transparent, network-agnostic usage means that the diverse locations and implementation methods are mostly invisible to the user. Subjectively, the operator's environment is a unified hyper-os, with the Plan 9 layer serving to unify separate machines and abstract away from the underlying hardware.
The same principles used to construct this system can be applied to smaller (or larger) environments. A decently powerful desktop could run a functionally distributed system entirely internally, with a VM cpu server, plan9port venti, and Drawterm or 9vx terminal. Of course, some of the advantages (such as failure protection through component separation and selective redundancy) are lost when using only one machine. In larger systems, the role of registry and indexing services becomes more important.
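The single-desktop case just described might be assembled roughly as follows (all names and ports illustrative; plan9port and qemu assumed on the host):

```
# host-side venti from plan9port, configuration in an ordinary file
venti/venti -c $HOME/lib/venti.conf

# a VM cpu server, its cpu port redirected to the host
qemu -hda 9cpu.img -redir tcp:17010::17010

# the terminal: drawterm into the local VM
drawterm -a localhost -c localhost -u glenda
```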
9gridchan.org is committed to open collaboration with the Plan 9/Inferno community using Plan 9/Inferno tools. We invite the use of our public resources for collaborative coding, testing, and other such purposes.