It's time for another entry in the Prior Art Department, and today we'll consider a forgotten yet still extant sidebar of the early 1990s Internet. If you had Internet access at home back then, it was almost certainly over a dialup modem (as mine was); only the filthy rich had T1 lines or ISDN. Moreover, from a user perspective, the hosts you connected to were their own universe. You got your shell account or certain interactive services over Telnet (and, for many people including yours truly, E-mail), you got your news postings from the spool either locally or over NNTP, and you got your files over FTP. It may have originated elsewhere, but everything on the host you connected to was a local copy: the mail you received, the files you could access, the posts you could read. Exceptional circumstances like NFS notwithstanding, what you could see and access was local — it didn't point somewhere else.
Around this time, however, was when sites started referencing other sites, much like the expulsion from Eden. In 1990 both HYTELNET and Archie appeared, early search tools for Telnet and FTP resources respectively. Since they relied on accurate information about sites they didn't control, both had to regularly update their databases. Gopher, when it emerged in 1991, consciously tried to be a friendlier FTP by presenting files and resources hung from a hierarchy of menus, which could even point to menus on other hosts. That meant you didn't have to locally mirror a service to point people at it, but if the referenced menu was relocated or removed, the link to it was broken, and the reference's one-way nature meant there was no automated way to trace back and fix it. And then there was that new World Wide Web thing introduced to the public in 1993: a powerful soup of media and hypertext with links that could point to nearly anything, but they were unidirectional as well, and the sheer number even in modest documents could quickly overwhelm users in a rapidly expanding environment. Not for nothing was the term "linkrot" first attested around 1996, to say nothing of how disoriented a user might get following even perfectly valid links down a seemingly infinite rabbithole.
Of course, other technically-minded folks had long been aware of these problems, and as early as 1989 an academic team in Austria was already attacking the challenge of "access to all kinds of information one can think of." In this world, documents and media resources could be associated together into a defined hierarchy, the relationships between them were discoverable and bidirectional, and systems were searchable by design. Links could be in anything, not just text. Clients could log into servers or be anonymous, logged-in users could post content, and in the background servers could talk to other servers to let them know what changes had occurred so they could synchronize references. Along the way, as new information resources via WAIS, Gopher and the Web started to appear, their content could also be brought into these servers to form a unified whole. This system was Hyper-G, and we'll demonstrate it — on period-correct classic RISC hardware, as we do — and provide the software so you can too.
Hypertext — as well as hypermedia, even when the term hadn't yet been coined — was hardly a new concept, dating back to at least 1945 and Vannevar Bush's "memex" idea, imagining not only literature but photographs, sketches and notes all interconnected with various "trails." The concept was exceedingly speculative and never implemented (nor was Ted Nelson's Xanadu "docuverse" in 1965), but Douglas Engelbart's oN-Line System "NLS" at the Stanford Research Institute was heavily inspired by it, leading to the development of the mouse and the 1968 Mother of All Demos. Nor was the notion new on computers: 1967's Hypertext Editing System ran on an IBM System/360 Model 50, and implementations like OWL Guide appeared in the mid-1980s on workstations and microcomputers like the Macintosh.
Hermann Maurer, then a professor at the Graz University of Technology in Austria, had been interested in early computer-based information systems for some time, pioneering work on early graphic terminals instead of the pure text ones commonly in use. One of these was the MUPID series, a range of Z80-based systems first introduced in 1981, ostensibly for the West German videotex service Bildschirmtext but also usable as standalone home computers in their own right. This and other work happened at what was then the Institutes for Information Processing Graz, or IIG, later the Institute for Information Processing and Computer-Supported New Media (IICM). Subsequently the IIG started researching new methods of computer-aided instruction by developing an early frame-based hypermedia system called COSTOC (originally "COmputer Supported Teaching Of Computer-Science" and later "COmputer Supported Teaching? Of Course!") in 1985, which by 1989 had been commercialized, was in use at about twenty institutions on both sides of the Atlantic, and contained hundreds of one-hour lessons.
COSTOC's successful growth also started to make it unwieldy, and a planned upgrade in 1989 called HyperCOSTOC proposed various extensions to improve authoring, delivery, navigation and user annotation. Meanwhile, it was only natural that Maurer's interest would shift to the growing early Internet, at that time under the U.S. National Science Foundation and by late that year numbering over 150,000 hosts. Maurer's group decided to consolidate their experiences with COSTOC and HyperCOSTOC into what they termed "the optimal large-scale hypermedia system," code-named Hyper-G (the G, natürlich, for Graz). It would be networked and searchable, preserve user orientation, and maintain correct and up-to-date linkages between the resources it managed. In January 1990, the Austrian Ministry of Science agreed to fund a prototype for which Maurer's grad student Frank Kappe formally wrote the architectural design as his PhD dissertation. Other new information technologies like Gopher and the Web were emerging at the same time, at the University of Minnesota and CERN respectively, and the Hyper-G team worked with the Gopher and W3 teams so that the Hyper-G server could also speak to those servers and clients.
The prototype emerged in January 1992 as the University's new information system TUGinfo. Because Hyper-G servers could also speak Gopher and HTTP, TUGinfo was fully accessible by the clients of the day, but it could also be used with various Hyper-G line-mode clients. One of these was a bespoke tool named UniInfo which doesn't appear to have been distributed outside the University and is likely lost. The other is called the Hyper-G Terminal Viewer, or hgtv (not to be confused with the vapid cable channel), which became a standard part of the server for administration tasks. The success of TUGinfo convinced the European Space Agency to adopt Hyper-G for its Guide and Directory in the fall, after which came a beta native Windows client called Amadeus in 1993 and a beta Unix client called Harmony in 1994. Yours truly remembers accessing some of these servers through a web browser around this time, which is how this whole entry got started: trying to figure out where Hyper-G ended up.
Being (or at least starting as) an academic endeavour, Hyper-G was well-documented, especially by the standards of the time: there are many primary-source papers on it from the IIG/IICM team and almost all of them were self-published or otherwise public. Similarly, the software, or at least the binaries of the software (something else we'll talk about), was also readily provided for download. The problem is that the articles and the software were generally distributed via FTP or Gopher and pretty much all of those repositories are no longer available. Although the Internet Archive has a partial copy of these files, it lacks, for example, any of the executables for the Harmony client. Fortunately there were also at least two books on Hyper-G, one by Hermann Maurer himself, and a second by Wolfgang Dalitz and Gernot Heyer, two partnering researchers then at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). Happily these two books have CDs with full software kits, and the later CD from Dalitz and Heyer's book is what we'll use here. I've already uploaded its contents to the Floodgap Gopher server to serve as a supreme case of historical irony.
As its history would suggest, Hyper-G aimed to impose a strong hierarchy on the resources it managed, much more so than the Web and even stronger than Gopher's rigid menu-document paradigm. Rather than simply a sea of documents, items are instead organized into various collections. A resource must belong to at least one collection, but it may belong to multiple collections, and a collection can span more than one server. A special type of collection is the cluster, where semantically related materials are grouped together such as multiple translations, alternate document formats, or multimedia aggregates (e.g., text and various related images or video clips). We'll look at how this appears practically when we fire the system up.
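To make those containment rules concrete, here's a minimal sketch (the modeling is mine, not Hyper-G's actual schema) of resources, collections and clusters in Python:

    # Toy model of Hyper-G containment: every resource must belong to at
    # least one collection, may belong to several, and a cluster is just a
    # collection that groups semantically related variants of a resource.
    class Collection:
        def __init__(self, name, is_cluster=False):
            self.name = name
            self.is_cluster = is_cluster
            self.members = []

    manuals = Collection("manuals")
    grep_cluster = Collection("grep-manpage", is_cluster=True)  # e.g. English and German texts

    resource = {"title": "grep(1)", "parents": []}
    for parent in (manuals, grep_cluster):      # multiple parents are allowed
        parent.members.append(resource)
        resource["parents"].append(parent.name)

    assert resource["parents"], "a resource must belong to at least one collection"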
Any resource may link to another resource. Like HTML, these links are called anchors, but unlike HTML, anchors are bidirectional and can occur in any media type like PostScript documents, images, or even audio/video. Because they can be followed backwards, clients can walk the chains to construct a link map, like so:
This example uses the manpage for grep(1), showing what it connects to, and what those pages connect to. Hyper-G clients could construct such maps on demand and all of the resources shown can of course be jumped to directly. This was an obvious aid to navigation because you could always find out where you were in relation to anything else.
Under the hood, anchors aren't part of the document, or even hidden within it; they're part of the metadata. Here's a real section of a serialized Hyper-G database:
This textual export format (HIF, the Hyper-G Interchange Format) is how a database could be serialized and backed up or transmitted to another server, including internal resources. Everything is an object and has an ID, with resources existing at a specified location (either a global ID based on the server's IPv4 address or a filesystem path), and the parent indicating the name of the collection the resource belongs to. These fields are all searchable, as are text resources via full-text search, all of which is indexed immediately. You don't need to do anything to set up a site search facility — it comes built-in.
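Judging from that description (and the document cache server's use of the upper 32 bits, which we'll get to below), a global ID appears to pack the home server's IPv4 address above a server-local object number. A rough sketch of that layout in Python; the exact bit packing here is my assumption:

    # Hypothetical 64-bit global ID: upper 32 bits identify the home server
    # by IPv4 address, lower 32 bits are a server-local object number.
    import ipaddress

    def make_global_id(server_ip: str, local_id: int) -> int:
        return (int(ipaddress.IPv4Address(server_ip)) << 32) | (local_id & 0xFFFFFFFF)

    def split_global_id(gid: int) -> tuple[str, int]:
        return str(ipaddress.IPv4Address(gid >> 32)), gid & 0xFFFFFFFF

    gid = make_global_id("192.0.2.10", 0x29A)
    print(hex(gid))              # 0xc000020a0000029a
    print(split_global_id(gid))  # ('192.0.2.10', 666)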
Anchors are connected at either byte ranges or spatial/time coordinates within their resource. This excerpt defines three source anchors, i.e., links that go out to another resource. uudecoding the text fragment and dumping it,
the byte offsets in the anchor sections mean the text ranges for hg_comm.h, hg_comm.c and hg_who.c will be linked to those respective entries as destination anchors in the database. For example, here is the HIF header for hg_comm.h:
These fields are indexed, so the server can walk them backwards or forwards, and the operation is very fast. The title, the contents and even the location can change; the link will always be valid as long as the object exists, and if it's later deleted, the server can automatically find and remove all anchors to it. Analogous to an HTML text fragment, destination anchors can provide a target covering a specific position and/or portion within a text resource. As the process requires creating and maintaining various unique IDs, Hyper-G clients have authoring capability as well, allowing a user to authenticate and then insert or update resources and anchors as permitted. We're going to do exactly that.
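Why is the reverse walk cheap? Since anchors live in the database rather than in the documents, the server can keep an index in each direction. A toy sketch (the layout is my own invention) of how deletion can then automatically find and remove every anchor touching an object:

    from collections import defaultdict

    links_from = defaultdict(set)   # object ID -> anchors leaving it
    links_to = defaultdict(set)     # object ID -> anchors arriving at it
    anchors = {}                    # anchor ID -> (src, dst, byte_range)

    def add_anchor(aid, src, dst, byte_range):
        anchors[aid] = (src, dst, byte_range)
        links_from[src].add(aid)
        links_to[dst].add(aid)

    def delete_object(obj):
        # Dropping an object drops every anchor that references it, in
        # either direction, so no dangling links can remain.
        for aid in links_from.pop(obj, set()) | links_to.pop(obj, set()):
            src, dst, _ = anchors.pop(aid)
            links_from[src].discard(aid)
            links_to[dst].discard(aid)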
Since resources don't have to be modified to create an anchor, even read-only resources such as those residing on a mounted CD-ROM could be linked and have anchors of their own. Instead of having their content embedded in the database, however, they can also appear as external entities pointed to by conventional filesystem paths. This would have been extremely useful for multimedia in particular considering the typical hard disk size of the early 1990s. Similarly, Internet resources on external servers could also be part of the collection:
While resources that are not Hyper-G will break the link chain, the connection can still be expressed, and at least the object itself can be tracked by the database. The protocol could be Hyper-G, Gopher, HTTP, WAIS, Telnet or FTP. It was also possible to create SQL queries this way, which would be performed live. Later versions of the server even had a CGI-compatible scripting ability.
I mentioned that a user can authenticate to the server as well as remain anonymous. When logged in, authenticated access allows not only authoring and editing but also commenting through annotations (and annotating the annotations). This feature is obviously useful for things like document review, but could also have served as the basis for a blog with comments, well before the concept formally existed, or a message board or BBS. Authenticated access is also required for resources with limited permissions, or those that can only be viewed for a limited time or require payment (yes, all this was built-in).
In the text file you can also see markup tags that resemble, and in some cases are identical to, but in fact are not, HTML. These markup tags are part of HTF, or the Hyper-G Text Format, Hyper-G's native text document format. HTF is dynamically converted for Gopher or Web clients; there is a corresponding HTML tag for most HTF tags, eventually supporting much of HTML 3.0 except for tables and forms, and most HTML entities are the same in HTF. Anchor tags in an HTF document are handled specially: upon upload the server strips them out and turns them into database entries, which the server then maintains. In turn, anchor tags are automatically re-inserted according to their specified positions with current values when the HTF resource is fetched or translated.
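The round trip can be sketched like so, with a made-up <anc id=N> tag standing in for real HTF anchor syntax (which this isn't): on upload the tags come out of the text and become offset records, and on fetch they go back in at those offsets:

    import re

    TAG = re.compile(r'<anc id=(\d+)>')

    def strip_anchors(marked: str):
        # Upload direction: remove tags, remember where they sat.
        plain, anchors, last, removed = [], {}, 0, 0
        for m in TAG.finditer(marked):
            plain.append(marked[last:m.start()])
            anchors[int(m.group(1))] = m.start() - removed  # offset in plain text
            removed += len(m.group(0))
            last = m.end()
        plain.append(marked[last:])
        return ''.join(plain), anchors

    def reinsert_anchors(plain: str, anchors: dict) -> str:
        # Fetch direction: splice tags back in at their recorded offsets.
        out, last = [], 0
        for aid, off in sorted(anchors.items(), key=lambda kv: kv[1]):
            out.append(plain[last:off] + f'<anc id={aid}>')
            last = off
        out.append(plain[last:])
        return ''.join(out)

    text, db = strip_anchors('see <anc id=1>hg_comm.h<anc id=2> for details')
    assert reinsert_anchors(text, db) == 'see <anc id=1>hg_comm.h<anc id=2> for details'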
The Hyper-G "server" is actually several independent but interrelated components, all running as separate processes, and their specific functions have varied slightly over time. Internally, three servers implement the backend. The two straightforward ones are the database server (
dbserver) that handles the database and the full-text index server (
ftserver) used for document search. The document cache server (
dcserver), however, has several functions: it serves and stores local documents on request, it runs CGI scripts (using the same Common Gateway Interface standard as a webserver of the era would have), and to request and cache resources from remote servers referenced on this one, indicated by the upper 32 bits of the global ID.
In earlier versions of the server, clients were responsible for other protocols. A Hyper-G client, if presented with a Gopher or HTTP URL, would have to go fetch it.
In later releases, this capability was added to the document cache server as well, allowing it to act almost like a caching proxy. While every Hyper-G client dealt in HTF as its native format, and relatively simplistic translation could convert basic HTML into HTF, nothing said a later one couldn't understand HTML directly.
Because Hyper-G is a stateful protocol, enveloping the backend is a process providing the session layer called hgserver (no relation to Mercurial). This talks directly to other Hyper-G servers (using TCP port 418), and also directly to clients with port 418 as a control connection and a dynamically assigned port number for document transfer (not unlike FTP). Since links are bidirectional, Hyper-G servers contact other Hyper-G servers to let them know a link has been made (or, possibly, removed), and then those servers will send them updates.
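The wire protocol itself isn't documented here, but the control-plus-data shape is the same one FTP uses. Purely as an illustration of the pattern (the request bytes and the reply format below are invented, not Hyper-G's), a client-side fetch would go something like:

    import socket

    def fetch(host: str, request: bytes) -> bytes:
        # Commands travel over the fixed control port...
        with socket.create_connection((host, 418)) as ctrl:
            ctrl.sendall(request)
            # ...and we pretend the server answers with a data port number.
            data_port = int(ctrl.recv(64).split()[0])
        # The document itself then arrives on the dynamically assigned port.
        with socket.create_connection((host, data_port)) as data:
            chunks = []
            while chunk := data.recv(4096):
                chunks.append(chunk)
        return b''.join(chunks)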
There are hazards with this approach. One is that it introduces an inevitable race condition between the change occurring on the upstream and any downstream(s) knowing about it, so earlier implementations would wait until all the downstream(s) acknowledged the change before actually making it effective. Unfortunately this ran into a second problem: particularly for major Hyper-G sites like IIG/IICM itself, an upstream server could end up sending thousands of update notifications after making any change at all, and some downstreams might not respond in a timely fashion for any number of reasons. Later servers use a probabilistic version of the "flood" algorithm from the Harvest resource discovery system (perhaps a future Prior Art entry) where downstreams pass the update along to a smaller subset of hosts, who in turn do the same to another subset, until the message has propagated throughout the network (p-flood). Any temporary inconsistency is simply tolerated until the message makes the rounds. This process was facilitated because all downstreams knew about all other Hyper-G servers, and updates to this master list were sent in the same manner. A new server could get this list from IICM after installation to bootstrap itself, becoming part of a worldwide collection called the Hyper Root.
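In rough terms (the real p-flood's fanout, ordering and retry behaviour are more involved than this), the round-based idea can be sketched in Python like so:

    import random

    def p_flood(all_servers, origin, fanout=3):
        # Round-based gossip: each informed server forwards the update to a
        # small random subset of the uninformed, rather than to everyone at
        # once; gaps are tolerated until the message makes the rounds.
        informed = {origin}
        while len(informed) < len(all_servers):
            uninformed = [s for s in all_servers if s not in informed]
            newly = set()
            for _server in informed:
                for target in random.sample(uninformed, min(fanout, len(uninformed))):
                    newly.add(target)       # in reality: send the update message
            informed |= newly
        return informed

    print(p_flood({f"hg{i}.example.org" for i in range(20)}, "hg0.example.org"))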
Finally, a set of protocol conversion layers sits atop the session layer if the client is not speaking Hyper-G, such as a Gopher client or a web browser. These translate collections, HTF and links to resources into Gopher menus and HTML, as shown in the screenshot here of how the main IICM Hyper-G site looked in NCSA Mosaic in 1995. When translating from HTF or collection metadata, the Web gateway composes pages from templates and server-side includes and supports different templates for different languages; it can also serve pages of its own that are not part of the regular database. You can authenticate to the server through the Web gateway, but since HTTP is stateless and Hyper-G predated the wide use of cookies, the Web conversion layer generates a session key from the subprocess port number and a nonce for inclusion in the URL, noting the IP address using it. (It goes without saying this is incredibly insecure by modern standards.) An SNMP conversion layer also existed at one time, but was "rarely used" and not included in the default distribution.
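In sketch form (the actual key derivation isn't documented beyond the port number and nonce, so the details below are assumptions), the gateway's scheme amounts to:

    import secrets

    sessions = {}   # session key -> client IP allowed to use it

    def new_session(subprocess_port: int, client_ip: str) -> str:
        # Mint a key from the gateway subprocess's port plus a nonce and
        # remember which IP address it was issued to.
        key = f"{subprocess_port:x}-{secrets.token_hex(4)}"
        sessions[key] = client_ip
        return key

    def check(key: str, client_ip: str) -> bool:
        # Anyone who can read the URL and share or spoof the IP can hijack
        # the session -- hence "incredibly insecure by modern standards."
        return sessions.get(key) == client_ip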
There are additional security considerations to running these early servers. Besides obvious ones like authentication without encryption and the potential for spoofing session keys, the Hyper Root concept clearly hails from an earlier era when it wouldn't occur to a server to be an intentional bad actor. However, since messages aren't signed and are simply passed along, a malicious Hyper-G intermediary could wreck things in a hurry by substituting nonsense or deliberately false information about other hosts. Also, because only binaries have survived, we have no way of auditing the source for other lurking horrors or inadvertent holes. Since message passing and user logins are pretty essential parts of its basic functionality, we'll be running it within our strict non-routable test network here at the Floodgap lab, although I'm unaware of any other classic Hyper-G or HyperWave servers still out there that could mess with us (we'll talk about its modern incarnation at the end).
Even with Gopher's then-significant install base and the rapid growth of the nascent Web, Hyper-G was successful enough that its team had increased to 65 members by August 1995. The University of Minnesota had made (in my opinion) a serious tactical error in 1993 by abruptly requiring license fees for commercial use of their Gopher server implementation. Subsequent posts clarified that this only applied to UMN gopherd, and then only to commercial users, and it's not clear exactly how much that license fee was or whether anybody actually paid, but the damage was done and the Web — freely available from the beginning — continued unimpeded on its meteoric rise. (UMN eventually relicensed 2.3.1 under the GNU Public License in 2000.) Hyper-G's principals would no doubt have known of this cautionary tale. On the other hand, they also clearly believed that they possessed a fundamentally superior product to existing servers that people would be willing to pay good money for. Indeed, just as they did with COSTOC, they had planned from the very beginning to spin Hyper-G/HyperWave off as a commercial enterprise.
Hyper-G, now renamed HyperWave, officially became a commercial product in June 1996. This shift was facilitated by the fact that no publicly available version had ever been open-source. Early server versions of Hyper-G had no limit on users, but once HyperWave was productized, its free unregistered tier imposed document restrictions and a single-digit login cap (anonymous users could of course still view HyperWave sites without logging in, but they couldn't post anything either). Non-commercial entities could apply for a free license key, something that is obviously no longer possible, but commercial use required a full paid license starting at US$3600 for a 30-user license (in 2025 dollars about $6900) or $30,000 for an unlimited one ($57,600). An early 1997 version of this commercial release appears to be what's available from the partial mirror at the Internet Archive, which comes with a license key limiting you to four users and 500 documents — that expired on July 31, 1997. This license is signed with a 128-bit checksum that might be brute-forceable on a modern machine but you get to do that yourself.
Fortunately, the CD from our HyperWave book, although also published in 1996, predates the commercial shift; it is a complete offline copy of the Hyper-G FTP server as it existed on April 13, 1996, with all the clients and server software then available. We'll start with the Hyper-G server portion, which on disc offers official builds for SunOS 4.1.3, Solaris 2.2 (SPARC only), HP-UX 9.01 (PA-RISC only), Ultrix 4.2 (MIPS DECstation "PMAX" only), IRIX 3.0 (SGI MIPS), Linux/x86 1.2, OSF/1 3.2 (on Alpha) and a beta build for IBM AIX 4.1.
The manual suggests modest but (for 1996) non-trivial system requirements, e.g., at least 48MB of RAM, a minimum of 50MB of disk for the server software and then around 120MB or so of disk for every 10,000 objects (their estimate). I pricked up my ears when I saw AIX 4.1 listed, because running it on our trusty Apple Network Server 500 would have been perfect: it has oodles of disk space (like, whole gigabytes, man), a full 200MHz PowerPC 604e upgrade, zippy 20MB/s SCSI-2 and a luxurious 512MB of parity RAM. I'll just stop here and say that it ended in failure because both the available AIX versions on disc completely lack the Gopher and Web gateways, without which the server will fail to start. I even tried the Internet Archive pay-per-user beta version and it still lacked the Gopher gateway, without which it also failed to start, and the new Web gateway in that release seemed to have glitches of its own (though the expired license key may have been a factor). Although there are ways to hack around the startup problems, doing so only made it into a pure Hyper-G system with no other protocols, which doesn't make a very good demo for our purposes, and I ended up spending the rest of the afternoon manually uninstalling it. In fairness, it doesn't appear AIX was ever officially supported.
Otherwise, I don't have a PA-RISC HP-UX server up and running right now (just a 68K one running HP-UX 8.0), and while the SunOS 4 version should be binary compatible with my Solbourne S3000 running OS/MP 4.1C, I wasn't sure if the 56MB of RAM it has was enough if I really wanted to stress-test it.
So I went with my Silicon Graphics Indy workstation as the server, which isn't completely ridiculous because SGI also sold the Indy headless as the Challenge S rackmount. This Indy has been upgraded to a 150MHz R4400SC, faster than the R4600 or baseline R4000, and second only to the higher R4400 tiers and the R5000. Even though its disk space is a little limited, something I need to rectify in the near future, it would be enough for our demonstration and it has 256MB of RAM. It runs IRIX 6.5.22 but that should still start these binaries.
That settled the server part. For the client hardware, however, I wanted something particularly special. My original Power Macintosh 7300 (now with a G4/800) sitting on top will play a supporting role running Windows 98 in emulation for Amadeus, and also testing our Hyper-G's Gopher gateway with UMN TurboGopher, which is appropriate because when it ran NetBSD it was gopher.floodgap.com. Today, though, it runs Mac OS 9.1, and the planned native Mac OS client for Hyper-G was never finished nor released.
Our other choices for Harmony are the same as for the server, sans AIX 4.1, which doesn't seem to have been supported as a client at all. Unfortunately the S3000 is only 36MHz, so it wouldn't be particularly fast at the hypermedia features, and I was concerned about the Indy running as client and server at the same time. But while we don't have any PA-RISC servers running, we do have a couple of choices in PA-RISC workstations, and one of them is an especially rare bird. Let's meet ...
...
ruby, named for HP chief architect Ruby B. Lee, who was a key designer of the PA-RISC architecture and its first single-chip implementation. This is an RDI PrecisionBook 160 laptop with a 160MHz PA-7300LC CPU, one of the relatively few PA-RISC chips with support for an L2 cache (1MB, in this case), and a member of the last and most powerful family of 32-bit PA-RISC 1.1 chips.
Basically a shrunken Visualize B160L, even reporting the exact same model number to HP-UX, it came in the same case as its better-known SPARC UltraBook siblings (I have an UltraBook IIi here as well) and was released in 1998, just prior to RDI's buyout by Tadpole. This unit has plenty of free disk space, 512MB of RAM and runs HP-UX 11.00, all of which should run Harmony splendidly, and its battery incredibly still holds some charge. Although the on-board HP Visualize-EG graphics don't have 3D acceleration, neither does the XL24 in our Indy, and the PA-7300LC will be better at software rendering than the Indy's R4400. Fortunately, the Visualize-EG has very good 2D performance for the time.
With our hardware selected, it's time to set up the server side. We'll do this by the literal book, and the book in this case recommends creating a regular user hgsystem belonging to a new group hyperg under which the server processes should run. IRIX makes this very easy.
Root desktop.
Manually creating the new group, which I also put myself into just in case.
The rest of the user setup can be done from the GUI System Manager.
Selecting "Security and Access Control" from the left frame, you can go into the User Manager and manually add from there, or just jump right into the add-a-user wizard from here.
Creating the user ...
... and, after a few more questions, selecting hyperg as the user's sole group membership, ...
... and the setup is complete. Its home directory will hold most of the files. The scripts and setup files we will install require this login to use the C shell or
tcsh, which is fine by me because other shells are for people who don't know any better.
Logging in and checking our prerequisites:
This is the Perl that came with 6.5.22. Hyper-G uses Perl scripts for installation, but they will work under 4.036 and later (Perl 5 isn't required), and pre-built Perl distributions are also included on the CD. Ordinarily, and this is heavily encouraged in the book and existing documentation, you would run one of these scripts to download, unpack and install the server. At the time you had to first manually request permission from an E-mail address at IICM to download it, including the IPv4 address you were going to connect from, the operating system and of course the local contact.
Fortunately some forethought was applied and an alternative offline method was also made available if you already had the tar archive in your possession, or else this entire article might not have been possible. Since the CD is a precise copy of the FTP site, even including the READMEs, we'll just pretend to be the FTP site for dramatic purposes. The description files you see here are exactly what you would have seen accessing TU Graz's FTP site in 1996.
You'll notice we downloaded two archives. One is the server proper, and the other is the toolset. Realistically you could run with just the server portion, but the toolset is necessary to do much useful with it, especially for a new installation.
While the server will live in ~hgsystem, a central directory (by default /usr/local/Hyper-G) holds links to it as a repository. We'll create that and sign it over to hgsystem as well.
Next, we unpack the first package (the server proper) and start the offline installation script. This package includes the server binaries, server documentation and HTML templates. Text in italics was my response to prompts, which the script stores in configuration files and also in your environment variables, patching the startup scripts for hgsystem to instantiate them on login.
Because port 418 is privileged, something with root privileges is required to bind the port. Wisely the team did make this piece open-source so a paranoid sysadmin could see what they were running as root (in this case setuid).
Now for the tools. This package includes administration utilities but also the hgtv client and additional documentation. The install script is basically the same for the tools as for the server.
Last but not least, we will log out and log back in to ensure that our environment is properly set up, and then set the password on the internal hgsystem user (which is not the same as hgsystem, the Unix login). This account is set up by default in the database and to modify it we'll use the hgadmin tool. This tool is always accessible from the hgsystem login in case the database gets horribly munged.
That should be all that was necessary (NARRATOR: It wasn't.), but starting up the server still failed.
It's possible the tar offline install utility wasn't updated as often as the usual one. Nevertheless, it seemed out-of-sync with what the startup script was actually looking for. Riffling the Perl and shell-script code to figure out the missing piece, it turns out I had to manually create ~hgsystem/HTF and ~hgsystem/server, then add two more environment variables to ~hgsystem/.hgrc (nothing to do with Mercurial):
Logging out and logging back in to refresh the environment,
... we're up!
Immediately I decided to see if the webserver would answer. It did, buuuuut ...
... it doesn't understand HTTP/1.1. Now, later versions of the server most certainly do, but this version doesn't. Sure, I could pull out an old copy of NCSA Mosaic, which would even be period-correct, but we're not going to just do the boring easy thing here, are we? Let's boot the PrecisionBook.
It's not just booted: it's HARD booted! (Sorry, since I intended to take direct framebuffer grabs instead of from the monitor port, the laptop wasn't connected to my usual capture rig.)
In CDE under HP-UX 11.0, at the LCD's supported maximum 1024x768 resolution. (uname identifies this PrecisionBook 160 as a 9000/778, which is the same model number as the Visualize B160L workstation.) Netscape Navigator Gold 3.01 is installed on this machine, and we're going to use it later, but I figured you'd enjoy a crazier choice. Yes, you read that right ...
... it's the relatively short-lived PA-RISC build of Microsoft Internet Explorer for UNIX, which officially had two major versions (4 and 5) and a final service pack I'll get around to installing one of these days. This build of IE5 (5.00.2013.1312) is the cryptographically hopeless 40-bit export version circa 1999 and can run on HP-UX 10.20 or later.
IE for Unix was in development as early as 1996 in an attempt to crush Netscape on its home turf, but got bogged down in a dispute (later a lawsuit) with Bristol Technology, who originally planned to do the port with their Wind/U compatibility layer. Wind/U was Bristol's flavour of the ill-starred Windows Interface Source Environment initiative, using licensed Windows source code to allow Win32 applications to be recompiled on other operating systems. (Insignia SoftWindows was another WISE implementation, but in this case used for actual Windows emulation.) After contract negotiations failed, Microsoft went to Bristol's competitor Mainsoft in 1997, who themselves had a WISE-based porting layer called MainWin, and Mainsoft completed the port of MSHTML and the rest of IE4 for Solaris as a beta in November. The HP-UX version and the Solaris release version subsequently emerged in February 1998, and IE5 in February (Solaris) and March (HP-UX) 1999.
IE Unix was only ever released for Solaris and HP-UX, along with MainWin-based ports of Outlook Express and Windows Media Player purely for Solaris, though Microsoft originally intended it to be much more widely available. Ports to IRIX and AIX were mentioned, and MainWin could reportedly run on those platforms as well as Tru64, but no version for them ever emerged. After releasing 5.0 SP1 in 2001, Microsoft cited low uptake of the browser and ended all support for IE Unix the following year. As for Mainsoft, they became notorious for the 2004 Microsoft source code leak when a Linux core file in the dump fingered them as the source; Microsoft withdrew WISE completely, eliminating MainWin's viability as a commercial product, though Mainsoft remains in business today, renamed Harmon.ie in 2010. IE Unix was a completely different codebase from what became Internet Explorer 5 on Mac OS X (with a completely different layout engine, Tasman) and of course is not at all related to modern Microsoft Edge either.
By default IE5 also uses HTTP/1.1 (Navigator Gold 3.01 only uses HTTP/1.0) but this can be turned off.
Once we do so, there's our gateway! Let's explore this a bit as a baseline, because we'll come back to this view a little later when the database is populated. We click the big "OK" to continue, which generates the session key and logs us in as an anonymous user.
Because there's nothing in the database right now, there's nothing to see (i.e., the root collection is empty), but there are icons we can click on. I remember these from the mid-90s when I stumbled on Hyper-G hosts. Notice the malformed HTML at the bottom, which should be fixable by tweaking the templates.
The first, identify, allows you to log in as a user (with a conventional HTTP 401 authentication dialogue). Again, you can't author from here, but doing so would allow you to access resources marked as restricted to your username, or pay resources that would charge your account.
The second shows options, just language (English or German) and settings about how the collection listing should be displayed, which right now are irrelevant.
If we click the who's online button in the options page, we come to a current activity list and performance monitor. This just shows my previous anonymous sessions, as well as the fact that this version of the server software wasn't Y2K-ready, but if others were logged in they would also appear here. Possibly because there are Y2K issues, the server fails to calculate its own uptime, but everything else basically works.
Finally, the help button brings up very basic on-line help, which is provided from the static files served by the Web gateway and can also be customized. The last button, home, just takes us back where we were.
Let's also have a look at the Gopher gateway. Unfortunately, the generated Gopher menu is much more primitive and doesn't use the i itemtype, making it look even more impoverished. Compared to the Floodgap gopher, our new Hyper-G gopher just has a search item to search the empty database and a spurious extra menu item from a spurious extra dot.
We'll need to author something to do anything more meaningful at this point, so I suppose this is a good time to install the Harmony client. Let's do our hardware survey and check prerequisites.
Other than the CD-ROM, which was originally connected to the PrecisionBook's external SCSI port behind one of its side doors, you've seen everything else before.
In the Hyper-G world there are PDF-like documents, except that they're actually PostScript, not PDF. (PDF can be considered a descendant of PostScript, and the PDF specification was made public in 1993, but it was still proprietary in those days and not many people used it.) To view them we'll need Ghostscript installed, and we do have an old version ready to run.
Otherwise images and most other media types are handled by Harmony itself, so let's grab and set up the client now. We'll want both Harmony proper and, later when we play a bit with the VRML tools, VRweb. Notionally these both come in Mesa and IRIX GL or OpenGL versions, but this laptop has no 3D acceleration, so we'll use the Mesa builds, which are software-rendered and require no additional 3D support.
There is no specific installation process; you just put everything where you want it, so I'll use our /pro logical volume which has ample space.
Because this build is for an earlier version of HP-UX, it should run, but we'd want to make sure it isn't using outdated libraries or paths. Unfortunately, checking for this in advance is made difficult by the fact that ldd in HP-UX 11.00 will only show dependencies for 64-bit binaries and this is a 32-bit binary on a 32-bit CPU:
So we have to do it the hard way. For some reason symlinks for the shared libraries below didn't exist on this machine, and I had to discover that one by one.
After a couple forged links it seems able to start. We'll set that environment variable in a moment. Fortunately the same shared library symlinks were enough for VRweb as well:
Finally, I created a startup script to get it going, using the settings recommended in the book. We need the -hghost option passed to Harmony or it will connect to the IICM by default.
Ta-daa! Harmony is running. We have nothing in our local Hyper Root but ourselves, and nothing in our root collection, but we should now be able to populate it.
Besides Harmony itself, a process handling text is also running (we'll see it momentarily). Other handlers start up as necessary.
Our next step is to create a non-privileged user. This will be largely for illustrative purposes since I'm going to be creating all the content under hgsystem anyway, but in a larger deployment you'd of course have multiple users with appropriate permissions. Hyper-G users are specific to the server; they do not have a Unix uid. Users may be in groups and may have multiple simultaneously valid passwords (this is to facilitate automatic login from known hosts, where the password can be unique to each host). Each user gets their own "home collection" that they may maintain, like a home directory. Each user also has a credit account which is automatically billed when pay resources are accessed, though the Hyper-G server is agnostic about how account value is added.
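A quick sketch of that user model in Python (the field names are mine, not Hyper-G's):

    # Each Hyper-G user is server-local: several simultaneously valid
    # passwords (one per known host, enabling automatic login), a home
    # collection, and a credit account debited when pay resources are read.
    users = {
        "newuser": {
            "passwords": {"ruby.test": "s3cret-ruby", "indy.test": "s3cret-indy"},
            "home_collection": "~newuser",
            "credit": 0,
        },
    }

    def authenticate(name: str, host: str, password: str) -> bool:
        u = users.get(name)
        return u is not None and u["passwords"].get(host) == password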
We can certainly whip out hgadmin again and do it from the command line, but we can also create users from a graphical administration tool that comes as part of Harmony. This tool is haradmin, or the Harmony Administrator.
The Administrator connects to the Hyper-G host on port 418 as usual.
Naturally we'll have to login to actually do anything, using the password we set previously.
The one user on the system is us. We'll click New.
Creating the new user. I thought about coming up with a stablecoin or something to denominate the value for this account (we'll call it Grazcoin) but that can be an exercise for another time.
The user is immediately live with the new password. I have not created a group for them. We'll revisit authentication and permissions (or in Hyper-G jargon, its "Rights") when we discuss annotations at the end.
Back in Harmony proper, we log in through the Identity option.
Looking at the attributes for the Hyper Root. Although it's called a "Document," this is only to distinguish it from anchors — the DocumentType is what we'd consider the "actual" object type. By default, all users, including anonymous ones, can view objects, but cannot write or delete ("unlink") anything; only the owner of the object and the system administrators can do those. In practical terms the unprivileged user with no group memberships we created has the same permissions as an anonymous drive-by right now. However, because this user is authenticated, we can add permissions to it later. I've censored the most significant word in this and other screenshots with global IDs for this machine because it contains the Indy's IPv4 address and you naughty little people out there don't need to know the details of my test network.
Similarly, here's the root collection properties. There is nothing for us to view, of course, so let's now try to mass populate the database.
The snippets of the database I showed you are real fragments of a HIF archive containing technical documentation for that later version of the server, but with a little massaging we can import that HIF database into our older server. I had to manually remove duplicate names from the file and manually adjust parents where one of those names was actually used somewhere else. I then copied it over to the Indy and imported it. Here are some highlights.
We told the importer to attach these new collections to our default root collection (rootcollection). The import then proceeds in three passes. The first pass just loads the objects and the second sets up additional collections. This dump included everything, including user collections that did not exist, so those additional collections were (in this case desirably) not created. In the third and final pass, all new anchors added to the database are processed for validity. Notice that one of them referred to an outside Hyper-G server, which the import process duly attempted to contact, as it was intended to.
In the end, the import process successfully added 75 new collections and 528 documents with 596 anchors. Instant content!
To make sure it stuck, I fired up hgtv, the line-mode Hyper-G client, on the Indy. This client, or at least this version of the client, does not start at the Hyper Root and we are immediately within our own root collection. The number is the total number of documents and subdocuments within it.
Entering the new master collection. Our Y2K woes persist, though here those issues are largely cosmetic. The "T" indicates this is a text document (actually HTF).
hgtv understands HTF, so we can view the documents directly. Looks like it all worked. Let's jump back into Harmony and see how it looks there too.
We have a nice tree view in Harmony, and the document shows up in a separate window. This is the text handler process we saw running quietly in the background. Notice the red check marks that appear as documents are loaded and cached locally.
If you flip back earlier in this article, I pointed out a text file in the database that had anchors in it. This is that document, as Harmony displays it, with hyperlinks highlighted. (You can change the appearance of Harmony controls, windows and text The X11 Way by changing the X resources to taste. Insert anti-Wayland snark here.)
We can follow the link to the C program source itself.
Here's the HTF for it, viewable the same way you might use the now almost quaint View Source feature in your web browser. (Again, while many of the tags have similar, and in some cases semantically exact, equivalents in HTML, HTF and HTML are not the same thing.)
And, because links are bi-directional, the Local Map (shown by the handy tooltip) can show what links to this file and what files link to it, configurable out as far as you'd like to query. This map was nearly instantaneous to generate — it's just a handful of database queries, after all.
Everything we loaded is also instantly searchable, full text and titles, you name it. We didn't have to lift a finger to make that happen.
Some things don't quite work, though. I mentioned that HTF is not HTML, and if you try to view an HTML document, this version of Harmony can't handle it. (A consequence of this will come up when we get to linking web sites.)
In particular, there is no support for forms. Indeed, if we look at the text window from where we started Harmony, we can see all the objections the parser made.
I couldn't get IE5 to properly load up the page, but Netscape would. But, because this version of Hyper-G doesn't properly support CGI and the templates aren't set up for it, it fails anyway. Still, that would be enough to view HTML documents.
Let's author a few things of our own. For this, we'll go back to logging in as hgsystem.
Since our new content reflects a later version of the server, we should add a disclaimer. We'll do this by creating a new HTF text document.
By default, the editor is emacs, but this house is Team vi, and we will brook no deviations. For CDE, though, dtpad would be better. You can change the X resource for Harmony.Text.editcommand accordingly. Here I've written a very basic HTF file that will suffice for the purpose.
Once we save the temporary document and close the editor, the text viewer offers to upload it for us, which we accept.
For any last minute changes the properties of the new item remain open, but we're done now, so we just click Close.
Our document duly appears and is viewable. However, it lands down at the bottom, which doesn't seem like the right place for it. You can certainly change the sort order, or try tricks like giving it an AAAAA name to sort it to the top, but Hyper-G has a better way.
In the attributes for our new item, we add a new property "Sequence." Items without a Sequence follow the default sort order, but items with one will override. This is a signed integer, so negative values are allowed, and we'll use one to force it to the top of the collection. (The deep red just means it was changed, not that it was [necessarily] invalid.)
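In other words, the ordering rule behaves roughly like this (one plausible reading of it, sketched in Python):

    def collection_order(items):
        # items: list of dicts like {"title": ..., "sequence": int or None}.
        # Items carrying a Sequence sort first by that signed value (negative
        # floats to the very top); the rest keep the default order after them.
        with_seq = sorted((i for i in items if i["sequence"] is not None),
                          key=lambda i: i["sequence"])
        without = [i for i in items if i["sequence"] is None]
        return with_seq + without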
Done!
You can insert PostScript documents the same way. Here's a real one that isn't already part of the tech docs dump, explaining the tags and format of HTF. Sounds like a good thing to put in the database too, right?
We'll insert it into the HTF subcollection of the tech docs (a reasonable place for it) by providing the filename and clicking Insert. By the way, you can do this with HTF too — you're not obligated to use the internal editor.
Then, when we double click it, Harmony's PostScript viewer starts up and downloads the document ...
... and displays it. This is handled by Ghostscript in the background, but the PostScript viewer makes it seamless — and, because PostScript has text within it, it's instantly searchable too.
Let's add a completely new collection of our own and show how Hyper-G handles links to outside sites. This will be a Floodgap-specific collection.
We'll sort it to the top, as appropriate for the hundreds and thousands of no people coming to view it.
We'll then add links to the Floodgap web server ...
... and the Floodgap gopher server. Because Hyper-G knows what these protocols are, they each get options appropriate to them (such as, with Gopher, the selector). Also, notice that both of these resources could be inserted with their own Rights attached.
The reason is that when you access them with a Hyper-G client, the server itself fetches the content and pulls it through. This only sort of works for Web sites ...
... and, at least for this version of the server, not well at all for Gopher menus. But the fact you can assign rights to them could be a way to manage a paywall or restricted access by having a multihomed Hyper-G server where the internal Web or gopher resources are only accessible through the Hyper-G box, effectively turning it into a simple translating proxy. The Hyper-G box then gates views and assigns charges as applicable. Could your server do that in 1996?
Let's explore anchors a bit more now. For this part I'm going to make a Hyper-G version of the Floodgap machine room page (which hasn't been updated since 2019, but I'll get around to it soon enough).
As a point of contrast I've elected to do this as a cluster rather than a collection so that you can see the difference. Recall from the introduction that a cluster is a special kind of collection intended for use where the contents are semantically equivalent or related, like alternative translations of text or alternative formats of an image. This is a rather specific case so for most instances, admittedly including this one, you'd want a collection. In practice, however, Hyper-G doesn't really impose this distinction rigidly and as you're about to see, a cluster mostly acts like a collection by a different term — except where it doesn't.
This page is organized around a large panoramic view of the lab, so we start off by uploading that image.
Harmony has its own image viewer; it does not depend on any existing image conversion or viewing utilities. The loading and scaling process is not fast on our 160MHz machine, but an 1894x700 image would have been a lot for a modest machine in the late 1990s.
Uploading the rest of the images.
For the HTF I manually massaged the original HTML into HTF using find-and-replace and some adjustments to content which HTF cannot express, then uploaded it to the laptop, which in turn here is uploading it to the server.
And here's that file, which mostly survived the conversion intact.
You can set up the anchor relationship in any order, but here we'll start by defining a destination anchor, and then source anchors to connect to it. This is one of the detail views of the room. Image links are done as polygons, much like the old HTML <map> for imagemaps.
In the main view of the room we'll define another polygon and make it the source anchor, i.e., what you click on to go to the destination.
The link creator pops up with those polygons and we create the link. Now, when you click the A and arrow in the main view, it pops to the detail view.
Similarly, the main view can also link to portions of the text directory. We'll define the name of alex, our beige-box Am5x86 DOS games machine, as the destination.
The source will be the actual number in the main view.
The source anchor is again a polygon, but the destination anchor is now a text range. Linked!
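For the curious, resolving a click against polygon source anchors is just a point-in-polygon test; a client could use standard even-odd ray casting like this sketch (certainly not Harmony's actual code):

    def point_in_polygon(x, y, poly):
        # poly: list of (x, y) vertices. Cast a ray to the right and count
        # edge crossings; an odd count means the click landed inside.
        inside = False
        j = len(poly) - 1
        for i in range(len(poly)):
            xi, yi = poly[i]
            xj, yj = poly[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

    print(point_in_polygon(2, 2, [(0, 0), (4, 0), (4, 4), (0, 4)]))  # True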
And we can do it the other way. Here, we'll have a source anchor in the text linking to a polygon in the main image.
You will have noticed, however, that some epenthetic-ish errors snuck in and I think there was a bug in the way this version of Harmony handled text ranges. Here I'm editing the HTF in place to fix that. When editing in-place, the anchors are turned into HTF tags and inserted in their proper locations so they can be changed in-line. Instead of opening and closing anchor tags, HTF instead uses start and end anchor tags and no closing tags, using the ID to distinguish them.
Corrected. When the text handler uploads it, it will strip the anchor tags back out and turn them into anchors in the database.
We can either follow it manually from the menu ...
... or click on it, and it will open the main image and highlight the polygon. One could even envision a zooming client where the polygon would be scrolled into view.
Our other link goes back the other way, too.
As these are all anchors, they turn up in the local map, and you can duly view the relationships between them graphically. Anchors can be created in any document type. In PostScript, they're just text ranges too; in movie files, they would be polygons covering a particular range of frames; in audio files, they would be a sample range.
You probably didn't see any difference between treating the machine room as a cluster and a collection, but in actual use the client will assume that, being a cluster, you don't want multiple images from the cluster open simultaneously because theoretically they are the "same" image (for various values of "same"). This made creating links more difficult than these screenshots would indicate, mitigated by the fact that the link creator will remember recent anchors even if those documents aren't open anymore. However, if you have text in the cluster also, then the text viewer and the image viewer can be open at the same time — I guess for people who might do text descriptions for the visually impaired, or something. Clusters appear differently in two other respects we'll get to a little later on.
Hyper-G really wanted to lean into visualization in a big way, and VRML, the Virtual Reality Modeling Language (formerly Markup Language), was nearly a first-class citizen. Most modern browsers don't support VRML and its technological niche is now mostly occupied by X3D, which is largely backwards-compatible with it. Like older browsers, Hyper-G needs an external viewer (in this case VRweb, which we loaded onto the system as part of the Harmony client install), but once installed VRML becomes just as smoothly integrated into the client as PostScript documents. Let's create a new collection with the sample VRML documents that came with VRweb.
The process of uploading them is much the same, but notice there is a new document type we're using called Scene.
Such documents get their own special icon and appear as scenes in the attributes view. Again, you can assign rights and permissions to this like any other document.
As with Ghostscript, VRweb is subsumed into the Harmony native viewer, and you can move around in the scene just as you would in VRweb from the command line.
You can also reportedly assign anchors to VRML objects. Although I wasn't able to demonstrate this for you, it's logical that you could, and most likely this failure is just a manifestation of my usual personal incompetence.
Harmony itself can also generate 3D visualizations, including of its entire "world." For this we'll open the Landscape view.
By default this opens up on the Hyper Root, which in our little private hell is just us, and follows the connections out in the distance.
In general the height of the bar represents the file size. Blue, here, represents a cluster (the first of those two other visible differences) but can be navigated like any other collection.
It's too bad scenes couldn't have been inserted into this view and zoomed, but that would probably have been too much for the machines of the era.
As you select the bars, they highlight, and can be opened — here, one of our images, and then one of our VRML scenes.
If this kind of network visualization seemed familiar to you, here's some prior art for the prior art: the SGI File System Navigator (fsn), as most famously seen in 1993's Jurassic Park. Jurassic Park is a candy store for vintage technology sightings, notably the SGI Crimson, Macintosh Quadra 700 and what is likely an early version of the Motorola Envoy, all probably due to Michael Crichton's influence.
Although FSN (say it "fusion") existed in the same time frame as Hyper-G, it almost certainly precedes Harmony's landscape view and was likely a strong influence on it. It didn't exactly develop into a comprehensive file manager, but it does work, and even though the Indy was famously derided as "an Indigo without the Go" that jibe applies more to the original R4K processor than this upgraded one which runs it just fine. Still, Harmony's landscape view was probably one of the first automatically generated such views for online resources, made possible by the fact that edges could be walked in both directions and retrieved rapidly from the database.
Just for yuks, I threw the site into the classic Mac version of GopherVR, which came out in 1995 and post-dates both FSN and earlier versions of Harmony, but it now renders a lot better with some content. (I do need to get around to updating GopherVR for 64-bit.)
As far as the rest of the Gopher version, you can now browse collections, which appear as Gopher menus.
Here there is no difference between collections and clusters; they are both treated as menus.
Full-text search works fine from the Gopher side, which is treated as a Gopher item type 7 indexed search server, and documents and collections appear as search results.
Since many Gopher clients of the era did not understand HTML, the server simply turns HTF into plain text, though it doesn't seem to do it as successfully as hgtv did. This problem likely never got fixed because the beta versions on the Internet Archive unfortunately appear to have removed Gopher support entirely.
Likewise, collections now appear as clickable links in the web view. We are logged in as Anonymous.
Attributes are also viewable from the web interface, with the same Y2K issues.
While there were various problems turning HTML into HTF, turning HTF into HTML yields much less loss of fidelity.
Interestingly, links to outside URLs like Gopher and WWW here are just links, not proxied (as shown by the Gopher link I'm hovering over in the bottom status bar). Again, however, you could set these links to be invisible to people who weren't logged in or authorized (i.e., remove read rights), and/or require them to use the Hyper-G client to reach those sites; if the sites weren't otherwise externally accessible, that would effectively paywall or restrict them.
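Here's a minimal sketch of what that access model implies, with semantics I'm assuming rather than quoting from the Hyper-G sources: when the server assembles a page, it simply omits any link whose target the requesting user can't read, so an unauthorized user never even learns the resource exists.

    def readable(user, doc):
        """True if the user holds read rights on the document; a missing
        ACL (None) is taken to mean world-readable. Assumed semantics."""
        acl = doc.get("read")
        if acl is None:
            return True
        if user is None:  # anonymous
            return False
        return (user["name"] in acl.get("users", ()) or
                bool(set(user["groups"]) & set(acl.get("groups", ()))))

    def visible_links(user, doc):
        """Render-time filter: unreadable targets simply vanish."""
        return [link for link in doc["links"]
                if readable(user, link["target"])]

    # The Gopher reference is restricted to group lusers, so an anonymous
    # visitor sees only the world-readable WWW reference.
    gopher_ref = {"read": {"groups": ["lusers"]}}
    www_ref    = {"read": None}
    page = {"read": None,
            "links": [{"target": gopher_ref}, {"target": www_ref}]}
    assert len(visible_links(None, page)) == 1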
The status view again, for thoroughness, though it does not seem to have changed much from when nothing was loaded (other than all of my various test sessions).
Finally, the second way in which clusters seem to be treated differently: instead of a collection menu with the images and text, everything is consolidated together on one page — because they're supposed to be "the same thing," right? For this purpose, doing so is arguably even desirable. The links still go to the individual resources rather than jumping from fragment to fragment, though.
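As a sketch of that rendering difference (again my own reconstruction, not the gateway's actual code): a collection becomes a menu of links, while a cluster's members are concatenated into a single page, each still pointing at its individual resource.

    def render(container):
        """Collection -> a menu of links; cluster -> one consolidated page
        whose members still link to their individual resources."""
        if container["kind"] == "collection":
            items = "".join(f'<li><a href="{m["href"]}">{m["title"]}</a></li>'
                            for m in container["members"])
            return "<ul>%s</ul>" % items
        # Cluster: members are inlined together rather than fragment-linked.
        return "".join(f'<div>{m["body"]} '
                       f'<a href="{m["href"]}">[{m["title"]}]</a></div>'
                       for m in container["members"])

    cluster = {"kind": "cluster", "members": [
        {"title": "photo",   "href": "/pic.gif", "body": "<img src=/pic.gif>"},
        {"title": "caption", "href": "/cap.txt", "body": "<p>A caption.</p>"},
    ]}
    print(render(cluster))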
The other major client was Amadeus, which is Windows-specific. A later version of Amadeus is in the Internet Archive mirror but we'll stick with the beta version 1.00.001 on the CD just in case the later one tries to do operations this server doesn't like. Although I'm not going to demonstrate authoring in Amadeus, it is fully capable of doing so, and uniquely offers RTF-to-HTF translation so you can use most Windows word processors to write documents instead of dinking around in a text editor by hand. It can run on Windows 3.1 with Win32s, or Windows 95 and later. Here we'll run it under Windows 98 in Virtual PC 3.0 on the Power Macintosh 7300.
Starting the setup program.
I don't really miss installers thinking they can dump everything in C:\...
... or having to abbreviate \Program Files as \PROGRA~1.
This version of Amadeus is distributed in multiple floppy-sized archives. That was a nice thing at the time but today it's also really obnoxious. Here are all the points at which you'll need to "switch disks" (e.g., by unzipping them to the installation folder):
And with that ...
... the Program Manager group is made and installation is complete.
I'm sure Mozart would have approved of the splash screen (insert Simpsons music).
Naturally, Amadeus tries to access the IICM by default, which fails.
Pointing it at the Indy, which conveniently is sitting right under the Power Mac.
This version of Amadeus has an interface closer to hgtv than to Harmony, which to be sure was getting the majority of development resources. In particular, there isn't a tree view; you just move from collection to collection as individual menus.
When documents are viewed, they appear in a pane on the left with the current collection on the right. Amadeus does not treat the cluster like a collection but rather as a unified document, which again is appropriate, but interestingly it had difficulty accessing the images. If this error actually reflects a problem starting the viewer, it's not obvious.
The other documents appear fine.
About box and credits. There is also a Windows-based "Easy" standalone viewer suitable for public kiosks, basically a stripped-down Amadeus (the same or similar codebase) with a different interface, server-customizable styling, and no authoring support. The ZIP files for earlier versions of Easy could fit on a single 1.44MB floppy. I won't be demonstrating it here, but you can download and play with it if you like.
These clients could also be set up with an offline database and run independently, even completely off the network. This feature didn't get much public traction but apparently was used in a few educational settings like museums.
As our last stop, I want to go back to annotations and permissions, which I think have a uniquely strong claim on prior art because they anticipate the more participatory features of "Web 2.0" (however you would define it) but years earlier.
Let's consider this setup where we have two unprivileged users and the system administrator. We will put both unprivileged users (blofeld and spectre) into the lusers group.
Annotations let you tag documents you create onto documents other people have created. They can be comments or replies; they can link to those documents, or to ranges within them, and reference them. You know what else works that way? Blog comments. So do Web forums. Let's make the Hyper-G equivalent.
Here I'm creating a collection where "threads" will live. Notice that I've given this collection a special Rights string W:g lusers, which keeps the default read and unlink permissions but specifically allows users in group lusers to create and modify documents here.
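The exact syntax of Rights strings beyond what's shown here is my assumption, but the idea is easy to model: each field names a right (W for write, U for unlink, and so on), a scope (u for user, g for group) and a principal, and any right not mentioned falls back to the server default. A toy Python evaluator:

    def parse_rights(rights):
        """Parse a string like 'R:g staff;W:g lusers' into
        {right: [(scope, principal), ...]}. Assumed field syntax."""
        table = {}
        for field in filter(None, rights.split(";")):
            right, spec = field.split(":", 1)
            scope, principal = spec.strip().split(None, 1)
            table.setdefault(right.strip(), []).append((scope, principal))
        return table

    def may(user, groups, right, table):
        if right not in table:
            return None  # not specified: the server default applies
        return any((scope == "u" and principal == user) or
                   (scope == "g" and principal in groups)
                   for scope, principal in table[right])

    acl = parse_rights("W:g lusers")
    assert may("blofeld", ["lusers"], "W", acl) is True
    assert may("guest", [], "W", acl) is False
    assert may("guest", [], "R", acl) is None  # default read still applies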
Now, as the administrator, I'll create a thread for display purposes, though the permissions as stated would let either of the two unprivileged users do it.
Switching to blofeld, because you can never say never again, I will now annotate that "thread." (Oddly, this version of Harmony appears to lack an option to annotate directly from the text view. I suspect this oversight was corrected later.)
Fair use, Amazon MGM! Fair use! Fair use! If DC can have their Spectre, so can I! As this is an annotation, we post our text with the special document type Annotation rather than as a regular text document. Because the write permissions for this collection are different, any new annotations default to them instead. I'll come back to this in a minute.
Uploading that text, it links back to the thread and is live.
spectre can post too.
As a visualization note, these are all treated as individual documents despite being tagged as annotations. Again, this sounds like something that was, or should have been, corrected on the client side, because the links between them are easily discoverable and the comments are marked as such in the database.
Indeed, since they're implemented with anchors between them, the Local Map will show the edges and what replies to what.
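That's really all a client needs to render threads: since each annotation records what it annotates, a discussion tree falls out of one pass over the links. A sketch, with a made-up flat list standing in for the server's query results:

    posts = [
        {"id": 1, "author": "admin",   "annotates": None},  # the "thread"
        {"id": 2, "author": "blofeld", "annotates": 1},
        {"id": 3, "author": "spectre", "annotates": 1},
        {"id": 4, "author": "spectre", "annotates": 2},     # a reply to a reply
    ]

    def print_thread(parent=None, depth=0):
        """Walk the annotation links recursively, indenting replies."""
        for post in posts:
            if post["annotates"] == parent:
                print("  " * depth + "#%d by %s" % (post["id"], post["author"]))
                print_thread(post["id"], depth + 1)

    print_thread()
    # #1 by admin
    #   #2 by blofeld
    #     #4 by spectre
    #   #3 by spectre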
By the way, you can annotate collections too. We wouldn't want threads posted in that fashion but it's a logical outcome of the permissions.
Unfortunately, this scheme is not perfect: since blofeld and spectre have the same permissions and the default allows anyone in the group to write, without taking some explicit steps they can edit each other's posts with impunity. To wit, we'll deface blofeld's comment.
Editing it in place with an extremely classy bit of vandalism. Notice the "do not edit" metadata that is also part of the edit session.
You could get around this by having each user set more restrictive Rights on annotations as they post them. That would have been a pain in Harmony, but a clueful Hyper-G client might do just that as a convenience, and could also implement the tree view graphically. On the server side, Hyper-G was already equipped to support interactive content like this and, at least early on, the Hyper-G team even encouraged that sort of creative client software. Who's Web 2.0 now?
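Such a client convenience might look like this sketch, where every method name is hypothetical (I'm not quoting any real Hyper-G client API): immediately after uploading the annotation, the client tightens its Rights so only the author can modify or unlink it.

    def post_annotation(server, author, target_id, text):
        """Upload an annotation, then immediately restrict its Rights so
        only its author may write (W) or unlink (U) it; reads stay at
        the default. Every method name here is hypothetical."""
        doc_id = server.insert(parent=target_id, body=text,
                               doctype="Annotation")
        server.set_attribute(doc_id, "Rights",
                             "W:u %s;U:u %s" % (author, author))
        return doc_id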
That concludes our demonstration, so on the Indy we'll type dbstop to bring down the database and finish our story.
In 1997 HyperWave became an independent company with Maurer as chairman and offices in Germany and Austria, later expanding to the US and UK. Gopher no longer had any large-scale relevance and the Web had clearly become dominant, so HyperWave gradually de-emphasized its own native client in favor of uploading and managing content with more typical tools like WebDAV, Windows Explorer and Microsoft Word, and administering and accessing it with a regular web browser (offline operation was still supported), as depicted in this screenshot from the same year. Along the way the capital W got dropped, becoming merely Hyperwave. In all of these later incarnations, however, the bidirectional linkages and strict hierarchy remained intact as foundational features in some form, even though the massive Hyper Root concept contemplated by earlier versions ultimately fell by the wayside.
Hyperwave continues to be sold as a commercial product today, with the company revived after a 2005 reorganization, and the underlying technology of Hyper-G still seems to be part of the most current release. As proof, at the IICM (now, after several name changes, the Institute of Human-Centred Computing, with Professor Frank Kappe as its first deputy) there's still a HyperWave [sic] IS/7 server. It has a home collection just like ours with exactly one item: the home page of Hermann Maurer, who as of this writing still remains on Hyperwave's advisory board.
Although later products have attempted to do similar sorts of large-scale document and resource management, Hyper-G pioneered the field by years, and even smaller such tools owe it a debt either directly or by independent convergent evolution. That makes it more than appropriate to appear in the Prior Art Department, especially since some of its more innovative solutions to hypermedia's intrinsic navigational issues have largely been forgotten — or badly reinvented. That said, unlike many such examples of prior art, it has managed to quietly evolve and survive to the present, even if by doing so it lost much of its unique nature and some of its wildest ideas. Of course, without those wild ideas, this article would have been a great deal less interesting.
You can access the partial mirror on the Internet Archive, or our copy of the CD with everything I've demonstrated here and more on the Floodgap gopher server.