Nooface: In Search of the Post-PC Interface

What is a post-PC interface?

A post-PC interface describes the method by which users interact with post-PC computing devices, which may have immersive, mobile, or ubiquitous ("invisible") characteristics. While traditional PC interfaces are optimized primarily for steadily increasing processor performance, post-PC interfaces target additional design points, including global networking, advanced graphics capabilities, and vast storage systems.

What are post-PC computers?

Post-PC computers are emerging computing devices that are operated directly by end-users, but break out of the traditional PC mold. Some examples are mobile computing devices such as smart phones, PDAs, and digital music players; immersive devices for stationary users, such as HDTV-based home entertainment systems and next-generation video games; and ubiquitous (a.k.a. "invisible") computing devices that blend into the natural environment and require little or no active involvement from their users.

How can you use the term "post-PC" when PC companies are doing more business than ever?

The term "post-PC" does not imply that the PC is somehow vanishing or becoming irrelevant; mainframes and minicomputers were also once declared obsolete, yet both platforms remain in wide use today. Rather, it implies that, like its predecessors, the PC will remain important even as its central role as a driver of innovation in the technology industry wanes. Most of the leading PC suppliers have reduced their investments in product innovation, focusing instead on improving their manufacturing and distribution capabilities to become more price-competitive in what has become a commodity market. As a result, the task of innovation has fallen to fewer and fewer players in the PC industry, who are now struggling to create meaningful reasons for users to upgrade. This has created an opportunity for new "post-PC" technologies to break in with fundamentally new approaches to providing computing experiences.

Who will use post-PC computers?

Nearly 1 billion PCs have shipped in the last 25 years, most of which run some form of Microsoft Windows. But for the global population of 6.5 billion people, most of whom have not yet even begun to use computers, post-PC devices will eventually emerge as the standard computing platform. Of course, a variety of factors such as language, education, and cost still represent major barriers for many of those users to gain access to computers. But despite the seemingly intractable price points of key hardware components such as displays, the overall cost of computer hardware continues to drop, with disposable computing devices already on the horizon. It is the goal of this site to identify ways to overcome the other obstacles through innovative user interface approaches.

Why shouldn't post-PC devices have the same user interfaces that already exist?

Thanks to environments such as MacOS and Windows, the "Windows, Icons, Menus, Pointing Device" (WIMP) user interface that we are all familiar with has been phenomenally successful at bringing computing power into the hands of a vast new set of users. Indeed, for many users, there is no reason to use anything other than the traditional interfaces they use today. But the fact remains that the current method of managing information based on files, hierarchical folders, and the desktop metaphor has not changed since 1984, and is becoming obsolete in today's web-centric computing environment. WIMP, which was first conceived in the 1960s and then commercialized during the 1970s and '80s, was designed for a computing environment very different from the one that exists today. The Xerox PARC research from which much of the original Macintosh design was derived was aimed primarily at office automation for small workgroups and a few thousand documents. By contrast, the web is a shared information environment for millions of users and potentially billions of documents. Hardware capabilities have also evolved dramatically since then, including continuously growing CPU and graphics performance, and storage capacity growth that is outpacing Moore's Law. It simply no longer makes sense to run 1984 software on current hardware.

How will post-PC interfaces be different from traditional user interfaces, such as MacOS or Windows?

Ever since the Apple Macintosh was introduced in early 1984, the WIMP paradigm has defined the standard for user interfaces. Indeed, the WIMP interface is now so familiar to most users that it can be hard to grasp that other models for a user interface are even possible. But since most users of post-PC devices will never have used PCs, just as most PC users had no experience with mainframes or minicomputers, they will have few expectations of similarity to PC user interfaces. Thus, designers of post-PC interfaces can discard most of the assumptions and constraints of the WIMP approach, striking out in new directions with little or no regard for backwards compatibility. While no clear successor to the WIMP interface has yet been determined, several post-PC computer usage models have emerged, each of which will require interfaces quite different from those of the PC (see Figure 1). These include:
  • Immersive computing, i.e. the way that sedentary users in home or office environments experience their systems. End-user system designs have evolved dramatically since the PC was originally introduced, with greatly increased graphics capabilities in next-generation processors; larger display sizes as consumers embrace high-definition television (HDTV); ever-increasing network bandwidth available to end-users; continuously growing disk storage capacity; virtualization, in which software runs independently of underlying hardware characteristics; and the emergence of next-generation recording media such as Blu-ray and HD-DVD. These design points enable users to engage with their computers more deeply than ever before through a variety of visual computing approaches, including game interfaces; 3D user interfaces; advanced data visualization applications; and increasingly life-like virtual worlds.
  • Mobile computing, i.e. the way that users work with devices that are not tied to a single location. These devices are becoming the de facto standard interface for the majority of users simply through the sheer volume of their adoption, spurred by the growing popularity of mobile phones and digital music players such as the Apple iPod. Mobile computing devices are becoming increasingly powerful and connected, creating a need for new interfaces that are geographically aware and optimized for smaller screens and keypads.
  • Invisible computing, sometimes also referred to as "pervasive computing" or "ubiquitous computing", describes a somewhat contrarian approach that seeks to simplify and reduce the visibility of computing devices in order to minimize the effort and expertise required of their users. Examples include speech and audio interfaces; simplified information appliances optimized for a specific task; and specialized ambient computing devices designed for passive interaction. The growing use of smart sensor devices based on Radio Frequency Identification (RFID) will also enable technology to be embedded in the real world ever more transparently.

FIGURE 1: Post-PC Computing Trends

Are fundamental user interface improvements even possible? Won't most UI enhancements just be limited to modest changes in look-and-feel?

A variety of incremental usability improvements to user interfaces are indeed possible, e.g. through techniques such as gesture input, support for new kinds of peripherals, and even improved command-line interfaces. However, the superficial "look-and-feel" of an interface is distinct from the more fundamental issue of how the interface represents data to its users. In this regard, potentially dramatic improvements are possible. Again, the hierarchical directory (i.e. "file folder") method of representing data has been very successful in the context of single-user machines and small workgroups, but it may not be suited to effectively managing data on the scale of global networks like the web. In this environment, the concept of a "file system" may evolve into more scalable data management models based on relational or temporal structures, in which users tag data with a variety of attributes reflecting its context and meaning, rather than referencing it by a unique file name. As data becomes "free", both in terms of its control and the cost of its storage, value will increasingly shift to the control of such metadata (i.e. "data about data"), which can be shared through peer-to-peer searches and explored with visual user interfaces such as mind maps. However, it is debated whether global metadata can be reliably assigned in a public network, or can only be extracted from implicit factors. Some experts are betting on the former with initiatives such as the semantic web, but critics charge that any labeling system will be at risk of corruption by off-topic tags (e.g. spam) or insincere tags (e.g. trolls). By contrast, leading-edge search interfaces such as Google have been extraordinarily effective at eliciting the "meaning" of data by analyzing its text and the web links connecting it, rather than depending on humans to categorize it.
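The shift from unique file names to attribute-based lookup can be illustrated with a minimal sketch. The class and method names below are hypothetical, invented for illustration; a real system would use a persistent index and a richer query language:

```python
# A minimal sketch of attribute-based (rather than path-based) data lookup.
# Items are stored under sets of descriptive tags instead of file names.

class TagStore:
    def __init__(self):
        self._items = {}     # item_id -> content
        self._index = {}     # tag -> set of item_ids
        self._next_id = 0

    def add(self, content, tags):
        """Store content under the given set of descriptive tags."""
        item_id = self._next_id
        self._next_id += 1
        self._items[item_id] = content
        for tag in tags:
            self._index.setdefault(tag, set()).add(item_id)
        return item_id

    def find(self, *tags):
        """Return contents of all items carrying ALL of the given tags."""
        if not tags:
            return []
        ids = set.intersection(*(self._index.get(t, set()) for t in tags))
        return [self._items[i] for i in sorted(ids)]

store = TagStore()
store.add("Q3 sales figures", {"spreadsheet", "finance", "2005"})
store.add("Holiday photos", {"image", "family", "2005"})
store.add("Budget draft", {"spreadsheet", "finance", "draft"})

# Both finance spreadsheets match, regardless of where they "live".
print(store.find("finance", "spreadsheet"))
```

The key difference from a file system is that an item has no single canonical location: any combination of its attributes is a valid way to reach it.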
Still, the concept of collaborative categorization continues to generate interest, as shown by the emergence of folksonomies and the popularity of tagging-oriented sites such as flickr. One possible defense against the corruption of shared metadata would be to validate it with online reputation schemes such as distributed trust metrics. Integrating such loosely consistent mechanisms directly into applications would allow them to process data in a way that accounts for varying credibility, rather than within the rigid constraints of current security infrastructure.
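One very simple form of reputation-weighted metadata can be sketched as follows, assuming each tag assertion carries the identity of the user who made it. The reputation scores and threshold here are invented for illustration; real trust metrics (e.g. distributed ones) compute reputation from a web of endorsements rather than a fixed table:

```python
# A minimal sketch of reputation-weighted tag validation.
# Reputation values and the threshold are hypothetical.

reputation = {"alice": 0.9, "bob": 0.8, "spammer": 0.1}

# Each entry: (tag, user who asserted it)
assertions = [
    ("photography", "alice"),
    ("photography", "bob"),
    ("cheap-pills", "spammer"),
]

def credible_tags(assertions, reputation, threshold=0.5):
    """Sum asserters' reputation per tag; keep tags above the threshold."""
    scores = {}
    for tag, user in assertions:
        scores[tag] = scores.get(tag, 0.0) + reputation.get(user, 0.0)
    return {tag: score for tag, score in scores.items() if score >= threshold}

# The spam tag falls below the credibility threshold and is dropped.
print(credible_tags(assertions, reputation))
```

An application consuming such metadata would then rank or filter items by these credibility scores instead of treating every tag as equally trustworthy.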

What role will the web play in post-PC interfaces?

Because developers of post-WIMP user interfaces have the benefit of starting with a clean slate, they will be able to build web connectivity into their designs from the ground up. As a result, the web will permeate post-WIMP interfaces at every level, rather than being invoked explicitly through particular client applications such as a browser. This will dramatically increase the sophistication of web interaction, extending mere browsing of on-line content into full-fledged web sensing, in which the user's on-line experience responds in real time to a variety of conditions related to both physical and logical aspects of the network and its available information. It should also be noted that the definition of the web itself is a moving target. Again, the HTML/browser paradigm that we are currently familiar with is so entrenched that it is hard to imagine it ever being replaced. However, we are clearly in the early stages of the web's evolution, and a variety of potential innovations still lie ahead. Some hints of how the web might evolve are already visible, including peer-to-peer networks, in which clients access content directly from each other rather than through dedicated web servers; the semantic web, which enables richer levels of data abstraction for classifying web content; and grid computing, which enables large networks to be treated as a single logical computer system.

What role will Linux play in the adoption of post-PC interfaces?

Linux offers a striking opportunity to enable post-PC computers to emerge, and thus to drive next-generation user interfaces into the mainstream. Linux supports the broad base of existing Intel x86 desktop hardware extraordinarily well, and has also proven its ability to run on a variety of emerging platforms, including handheld devices, information appliances, point-of-sale equipment, and wearable computers. Further, since Linux is open source software, it is not controlled by any single entity, which provides exceptional flexibility for developers and a level playing field for new players to break into the client space. Of course, the absence of licensing fees for the Linux kernel also aligns well with the goal of addressing volume markets. But although Linux has achieved considerable success as a server platform, it has failed to generate significant appeal among mass-market desktop users. So far, most Linux GUI efforts have followed the traditional WIMP model, and most efforts to develop Linux desktop applications have focused on simply recreating equivalents of existing software products. Thus, mainstream desktop users have found few compelling reasons to switch to Linux, because it does not currently offer an experience that is fundamentally different from that of Windows or MacOS (notwithstanding its lower price and superior reliability). However, as truly next-generation user interfaces for Linux emerge, they will enable new kinds of applications that will be difficult or impossible to match on existing platforms. Such "killer" applications (applications so valuable that they justify adopting a new platform simply to gain access to them) will start the virtuous cycle of platform-application interdependency that is critical to the success of new environments.

What does the name "Nooface" mean?

In 1925, an obscure French Jesuit priest, paleontologist, biologist, and philosopher named Pierre Teilhard de Chardin (1881-1955) foresaw the coming of a globally networked consciousness that now appears remarkably similar to today's Internet. De Chardin spent the bulk of his life trying to integrate religious experience with natural science, specifically Christian theology with theories of evolution. The gist of de Chardin's theory was that evolution encompasses not just animals, but the entire global ecosystem. De Chardin believed that the earth and all its parts - including rocks, plants, animals, and people - represent one organism, and that this organism will evolve into some sort of outer layer of universal consciousness that he likened to the outer ring of the core of a tree. He called this evolving outer layer the Noosphere (from "noos", the Greek word for "mind"). Today, a number of leading techno-philosophers credit de Chardin's theories as one of the earliest known visions for the World Wide Web. The purpose of Nooface is to study the many possible ways that our Noosphere - the web and computers attached to it - can appear to its users.

What software is this site running?

Nooface uses Slash, a database-driven news and message board engine based on Perl, Apache and MySQL. Slash was originally developed to run Slashdot, a popular discussion site for leading-edge technology issues.

How do I submit stories to the site?

You can submit a story by using the Submissions Bin. When you submit a story, please remember to include appropriate links. Also, you'll have a better chance of getting our attention if you use a clear and specific subject line.

I want to write an editorial. What should I do?

Before you get carried away, mail me a synopsis of your idea (put the text 'Proposed Feature' in the subject). That way I can tell you if it is something we would consider posting before you bother to write the whole thing.
