Olsen, D. R., Buxton, W., Ehrich, R., Kasik, D., Rhyne, J. & Sibert, J. (1984). A Context for User Interface Management. IEEE Computer Graphics and Applications 4(12), 33-42.


A Context for User Interface Management

Dan R. Olsen, Jr.
Arizona State University
David J. Kasik
Boeing Computer Services
William Buxton
University of Toronto
James R. Rhyne
IBM Research
Roger Ehrich
Virginia Polytechnic Institute
and State University
John Sibert
George Washington University


Recognition that human productivity in the use of computer systems is dramatically affected by the nature of the human-computer interface has created a demand for improved human-computer interaction. Since significant portions of all application systems are part of the human-computer interface (one study resulted in a range of 30 to 60 percent1), the productivity of human-computer interface designers has become an important research topic.

Investigators in the computer science community responded to this need by designing the User Interface Management System (UIMS) to assist in the development of successful interactive graphics systems. This project was given impetus by the GIIT workshop in Seattle,2 by a later workshop in Seeheim, Germany, and by a number of publications on the subject, including independently published works by each of the authors of this article.

Each author had been working independently for quite some time implementing various portions of a UIMS. We agreed to meet at a workshop held last April at Arizona State University to discuss the issues involved and to share the experience we had acquired. During these discussions it was determined that some statement needed to be made about the problem environment in which a UIMS must exist and the kinds of support functions that a UIMS should provide.

This article is intended to place UIMS research within the context of the problems that it is intended to solve, in terms of both the application software being developed and the kinds of individuals involved in such a project. We are not presenting an exhaustive survey of UIMS research nor a reporting of our personal research results. The article is a consensus of opinion from six different research centers about the problems UIMS research should be addressing.

A UIMS should be used in developing interactive applications for several reasons:

  1. Use of a UIMS provides a more consistent interface both within and across applications.
  2. Interface specifications can be represented, validated, and evaluated easily.
  3. Designs can be rapidly prototyped and implemented.
  4. Interactive applications can be more quickly and economically maintained.
  5. Distribution of functionality across systems and processors is facilitated.
  6. The proper roles of those involved in interface development are represented and supported throughout the evolution of the interface.

With these goals in mind, we will discuss the context within which such a UIMS must exist and function. It is not our purpose to review the various forms and strategies that have been proposed and used for UIMS development but rather to clarify the environment of a UIMS.

The application context

In order to be more specific in our statements about the issues surrounding UIMSs, we first present three examples drawn from potential applications. The issues, which relate the services of a UIMS to the applications that it is intended to support, range along a continuum from the keystroke/transaction level, or micro level, to the macro level of integration across an entire application environment. Three examples were selected to illustrate the range of this continuum and the issues that arise at each level.

Simple transactions. Let us first examine a single transaction drawn from a spreadsheet program like Visicalc. The transaction involves changing the value of the numerical fields of the spreadsheet. What is to be specified, therefore, is the field to be changed and its new value. The wide variety of interactive resources and interactive styles that can be applied to such a simple task is illustrated here. Please note that for illustrative purposes we have limited the task to entering numeric values rather than the more general requirements of a spreadsheet program.

One way to perform this task is by typing the field's row and column number and then typing in its value. Thus, for example, we might set the fifth field of row C to the value 3.5 by typing:

set C5 to 3.5

This approach, or some variation of it, can and does work. However, while it is precise, it is prone to errors in syntax, naming, and spelling. This is true partially because the executive or accountant using the program is most likely not a touch-typist.
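The error-proneness of such a typed command can be made concrete with a small parser. The sketch below is our own illustration, not Visicalc's actual grammar; the function name and the regular expression are assumptions. A single mistyped keyword or field name is rejected outright, which is exactly the brittleness the prose describes:

```python
import re

def parse_set_command(text):
    """Parse a command of the form 'set C5 to 3.5'.

    Returns (row, column, value) on success, or raises ValueError
    for syntactically malformed input. This grammar is a hypothetical
    sketch for illustration, not a real spreadsheet's command language.
    """
    m = re.fullmatch(r"set\s+([A-Za-z])(\d+)\s+to\s+(-?\d+(?:\.\d+)?)",
                     text.strip(), flags=re.IGNORECASE)
    if m is None:
        raise ValueError("syntax error: expected 'set <row><col> to <value>'")
    return m.group(1).upper(), int(m.group(2)), float(m.group(3))
```

Note that the parser can report *that* an error occurred but gives the nontypist no help in avoiding it, which motivates the cursor-key and pointing-device alternatives discussed next.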

We can obviously improve on this interface by using the arrow keys to move the screen cursor over the field to be changed and then typing in the new value. In so doing, we eliminate the need to type the command name Set (since it can be implied by the context) and the precise row letter and column number.

An alternative approach to discrete pointing with cursor keys is to use a continuous 2-D pointing device such as a mouse or a tablet. We appear to have evolved to the best of all worlds. But have we? Systems that use both mice and keyboards suffer performance problems due to the switching of hands between devices. We could, in fact, investigate going one step further and having the entire task performed by one hand. With a mouse, for example, we could treat the numerical value like a potentiometer. Instead of reaching out and turning a knob, we "reach out" and, using the mouse, position the cursor over the appropriate field. We can then "grab" the field's magnitude and "drag" it up or down for as long as the select button remains depressed. Whether this would be an effective approach is dependent upon the range of values that our application might be using.
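The "grab and drag" interaction can be sketched as a simple mapping from pointer motion to value change. Everything here is an assumption for illustration: the event representation (a sequence of vertical motion increments reported while the select button is held) and the pixel-to-value gain are design parameters a dialogue author would tune to the application's value range:

```python
def drag_adjust(start_value, drag_events, gain=0.1):
    """Adjust a numeric field by 'dragging' its magnitude with a mouse.

    drag_events: vertical motion increments (pixels) reported while the
    select button remains depressed. gain maps pixels to value units;
    both names are hypothetical, chosen for this sketch.
    """
    value = start_value
    for dy in drag_events:
        value += dy * gain  # dragging up (positive dy) increases the value
    return value
```

The choice of gain illustrates the article's point: a fixed linear mapping works only if the application's range of values is known, so whether this one-handed technique is effective depends on that range.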

In order to make a final point, consider trying to duplicate the one-handed mouse technique using a joystick that has a self-returning rotary potentiometer on the handle. In this case, the cursor is positioned by the xy motion of the joystick, and the value is scaled by rotating the potentiometer.

However, a potential problem arises in twisting the knob while maintaining an x-y position. One way to avoid this is to use another, similar device: a 3-D trackball. This device is plug compatible with the joystick, uses the same major muscle groups to operate, has the same footprint, and is enclosed in a comparably sized housing. But notice one important difference: to rotate the trackball in the third dimension, it must be gripped so that the fingertips rest against the bezel of the housing, thereby effectively eliminating any change to the x-y position when changing z. Hence we see an example of how the mechanical design of the transducer itself has subtle but important characteristics that, if understood, can be exploited to reduce error and possibly improve the quality of the user interface.

We have seen how broad the range of considerations is in designing the micro-level user interface. We started with a seemingly simple and isolated transaction and saw that its design is greatly affected by things at the very lowest level of devices (e.g., the trackball variant). What we do with this particular transaction influences and is influenced by other transactions within the same application, as well as those in other applications in the same system. A UIMS should facilitate the prototyping and evaluation of the various alternatives so as to create finely tuned and productive interfaces and applications.

Integration of functions. In stepping up one level from the simple transaction level, we next consider the integration of functions within a given interactive application. This question of integration is complicated by the need to add functionality as a system develops over a period of time. The example chosen is a word processor that is to be integrated with a spelling checker. The issues here are not word processing or spelling checkers. The issue is how a UIMS must integrate separate but related functions within a common user interface.

The task is to create a document that will be formatted in near real time and redisplayed to the user. A spelling checker is available to check spelling as words are entered or modified.

When the user enters a keystroke to indicate the closure of a word, the spelling checker checks the word for spelling errors. A misspelled word is highlighted to notify the user of the errors. This highlighting may, however, occur at some time after the user has typed in the word. The spelling checker also displays a list of alternative spellings at a convenient location on the screen.

When the user hits a key to indicate closure of a line, the line is formatted, and the results displayed. If the user modifies a word, the spelling checker again checks the new spelling.

Ordinarily, the text formatter and the spelling checker would be written as a single runnable code module. Moreover, this module directly handles keystrokes from the keyboard and position increments from a mouse. The module also drives the display, either directly or via a low-level software interface.

From the point of view of functionality, nothing is wrong with this monolithic packaging of the word processor software. However, the spelling checker is unusable in other applications. It is not possible to check the spelling in a document that has been created outside this particular program. Suppose that the spelling checker is to be transported to another system environment in which intensity or color highlighting is not available, or in which it is desirable to display the spelling alternatives at a different location or to select among the alternatives using function keys (because there is no mouse). In many cases, it would be difficult to accomplish these modifications without extensive changes to the whole package. Let's now examine an alternative construction for the word processing function.

One might separate this application into five major modules:

  1. A data-sharing facility in which the document structure, document format, and spelling-error information are kept.
  2. A text formatter, which determines the positions for document components within a presentation space.
  3. A spelling checker, which examines the document and records information about spelling errors.
  4. A display-management service, which provides a measure of device independence and which can be shared across many different applications that will make use of the display.
  5. A UIMS, which connects these components together, controls the display resulting from both format processing and spelling checking, and interprets the user input actions.

The advantage of such a construction comes from the separation of function into these components. Thus, if some behavior of the formatter must be changed, the spelling checker will not be impacted. If the user interaction protocol must be changed, only the user interface specification will be affected.

It is plain that the spelling checker and the text formatter must share the same source data, that is, the document. The spelling checker does not need to know how the text is laid out, only what words are in it and how to determine if they are spelled correctly. The UIMS must have access to the document (for key-in), the format information (for echoing the key-in), and the spelling-error data (for highlighting the misspelled words). It need not know how to format text or what rules to use to determine if a word is spelled correctly. It follows that the shared data manager is essential in this construction.
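The data-sharing facility can be sketched as a store with named sections that each component reads and writes instead of calling the others directly. The class name and the three sections below are our own invention for illustration; the point is only that the formatter, spelling checker, and UIMS communicate through shared data rather than through each other:

```python
class SharedDataManager:
    """Minimal shared store connecting formatter, spelling checker, and UIMS.

    Hypothetical sketch: each component owns one section it writes and
    reads the others as needed, so no component calls another directly.
    """
    def __init__(self):
        self.document = {}            # word id -> word text (key-in)
        self.format_info = {}         # word id -> screen position (formatter)
        self.spelling_errors = set()  # word ids flagged by the checker

store = SharedDataManager()
store.document[1] = "teh"          # the UIMS records keyed-in text
store.format_info[1] = (0, 0)      # the formatter assigns a position
store.spelling_errors.add(1)       # the checker flags the word
# The UIMS joins all three sections to highlight misspelled words:
misspelled_words = [store.document[i] for i in store.spelling_errors]
```

The UIMS reads all three sections to echo input and highlight errors, yet needs no knowledge of formatting rules or spelling dictionaries, matching the separation argued for above.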

The spelling checker must be able to identify misspelled words in the document. It could use the format position of the word, but this data will be useless whenever the formatter rearranges the document format. It could use a sequential position in the document, but this will change whenever the document is modified. If each word in the document is "named," that is, given a unique identifier, then the spelling checker can use this identifier regardless of changes in the document or in the format.
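The stability of such identifiers across edits can be shown with a small sketch. The class and method names are hypothetical; what matters is that the spelling checker records an identifier, not a position, so the record survives both insertion of other words and correction of the word itself:

```python
import itertools

class Document:
    """Document whose words carry stable unique identifiers.

    A sketch: identifiers never change once assigned, so error records
    held elsewhere remain valid when the document or format changes.
    """
    def __init__(self):
        self._next_id = itertools.count(1)
        self.words = {}  # identifier -> word text

    def insert_word(self, text):
        wid = next(self._next_id)
        self.words[wid] = text
        return wid

    def edit_word(self, wid, new_text):
        self.words[wid] = new_text

doc = Document()
wid = doc.insert_word("teh")
errors = {wid}               # the checker records the id, not a position
doc.insert_word("hello")     # later insertions do not invalidate the record
doc.edit_word(wid, "the")    # the corrected word is reached by the same id
```

Had the checker recorded a sequential position instead, both the insertion and any reformatting would have invalidated its record.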

Several key issues are involved here. First, the handling of the user interface or "external facade" must be independent of the various functions with which it interfaces. Second, the communication and interaction between the components must consist at least partially of a shared common data structure, although some problems remain as to how this data structure should be referenced. Finally, some level of concurrency between the various modules must be included.

The global context. Having reviewed some issues concerning the interrelationships of functions within an application, we look finally at the macro view of interactive applications within the entire application domain. Within a large enterprise there must of necessity be many specialized applications or systems that address various problems. The user interfaces within this family of applications need to be similar.

The composite area of computer-aided design, manufacturing, and engineering contains a wealth of such diverse but highly related applications. Requirements for access to the functions within this environment can span both a large number of users per application and a single user who requires access to a large number of applications. In the aerospace industry, for example, the application environment includes conceptual and preliminary design; detailed design and analysis of structures, payloads, and weights; drafting of completed systems; parts planning; and numerical control.

Access to this wide variety of applications can be provided in a number of different ways. In a totally integrated environment, a user will have access to data that is represented in a consistent way through an interface that is consistent across applications. Each application contains a collection of functions that allows the user to perform tasks relating to a particular work assignment.

The collection of functions into an "application" is highly dependent upon the work strategies employed in a particular company. For example, one company may provide functions for thermodynamic analysis across a number of different applications while another may choose to group all thermodynamics functions into a single application. Similarly, organization of geometry-based functions may be grouped by function performed (create, delete, edit, etc.) or by entity class (points, lines, circles, and so on). Such organizations of applications are in many ways dependent on how work is partitioned within the organization of the company. Flexibility in the configuration of applications is achievable only if the interface can be separated from the application with a UIMS.

In such an environment interactive techniques for locating a point in three dimensions should be handled consistently. Help and error-recovery functions should be consistently available across all applications within a task domain. Unique, special-purpose interaction techniques should be created only when they provide a new functional capability. Such similarity in user interfaces and the reorganization or restructuring of functions within a user interface are almost impossible if the user interface is hand coded into each functional unit. A UIMS should support common interface techniques and easy restructuring, and most current implementations do.

Personnel for interactive application development

The preceding section presented a number of examples and issues in implementing interactive applications and illustrated the role of a UIMS in integrating and facilitating such development. Before a more detailed discussion of a UIMS can proceed, it is important to identify the roles involved and define who the users of a UIMS are and what expertise and requirements are needed.

The users of a UIMS are not just the end users of the interactive system. More importantly, they are the developers of the interactive system. The most sophisticated tool ever created has failed in a major way if it cannot be comfortably and effectively applied by those whom it is intended to support.

We have identified nine distinct roles in the development of an interactive application. These roles are the

  1. end user,
  2. application analyst,
  3. application programmer,
  4. dialogue author,
  5. graphics designer,
  6. dialogue evaluator,
  7. UIMS builder,
  8. environmental designer, and
  9. environmental evaluator.

Frequently a single individual will fill more than one of these roles, depending on the scope of the project and the capabilities of the individual. Not every role is involved in every application. Hopefully the UIMS builder will not be involved in every application, and frequently some of the other individuals will establish role guidelines to cover a set of applications, thereby avoiding the need to become personally involved in each application. We are not proposing an organizational structure but rather are trying to indicate the individual skills needed to build usable interactive applications.

The role of the end user is, of course, to use the system being developed to successfully accomplish some goal or set of goals. The user's primary concern is, in fact, the accomplishment of that goal and not the means by which it is achieved. From the user's point of view, then, the system should be as unobtrusive as possible. The challenge is in serving users who are notorious for coming equipped with varying levels of skills and knowledge, both of the application area and of computers.3

The application analyst provides a functional description of the interactive application being developed. This description includes the range of functions to be provided and any constraints that exist. The analyst, in conjunction with the end user, is responsible for defining and specifying the problem that the interactive system is expected to solve.

The application programmer produces the application-specific programs needed to carry out the functions of the system. The programmer is concerned with producing efficient, maintainable, and reusable code, which provides correct solutions to the application domain's problems. The programmer must be an expert in the context of the programming environment (languages, development environments, operating systems, etc.) and should be sufficiently familiar with the application domain to be able to design and implement the appropriate application routines. In the word processing/spelling checker example the application programmer would have to understand document retrieval, text formatting, and spelling checking but not necessarily graphics, input language design, or screen layout design.

The dialogue author designs and writes the human-computer interactive dialogue. This task includes specification of the form of each transaction, as well as how transactions are combined to form a dialogue. The dialogue author's concerns include providing natural and appropriate interaction styles and forming consistent and easily understood dialogues. The alternatives presented for the spreadsheet transaction example illustrate the design decisions that must be made by a dialogue author. A dialogue author must be familiar with the functions of the application, know the characteristics of the end user, and be familiar with the applicability of the various interactive styles and devices. The dialogue author should be skilled in the design of human-computer interfaces but not necessarily be a computer programmer.

Working in close cooperation with the dialogue author is the graphics designer. The role of the graphics designer is to specify the appearance of both textual and graphic information. The relationship between a dialogue author and a graphics designer is similar to the relationship existing between the author of a children's book and its illustrator. This designer should have skills and formal training in the field of visual communications or graphics design.

Since it is unlikely that the first draft of the dialogue will be beyond improvement, a dialogue evaluator is necessary to analyze the performance of the dialogue and suggest changes and improvements. The dialogue evaluator must be able to collect and analyze data generated both from the dialogue specification and from actual use of the dialogue. Although the evaluator shares certain skills with the user, the skills required for creation and evaluation differ. The relationship between author and evaluator is analogous to the author/editor relationship.

The role of the UIMS builder is at least twofold. The first role is to build tools for the dialogue author, dialogue evaluator, and graphics designer. The UIMS should provide an environment that not only allows but actively encourages good user interface design. The second role involves providing the runtime environment to support execution of the interactive dialogue. To fulfill these roles, the builder must have a thorough knowledge of the principles of interactive dialogue design in order to provide an appropriate environment for the dialogue author. Furthermore, an extensive knowledge of computer languages, operating systems, graphics algorithms, and language processing is required to provide the runtime environment.

The physical environment (chairs, lights, screens, keyboards, for example) plays an important role in the overall quality of a human-computer interface. An environmental designer with a background in industrial design has the responsibility for specifying these environmental elements. In addition, an environmental evaluator, trained in human factors (ergonomics), bears the same relationship to the environmental designer as the dialogue evaluator does to the dialogue author. The physical environment is not within the control of a UIMS; however, we present these two roles to point out that the physical facilities are an important part of the human-machine interface. No amount of software can overcome a bad choice of interactive devices or a poor working environment.

Information and tools

User Interface Management Systems are intended to support the personnel involved in developing interactive applications. Defining the roles and skills of these individuals allows us to fully characterize a UIMS by the information required to specify a user interface and by the tools that support the information collection and integration.

Information. We view the creation of a user interface as a process of specifying or collecting information about the user interface and then creating a working application by either generating the user interface or interpreting the specification directly. The user interface specification captures the user-accessible functions, parameters, expected results, error conditions, and required vocabularies (both textual and visual). In general the specification of user interfaces (and in general most language systems) can be divided into lexical, syntactic, semantic, and pragmatic levels. The role of these levels is best discussed by Foley and Van Dam.4 For our purposes the information to be specified is divided into an input description, application actions, and an output description. The boundaries of this partitioning are somewhat fuzzy, as will be shown later.

Input description. The purpose of the input description is to specify how the end user will supply inputs to the interactive system. The heart of the input description is the specification of the input language. The input language defines the syntax or sequential and semantic constraints on end-user inputs. In addition, the input language defines the mapping between user inputs and the application and/or display actions. As the primary work product of the dialogue author, the input language is a major object of study for the dialogue evaluator.

An important subset of the input language is the input/output linkage. This linkage defines the display or feedback actions, which inform the user of the interactive system's understanding of the input. We separated this concept from the output description because, in general, much of this can be performed without application-specific knowledge.

There are, however, some important trade-offs involved in whether this linkage should be defined at the lexical or the syntactic level. GKS performs the prompting and echoing at the lexical or input primitive level; this approach has been formalized by Rosenthal et al.5 and has some advantages in that it can be somewhat device specific and helps relieve some response-time problems. On the other hand, if the kind of feedback and echoing required is application specific, it is best defined at the syntactic level. This question appears in our spreadsheet example when we specify where the typed-in numbers should be echoed. In GKS the graphics package implementation decides where the echo should be placed. For a spreadsheet, however, this approach is unacceptable.

A second important aspect of the input language includes the application constraints. A simple linguistic model may not be powerful enough to specify the acceptable input sequences. Let's suppose that in our spelling checker example we wanted to synchronously check the spelling and force immediate correction. This requires that the spelling checker be able to exert some level of control over the input language when an error is found. Such controls need to be expressed in the definition of the input language.

A final component of the input description is the physical/logical device binding. Here again this can be viewed as a lexical problem. However, if there are more logical devices than there are actual physical devices, some runtime binding strategy must be used, which is controlled at the syntactic level. In many systems there are more logical events (one per command at least) than there are physical buttons or menu locations. The syntactic handling of contexts or modes dynamically creates such bindings.
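The mode-dependent binding of a few physical buttons to many logical events can be sketched as a lookup table keyed by the current syntactic context. The class, mode names, and event names below are invented for illustration; the point is that the binding is resolved at runtime as the dialogue moves between contexts:

```python
class DeviceBinder:
    """Bind many logical events to few physical buttons, by mode.

    Hypothetical sketch: the current syntactic context (mode) selects
    which logical event a physical button generates.
    """
    def __init__(self, bindings):
        self.bindings = bindings  # mode -> {button: logical event}
        self.mode = None

    def press(self, button):
        # Resolve the physical button to a logical event in this mode.
        return self.bindings[self.mode].get(button, "ignored")

binder = DeviceBinder({
    "edit":   {"F1": "delete_word", "F2": "insert_word"},
    "format": {"F1": "set_margin",  "F2": "set_font"},
})
binder.mode = "edit"
event1 = binder.press("F1")
binder.mode = "format"
event2 = binder.press("F1")
```

The same physical key thus yields different logical events in different modes, which is how two function keys can serve an arbitrarily large command set.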

Application actions. The application, not the UIMS, performs the computation that is required to process user requests. This function is within the domain of the application programmer. In addition to performing computational services, application actions in many user interface managers also control the display of application-specific information. Some level of application control of the display is necessary, but, for reasons discussed below, the UIMS needs to exert more control than it now does in many UIMS implementations.

Output description. The output description consists of three major components: the dialogue presentation, the data presentation, and the image manipulation. The dialogue presentation consists of the allocation of physical screen resources and the design of information objects. Such information includes both location and coloring of viewports for display and feedback, and the design of icons and other graphic symbology. Most of these considerations are within the domain of the graphics designer. Also included in this category are the error and help messages. The content of this information requires input from the dialogue author, the application programmer, and the end user.

The data presentation defines how application-specific data objects are to appear on the screen. This data can take a wide variety of forms. For some applications, such as process control, the data is highly iconic, whereas in others, such as airframe design, it is highly geometric. A highly iconic application is characterized by data that is not intrinsically visual. The graphics display presents a symbolic encoding of the data for easy understanding and comprehension. In a highly geometric application, the application data objects are actually being viewed rather than symbolized. A highly iconic application would rely heavily on the graphics designer to provide data presentations; in a highly geometric application, the application programmer would have a more dominant role in the visual appearances.

In either the iconic or the geometric case, it is imperative that the presentation be both complete and informative. A presentation is complete if all the data to be displayed appears on the screen. Application programmers do an adequate job in meeting the completeness goal. A presentation is informative if the end user can easily perceive the desired information from the display. In numerous applications, programmers failed to create informative displays. The informative aspect of a presentation frequently requires the skills of a graphics designer.

The image-manipulation information defines how the application actions and the input language can control the data presentation. It is obvious that the application actions need this control, which is why data presentation information is frequently embedded in the application action code. This solution is not adequate, however, if the graphics designer is going to participate in the specification of the presentation or if the presentation is to be automatically analyzed by the tools that support the dialogue evaluator. The considerations and tradeoffs, both in terms of implementation and specification of models to manipulate the data presentation, are too numerous to discuss here. Suffice it to say that no clearcut solution to this problem has surfaced.

Tools. A UIMS is basically a set of tools. In fact, most UIMSs that have been constructed so far actually provide parts of the total functionality that a UIMS should provide. In this section we discuss the kinds of tools that should make up a complete UIMS.

Implementation tools. The implementation tools are those tools already found on most machines. They include operating systems, programming languages, and software management tools. However, a few points should be mentioned about each of these and their support role of UIMS development.

In our experience, asynchronous processing with a message-passing mechanism is an important feature in UIMS construction. In particular this capability should be cleanly supported within the programming language.

Because of the preponderance of interactive engineering applications, it is necessary to make special mention of Fortran. The Fortran language, with its limited data structuring capabilities and lack of multiprocessing features, is inadequate for UIMS implementation. There does not appear to be a problem, however, with application actions written in Fortran or most other languages. This would indicate that UIMS technology can be adequately integrated with existing computation packages written in Fortran, given a crisply defined interface between the UIMS and the application actions.

The use of software management tools is a special problem in UIMS work. Most software management tools are built to manage files of text. Much of the information in a user interface specification, however, may not be textual. In particular, the output description and the input/output linkage information are graphic. New software management facilities need to be developed to handle such information.

Internal user interface support. These tools support the user interface directly and are not necessarily exposed to the dialogue author, graphics designer, or dialogue evaluator. In general these tools lie within the domain of the UIMS builder.

The first such tools are designed for input parsing and include state machines, pushdown automata, natural-language parsers, menu managers, etc. These tools actually manage the runtime dialogue. In general such tools are chosen because of their efficient runtime behavior and their generality in expressing syntactic constraints on the input language in an application-independent manner. The emphasis therefore is on implementation rather than specification of input languages. We distinguish between implementation and specification because, even though most existing UIMS specifications reflect their underlying implementation model, there is still some question of their suitability for specification by a nonprogramming dialogue author.
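A table-driven state machine of the kind used for runtime dialogue management can be sketched briefly. The states, tokens, and action names below are hypothetical; the transition table expresses the syntactic constraints of the input language without any application-specific code:

```python
class DialogueStateMachine:
    """Table-driven input parser managing a runtime dialogue.

    A sketch: the table maps (state, token) to (next state, action),
    expressing legal input sequences independently of the application.
    """
    def __init__(self, table, start):
        self.table = table
        self.state = start
        self.actions = []  # semantic actions triggered, in order

    def feed(self, token):
        key = (self.state, token)
        if key not in self.table:
            raise ValueError(f"token {token!r} not legal in state {self.state!r}")
        self.state, action = self.table[key]
        if action:
            self.actions.append(action)

# A tiny grammar for 'select a field, then enter a value'
# (hypothetical tokens, echoing the spreadsheet example).
table = {
    ("idle",   "pick"):   ("picked", "highlight_field"),
    ("picked", "number"): ("idle",   "set_value"),
}
fsm = DialogueStateMachine(table, "idle")
fsm.feed("pick")
fsm.feed("number")
```

Because the table, not code, encodes the dialogue, the same machine can run any input language the dialogue author specifies, which is the application independence claimed above.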

The second set of internal support tools includes the display management tools. Graphics systems such as Core and GKS fall into this category. These tools should support the lexical level of input and should provide an internal encoding and implementation for the output description and the input/output linkage. The existing graphics standards, however, are somewhat lacking in this regard because of their poor handling of input/output linkage and lack of capabilities for expressing image manipulation.

A name management capability is another desired function. This requirement is illustrated by the spelling-checker example in which the various components of the interaction need to reference the same data objects. Some naming mechanism is required for this function. In single process systems a pointer or address is usually adequate, but in multiprocess or multiprocessor implementations a more general capability is required.
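Such a facility can be sketched as a registry that hands out stable symbolic names in place of pointers; the scheme below is illustrative only and is not drawn from any of the systems cited.

```python
class NameRegistry:
    """Maps shareable symbolic names to data objects, so that components
    in different processes can reference the same object without pointers."""

    def __init__(self):
        self._objects = {}
        self._next_id = 0

    def register(self, obj):
        name = f"obj{self._next_id}"   # a stable name that can cross process boundaries
        self._next_id += 1
        self._objects[name] = obj
        return name

    def resolve(self, name):
        return self._objects[name]

registry = NameRegistry()
# The spelling checker registers a misspelled-word object ...
word_name = registry.register({"word": "teh", "suggestion": "the"})
# ... and any other component holding only the name reaches the same object.
same_obj = registry.resolve(word_name)
```

In a single process the name is just a dictionary key; in a multiprocess implementation the same string could travel in a message, which a raw address cannot safely do.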

User interface specification aids. This class of tools supports the dialogue author and graphics designer. Such tools are concerned with collecting, managing, and presenting the user interface specification information in a form that is useful to the author and designer. These tools also handle the encoding of the information in a form suitable for the implementation and internal tools. It is assumed that this encoding is automatically performed.

Three general techniques can be used to formulate such specifications: notation, example, and composition. In a notational specification, a special language is created for expressing the information. Such techniques include BNF, picture languages, color-naming schemes, forms fill-in, etc. This technique is the most general, since anything at all can be expressed. However, it is awkward for some kinds of information and difficult for nonprogrammers.

The technique of specification by example is a powerful one because the specifier is not mentally encoding the concepts into a notation. A particularly powerful illustration of this is an interactive drafting tool for creating layouts and data presentations by drawing them. Such a technique is a natural process for a graphics designer, whose skills are visual rather than logical or notational. In spite of the power of this technique as an expressive medium, it appears that many kinds of information do not have good visual examples. In such cases the notational approach can be applied. Integrating the image manipulation information with the data presentation drawing tool, for instance, requires falling back on the notational approach.

We know that we can express any of the required information in a notational form, even if awkwardly. We also know that we can express many kinds of information by example. We cannot claim that any information in the user interface specification is inherently inexpressible by example; there are, however, many forms of information for which we do not yet know a good visual example. That such forms may be found is suggested by the use of spreadsheets for calculation and of Query by Example for database queries. This area is exciting but as yet unsolved. Even if specification-by-example techniques existed for all the information in a user interface specification, the problem of integrating all the related representations into a cohesive system would remain.

The final technique of specification by composition consists of retrieving existing pieces of a specification from a library and "gluing" them together to form a new specification. The value of this technique has long since been shown by assembly language macros, Fortran subroutine libraries, and, more recently, Unix. An important subcategory of this technique is the retrieval of generic or template specifications, which are then tailored to fit the present need. A voluminous amount of literature is available for the support of this technique within both programming languages and software management environments. We will not discuss this further except to note, as we have discussed already, the variety of kinds of information that exists in a user interface specification.
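Specification by composition can be sketched as retrieval and tailoring of generic templates; the library contents below are invented for illustration.

```python
# A library of generic, template user interface fragments (invented).
LIBRARY = {
    "menu":   {"kind": "menu", "items": [], "placement": "top"},
    "prompt": {"kind": "prompt", "text": "?"},
}

def compose(template_name, **tailoring):
    """Copy a generic template and override fields to fit the present need."""
    spec = dict(LIBRARY[template_name])   # copy, so the template stays generic
    spec.update(tailoring)
    return spec

# Tailor the generic menu template into a specific file menu.
file_menu = compose("menu", items=["open", "save", "quit"])
```

The template itself is untouched by tailoring, so the same generic fragment can be "glued" into many specifications, which is precisely the economy that subroutine libraries and macros have long provided.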

As a final note on the specification of user interfaces, we should point out that the ultimate criterion for evaluating the work of a dialogue author or a graphics designer is how the interface appears to the end user. Because of this dominance of the external view, the user interface development process is of necessity an iterative prototyping process. This imposes three requirements on the specification tools. (1) The information should be expressed wherever possible in terms of its external appearance. (This is the reason for the emphasis on specification by example.) (2) The specify/execute/evaluate loop must be as short as possible and as tightly integrated as possible to preserve the context of the appearance to the end user. (3) The specification tools should actively support and encourage the design of good user interfaces. This last point leads us to the tools for evaluating user interfaces.

Evaluation tools. The dialogue evaluator, like all the other participants in user interface development, needs specific support tools. These tools can be applied either to the user interface specification or to the actual or prototype system at runtime.

Tools applied to the user interface specification are of particular value because they can be used to guide the development of the system, in hopes of preventing poor interfaces rather than just detecting them. Because formal notations exist for describing input languages, several proposals for tools to evaluate such specifications have been published.6-10 Factoring the input language description out of the application actions greatly aids in the automation of such techniques. A whole body of knowledge exists in the field of graphic design for evaluating the informativeness of images. Little or none of this knowledge has been applied algorithmically, because adequate representations for the output description information are lacking. Without such representations, algorithmic evaluation is impossible. Such representations do not exist because the output description has traditionally been buried in the application code and because the information is inherently graphic and thus not easily expressed in a textual notation. The specification approaches described earlier should help alleviate this problem.

Tools that can be applied at runtime to evaluate a user interface should obtain information about end-user performance and analyze the resulting information. Performance-monitoring metrics are relatively easy to add to a UIMS and can consistently identify which part of the interface specification is responsible for each piece of information that is logged. The design of evaluation tools, however, is only in its infancy and has a long way to go.
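The essential point, that each logged event carries the identifier of the specification element responsible for it, can be sketched as follows; the element names, event names, and timestamps are invented.

```python
# Each log entry records which specification element handled the event,
# so analysis can trace trouble spots back to the dialogue specification.
log = []

def record(spec_element, event, timestamp):
    log.append({"spec": spec_element, "event": event, "t": timestamp})

# A simulated fragment of an end-user session.
record("menu:file", "pick", 0.0)
record("prompt:filename", "error", 1.4)
record("prompt:filename", "error", 3.1)

def errors_by_element(entries):
    """Count errors per specification element, a basic evaluation metric."""
    counts = {}
    for e in entries:
        if e["event"] == "error":
            counts[e["spec"]] = counts.get(e["spec"], 0) + 1
    return counts
```

Aggregating by specification element rather than by raw keystroke is what lets the dialogue evaluator say "this prompt is a trouble spot" instead of merely "errors occurred."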

Example User Interface Management Systems. Several User Interface Management Systems already exist. We will not attempt an exhaustive survey of such systems but simply present those with which we are most familiar as examples of the concepts that have been presented earlier.

Syngraph. The Syngraph11,12 tool has been developed to support graphics interactions. The input language is specified in a modified form of Backus-Naur form, or BNF. Based on the input language description, the menu and simulated valuator areas are automatically formatted by the system, and all prompt, echo, and error messages are automatically provided. In addition, facilities are provided for rubbing out undesired inputs and escaping from a dialogue to some "home" state.

Syngraph generates a Pascal program, which contains the application actions as Pascal procedures and allows normal Pascal parameter passing between segments of the dialogue. The dialogue author writes only in modified BNF and references the application actions. Syngraph performs all layout functions, and it is somewhat inflexible in this regard. The output description is buried in the application actions with the exception that pick identifiers are represented as pointers to application data objects and the picking mechanism syntactically distinguishes between picks of objects of differing user-defined types. No dialogue evaluation tools are included.

Menulay. Menulay13 is part of a UIMS that allows layout and data presentation information to be graphically expressed and tied to application actions written in C. The input language is specified by selecting interactive techniques from a library and integrating them with the graphic objects to form an interactive application. No formal specification of the input description exists; however, a "dribble file" feature, which logs the end user's inputs for later analysis, has been proposed.

TIGER/UIMS. The Interactive Graphic Engineering Resource, or TIGER/UIMS,14 has been constructed as part of a larger engineering support system for computer-aided design, engineering, manufacturing, and applications. This UIMS acts as an external control of the application (that is, the UIMS calls the application procedures). The information in the UI specification is described in a specialized textual programming language called TICCL, which is precompiled. TICCL, or TIGER Interactive Command and Control Language, provides a context that contains links to the application via a strict procedure interface. The application provides all organization of the application data objects, while the UIMS controls the viewing of these objects. Informational objects are displayed by the UIMS, and the layout is statically determined.

The TIGER/UIMS is implemented on the IBM/VM system in Pascal. Internally, dialogue structures are represented as trees, with entire branches accessible as dialogue subroutines. A specially developed 3-D graphics package that provides name management is used by both the UIMS and the application. Dialogue specification occurs through a text editor, and an infamous "copy-and-conquer" technique is used in which existing dialogues are modified for new functions. Some performance analysis tools to measure UIMS/application traffic are available, but tools for measuring end-user interaction paths are limited to a keystroke-level capture facility.

DMS. The Dialogue Management System, or DMS,15,16 is a UIMS in which the human-computer interfaces are represented internally with a linguistic structure. The linguistic structure imposes constraint relations on user actions and provides pointers to descriptions of the mechanisms for obtaining user inputs and producing associated actions, including those associated with error conditions. The linguistic structure and the definitions of the I/O mechanisms are produced without programming, using a set of specialized interactive tools collectively referred to as AIDE, or Author's Interactive Design Environment. DMS is supported by a multitasking environment that facilitates the decomposition and design of the individual components of the system, such as the application, human-computer interface, and I/O device components.

REXX/FSX. REXX17 is a procedural command language, syntactically similar to PL/I, supported on VM/CMS. FSX is a graphics support package for panels of text. REXX can communicate both with routines in applications written in standard programming languages and with the FSX graphics support package.
Dialogues are programmed as REXX procedures; thus, the dialogue author must have a fair level of programming skill. There is an increasing trend among internal IBM CMS application developers to design applications with REXX/FSX user interfaces. The advantages cited are

Implications of UIMS technology for the user community

If we assume that all that has been discussed so far can and will come to be, it is important to stop and look at the implications of this technology for those involved in the development of interactive applications.

End user. The major implication of UIMS technology for end users is the reduced cost of developing new interactive applications. The reliability of UIMS-based systems should be much better than hand-coded systems for the same reasons that large programs written in a high-level language are more reliable than those written in assembly language. A UIMS should also provide more consistent handling of keystroke-level interactions across multiple devices and multiple applications. If all applications within an organization are developed using the same UIMS, there is a greater likelihood that a consistent "single system" image can be maintained across all such applications. Lastly, much better capabilities should be provided for handling defaults, exceptional conditions, and errors than is currently possible in hand-coded systems.

Application programmer. The application programmer will be relieved of the problems of dialogue design and will be able to concentrate on how the computations are to be performed, with greatly reduced concern for how input is obtained from the user. The existence of a UIMS to handle the user interface, however, will in many cases force radically different control structures on the application. This change will entail careful design of the UIMS/application interface, including the application actions to be provided and the way application control of the input language is to be exercised.

Dialogue author. The dialogue author will be freed from the algorithmic concerns of the application and will be better able to concentrate on the user interface. Questions of the appropriateness of interactive devices and the proper integration of input and output can be resolved by a design/prototype/evaluate loop with the end user actively involved. Tools defined in terms of the user interface rather than a programming language will be available.

Graphics designer. The tools described above will allow the graphics designer to become actively involved in the development process. The graphics designer will be able to create the layouts and presentations directly rather than working through a programmer.

Dialogue evaluator. As with the graphics designer, this individual will be supported directly by the UIMS analysis tools. In addition the adaptability of user interfaces will increase the likelihood that the recommendations of the evaluator will be incorporated into the system more easily. As the cost of changing the interface falls, it will become economically feasible to carefully refine the interface.

UIMS builder. For the UIMS builder, the prospect of long-term employment appears with the significant challenge of providing systems that are tightly integrated internally and highly adaptable to a wide variety of applications.

We have discussed a broad range of issues related to User Interface Management Systems and the development of interactive applications, presented several examples illustrating the role that a UIMS might play in interactive application development, identified several key personnel roles, and pointed out the range of their skills and responsibilities. To further characterize the task of a UIMS builder, we outlined the kinds of information that must be specified, either implicitly or explicitly, about a user interface. The tools necessary to form a complete UIMS were also presented. Together, these points represent a rather broad range of capabilities.

From the examples of UIMSs currently in use, it appears that the beginning of much of the described technology is available, though only in bits and pieces. Numerous questions remain to be answered in terms of techniques for specifying user interfaces and techniques for evaluating the interfaces.

Significant steps have been taken by a number of researchers beyond the limited ones discussed here. The preliminary UIMS implementations have been applied with much success, but extensive work still must be done before a comprehensive UIMS is readily available.


1. J. A. Sutton and R. H. Sprague, "A Study of Display Generation and Management in Interactive Business Applications," IBM Research Report RJ2392, Nov. 1978.

2. Graphical Input Interaction Technique Workshop Summary, Computer Graphics, Vol. 17, No. 1, Jan. 1983, pp. 5-66.

3. M. L. Schneider, "Models for the Design of Static Software Systems," Proc. Workshop/Symp. on Human-Computer Interaction, Mar. 1981.
4. J. D. Foley and A. van Dam, Fundamentals of Interactive Computer Graphics, Addison-Wesley Publishing, Reading, Mass., 1982.

5. D. S. H. Rosenthal, J. C. Michener, G. Pfaff, R. Kessener, and M. Sabin, "Detailed Semantics of Graphics Input Devices," Computer Graphics (Proc. Siggraph 82), Vol. 16, No. 3, July 1982, pp. 33-38.

6. T. Bleser and J. Foley, "Towards Specifying and Evaluating the Human Factors of User-Computer Interfaces," Proc. Human Factors in Computer Systems, Mar. 1982.

7. S. K. Card, T. P. Moran, and A. Newell, "The Keystroke-Level Model for User Performance Time with Interactive Systems," Comm. ACM, Vol. 23, No. 7, July 1980, pp. 396-410.

8. P. Reisner, "Formal Grammar and Human Factors Design of an Interactive Graphics System," IEEE Trans. on Software Eng., Vol. SE-7, No. 2, Mar. 1981, pp. 229-240.

9. P. Reisner, "Further Developments Toward Using Formal Grammar as a Design Tool," Proc. Conf. on Human Factors in Computer Systems, Mar. 1982, pp. 309-314.

10. P. Reisner, "Analytic Tools for Human Factors of Software," IBM Research Laboratory Report RJ 3803 (43605), San Jose, CA, 1983.

11. D. R. Olsen and E. P. Dempsey, "Syntax Directed Graphical Interaction," Proc. Sigplan 83 Symp. Programming Language Issues in Software Systems, June 1983.

12. D. R. Olsen and E. P. Dempsey, "SYNGRAPH: A Graphical User Interface Generator," Computer Graphics (Proc. Siggraph 83), Vol. 17, No. 3, July 1983, pp. 43-50.

13. W. Buxton, M. R. Lamb, D. Sherman, and K. C. Smith, "Towards a Comprehensive User Interface Management System," Computer Graphics (Proc. Siggraph 83), Vol. 17, No. 3, July 1983, pp. 35-42.

14. D. J. Kasik, "A User Interface Management System," Computer Graphics (Proc. Siggraph 82), Vol. 16, No. 3, July 1982, pp. 99-106.

15. H. R. Hartson, R. W. Ehrich, and D. H. Johnson, "The Management of Dialogue for Human-Computer Interfaces," submitted to Human Computer Interaction, 1983.

16. H. R. Hartson, D. H. Johnson, and R. W. Ehrich, "A Human-Computer Dialogue Management System," Proc. Interact 84, Vol. 1, IFIP, Sept. 1984, pp. 57-61.

17. IBM Corp., VM/SP System Product Interpreter Reference, edition SC24-5239, Sept. 1983.