This research paper discusses those design principles that enable the interface between a user and a device to form a synergistic partnership that is user centered and task oriented. The original term for this work was ‘man–machine interface design.’ However, the term ‘man–machine’ has been replaced by the more inclusive ‘human–machine.’ In turn, this term has given way to ‘user interface design.’ ‘User’ emphasizes the role the human plays in the system and also highlights the dominant framework for developing and applying interface design, namely user-centered design. ‘User interface design’ is concise and has currency in recent books by Barﬁeld (1993), Cooper (1995), and Shneiderman (1998). The paper is organized around four key concepts: aﬀordances, user models, feedback, and error handling.
1. Affordances

When an animal perceives an object, it perceives fundamental properties that signal possible interactions. The size, shape, and solidity of a chair affords sitting for a person, but not for an elephant. A vine affords swinging for a monkey or Tarzan, but not for me. The concept of affordances originates with Gibson (1979), who was interested primarily in actions that take place in natural environments. The term affordance was extended to users interacting with artifacts by Norman in his influential books on user-centered design (1988, 1993, 1998). For Norman, affordances are the set of action–outcome pairings perceived by a user. Affordances derive from a user’s past experience and provide obvious clues for how things operate. If the designer and user share the same knowledge, then actions can be signaled naturally, not arbitrarily: switches are for flipping and knobs are for turning.
Affordances are especially important in the design of artifacts that users will walk up and use, or that they manipulate directly. Users must quickly resolve the issues of what actions are possible, and where and how the selected action is performed. The closed door of an emergency exit affords opening, but the hardware should tell you to push, not to pull or slide. A broad, flat plate that affords little to grasp, placed on the right side, broadcasts where to push. In contrast, imagine a sliding door with the type of handle that usually affords grasping, twisting, and pulling. Harry Potter or Harry Houdini would have difficulty escaping such a malevolent design.
Constraints reduce the number of possible actions. Affordances and constraints, working in tandem, allow the user to discover the correct sequence of actions, even in a novel situation. Constraints can be physical, semantic, cultural, or logical (Norman 1988). If you want to leave our courtyard via the gate, the vertical handle affords gripping and the thumb lever just above it affords pressing. However, the crossbar rests and catches on the indented bracket, making it clear that the crossbar must be raised, by pressing on the thumb lever, before pulling will do any good. This simple, two-step process is much harder for guests to get right when they first arrive because the physical constraint, the crossbar, is not visible from the opposite side of the opaque door.
Aﬀordances oﬀer a range of possible actions and should be made visible, not hidden. An actual aﬀordance that is not also a perceived aﬀordance contributes nothing to usability. The design of the human– computer interface often relies on conventions to convey aﬀordances. The scrollbar on the side of a computer window aﬀords vertical movement, but a novice may fail to perceive this aﬀordance. These conventions enable experienced users to transfer their user models positively from one application to the next, but one must stretch to think of these conventions as visible and natural aﬀordances to the novice user.
2. User Models
User models refer to the user’s understanding of how the device can be used in order to perform tasks and accomplish goals. Good user models enable the user to determine the optimal procedures for accomplishing routine tasks, and to infer the solution to somewhat diﬀerent or even novel tasks. Designers of user interfaces rely on diﬀerent techniques for revealing good models to users. A user model may emphasize the underlying technology, a metaphor, or the user’s task (Cooper 1995).
2.1 Implementation Models From The Technology Paradigm
The technology paradigm is based on an understanding of how things work. If the mechanism is simple and visible this is an obvious choice. If the aﬀordances lead a person to the right place, suggest the right action, and it is easy to see the consequences of that action, then the user has a very handy mental model. An implementation model even enables the user to repair the device, as I recently had to do with our courtyard gate. Unfortunately, the human–computer interface is often expressed through the technology paradigm in terms of how the software works. Thus, the user must learn how the program works in order to use it eﬀectively. Once acquired, this is also a very handy mental model. The user could even modify the program to suit new needs or preferences. However, for most of us, the cost of gaining an implementation model will be horrendous. It is better to emphasize a functional model of how to use the tool to accomplish speciﬁc tasks.
2.2 Borrowed Models From The Metaphor Paradigm
A quick way to provide a user with a model of how a tool works is to draw an analogy to a tool the user already knows how to use. The similarities between the known and the unknown allow the user to intuit use without understanding the mechanics of the hardware or software. The most famous metaphor of them all is the desktop metaphor, invented at Xerox PARC and successfully implemented on the Apple Macintosh. By definition a metaphor involves an implicit comparison, in which the tenor of the comparison (e.g., computer screen) resembles the vehicle (e.g., desktop). The set of attributes shared between the tenor and the vehicle forms the basis for the metaphor.
The metaphor paradigm for conveying a user model initially received reviews as stellar as the pig who built a brick house, but interface design pundits have huffed and puffed to flatten the metaphor paradigm (Cooper 1995, Norman 1998). Metaphors have several weaknesses. The set of features shared between tenor and vehicle may include irrelevant similarities that just clutter the user model; the useful ones are not necessarily the salient ones. There may also be dissimilarities that encourage inappropriate actions. There may be novel features of the tenor (computer interface) that lie outside the scope of the vehicle and actually impede the user’s discovery of these new features. Finally, metaphors may not scale as size and complexity increase. For example, keeping track of three or four documents on a messy desktop or computer screen is not too difficult, but when I’m working with dozens of articles, books, tables, figures, and printouts my desktop resembles a pigsty, and there may be a better way to organize and use a similar electronic collection.
2.3 Task-Oriented Models From The User Centered Paradigm
The important idea behind task-oriented models is that if there is a good tool with a good interface one can quickly, easily, and naturally learn how to use it. Interface objects and actions focus on the user’s task regardless of how the device actually works.
Cooper’s chapter ‘Lord of the Files’ provides one of the very best detailed examples of the tension between designing an interface based on the underlying technology vs. designing the interface from the viewpoint of the user’s task. Consider the ‘file’ menu from any standard graphical user interface. The menu options displayed for the user are focused on managing files, particularly the management of the copy stored on the disk and the copy active in RAM. Many users of application programs have never developed a mental model that correctly reflects the relationship between the yin and yang of these two ‘copies.’ Nor should they, because the user’s task is to generate a usable document. These users would benefit from a revised interface that hides the file management task (a computer implementation problem) and presents a single document that the user ‘appears’ to control from initiation, through revision, to completion. From a task-oriented perspective, the ‘file’ menu should be replaced by a ‘document’ menu, where users can find options that correspond to current goals that are a frequent and integral part of document construction (e.g., options to rename a document, make a snapshot copy, move the document to a new location, or make a milestone copy).
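As a concrete sketch, such a document-centered menu could map onto a small goal-oriented API. Everything below (the class name, method names, and the ‘-snapshot’ suffix) is illustrative, not drawn from any real application:

```python
import shutil
from pathlib import Path

class Document:
    """Sketch of a task-oriented 'document' menu: operations are named
    for the user's goals, while file management stays hidden. All names
    here are hypothetical."""

    def __init__(self, path):
        self.path = Path(path)

    def rename(self, new_name):
        # goal: "rename a document"; the on-disk rename is an
        # implementation detail the user never sees
        target = self.path.with_name(new_name)
        self.path.rename(target)
        self.path = target

    def make_snapshot_copy(self, suffix="-snapshot"):
        # goal: "make a snapshot (or milestone) copy"
        copy = self.path.with_name(self.path.stem + suffix + self.path.suffix)
        shutil.copy(self.path, copy)
        return copy

    def move_to(self, folder):
        # goal: "move the document to a new location"
        target = Path(folder) / self.path.name
        self.path.replace(target)
        self.path = target
```

The point of the sketch is that every public operation corresponds to a user goal; nothing in the interface mentions disk copies or RAM copies.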
3. Feedback

A basic principle of good design is that every user action should be followed by a visible reaction. When you push on the hand plate of an emergency door you hear it click, feel the bar move, and see the door open. Several feedback distinctions are useful: (a) real vs. abstract (Barfield 1993), (b) articulatory vs. semantic (Hix and Hartson 2000), (c) delayed vs. immediate (Shneiderman 1998), and (d) positive vs. negative (Cooper 1995).
When I rotate the switch that turns my car lights on at night I get real feedback when the lights come on. I receive concurrent, but abstract feedback, when the indicator light on the dash shows that the headlights are on. Redundancy of real and abstract feedback can be good, because abstract feedback can exaggerate real changes that are subtle such as the diﬀerences between high and low-beams, particularly at dusk. Abstract feedback is, of course, necessary when the real eﬀect is remote or, by necessity, hidden from the user.
Articulatory feedback tells users their hands worked correctly, while semantic feedback tells them their heads worked correctly. For example, when the user selects ‘open’ from a pull-down menu, the choice may blink brieﬂy just before the menu disappears and the dialogue box appears. Paap and Cooke (1997) recommend that menu panels should provide feedback indicating: (a) which options are selectable, (b) when an option is under the pointer and, therefore, can be selected, (c) which options have been selected so far, and (d) the end of the selection process. The latter three provide articulatory feedback that helps the users select the intended options. The ﬁrst is more semantic: if the desired option is not available for selection then the user has probably made a navigation or mode error.
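Paap and Cooke’s four feedback cues can be thought of as state that a menu widget exposes to its renderer. The sketch below is an assumption about how such a widget might be organized; the class and method names are not any toolkit’s API:

```python
from dataclasses import dataclass, field

@dataclass
class MenuPanel:
    """Sketch of the four feedback cues recommended by Paap and
    Cooke (1997). Names are illustrative, not from a real toolkit."""
    options: dict                       # label -> selectable (bool)
    chosen: list = field(default_factory=list)
    done: bool = False

    def is_selectable(self, label):
        # (a) semantic cue: which options can be selected at all
        return self.options.get(label, False)

    def hover(self, label):
        # (b) articulatory cue: highlight the option under the pointer
        return f"[{label}]" if self.is_selectable(label) else label

    def select(self, label):
        # (c) articulatory cue: echo the selections made so far
        if self.is_selectable(label):
            self.chosen.append(label)
        return list(self.chosen)

    def finish(self):
        # (d) articulatory cue: mark the end of the selection process
        self.done = True
        return self.done
```

Cue (a) supports error detection (an unavailable option hints at a navigation or mode error), while (b) through (d) reassure the user that the hands did what the head intended.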
In an ideal world our interfaces would guide us with continuous and immediate feedback. In the real world some delays are inevitable. Whenever the system is performing a lengthy process the user should be given feedback, a status indicator, that processing is ongoing and not suspended. It is most useful to display an indicator showing what portion of the task has been completed.
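A minimal sketch of such a status indicator follows; it is pure text, whereas real toolkits supply equivalent progress widgets:

```python
def progress_bar(done, total, width=20):
    """Render a textual status indicator showing what portion of a
    lengthy task has been completed (an illustrative sketch)."""
    if total <= 0:
        raise ValueError("total must be positive")
    fraction = min(max(done / total, 0.0), 1.0)  # clamp to [0, 1]
    filled = int(round(fraction * width))
    return "[" + "#" * filled + "-" * (width - filled) + f"] {fraction:.0%}"
```

Even this crude display tells the user both that processing is ongoing and roughly how much remains, which is the key advantage over a bare hourglass cursor.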
Both Cooper (1995) and Norman (1988, 1993) are strong advocates of using more positive feedback and less negative feedback. They argue that designers should try to see users as making imperfect progress toward their goals, and that error messages should be friendly, make clear the scope of the problem, and provide alternative solutions (not just OK buttons). Don’t think of the user as making errors; think of their actions as approximations to what is desired. Even the quintessential error recovery mechanism, the undo command, can be reconceptualized as a method of exploration.
4. Error Handling
The distinction between positive and negative feedback brings us to the special topic of errors. An interface with a good design will help users to detect errors, diagnose the cause of errors, assist error recovery, and prevent errors in the ﬁrst place. In a dynamic system, many dangerous states can be avoided by forcing the operations to take place in proper sequence. Opening the door of a microwave oven switches the oven oﬀ automatically. Norman (1988) and Barﬁeld (1993) have useful discussions of diﬀerent types of forcing functions that prevent dangerous errors from occurring.
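The microwave interlock is a forcing function that can be sketched in a few lines; the class and its state variables are hypothetical:

```python
class Microwave:
    """Sketch of a forcing function (an interlock): opening the door
    switches the oven off, so the dangerous state 'running with the
    door open' is unreachable."""

    def __init__(self):
        self.door_open = False
        self.running = False

    def start(self):
        if self.door_open:       # forcing function: refuse to start
            return False
        self.running = True
        return True

    def open_door(self):
        self.door_open = True
        self.running = False     # interlock switches the oven off

    def close_door(self):
        self.door_open = False   # closing alone does not restart the oven
```

Note that the constraint lives in the device, not in the user’s memory: no amount of haste or distraction can produce the dangerous state.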
4.1 Slips: Errors Of Action
A slip occurs when the user has formed the correct intention, but the appropriate action is waylaid on the way to execution. Slips often occur in the context of actions that are highly automatic and under unconscious control (Norman 1988). As such, slips are more likely to emanate from highly skilled users. Slips should be easy to detect because there will be an obvious discrepancy between the goal and result. However, detection is likely only if there is eﬀective and immediate feedback.
Slips include the infamous mode error. Mode errors occur when devices have diﬀerent modes of operation and the same action produces diﬀerent results depending on the mode. Mode errors can be benign in digital watches or catastrophic in automatic pilots. They are more likely to occur when tasks are interrupted and time elapses between mode setting and action selection. Mode errors can be prevented either by eliminating modes or by making the mode visible and attention catching. Cooper (1995) oﬀers a cogent defense of well-designed modes.
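Making the mode visible can be as simple as echoing it with every interaction. A sketch, assuming a hypothetical two-mode editor:

```python
class Editor:
    """Sketch: prevent mode errors by making the current mode visible
    and attention-catching in every prompt (an assumed design, not a
    real editor's API)."""

    def __init__(self):
        self.mode = "INSERT"

    def toggle_mode(self):
        self.mode = "COMMAND" if self.mode == "INSERT" else "INSERT"

    def prompt(self):
        # the mode is echoed with every prompt, so elapsed time between
        # mode setting and action selection cannot hide the active mode
        return f"-- {self.mode} --"
```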
Description errors are slips caused by performing the right action on the wrong object. When driving in Australia, I usually signal my intention to turn by flipping the left-hand lever, thereby turning on the windshield wipers. Consistency may be the hobgoblin of little minds, but violations of consistency will constantly frustrate the ingrained habits of skilled users. Relative frequency also matters. A high-frequency object may surreptitiously replace the intended target, particularly if the two are similar. For example, the frequently summoned file ‘mybook’ accidentally replaces the newer file ‘mydoodle’, resulting in the unintended request to ‘delete mybook.’ Potentially critical errors like this are usually ‘prevented’ by confirmation boxes. However, when confirmations are issued routinely, users get used to approving them routinely, even when the confirmation warns of an impending disaster. When experts are working on automatic pilot, it is sometimes better to make sure they can recover from slips easily. The Macintosh trash basket and the Windows recycle bin serve such a purpose.
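The trash-basket idea, replacing confirmation with easy recovery, can be sketched as follows (the class and method names are assumptions for illustration):

```python
class Workspace:
    """Sketch: instead of interrupting the expert with a confirmation
    box, make deletion reversible, as the Macintosh trash basket and
    Windows recycle bin do."""

    def __init__(self, files):
        self.files = dict(files)   # name -> contents
        self.trash = {}

    def delete(self, name):
        # no "Are you sure?" dialogue; the file simply moves to trash
        if name in self.files:
            self.trash[name] = self.files.pop(name)

    def undo_delete(self, name):
        # the slip is recoverable long after it was made
        if name in self.trash:
            self.files[name] = self.trash.pop(name)
```

The description error from the text, deleting ‘mybook’ when ‘mydoodle’ was intended, becomes a minor nuisance rather than a disaster.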
4.2 Mistakes: Errors Of Intention
Mistakes, unlike slips, are errors of intention and are often caused by an incorrect user model. Affordances and feedback should help form correct user models, but even error detection is difficult for general-purpose technology that is powerful and flexible. When there are a large number of ways of accomplishing a large array of goals, there will be countless action sequences that are perfectly legal, but most of them will not move the user closer to task completion. If I turn right on Triviz, it is an error because I will eventually run into a construction detour, but nothing on my car’s dash flags that error, because the car has no knowledge of where I want to go and no knowledge of current route conditions. Specialized technologies, such as GPS navigation systems, can have user-centered, task-oriented designs. The user and the interface share goal information and, consequently, errors can be detected and solutions prompted.
The general-purpose PC and its powerful application programs are plagued with the same problem in error detection. Having evolved within the technology paradigm, the interface detects when user actions violate the program, but it offers little or no help as users perform a complicated series of legal actions that the program slavishly follows. The user must detect errors by inspecting the document to see if the intended changes have been made. A specialized program for preparing psychology research articles in the format prescribed by the American Psychological Association could detect and prevent far more errors and offer more useful guidance, because it would have far more knowledge of the task at hand. These considerations play a large role in recent calls to shift from general-purpose devices to information appliances (Norman 1998, Mohageg and Wagner 2000).
5. Evaluating Usability
Usability engineers and designers should get involved early and often. Landauer (1995) makes an impressive case for the economic benefits of usability in product development. Guidelines such as those discussed in this research paper need to be augmented by expert reviews and usability testing. There are a variety of expert-review methods (Shneiderman 1998), of which Nielsen’s (1994) heuristic evaluation method may be the most popular. Heuristic evaluation involves having three to five usability experts examine an interface with respect to ten usability heuristics or guidelines. Heuristic evaluation is vastly superior to no review, but not as effective as user testing (Desurvire 1994). User-centered design should include tests with real users doing real work. The benefit of such activities is simply enormous: the return is almost always over 20 to 1, and often in the 200 to 1 range (Landauer 1995).
Bibliography:

- Barfield L 1993 The User Interface: Concepts and Design. Addison-Wesley, Reading, MA
- Cooper A 1995 About Face: The Essentials of User Interface Design. IDG Books Worldwide, Foster City, CA
- Desurvire H W 1994 Faster, cheaper! Are usability inspection methods as eﬀective as empirical testing? In: Nielsen J, Mack R L (eds.) Usability Inspection Methods. Wiley, New York, pp. 173–202
- Gibson J J 1979 The Ecological Approach to Visual Perception. Houghton Miﬄin, Boston
- Hix D, Hartson H R 2000 Developing User Interfaces. Wiley, New York
- Landauer T K 1995 The Trouble with Computers: Usefulness, Usability and Productivity. MIT Press, Cambridge, MA
- Mohageg M F, Wagner A 2000 Design considerations for information appliances. In: Bergman E (ed.) Information Appliances and Beyond. Academic Press, San Diego, CA
- Nielsen J 1994 Heuristic evaluation. In: Nielsen J, Mack R L (eds.) Usability Inspection Methods. Wiley, New York
- Norman D A 1988 The Psychology of Everyday Things. Basic Books, New York
- Norman D A 1993 Things That Make Us Smart. Perseus, Reading, MA
- Norman D A 1998 The Invisible Computer. MIT Press, Cambridge, MA
- Paap K R, Cooke N J 1997 Design of menus. In: Helander M, Landauer T K, Prabhu P (eds.) Handbook of Human–Computer Interaction, 2nd edn. Elsevier Science, Amsterdam
- Shneiderman B 1998 Designing the User Interface. Addison-Wesley Longman, Reading, MA