HCI — Human Computer Interaction
In this blog I give a brief introduction to human-computer interaction. First of all, what is HCI? Wikipedia says that “Human–computer interaction (HCI) studies the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human-computer interaction is situated at the intersection of computer science, behavioural sciences, design, media studies, and several other fields of study”. In this series of blog articles I focus mainly on three topics.
- Design rules for interactive systems
- Evaluation techniques for interactive systems
- Universal Design for Interactive Systems
Now let's look at each of these topics in turn.
Design rules for interactive systems
The most abstract design rules are general principles, which can be applied to the design of an interactive system in order to promote its usability. For easier understanding, this topic can be divided into the following subtopics:
- Principles of Learnability
- Principles of Flexibility
- Principles of Robustness
- Standards and Guidelines for Interactive Systems
- Shneiderman’s 8 Golden Rules
- Norman’s 7 Principles
1. Principles of Learnability
Learnability principles are concerned with interactive system features that help novice users learn quickly and also allow steady progression to expertise. The principles discussed below support the learnability design principle.
- Predictability — This interactive design principle requires a user’s knowledge of interaction to be sufficient to determine the outcome of present or future interaction with the system.
- Synthesizability — Two aspects of synthesizability are immediate honesty and eventual honesty. In general, this principle relates to the ability of the interactive system to provide the user with observable and informative notification of state changes within the system.
- Familiarity — The familiarity principle is concerned with the ability of an interactive system to allow a user to map prior experience, whether from the real world or from other computer-based systems, onto the features of a new system.
- Generalizability — This interactive design principle provides support for users to extend knowledge of specific interaction within, and across applications, to new, but similar situations.
- Consistency — To support generalizability, consistency is essential, and it is probably one of the most widely applied design principles in user interface design. Consistency between applications is always favorable; however, consistency within an application is essential.
2. Principles of Flexibility
Flexibility refers to the multiplicity of ways in which the end-user and the system exchange information. We identify several principles that contribute to the flexibility of interaction.
- Dialog initiative — When the system controls the dialog flow, the dialog is said to be system preemptive. Conversely, when the flow is controlled by the user, the dialog is said to be user preemptive. In general a user preemptive dialog is favored, although some situations require a system preemptive dialog. In reality, some balance between these two extremes is usually the most satisfactory solution.
- Multi-threading — Within a user interface, a thread can be considered a part of the dialog that allows a task to be performed. Multi-threading within an interface provides support for multiple tasks to be performed at the same time.
- Task migratability — Task migratability means passing responsibility for the execution of tasks between the user and the system. A computerized spell checker is a good example of this.
- Substitutivity — Substitutivity offers a user alternative ways of specifying input or viewing output. Indeed the distinction between output and input can be blurred.
- Customizability — The user interface should be able to support individual preferences. For example, the standard control bars in MS Word can be amended as required. The customizability principle supports a user’s ability to adjust system settings or features to the form that best suits their preferred way of working.
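The customizability principle can be sketched in code: user adjustments are layered over shipped defaults rather than replacing them. This is a minimal illustration; the `Preferences` class and its field names are hypothetical, not from any real toolkit.

```python
from dataclasses import dataclass, replace

# Hypothetical preferences for an editor; fields are illustrative only.
@dataclass(frozen=True)
class Preferences:
    font_size: int = 12
    show_toolbar: bool = True
    autosave_seconds: int = 60

defaults = Preferences()

# The user adjusts the interface to their preferred way of working;
# the shipped defaults stay intact and can always be restored.
mine = replace(defaults, font_size=16, show_toolbar=False)
```

Keeping the defaults immutable (`frozen=True`) means "reset to defaults" is always available, which also supports recoverability.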
3. Principles of Robustness
In a work or task domain, a user is engaged with a computer in order to achieve some set of goals. The robustness of that interaction covers features that support the successful achievement and assessment of the goals. Here, we describe principles that support robustness.
- Observability — Observability should provide users with the ability to evaluate the internal state of the system from its perceivable representation. If a user cannot understand the internal state of the system, there is a high likelihood that the user’s confidence will be very low.
- Recoverability — Users should be able to reach a desired goal after recognition of errors in previous interaction. Error recovery can be achieved in two ways, forward (negotiation) and backward (undo).
- Responsiveness — Responsiveness is usually measured in terms of the rate of communication between the system and a user. Response time, indicating change of states within the system, is important. Short duration or instantaneous response time is more desirable.
- Task conformance — There are two aspects of task conformance: task completeness and task adequacy. Task completeness is concerned with whether a system is capable of supporting the entire task that a user wishes to perform. Task adequacy is concerned with addressing the user’s understanding of these tasks. It is necessary that an interactive system allows the user to perform the desired tasks as defined during the task analysis.
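Backward error recovery (undo) can be sketched very simply: save the state before each change and restore it on request. This is a minimal sketch; the `TextBuffer` class is hypothetical and a real editor would store compact deltas rather than whole snapshots.

```python
# Minimal sketch of backward error recovery (undo) on a text buffer.
class TextBuffer:
    def __init__(self):
        self.text = ""
        self._history = []  # stack of previous states

    def insert(self, s):
        self._history.append(self.text)  # save state before the change
        self.text += s

    def undo(self):
        # Backward recovery: restore the most recent saved state.
        if self._history:
            self.text = self._history.pop()

buf = TextBuffer()
buf.insert("Helo")   # the user makes a typo
buf.undo()           # backward recovery undoes it
buf.insert("Hello")  # forward progress resumes with the correction
```

Forward recovery (negotiation), by contrast, would keep the erroneous state and help the user correct it in place, as a spell checker does.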
4. Standards and Guidelines for Interactive Systems
Standards for interactive system design are usually set by national or international bodies to ensure compliance with a set of design rules by a large community. Standards can apply specifically to either the hardware or the software used to build the interactive system. There are differing characteristics between hardware and software, which affect the utility of design standards applied to them:
- Underlying theory — Standards for hardware are based on an understanding of physiology or ergonomics/human factors, the results of which are relatively well known, fixed and readily adaptable to the design of the hardware. On the other hand, software standards are based on theories from psychology or cognitive science, which are less well formed, still evolving and not very easy to interpret in the language of software design. Consequently, standards for hardware can directly relate to a hardware specification and still reflect the underlying theory, whereas software standards would have to be more vaguely worded.
- Change — Hardware is more difficult and expensive to change than software, which is usually designed to be very flexible. Consequently, requirements changes for hardware do not occur as frequently as for software. Since standards are also relatively stable, they are more suitable for hardware than software.
For these reasons, a given standards institution, such as the British Standards Institution (BSI), the International Organization for Standardization (ISO), or a national military agency, has usually had standards for hardware in place before any for software.
We have observed that the incompleteness of theories underlying the design of interactive software makes it difficult to produce authoritative and specific standards. As a result, the majority of design rules for interactive systems are suggestive and more general guidelines. Our concern in examining the wealth of available guidelines is in determining their applicability to the various stages of design. A classic example was a very general list compiled by Smith and Mosier in 1986 at the Mitre Corporation and sponsored by the Electronic Systems Division of the US Air Force. The basic categories of the Smith and Mosier guidelines are:
- Data Entry
- Data Display
- Sequence Control
- User Guidance
- Data Transmission
- Data Protection
Each of these categories is further broken down into more specific subcategories which contain the particular guidelines.
5. Shneiderman’s 8 Golden Rules
Shneiderman’s eight golden rules provide a convenient and succinct summary of the key principles of interface design. They are intended to be used during design but can also be applied, like Nielsen’s heuristics, to the evaluation of systems. Notice how they relate to the abstract principles discussed earlier.
- Strive for consistency — Consistent sequences of actions should be required in similar situations; identical terminology should be used in prompts, menus, and help screens; and consistent commands should be employed throughout.
- Enable frequent users to use shortcuts — As the frequency of use increases, so do the user’s desires to reduce the number of interactions and to increase the pace of interaction. Abbreviations, function keys, hidden commands, and macro facilities are very helpful to an expert user.
- Offer informative feedback — For every operator action, there should be some system feedback. For frequent and minor actions, the response can be modest, while for infrequent and major actions, the response should be more substantial.
- Design dialog to yield closure — Sequences of actions should be organized into groups with a beginning, middle, and end. The informative feedback at the completion of a group of actions gives the operators the satisfaction of accomplishment, a sense of relief, the signal to drop contingency plans and options from their minds, and an indication that the way is clear to prepare for the next group of actions.
- Offer simple error handling — As much as possible, design the system so the user cannot make a serious error. If an error is made, the system should be able to detect the error and offer simple, comprehensible mechanisms for handling the error.
- Permit easy reversal of actions — This feature relieves anxiety, since the user knows that errors can be undone; it thus encourages exploration of unfamiliar options. The units of reversibility may be a single action, a data entry, or a complete group of actions.
- Support internal locus of control — Experienced operators strongly desire the sense that they are in charge of the system and that the system responds to their actions. Design the system to make users the initiators of actions rather than the responders.
- Reduce short-term memory load — The limitation of human information processing in short-term memory requires that displays be kept simple, multiple page displays be consolidated, window-motion frequency be reduced, and sufficient training time be allotted for codes, mnemonics, and sequences of actions.
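Rule 2 (shortcuts) is often implemented by routing both the novice path (menus) and the expert path (accelerators) to the same command, so the shortcut is guaranteed to have an identical effect. The sketch below is illustrative; the command names and key strings are hypothetical.

```python
# Both interaction paths dispatch to the same command object.
def save_document():
    return "saved"

commands = {"File > Save": save_document}  # menu path for novices
shortcuts = {"Ctrl+S": "File > Save"}      # accelerator for frequent users

def click_menu(path):
    return commands[path]()

def press(keys):
    # The shortcut is just an alias for the menu entry, so the two
    # paths can never drift apart (this also supports consistency).
    return commands[shortcuts[keys]]()
```

Because the shortcut table maps onto menu paths rather than duplicating command logic, adding a new command automatically keeps both paths in sync.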
6. Norman’s 7 Principles
To improve the interaction between humans and computers, Donald Norman in 1988 proposed seven principles that can be used to transform difficult tasks into simpler ones. The seven principles are:
- Use both knowledge in world & knowledge in the head.
People work better when the knowledge they need to do a task is available externally — either explicitly or through the constraints imposed by the environment. But experts also need to be able to internalize regular tasks to increase their efficiency. So systems should provide the necessary knowledge within the environment and their operation should be transparent to support the user in building an appropriate mental model of what is going on.
- Simplify task structures.
Tasks need to be simple in order to avoid complex problem solving and excessive memory load. There are a number of ways to simplify the structure of tasks. One is to provide mental aids to help the user keep track of stages in a more complex task. Another is to use technology to provide the user with more information about the task and better feedback. A third approach is to automate the task or part of it, as long as this does not detract from the user’s experience. The final approach to simplification is to change the nature of the task so that it becomes something more simple. In all of this, it is important not to take control away from the user.
- Make things visible.
Bridge the gulfs of execution and evaluation. The interface should make clear what the system can do and how this is achieved, and should enable users to see clearly the effect of their actions on the system.
- Get the mappings right.
User intentions should map clearly onto system controls. User actions should map clearly onto system events. So it should be clear what does what and by how much. Controls, sliders and dials should reflect the task — so a small movement has a small effect and a large movement a large effect.
- Exploit the power of constraints, both natural and artificial.
Constraints are things in the world that make it impossible to do anything but the correct action in the correct way. A simple example is a jigsaw puzzle, where the pieces only fit together in one way. Here the physical constraints of the design guide the user to complete the task.
- Design for error.
To err is human, so anticipate the errors the user could make and design recovery into the system.
- When all else fails, standardize.
If there are no natural mappings then arbitrary mappings should be standardized so that users only have to learn them once. It is this standardization principle that enables drivers to get into a new car and drive it with very little difficulty — key controls are standardized. Occasionally one might switch on the indicator lights instead of the windscreen wipers, but the critical controls (accelerator, brake, clutch, steering) are always the same.
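The "exploit constraints" principle translates directly into code: restrict input so the wrong action is impossible rather than merely discouraged. Below is a minimal sketch, assuming a hypothetical gear-shift interlock; the `Gear` type and the rule itself are illustrative inventions.

```python
from enum import Enum

# An enumerated type is itself a constraint: only these gears exist,
# so an invalid gear cannot even be expressed.
class Gear(Enum):
    PARK = "P"
    DRIVE = "D"
    REVERSE = "R"

def shift(current: Gear, target: Gear) -> Gear:
    # Artificial constraint (like a mechanical interlock): reverse is
    # only reachable from park, so the error cannot occur at all.
    if target is Gear.REVERSE and current is not Gear.PARK:
        raise ValueError("shift to PARK before REVERSE")
    return target
```

Like the jigsaw pieces that fit together in only one way, the type and the guard make the correct action the only possible one, which also "designs for error" by blocking the mistake before it happens.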
This is the end of the first blog of this series. For more details, read the next two blogs as well. I hope this blog has given you some idea of the topic; thank you for reading. The reference and the link to the next blog are mentioned below.
Next blog article of this series : Evaluation techniques for interactive systems
References:-
Alan Dix, Janet Finlay, Gregory D. Abowd and Russell Beale, Human–Computer Interaction (Third Edition)