This article discusses methods for the ergonomic assessment of interface characteristics: methods based on formal models, expert judgement, inspection methods (cognitive walkthrough, assessment of conformity with recommendations, assessment of conformity with ergonomic dimensions such as standards, principles and heuristics), and tools for automatic evaluation.
Methods applicable to interface characteristics
This category of methods differs from the previous one by the absence of direct interaction between user and system: in these methods, users and their tasks are only represented. This category covers formal models, methods and languages, expert judgement, and inspection methods.
Methods based on formal models
Evaluations based on theoretical and/or formal models (discussed in Chapter 3 of the work of Kolski) can predict the complexity of a system, for example the number of production rules of the type "to do this, do that" that an ideal user must know to carry out a task with the proposed system, and hence the performance of users. Evaluation with these models is, however, very long, very expensive and difficult for non-specialists to carry out.
The use of experts
Expert assessment is usually defined as an informal evaluation in which an expert compares the performance, attributes and characteristics of a system (presented as specifications, models or prototypes) against recommendations or standards in order to detect design flaws.
Usability inspection methods cover a range of approaches that rely on the judgement of evaluators, whether or not they are usability experts. Although these methods pursue different objectives, they are generally designed to detect aspects of an interface that may cause difficulty of use or increase the users' workload. Inspection methods are distinguished from one another by the way the evaluators' judgements are derived and by the evaluation criteria on which those judgements are based.
Among inspection methods, some are of particular interest here: cognitive walkthrough, the analysis of conformity with a set of recommendations (guideline reviews), and the analysis of conformity with standards (standards inspection), principles, dimensions and heuristics. This interest stems partly from the fact that they are well documented and have been tested and compared.
Cognitive walkthrough is an inspection method that assesses ease of learning by exploration of an interactive system. The assessment requires a detailed description of the interface (ideally as a paper mock-up, software model or prototype), a description of the task, a description of the potential users and the context of use, and a precise description of the sequence of actions the user must perform to accomplish the described tasks.
During the inspection, evaluators review each of the actions that the user must perform. For each action, they ask what the target user would be tempted to do, given the user's goals and knowledge, and compare these hypothetical actions with the actions the system allows at that stage of the interaction.
If the interface is well designed, the actions proposed or permitted by the system should correspond to those the user is entitled to expect. In other words, cognitive walkthrough seeks to identify design choices that may impede learning by exploration.
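The stepwise comparison at the heart of a walkthrough can be sketched as a simple loop. This is a minimal illustration, not a published procedure; the step data and field names are invented:

```python
# Minimal sketch of the comparison loop in a cognitive walkthrough.
# Each step pairs the action a user is likely to attempt (given their
# goal) with the actions the system actually affords at that point.

def walkthrough(steps):
    """Return the descriptions of steps where the likely user action
    is not among the actions the system affords."""
    problems = []
    for step in steps:
        if step["likely_user_action"] not in step["afforded_actions"]:
            problems.append(step["description"])
    return problems

steps = [
    {"description": "Open the print dialog",
     "likely_user_action": "click File > Print",
     "afforded_actions": ["click File > Print", "press Ctrl+P"]},
    {"description": "Choose double-sided printing",
     "likely_user_action": "tick a 'two-sided' checkbox",
     "afforded_actions": ["open 'Advanced...' panel"]},  # option is hidden
]

print(walkthrough(steps))  # the second step is flagged as a learnability problem
```

A real walkthrough records, for each mismatch, why the user would fail (wrong goal, invisible action, misleading feedback); the sketch only captures the mismatch itself.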
The assessment of compliance with recommendations
The assessment of conformity with recommendations (guideline reviews) consists in judging the conformity of interface elements with the recommendations (ergonomic or style) contained in various types of collections. There are style guides, i.e. the recommendations proposed by manufacturers, developers or consortia, and compilations of ergonomic recommendations. Style recommendations generally distinguish between software interfaces developed for different environments (e.g. for Unix, Windows and Macintosh). The "Macintosh Human Interface Guidelines" from Apple or "The Windows Interface: An Application Design Guide" from Microsoft are examples of style guides.
Ergonomic recommendations are usually presented in compilations (e.g. Scapin, Smith and Vanderdonckt) or guides (e.g. Mayhew). Compilations of recommendations are probably the most important source of design guidance.
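The core of a guideline review, checking each interface element against the recommendations that apply to it, can be sketched as follows. The rules and element descriptions are invented for illustration and stand in for entries from a real compilation:

```python
# Hypothetical sketch of a guideline review: each rule names the element
# property it constrains and a predicate that the property must satisfy.

rules = [
    ("label_ends_with_colon", "field_label",
     lambda label: label.endswith(":")),
    ("button_label_capitalized", "button_label",
     lambda label: label.split()[0].istitle()),
]

elements = [
    {"id": "name_field", "field_label": "Name:"},
    {"id": "ok_button", "button_label": "submit form"},  # lower case: violation
]

def review(elements, rules):
    """Return (element id, rule name) pairs for every violated rule."""
    violations = []
    for el in elements:
        for name, prop, check in rules:
            if prop in el and not check(el[prop]):
                violations.append((el["id"], name))
    return violations

print(review(elements, rules))  # [('ok_button', 'button_label_capitalized')]
```

Real guideline reviews also involve recommendations that cannot be reduced to a predicate on a single property, which is one reason the method usually remains a manual inspection.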
Assessment of conformity with ergonomic dimensions (standards, principles, heuristics)
Alongside compilations of recommendations, ergonomic knowledge has been made available in other forms: for example, design guides, principles, heuristics and standards.
The many available design guides serve various objectives. Looking at these guides, there is clearly a lack of uniformity in how recommendations are presented, and the number of recommendations varies. They are sometimes organized by higher-level criteria, principles or themes intended to organize and synthesize them (e.g. consistency, stimulus-response compatibility, ease of learning), and sometimes by themes derived from a decomposition of the interface (e.g. command language, menu selection, data entry, data display).
Some guides specifically address the design of user interfaces, while others deal mainly with evaluation. They can be more or less complex and more or less detailed. There are also book chapters, brief and quite general (such as Marshall, Nelson & Gardiner), and guides with more detailed checklists (e.g. Clegg et al.).
The distinction between these dimensions, at least between principles, standards and heuristics, is sometimes tenuous. In some cases the distinction comes from their official status (e.g. standards); in others it may be related to the precision of the definitions or the number of example recommendations accompanying these dimensions. Design and evaluation standards (design standards) are usually a series of statements about the design of interactive systems. What distinguishes them from other documents that also set out general principles is their official status and their origin: these documents come from standards bodies. There are national standards (e.g. DIN in Germany, AFNOR in France) and international standards (e.g. ISO) (see the article about standards).
Principles are general statements based on research findings about the way people learn and work. Thus, the principle of "consistency in the choice of words, formats and procedures" results from research showing that people learn faster and transfer their skills better when the information presented to them and the procedures they must follow are consistent. Principles therefore state objectives without specifying how to meet them.
Tools for automatic evaluation
Various software tools to aid evaluation have already been proposed. Some are software versions of paper documents; others are tools to support evaluation, i.e. to help the evaluator structure and organize the assessment; still others perform automatic evaluation. It is this last category that is discussed here: tools that take files describing the interface and apply tests to assess conformity with certain ergonomic recommendations, principles, criteria or style recommendations (e.g. ERGOVAL, KRI/AG, CHIMES, SYNOP, the tool of Mahajan and Shneiderman). This does not include tools that capture user events, such as those presented in the first part of this article.
ERGOVAL (Farenc) is a knowledge-based evaluation system. The ergonomic rules in its knowledge base concern graphical interfaces and do not require knowledge about the task. These rules come from various compilations of recommendations and are classified into categories related to the Ergonomic Criteria (discussed in Part 3 of this article). In addition to the ergonomic rules, ERGOVAL includes a structural decomposition of the interface objects, based on the CUA (Common User Access) standard. Among other things, this typology of objects associates sets of rules with each type of object. At the end of the diagnosis, ERGOVAL provides a textual justification for each violated rule.
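The association of rule sets with object types, as described for ERGOVAL, can be sketched as a simple lookup. The object types, rules and justifications below are invented for illustration and are not ERGOVAL's actual knowledge base:

```python
# Hypothetical sketch in the style of ERGOVAL: each object type carries
# its own set of rules, and every violated rule yields a textual
# justification for the final diagnosis.

RULES_BY_TYPE = {
    "push_button": [
        ("has_label", lambda o: bool(o.get("label")),
         "A push button must carry a textual label."),
    ],
    "entry_field": [
        ("short_label", lambda o: len(o.get("label", "")) <= 20,
         "An entry-field label should not exceed 20 characters."),
    ],
}

def diagnose(objects):
    """Check each object against the rules for its type and return
    (object id, rule name, justification) triples for violations."""
    report = []
    for obj in objects:
        for name, check, justification in RULES_BY_TYPE.get(obj["type"], []):
            if not check(obj):
                report.append((obj["id"], name, justification))
    return report

objects = [
    {"id": "btn1", "type": "push_button", "label": ""},   # violation
    {"id": "fld1", "type": "entry_field", "label": "Name"},
]

print(diagnose(objects))  # btn1 violates 'has_label'
```

Keying rules by object type keeps the diagnosis focused: an object is only tested against rules that can meaningfully apply to it.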
KRI/AG (Löwgren & Nordqvist) is an expert system connected to a UIMS (TeleUse, under X Window), which assesses the files the UIMS generates. The system relies on a base of about one hundred ergonomic and style (Motif) recommendations concerning the syntactic and presentation aspects of the interface.
CHIMES (Jiang et al.) is a system capable of evaluating the conformity of an interface with the OSF/Motif style recommendations and with recommendations on the use of color. During the evaluation, CHIMES makes proposals for improvement.
SYNOP (Kolski and Millot) is an expert system for the automatic evaluation of the presentation of static industrial synoptic displays, based on recommendations concerning the presentation of information on screen. The evaluation operates on a description of pages created with the IMAGINE graphical editor. The system can also make changes automatically; when changes are not possible, recommendations are proposed. It can help detect errors related to certain ergonomic dimensions (e.g. grouping/distinction of items by location and format, legibility, information density and consistency).
Mahajan and Shneiderman
Mahajan and Shneiderman have developed a tool for assessing the consistency of an interface. The tool converts an interface created with Visual Basic into a canonical description of its objects, on which the consistency evaluation is performed. Specifically, the tool evaluates the style and size of the fonts used in dialog boxes, in order to detect inconsistencies; the colors used for backgrounds; the use of capitals in the words of buttons, labels, titles and menus across all dialog boxes, in order to detect inconsistent capitalization; and the consistency of command buttons, i.e. their titles, use of capitals, relative location and size (height, width), as well as the spelling of the words used in all objects, including the detection of synonyms.
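One of the checks described above, consistent capitalization of button labels across dialog boxes, can be sketched over a canonical object description. The data format is invented for illustration and is not the tool's actual representation:

```python
# Hypothetical sketch of a cross-dialog consistency check in the spirit
# of Mahajan and Shneiderman's tool: flag button labels whose
# capitalization style deviates from the dominant style.

from collections import Counter

def capitalization_style(label):
    """Classify a label as 'upper', 'lower', 'title' or 'mixed'."""
    if label.isupper():
        return "upper"
    if label.islower():
        return "lower"
    if label == label.title():
        return "title"
    return "mixed"

def inconsistent_buttons(dialogs):
    """Return (dialog, label) pairs deviating from the dominant style."""
    styles = [(d, b, capitalization_style(b))
              for d, buttons in dialogs.items() for b in buttons]
    dominant = Counter(s for _, _, s in styles).most_common(1)[0][0]
    return [(d, b) for d, b, s in styles if s != dominant]

dialogs = {
    "Save As": ["Save", "Cancel"],
    "Print": ["Print", "cancel"],  # lower-case deviation
}

print(inconsistent_buttons(dialogs))  # [('Print', 'cancel')]
```

The real tool goes much further (fonts, colors, button geometry, spelling, synonyms), but each of those checks has the same shape: normalize the objects into a canonical form, then compare one attribute across the whole interface.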
For now, the aspects of ergonomic quality that these tools can assess remain relatively limited, at least in their present state. Automatic evaluation tools should be considered as complementary to inspections of the ergonomic quality of interactive software and to usability tests.
All these tools, although they make no reference to the users' tasks or characteristics, are nonetheless useful. Recall, for example, that evaluating an interface for consistency alone, all the more so if the interface is complex, is an extremely difficult task. Such an evaluation requires that the value of a parameter of an object or set of objects (e.g. the positioning of the cancel and confirm buttons of a dialog box) be consistent throughout the interface (in our example, the relative positioning of the command buttons should be the same in all dialog boxes unless there is reason to do otherwise). Moreover, for this evaluation to be complete, the evaluator must have a good representation of the dialogue, which is not easy to achieve. Any tool facilitating this type of assessment therefore allows the evaluator to devote more time to matters related to the tasks. These tools show that some ergonomic dimensions, recommendations or rules lend themselves fairly well to automatic evaluation. However, for a more semantic evaluation, these tools will have to rely on task-description tools and on interface-description tools. One can already imagine the magnitude of that task: how, for example, from a description of the task, can we automatically determine that the dialogue structure matches it? Doing so will require a good description of the interface dialogue and a way to link it to the task description.