The cognitive walkthrough (Lewis et al.; Polson et al.) is a usability inspection method that focuses on evaluating a design for ease of learning. Usability inspection methods are a class of usability evaluation procedures; here we describe two of them, the cognitive walkthrough and heuristic evaluation. The last few years have seen the emergence of usability inspection as an important new class of evaluation techniques (Nielsen and Mack, eds., Usability Inspection Methods, John Wiley & Sons).
INTRODUCTION. Software inspection has long been used as a method for debugging and improving code. Similarly, usability inspection has seen growing use for evaluating user interfaces (Nielsen, "Heuristic evaluation," in Usability Inspection Methods, John Wiley & Sons, New York, NY). Engineering usability into a software application is never cheap, and neither are the methods to evaluate it, although several inspection methods already exist.
Abstract: Usability methods, such as heuristic evaluation, cognitive walkthroughs and user testing, are increasingly used to evaluate and improve the design of clinical software applications. However, there is still some uncertainty as to how those methods can be used to support the development process and evaluation in the most meaningful manner. In this study, we compared the results of a heuristic evaluation with those of formal user tests in order to determine which usability problems were detected by both methods. Both methods yielded strong evidence that the dental CPRs have significant usability problems. Some statements of heuristic violations were specific enough to precisely identify the actual usability problem that study participants encountered. Other violations were less specific, but still manifested themselves in usability problems and poor task outcomes. In this study, heuristic evaluation identified a significant portion of the problems found during usability testing.
Heuristic Evaluation Usability Evaluation Materials. Conditions of Use: We welcome collaborators in our research into analytic evaluation methods. Introduction: Heuristic evaluation (Nielsen and Molich; Nielsen) is a method of usability evaluation in which an analyst finds usability problems by checking the user interface against a set of supplied heuristics or principles.
Heuristics: The following heuristics were proposed by Nielsen (Nielsen). Each heuristic is presented in a structured manner, with one or more of the following elements. Conformance Question: what the system should do, or users should be able to do, to satisfy the heuristic. Evidence of Conformance: things to look for, for example design features, or the lack of design features, that indicate partial satisfaction or breaches of the heuristic. Motivation: usability problems that the heuristic tries to avoid.

Visibility of System Status. Conformance Question: Are users kept informed about system progress with appropriate feedback within reasonable time? Evidence of Conformance: Necessary evidence must be identified through analysis of individual tasks. Motivation: Feedback allows the user to monitor progress towards the solution of their task, allows the closure of tasks and reduces user anxiety.

Match between system and the real world. Conformance Question: Does the system use concepts and language familiar to the user rather than system-oriented terms? Does the system use real-world conventions and display information in a natural and logical order?

Description of Heuristic Evaluation

Evidence of Conformance: Necessary evidence must be identified through user studies or through assumptions about users. Motivation: A good match minimises the extra knowledge required to use the system, simplifying all task-action mappings (re-expression of users' intuitions into system concepts).
User control and freedom. Conformance Question: Can users do what they want, when they want? Motivation: Quite simply, users often choose actions by mistake.
Consistency and Standards. Conformance Question: Do design elements such as objects and actions have the same meaning or effect in different situations? Evidence of Conformance: Necessary evidence must be identified through several analyses: consistency within the system, conformance to style guides, consistency across task methods. Motivation: Consistency minimises the user knowledge required to use the system by letting users generalise from existing experience of the system or of other systems.
Error prevention. Conformance Question: Can users make errors which good designs would prevent? Evidence of Conformance: Necessary evidence must be identified through analysis of individual tasks and of system details (e.g. …). Motivation: Errors are the main source of frustration, inefficiency and ineffectiveness during system usage.
Recognition rather than recall. Conformance Question: Are design elements such as objects, actions and options visible? Is the user forced to remember information from one part of the system to another? Motivation: Forcing users to remember details such as command and file names is a major source of error. Recognition minimises the user knowledge required to use the system. Summarising available commands or options may allow the user to guess their meaning or purpose.
Flexibility and efficiency of use. Conformance Question: Are task methods efficient, and can users customise frequent actions or use shortcuts? Evidence of Conformance: Necessary evidence must be identified through analysis of individual tasks, and through the presence of design features such as keyboard accelerators. Motivation: Inefficient task methods can reduce user effectiveness and cause frustration.
Aesthetic and minimalist design. Conformance Question: Do dialogues contain irrelevant or rarely needed information? Motivation: Cluttered displays increase search times for commands and can cause users to miss features on the screen. Users unfamiliar with a system often have to find an action to meet a particular need; reducing the number of actions available can make the choice easier.
Help users recognize, diagnose and recover from errors. Conformance Question: Are error messages expressed in plain language (no codes), and do they accurately describe the problem and suggest a solution? Evidence of Conformance: Necessary evidence must be identified through analysis of error messages. Help and documentation. Motivation: Ideally, a system should not require documentation. However, it may be necessary to provide help which users need to access at very short notice.
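The structured heuristics above (name, conformance question, evidence, motivation) lend themselves to a simple record type for logging suspected violations during an inspection. The sketch below is our own illustration, not part of the original materials; the class and field names are invented:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: one record per heuristic, mirroring the
# structure of the materials above. Names are our own invention.
@dataclass
class Heuristic:
    name: str
    conformance_question: str
    violations: list = field(default_factory=list)  # analyst's notes

checklist = [
    Heuristic("Visibility of system status",
              "Are users kept informed about system progress with "
              "appropriate feedback within reasonable time?"),
    Heuristic("Match between system and the real world",
              "Does the system use concepts and language familiar to "
              "the user rather than system-oriented terms?"),
]

# The analyst records each suspected breach under the matching heuristic.
checklist[0].violations.append("No progress indicator during long saves")

for h in checklist:
    print(f"{h.name}: {len(h.violations)} suspected violation(s)")
```

Grouping notes under the violated heuristic, as here, makes the later aggregation of results across analysts straightforward.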
A structure has been applied to the heuristics described in Nielsen, and to the best of our ability we have kept the original meanings of the individual heuristics. The materials must not be copied by anyone who has not visited the web page and agreed to the conditions of use. Heuristic Evaluation Check List: This check list has been supplied as a reading aid to the heuristic evaluation method and as a reminder for the evaluation of the prototype.
Visibility of System Status: Are users kept informed about system progress with appropriate feedback within reasonable time? Match between system and the real world: Does the system use concepts and language familiar to the user rather than system-oriented terms?
User control and freedom: Can users do what they want, when they want?

We also present the qualitative analysis for both case studies, comparing their performance. This qualitative analysis allowed a better understanding of the quantitative results and also helped to improve the WDP-RT.
The contributions of this paper are twofold: (1) to describe how the data about our case studies were obtained and analyzed, discussing the results of using the WDP-RT in an industrial environment; and (2) to disseminate knowledge about planning, executing, and analyzing case studies to support the improvement of new technologies in Software Engineering.
This paper is organized as follows. Finally, in Section VI, we present our conclusions. A reading technique is an inspection technique based on several steps that aim at understanding a specific task in a software product. Thus, the WDP-RT was developed as a reading technique to be employed by inspectors with little knowledge of usability. The development and performance evaluation of the WDP-RT were based on experiments using case studies in academic and industrial environments.
The WDP-RT is based on another inspection technique, known as WDP, a checklist-based inspection technique that was also developed using an evidence-based methodology.
The version of WDP-RT used in the studies reported below is the second version of the technique (WDP-RT v2), formulated based on the results of an in vitro study conducted to evaluate the feasibility of the technique.
In this version, the instructions of the WDP-RT are grouped into two inspection phases: first, the instructions for usability verification from the Presentation and Conceptualization perspectives are executed, and last, the instructions for the Navigation perspective. (Figure: WDP-RT v2 extract.)

Case Studies. Experimentation allows researchers to create and maintain a knowledge base in which each item is verified in real-world case studies, making the results more trustworthy.
Among the several kinds of experimentation studies, the case studies allow the careful analysis of a specific process in the context of a software lifecycle .
Thus, the case studies for the evaluation of the WDP-RT were conducted with the main goal of evaluating the adequacy of the technique in an industrial environment. The Efficiency indicator is computed as the ratio between the number of defects and the inspection time. The Learnability indicator was verified using two main factors: (a) effort spent in technique training, measured in man-hours, i.e., the time spent training the inspectors to use the technique; and (b) perception of difficulty in applying the technique, i.e., the inspectors' opinion about how hard it was to apply the WDP-RT during the usability inspection.
The Efficiency and Efficacy indicators are commonly used in the evaluation of defect detection techniques. However, since the total number of usability defects in the inspected applications was not initially known, the Efficacy indicator, measured as the ratio between the number of detected defects and the total number of defects, could not be computed.
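As a concrete illustration of the two indicators defined above, consider the following sketch. All numbers are invented for the example and are not taken from the study:

```python
# Illustrative computation of the two defect-detection indicators
# described above. The inputs here are made-up example values.

def efficiency(defects_found: int, inspection_hours: float) -> float:
    """Efficiency = defects found per hour of inspection."""
    return defects_found / inspection_hours

def efficacy(defects_found: int, total_defects: int) -> float:
    """Efficacy = fraction of all existing defects that were found.
    This requires knowing the total number of defects, which the
    study could not establish in advance."""
    return defects_found / total_defects

print(round(efficiency(12, 1.5), 2))  # 12 defects in 1.5 h -> 8.0 defects/hour
print(round(efficacy(12, 20), 2))     # 12 of 20 known defects -> 0.6
```

The denominator of Efficacy is exactly what was unavailable in the study, which is why only Efficiency was reported.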
In order to facilitate the inspection, two activity guides (A and B) were created, each with an equivalent number of activities. Participants: the case study had eight informatics professionals as participants (five systems analysts and three support analysts). Four participants performed the activities of guide A and the other four performed the activities of guide B.
Procedures: the participants received one hour and fifteen minutes of training about usability and the WDP-RT technique.
The defect detection was conducted individually by the inspectors, who had a deadline of one week to complete it. Data Collection: seven inspectors (four from guide A and three from guide B) sent their spreadsheets with annotations and discrepancies. The researchers involved in this study compiled the discrepancies identified by the inspectors. For each activity guide, a single list containing all identified discrepancies was generated.
These discrepancies were then classified as unique or duplicate (a discrepancy identified by more than one inspector) and, finally, the inspector identifier was removed. In these meetings, the evaluated interactions were re-executed, allowing the in…
After the discussion of each discrepancy among the inspectors and team members, each discrepancy was classified as either a defect or a false positive.
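The collation step described above, merging per-inspector spreadsheets and flagging discrepancies reported by more than one inspector, can be sketched as follows. The data and names are our own invention, not from the study:

```python
from collections import Counter

# Hypothetical per-inspector discrepancy lists (invented data); in the
# study these came from the inspectors' spreadsheets.
reports = {
    "inspector_1": ["label unclear on save button", "no undo for delete"],
    "inspector_2": ["no undo for delete", "error message uses codes"],
    "inspector_3": ["label unclear on save button"],
}

# Merge all reports, then classify each discrepancy as unique
# (one reporter) or duplicate (reported by more than one inspector).
counts = Counter(d for findings in reports.values() for d in findings)
unique = sorted(d for d, n in counts.items() if n == 1)
duplicate = sorted(d for d, n in counts.items() if n > 1)

print(unique)     # ['error message uses codes']
print(duplicate)  # ['label unclear on save button', 'no undo for delete']
```

After this merge the inspector identifiers can be dropped, as in the study, so that the discrimination meetings discuss an anonymized list.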
Thus, the number of defects identified by each of the inspectors was observed. Three of the inspectors already had some previous knowledge of usability and inspections, while two of them had already taken part in a usability inspection.
Table 1 shows the individual results of the inspection (Table 1: Results by Inspector). In total, 84 defects were found during the inspection. On average, the inspectors spent 1 hour and 32 minutes on detection. Thus, the Efficiency in the Detection Phase is 7.
The second indicator is the Effort in Detection and Discrimination Phase. To compute this indicator, it is important to take into consideration the time spent in the discrimination activity. Two meetings were conducted, one for each activity guide. The first meeting lasted 1 hour, while the second meeting lasted 1 hour and 40 minutes.
In this last case, the inspection cost was low: the average effort per inspector, adding the detection effort (1 hour and 32 minutes) and the discrimination effort (1 hour and 20 minutes), was about 2 hours and 52 minutes. The third indicator is the Learnability degree. Regarding the effort spent in technique training, the time spent training on the WDP-RT was only 1 hour and 15 minutes per inspector.
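The per-inspector effort figure above is simply the sum of the two phase durations; a quick check of the arithmetic:

```python
from datetime import timedelta

# Average per-inspector effort = detection time + discrimination time,
# using the durations reported in the text above.
detection = timedelta(hours=1, minutes=32)
discrimination = timedelta(hours=1, minutes=20)

total = detection + discrimination
hours, remainder = divmod(int(total.total_seconds()), 3600)
minutes = remainder // 60
print(f"{hours} h {minutes} min")  # 2 h 52 min
```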
To capture the perception of difficulty in applying the technique, an evaluation survey about the technique was conducted, as well as some semi-structured interviews with the inspectors. The analysis of the qualitative data is shown in Section IV. This study, besides the… In particular, the qualitative data collected about the use of WDP-RT were relevant to its improvement.
This experiment will be summarized in this section and the qualitative results will be presented in Section IV.

Each activity was briefly described, informing the auxiliary data needed to accomplish it as well as the expected results after its execution. Evaluators verbalized the heuristics that they considered violated while completing the tasks. An observer [TT] wrote down the violations and helped record illustrative screenshots when necessary, using a recorded macro function in MS Word [Microsoft, Redmond, WA].
While the evaluation was grounded in three clinical documentation tasks, evaluators were free to explore other clinical (not administrative) program functions in order to increase the coverage of the heuristic evaluation. For further details, please refer to the paper published previously [6].
Usability evaluation. We conducted usability assessments [4, 9] on the charting interfaces of working demonstration versions of Dentrix Version . Each participant used only one software package and worked through nine clinical documentation tasks using a think-aloud protocol [4, 9, 13]. The tasks were explained in detail in a previously published paper [7]. The purposive sample of novice users for each system consisted of one full-time faculty member, two practicing dentists, and two senior dental students from the School of Dental Medicine (SDM) and the Pittsburgh area.
After the completion of all sessions, two researchers coded usability problems based on an established coding scheme [9]. For each task, both the task outcome (the rate of completed, incomplete, and incorrectly completed tasks) and the type(s) of usability problems that occurred were coded.
Comparing heuristic evaluation and usability evaluation results. Heuristic evaluation results were reviewed to identify violations that led to usability problems during usability testing. The results were then summarized and described using descriptive statistics. The heuristic violation statements were classified into two groups: one group consisting of specific violations that directly predicted actual usability problems, and a second consisting of general violations that suggested, but did not directly predict, observed usability problems.
While in some cases, such as EagleSoft and Dentrix, a significant majority of heuristic violations were specific enough to predict the actual usability problem, most heuristic violations found for PracticeWorks and SoftDent only suggested usability problems.