Evaluation Techniques for Interactive Systems

Niroshan Pushparaj
Apr 17, 2022

--

Do you think a day goes by without you using a software product? From the early morning alarm to the end of the day, we interact with many of them. Since this interaction can take up a considerable part of the day, it is essential that it is pleasant and productive.

No one wants to spend their time on uncomfortable, inefficient and annoying software products. That’s why we need to make sure that the interaction between humans and computers is effective, efficient and enjoyable. This is where evaluation plays a major role.

Evaluation is the process of assessing designs and systems to make sure they actually behave as their designers expect and meet the requirements of their users.

Goals of Evaluation
The goals of evaluation can be divided into three main categories:

  • Assess system functionality and usability
    The functionality of the system is essential: it has to meet the users’ needs. Evaluation at this level assesses how well the system supports the intended tasks, for example by measuring the user’s performance with it.
  • Assess effect of interface on user
    It is essential to evaluate the user’s experience of the interaction and how it affects them: how easy the system is to learn, how usable it is, and how satisfied users are with it. This includes the user’s enjoyment and emotional response, which matters especially for leisure and recreational systems.
  • Identify problems with the system
    This goal is to identify specific design issues: design elements that, when used in their intended context, cause unintended consequences or confusion among users. The focus is on identifying the potential problems and then fixing them.

There are basically two ways of doing these evaluations.

  1. Without the participation of real users (evaluation through expert analysis)
  2. With the participation of real users

1) Evaluation Through Expert Analysis
This evaluation process takes place without the involvement of real users. That may seem like a drawback, but the techniques below are efficient, low-cost ways of catching problems early and help build quality interaction between human and computer.

Here are some expert analysis techniques:

  • Cognitive walkthrough
    This is one of the most efficient and cost-effective ways to increase the usability of a product. Most users prefer to learn a product by doing things with it rather than by reading a manual or following a set of instructions. This evaluation therefore checks that a newcomer can pick up the design easily and quickly gain expertise in using it.

An expert ‘walks through’ every possible path of the design to understand what problems a user may face. The expert should think from the perspective of the potential user in order to get the most out of the evaluation, so the evaluator is usually an expert in cognitive psychology.
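To make the process concrete, here is a minimal Python sketch of how walkthrough notes might be recorded, built around the four standard walkthrough questions; the task, steps and notes are hypothetical.

```python
from dataclasses import dataclass

# The four standard cognitive walkthrough questions asked at every step.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the effect they want?",
    "If the action is performed, will the user see progress being made?",
]

@dataclass
class StepReview:
    action: str      # the action the user must take at this step
    answers: list    # one True/False per question above
    notes: str = ""

def report(reviews):
    """Print every 'failure story': these are the potential usability problems."""
    for review in reviews:
        for question, ok in zip(QUESTIONS, review.answers):
            if not ok:
                print(f"PROBLEM at '{review.action}': {question} {review.notes}")

# Hypothetical walkthrough of a 'save document as PDF' task.
report([
    StepReview("open the File menu", [True, True, True, True]),
    StepReview("choose 'Export' to create a PDF",
               [True, True, False, True],
               notes="Users who want to save expect 'Save As', not 'Export'."),
])
```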

  • Heuristic evaluation
    Heuristic evaluation is a usability inspection method for computer software that identifies usability problems in a user interface (UI) design. The technique was developed by Jakob Nielsen and Rolf Molich and is closely associated with the Nielsen Norman Group.
    It is usually carried out by a small set of usability experts who review the product against a set of rules of thumb, most often Nielsen’s ten usability heuristics. These heuristics are sometimes adapted by usability engineers to cover additional concerns. Testing with real users is still the best way to evaluate the usability of a product and gives better results, but it also consumes more resources. (A sketch of how evaluators’ severity ratings can be combined follows this list.)
  • Model-based evaluation
    This uses a model of how a human would use the proposed system to obtain predicted usability measures by calculation or simulation. The method can be used to filter design options early, and design rationale can also provide useful evaluation information during that filtering.
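As a concrete illustration of heuristic evaluation, here is a minimal Python sketch of combining severity ratings from several evaluators; the problems, heuristic names and ratings are hypothetical, and the 0–4 severity scale follows Nielsen’s common convention.

```python
from statistics import mean

# Hypothetical findings from three independent evaluators. Each problem is
# rated on Nielsen's 0-4 severity scale (0 = not a problem, 4 = catastrophe).
findings = {
    "No feedback after clicking 'Submit'": {
        "heuristic": "Visibility of system status",
        "ratings": [3, 4, 3],
    },
    "Error message shows a raw error code": {
        "heuristic": "Help users recognize and recover from errors",
        "ratings": [2, 3, 2],
    },
}

# Average the independent ratings and list the worst problems first.
for problem, info in sorted(findings.items(),
                            key=lambda kv: -mean(kv[1]["ratings"])):
    print(f"{mean(info['ratings']):.1f}  [{info['heuristic']}]  {problem}")
```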

2) Evaluation Through User Participation
This evaluation process takes place with the involvement of real users.

A. Styles of evaluation
These techniques divide into those performed under laboratory conditions and those conducted in the work environment, or ‘in the field’.

  1. Laboratory studies — Users are taken out of their normal work environment to take part in controlled tests, often in a specialist usability laboratory.
  2. Field studies — This type of evaluation takes the designer or evaluator out into the user’s work environment in order to observe the system in action.

B. Empirical methods: experimental evaluation
One of the most powerful methods of evaluating a design, or an aspect of a design, is a controlled experiment. It provides empirical evidence to support a specific claim or hypothesis, and it can be used to study a wide range of issues at different levels of detail.

We should consider some of the factors in this evaluation

Participants - The choice of participants is vital to the success of any experiment. In evaluation experiments, participants should be chosen to match the expected user population as closely as possible.

Variables - Experiments manipulate and measure variables under controlled conditions in order to test the hypothesis. There are two main types of variable: independent variables, which are ‘manipulated’ or changed, and dependent variables, which are measured.

Hypothesis - A hypothesis is a prediction of the outcome of an experiment. It is framed in terms of the independent and dependent variables, stating that a variation in the independent variable will cause a difference in the dependent variable. The aim of the experiment is to show that this prediction is correct.

Experimental design - In order to produce reliable and generalizable results, an experiment must be carefully designed.

Statistical measures - The first two rules of statistical analysis are to look at the data and to save the data. It is easy to carry out statistical tests blindly when a glance at a graph, histogram or table of results would be more instructive.
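To tie these factors together, here is a minimal Python sketch of an experimental comparison, assuming hypothetical completion-time data for two interface designs and an independent-samples t-test from SciPy; the data and the 0.05 significance level are illustrative.

```python
from scipy import stats

# Hypothetical task completion times (seconds) for two interface designs.
# Independent variable: interface version; dependent variable: completion time.
times_a = [34.1, 29.8, 41.2, 36.5, 30.9, 38.4, 33.7, 35.2]
times_b = [27.3, 25.1, 31.8, 28.9, 24.6, 30.2, 26.7, 29.4]

# First rule of statistics: look at the data before testing it.
print(f"mean A = {sum(times_a) / len(times_a):.1f}s, "
      f"mean B = {sum(times_b) / len(times_b):.1f}s")

# Independent-samples t-test: is the observed difference likely to be real?
t, p = stats.ttest_ind(times_a, times_b)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: completion times differ between designs.")
```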

Observational Methods

  1. Think Aloud
    In this method the user is asked to perform a task and to describe what he is doing, why he is doing it, and what he thinks is happening. It is relatively simple and can provide useful insight into problems with an interface, showing how the system is actually used. Its main drawback is that the information is subjective: it depends on what the user chooses to report, and describing the task can itself change how the user performs it.
  2. Cooperative evaluation
    A variation on think aloud is known as cooperative evaluation, in which the user is encouraged to see himself as a collaborator in the evaluation and not simply as an experimental participant. As well as asking the user to think aloud at the beginning of the session, the evaluator can ask the user questions (typically of the ‘why?’ or ‘what if?’ type) if his behavior is unclear, and the user can ask the evaluator for clarification if a problem arises.
  3. Protocol analysis
    Protocol analysis is one of the most successful approaches for evaluating an information system’s usability and determining which components of the system should be modified to increase usability.

Methods for recording user actions

  • Paper and pencil
  • Audio recording
  • Video recording
  • Computer logging (a minimal sketch follows this list)
  • User notebooks
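Computer logging is the easiest of these to automate. Below is a minimal Python sketch that appends one timestamped record per user action to a log file; the event names, fields and file name are illustrative assumptions.

```python
import json
import time

# One timestamped record per user action, appended to a plain-text log.
LOG_FILE = "session.log"

def log_event(user, action, target):
    record = {"t": time.time(), "user": user, "action": action, "target": target}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical events captured during a session.
log_event("p01", "click", "File > Export")
log_event("p01", "error", "dialog: unsupported format")
log_event("p01", "click", "File > Save As")
```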

4. Automated analysis
Analyzing protocols, whether video, audio or system logs, is time consuming and tedious by hand. It is made harder if there is more than one stream of data to synchronize. One solution to this problem is to provide automatic analysis tools to support the task. These offer a means of editing and annotating video, audio and system logs and synchronizing these for detailed analysis.
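For system logs, even a simple script goes a long way. Here is a hypothetical Python sketch that reads the session.log produced by the logging sketch above, counts action types and measures session length; spikes in errors or long gaps between events flag spots worth reviewing in the synchronized video or audio record.

```python
import json
from collections import Counter

# Read the session.log written by the logging sketch above.
with open("session.log") as f:
    events = [json.loads(line) for line in f]

actions = Counter(event["action"] for event in events)
duration = events[-1]["t"] - events[0]["t"]

print(f"session length: {duration:.1f}s")
for action, count in actions.most_common():
    print(f"{action}: {count}")
```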

Query techniques

Another set of evaluation techniques relies on asking the user about the interface directly. Query techniques can be useful in eliciting detail of the user’s view of a system.

Interviews - Interviewing users about their experience with an interactive system provides a direct and structured way of gathering information. Interviews have the advantages that the level of questioning can be varied to suit the context and that the evaluator can probe the user more deeply on interesting issues as they arise. An interview will usually follow a top-down approach, starting with a general question about a task and progressing to more leading questions to elaborate aspects of the user’s response.

Questionnaires - An alternative method of querying the user is to administer a questionnaire: a set of fixed questions given to every user. Questions may be general, open-ended, scalar, multiple-choice or ranked. Because the questions are fixed, the data collected can be analyzed rigorously, as the sketch below illustrates.
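For example, scalar (Likert-style) responses can be summarized numerically. This Python sketch assumes hypothetical 1–5 ratings for two fixed questions and reports a mean and standard deviation per question.

```python
from statistics import mean, stdev

# Hypothetical responses to two scalar questions on a 1-5 scale
# (1 = strongly disagree, 5 = strongly agree), one list per question.
responses = {
    "The system was easy to learn": [4, 5, 3, 4, 4, 5, 2, 4],
    "I always knew what the system was doing": [2, 3, 2, 4, 3, 2, 3, 2],
}

# Fixed scalar questions make this kind of numeric summary possible.
for question, scores in responses.items():
    print(f"{question}: mean {mean(scores):.2f}, sd {stdev(scores):.2f}")
```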

Evaluation through monitoring physiological responses

Eye Tracking
Eye movements are thought to indicate the amount of cognitive processing a display requires and, as a result, how easy or difficult it is to process. Tracking not just where people look but also their eye movement patterns may therefore reveal which parts of a screen they find easy or difficult to understand. Here are some common measurements; a small fixation-detection sketch follows the list.

  • Fixations: the eye maintains a stable position over a point of interest.
  • Number of fixations: the more fixations, the less efficient the search strategy.
  • Fixation duration: longer fixations can indicate difficulty with the display.
  • Saccades: rapid eye movements from one point of interest to another.
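Fixations are usually extracted from raw gaze samples automatically. Below is a simplified Python sketch in the spirit of a dispersion-threshold (I-DT style) fixation detector; the gaze points, dispersion threshold, minimum window and 60 Hz sample rate are all illustrative assumptions.

```python
# A simplified dispersion-based fixation detector.
def find_fixations(samples, max_dispersion=25, min_samples=5):
    """samples: list of (x, y) gaze points captured at a fixed sample rate."""
    fixations, start = [], 0
    while start < len(samples) - min_samples:
        end = start + min_samples
        xs, ys = zip(*samples[start:end])
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while the eye stays within the threshold.
            while end < len(samples):
                xs, ys = zip(*samples[start:end + 1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
            fixations.append((start, end))  # fixation: stable eye position
            start = end
        else:
            start += 1                      # saccade: keep scanning
    return fixations

# Hypothetical 60 Hz gaze samples: two clusters separated by a saccade.
gaze = [(100, 100), (102, 99), (101, 101), (103, 100), (102, 102),
        (300, 250), (302, 251), (301, 249), (303, 250), (302, 252)]
for s, e in find_fixations(gaze, min_samples=4):
    print(f"fixation of {(e - s) / 60:.3f}s at samples {s}-{e}")
```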

Physiological Measurements

Users’ emotions and physiological changes while using the interface are observed in this method, and an evaluation is made based on those data. Physiological measurement entails fitting the person with a variety of probes and sensors. These assess a variety of factors, including the following (a small sketch using heart activity follows the list):

  • Heart activity, including blood pressure, volume and pulse.
  • The activity of sweat glands: Galvanic Skin Response (GSR)
  • Electrical activity in muscle: electromyogram (EMG)
  • Electrical activity in the brain: electroencephalogram (EEG)
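As a small example of working with such data, this Python sketch derives heart rate and a simple variability measure from hypothetical inter-beat (RR) intervals; shortening intervals are one crude sign of rising arousal.

```python
from statistics import mean, stdev

# Hypothetical inter-beat (RR) intervals in seconds from a pulse sensor.
# Shortening intervals mean a rising heart rate.
rr_intervals = [0.82, 0.80, 0.78, 0.75, 0.74, 0.76, 0.73, 0.72]

heart_rates = [60.0 / rr for rr in rr_intervals]  # beats per minute
print(f"mean heart rate: {mean(heart_rates):.1f} bpm")
print(f"RR variability (sd): {stdev(rr_intervals) * 1000:.1f} ms")
```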

I hope this article has given you some insight into evaluation techniques for interactive systems. Thank you!!

Reference: Alan Dix, Janet Finlay, Gregory D. Abowd and Russell Beale, Human–Computer Interaction, Third Edition.
