When we talk about interacting with complex real-time systems, such as the navigation systems on board vessels, aircraft, and other vehicles, we need to consider two modes of perception. Let's call them the fast mode and the slow mode.
The slow mode is the more familiar way of perceiving: when we have enough time and our attention is undisturbed, we read informational displays carefully, attending to details and to the relationships between items.
The fast mode is the same activity performed under a serious shortage of cognitive resources: we quickly scan the screen for changes or alerts and, if nothing stands out, move on to other things. It is typical of stressful situations such as landing or collision avoidance. Ordinary manual car driving is a familiar example: you don't have time to study the dashboard or the infotainment system, because your eyes have to stay on the road and the traffic.
In fast mode it is very easy to miss important details about the current state of the system; you simply don't have the cognitive resources to catch them. Eye-tracking studies show, for example, that during a manual landing the pilot's attention rests on a specific aircraft indicator (such as the altimeter) for only about half a second.
This poses a specific design task for the HMI: all the subtleties of layout, font size, color, and other typical graphic-design refinements are simply ignored in fast mode. The distinct (or changing) parts and values of the HMI have to be made as blunt as possible, with huge contrast in size, color, and the positioning of every element. The state of switches, changes in otherwise static values, and new messages should attract far more attention and be far more strongly distinguished than in a user interface designed for the slow mode.
If there is any chance that something may be skipped, it will be skipped. If there is any chance that a value may be ignored, it will be ignored. And so on.
The simplest way to check whether an HMI or UI is ready for this kind of perception is to blur a screenshot of it in a graphical editor and look closely at the items that become unrecognizable: those are the ones to improve, or to remove entirely.
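The blur check is also easy to automate, so it can run on every new screenshot of the UI. A minimal sketch in Python, assuming the Pillow imaging library is available; the function name and the default blur radius are illustrative choices, not a standard:

```python
from PIL import Image, ImageFilter


def simulate_fast_mode(screenshot: Image.Image, radius: int = 6) -> Image.Image:
    """Roughly simulate at-a-glance perception by heavily blurring a UI screenshot.

    Elements that remain recognizable after the blur are the ones a stressed
    operator can still pick up in fast mode; everything that dissolves into
    the background is a candidate for redesign or removal.
    The default radius=6 is an arbitrary starting point; tune it to the
    screen size and viewing distance of the real device.
    """
    return screenshot.convert("RGB").filter(ImageFilter.GaussianBlur(radius))
```

Opening each candidate screenshot, running it through this filter, and saving the result side by side with the original gives a quick gallery for design review.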
A more rigorous approach is user testing: show screenshots of the UI in different states and with different values for about a second each, and ask whether any elements have changed compared with the previously shown screenshot.
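Preparing such a test requires an answer key: for each pair of consecutive screenshots, the experimenter needs to know exactly what changed. A hypothetical helper for building that key, sketched in Python with Pillow (the function name is an assumption, not an established API):

```python
from PIL import Image, ImageChops


def changed_region(prev: Image.Image, curr: Image.Image):
    """Return the bounding box (left, top, right, bottom) of the area that
    differs between two same-sized screenshots, or None if they are identical.

    The box serves as ground truth when scoring testers' answers to the
    question "did anything change since the previous screen?".
    """
    diff = ImageChops.difference(prev.convert("RGB"), curr.convert("RGB"))
    return diff.getbbox()  # None when the images match pixel for pixel
```

A small or peripheral bounding box that testers consistently fail to notice is exactly the kind of change the HMI should render more bluntly.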
Of course, the best method is to combine eye tracking with these awareness tests.