What does "user interface" mean?

User interface terminology and definition

The user interface is part of the input-processing-output system of a computer program and stands between input and processing. A well-thought-out user interface is particularly important for complex tasks: it not only receives instructions from the operator and, if necessary, "offers" or makes available new functions, but also provides feedback on which tasks the program is currently busy with. Depending on the type of user interface, this can be done purely via text, via graphic symbols, via audiovisual displays or even via haptic feedback, for example force feedback on steering wheels or the rumble effect of controllers.

The different types of user interfaces are defined in DIN EN ISO 9241-110. There, the user interface is described as the "part of an interactive system" that "provides information or control elements that are necessary for the user to carry out defined tasks with the aid of the system".

The history of the user interface

When the first computers were developed to solve mathematical, stochastic or statistical problems, the user interface was of secondary importance and communication with the "program" was only rudimentary. At most, there were a few switches for choosing between different program functions. The output was limited to a few numbers shown on simple displays.

A real user interface only appeared with the development of tube-based computers. Since programming was no longer limited to simple punch cards, users had to be given the option of manually arranging and assembling program parts intended for different tasks. The user interface consisted of a few lines of text, usually likewise shown on a screen, which acted as a menu.

In the 1980s, the user interface was revolutionized for the first time with the advent of the first home computers. Instead of entire program parts, the operating system provided the user with a set of commands; with the help of ASCII characters and so-called drop-down menus, which were operated solely with the keyboard, even a relatively user-friendly interface could be simulated.

It was only with the introduction of the mouse that it became technically possible to build a user interface consisting solely of graphic symbols and movable menu bars. This type of user interface greatly improved the usability of computers; thanks to its high level of intuitiveness, even beginners were able to complete their first simple tasks after just a few minutes. That is why it has proven itself to this day.

Different types of traditional user interfaces

The simplest type of user interface on computers is the command line mentioned above, sometimes also called a text-based user interface. Input consists of commands written in English; as feedback, the system displays menus, numbers and, more rarely, symbols built from ASCII characters. The greatest disadvantage of such a user interface is its extremely poor beginner-friendliness: users usually need several months until all commands have been internalized and input becomes reasonably fluent. In addition, due to technical limitations, only rudimentary menus without much context sensitivity are possible.
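To make this concrete, the following short Python sketch imitates such a text-based interface: a loop that reads typed commands, answers with textual feedback and falls back on a menu. The command set is purely illustrative and does not correspond to any particular historical system.

```python
# Minimal sketch of a text-based user interface: a command loop that
# reads typed commands, gives textual feedback and shows a menu.
COMMANDS = {"help", "list", "run", "quit"}  # illustrative command set

def show_menu():
    print("Available commands: " + ", ".join(sorted(COMMANDS)))

def main():
    show_menu()
    while True:
        command = input("> ").strip().lower()
        if command == "quit":
            print("Exiting.")
            break
        elif command == "help":
            show_menu()
        elif command == "list":
            print("Feedback: listing tasks ... (none pending)")
        elif command == "run":
            print("Feedback: the program is now busy with task 'run'.")
        else:
            print(f"Unknown command: {command!r} (type 'help')")

if __name__ == "__main__":
    main()
```

Even this toy example shows the two sides of the definition above: the interface receives instructions and reports back what the program is currently doing.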

The successor to the command line is the graphical user interface, which is still in use today. Programmers can work with images, symbols, text and graphically separated windows. Operation takes place via a combination of mouse and keyboard, although special controllers can also be used for game consoles and PC games. Graphical user interfaces are context-sensitive: by analyzing tasks that have already been carried out, they can decide which functions to make available to the user. Depending on the area in which it is used, such an interface is also referred to by the English term "graphical user interface" (GUI); in German, however, this English term is used mainly in connection with computer games, while for ordinary programs the term "Benutzeroberfläche" (user interface) is common.
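As a minimal illustration of these ideas, the following Python/Tkinter sketch builds a small graphical interface with a window, buttons and a textual status line; the context sensitivity described above is imitated by enabling a second button only after the first action has been carried out. All widget names and labels are invented for the example.

```python
import tkinter as tk

# Minimal sketch of a graphical user interface with Tkinter.
root = tk.Tk()
root.title("GUI sketch")

status = tk.Label(root, text="Ready.")
status.pack(padx=10, pady=5)

def open_document():
    status.config(text="Feedback: document opened.")
    # Context sensitivity: 'Print' only becomes available after opening.
    print_button.config(state=tk.NORMAL)

def print_document():
    status.config(text="Feedback: the program is busy printing ...")

open_button = tk.Button(root, text="Open", command=open_document)
open_button.pack(padx=10, pady=5)

print_button = tk.Button(root, text="Print", state=tk.DISABLED,
                         command=print_document)
print_button.pack(padx=10, pady=5)

root.mainloop()
```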

Voice-activated user interfaces

Voice-activated user interfaces analyze the sounds recorded by a microphone and convert them into commands that the computer can understand. Such interfaces are capable of learning to a certain extent; they can remember accents and pronunciations and thus adapt to the user. Voice-activated user interfaces are now also integrated into various operating systems; because of their high error rate, susceptibility to failure and generally poor usability, however, they are not of great importance in everyday life.
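As a rough sketch of the principle, the following Python example records one utterance and maps the recognized text to a command. It assumes the third-party SpeechRecognition package and a working microphone; the phrase-to-command mapping is purely illustrative.

```python
import speech_recognition as sr  # third-party package "SpeechRecognition"

# Sketch: record one utterance, convert it to text, map it to a command.
recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # reduce the error rate a little
    print("Say a command ...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # online recognizer
    print("Recognized:", text)
    if "open" in text.lower():
        print("Command: open document")   # illustrative command mapping
    else:
        print("No matching command.")
except sr.UnknownValueError:
    # The high error rate mentioned above shows up as failed recognitions.
    print("Speech was not understood.")
```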

The same applies to gesture-controlled user interfaces, which analyze movements of the hands or the whole body via a camera. Various manufacturers of game consoles have already implemented such user interfaces in their devices with varying degrees of success, in some cases even meeting with great approval from users, but this trend has not yet established itself in the computer sector.
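For illustration, the following Python sketch shows how camera-based hand tracking can be turned into a very simple "gesture". It assumes the third-party opencv-python and mediapipe packages; the "hand raised" rule is an invented stand-in for real gesture classification.

```python
import cv2              # third-party: opencv-python
import mediapipe as mp  # third-party: mediapipe

# Sketch: track one hand via webcam and derive a trivial "gesture".
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        wrist = results.multi_hand_landmarks[0].landmark[0]  # wrist landmark
        # Illustrative rule: hand in the upper half of the image = "raised".
        if wrist.y < 0.5:
            print("Gesture: hand raised")
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```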

The user as "input device"

The situation is completely different with so-called "tangible" and "natural" user interfaces. Here the user himself is the input device; no additional peripherals are required to give the computer clear commands, as the analytical components are already built into the system. Prominent examples of such user interfaces are the smartphones, tablets and touchscreen notebooks that have become widespread in recent years, although this type goes far beyond touchscreens.
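The direct manipulation typical of touchscreens can be sketched as follows in Python/Tkinter, with the mouse pointer standing in for a finger (desktop Tkinter has no dedicated touch API): the position of the "finger" itself is the input, and no separate command is needed.

```python
import tkinter as tk

# Sketch of direct manipulation: the pointer (standing in for a finger)
# drags an object around; its position itself is the input.
root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200, bg="white")
canvas.pack()
box = canvas.create_rectangle(20, 20, 70, 70, fill="steelblue")

def drag(event):
    # Keep the box centered under the pointer while it is held down.
    canvas.coords(box, event.x - 25, event.y - 25, event.x + 25, event.y + 25)

canvas.tag_bind(box, "<B1-Motion>", drag)
root.mainloop()
```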

Various manufacturers are currently experimenting with the possibilities of so-called shape memory alloys, which take on a certain crystalline structure under magnetic influence. So far the effect is still very limited, and only a few shape-reactive states are possible. With improved technology, however, it might be possible to build a user interface that no longer has to separate haptic input from optical output, as is already partly the case with so-called Braille displays (although these use magnetically driven pins).

A look into the future - the brain-computer interface

Many science fiction books have already discussed the "brain-computer interface", a user interface that is purely thought-driven, but in reality this is still a long way off. Nevertheless, some researchers have already succeeded in building such an interface simply by analyzing changes in the motor cortex and transmitting simple commands to the computer. Of course, such an interface currently only works in one direction: output directly to the nerve tracts of the brain is not possible, and a great deal of computing power is still required to analyze even very simple commands, for example the decision between two possible parameters. Such an interface could, however, enable physically disabled people in particular to communicate easily with the computer or with the outside world.

The problems in developing such a system for everyday use lie primarily in how the "thoughts" are analyzed. It is possible to "train" a certain number of commands, but in practice keeping them apart is extremely error-prone, because the values of the EEG, which is currently the computer's main source for analysis, are not always unambiguous. Refinements such as implanting electrodes directly into the brain would help here; however, despite all the resulting advantages, this is ethically not justifiable. According to estimates, it will take at least several decades before brain-computer interfaces achieve a status that would be beneficial to the general public.
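To make the "training" and the ambiguity problem concrete, the following Python sketch shows the classic textbook approach of distinguishing two commands by the band power of the EEG mu rhythm (8-12 Hz). The data is synthetic and the threshold is invented; a real brain-computer interface needs many channels, per-user training and far more robust statistics.

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch: decide between two "commands" from one EEG channel
# by comparing band power in the mu band (8-12 Hz), a classic motor
# imagery feature. Real BCIs need far more channels, training and care.
FS = 250  # sampling rate in Hz (assumed, typical for consumer EEG)

def mu_band_power(signal):
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def classify(signal, threshold=0.1):
    # Invented threshold: below = command A, above = command B. In
    # practice these values overlap, which is exactly the ambiguity
    # described above that makes commands hard to keep apart.
    return "command A" if mu_band_power(signal) < threshold else "command B"

# Synthetic one-second "recordings" instead of real EEG data.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
quiet = rng.normal(0, 0.5, FS)                     # little mu activity
active = quiet + 2.0 * np.sin(2 * np.pi * 10 * t)  # strong 10 Hz rhythm
print(classify(quiet), classify(active))           # -> command A, command B
```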
