"Just in Time Watch" (Frey, 2005)

Interview/Article by Jonah Brucker-Cohen

In the rapidly changing world of consumer electronics and portable devices, a trend is emerging among artists and designers: embedding social problems and concerns into these objects to shift their intended focus. Taking this credo to an extreme with his work on technologically enhanced objects and accessories is German artist/designer Martin Frey. From a watch that combines time and positioning data to guide you toward your destination ("Just in Time Watch") to hiking boots with GPS and motors integrated into the soles ("CabBoots"), Frey is interested in the connections and clashes between physical realities and digital data displays. Gizmodo recently caught up with Frey to discuss his wide variety of projects, his take on the future of interactivity, and how designers of technologically enhanced consumer products often expect too great an investment from their users.



Name: Martin Frey
Age: 28
Education: Diploma in Design, University of Applied Sciences Wuerzburg-Schweinfurt; graduate of the University of the Arts Berlin, passed with distinction
Affiliation: Independent designer
Exhibitions (selected): "REALITY CHECK @ c-Base / Partner Event Transmediale '06" (Berlin, 2006), "Designmai '06 / Designtransfer Galerie" (Berlin, 2006), "Ars Electronica Festival '06" (Linz, Austria, 2006), "EUROPRIX Top Talent Festival '06" (Vienna, Austria, 2006), "TEI '07 - First International Conference on Tangible and Embedded Interaction" (Baton Rouge, USA, 2007)
URL: http://www.freymartin.de

GIZMODO: The "Just in Time Watch" is a "thinking" watch that connects over Bluetooth to cell phones to retrieve GPS coordinates and also gathers traffic and timetable information from the Internet over GPRS. Thus when you are traveling from point A to point B, the watch displays the amount of time you have left in your journey and color coded messages indicate whether you should "speed up" or "slow down" to make good time. Why did you add this layer of "punctuality" to the wristwatch and how important is adding social consequences to an everyday product like a watch?


MF: I came up with the idea for the "Just in Time Watch" (JITWatch) during a semester project at the Berlin University of the Arts (UdK Berlin) on the theme "here/there - networked everyday objects". The task was: "What happens when physical objects that are crafted to elicit personal attachment, that one lives and ages with, become network aware?" Pondering which personal object would be worth thinking about in terms of its "networkedness", I stumbled upon a small, extremely common device that had nonetheless slipped my mind: my wristwatch. In fact, I hadn't worn a watch for several years. To find out what time it is, I usually fish my mobile out of my pocket. So how did that come about? Is it just that the mobile replaced the watch? Is the mobile really equivalent for this task at all? Isn't the watch far more user-friendly thanks to its hands-free interface? Sure it is. So the reason it was no longer worth wearing for me must lie in the core function of the watch itself: providing the time.


"Just in Time Watch" - video (Frey, 2005)

The current time is a rather subjective and therefore relative value. Looking at a conventional watch, a "mental dialogue" very often takes place: What time is it right now? When is my next appointment? Where does the appointment take place, and how long does it take to get there from here? So when should I leave? How much time is left until then? Should I leave now? Am I already too late? These on-the-fly considerations and calculations are not only cumbersome but often very imprecise as well: the time needed to cover the distance is estimated too optimistically, for example, or depends on irregular, external factors like timetables or the traffic situation.

So the important information I want when looking at a watch is: Am I on schedule? The answer to this question depends on data such as my appointments (including their locations), my current position, routes, public transport timetables, the traffic situation, and so on. All this information already exists. There are plenty of services that handle these tasks, like calendar applications, route planners, and navigation systems. But on the one hand, interoperability between them is usually missing; on the other hand, access via websites is inadequate in certain situations, for instance for people on the move. The networked object JITWatch does nothing more than bring these technically complex information systems together and break their output down into information that is genuinely easy to read at a glance, combined with a reliable, well-known interface you can simply carry along: the wristwatch. For me, JITWatch makes everyday life, or at least mastering a day's tasks, a bit more comfortable. In my opinion, the concept of JITWatch is a nice example of user-centered design, making technology accessible to a wide range of people.
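To make the "am I on schedule?" logic concrete, here is a minimal sketch of the kind of calculation the watch automates. The function name, thresholds, and messages are illustrative assumptions, not Frey's implementation; the real JITWatch would pull its travel-time estimate from timetable and traffic services over GPRS.

```python
from datetime import datetime, timedelta

def schedule_status(now: datetime, appointment: datetime,
                    travel_time: timedelta) -> str:
    """Return a color-coded status of the kind the watch displays."""
    # Slack is the time left before you must leave to arrive on time.
    slack = appointment - now - travel_time
    if slack < timedelta(0):
        return "red: speed up, you are running late"
    if slack < timedelta(minutes=5):
        return "yellow: leave now"
    return f"green: {int(slack.total_seconds() // 60)} min until departure"

print(schedule_status(
    now=datetime(2007, 2, 15, 14, 0),
    appointment=datetime(2007, 2, 15, 15, 0),
    travel_time=timedelta(minutes=40),
))  # -> "green: 20 min until departure"
```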




"Cab Boots" (Frey, 2006)

GIZMODO: "CabBoots" integrates GPS and motors into ordinary hiking boots to create a customized pedestrian navigation system that tilts and adjusts the shape of the shoes to direct their wearer towards certain destinations. Why did you choose to combine navigation into the physical object of a shoe and how successful have the prototypes been with users? What kind of feedback have you gotten? How do you plan to improve or change the project based on this feedback?


MF: Once again, the concept of "CabBoots" is the result of user-centered design thinking. Conventional navigational devices normally communicate with the user on the acoustic and visual levels. For pedestrian navigation, visual and audio output channels do not always work satisfactorily, for several reasons. "CabBoots" describes a concept for an alternative interface for pedestrian guidance applications. The information transmission can be perceived tactilely, is intuitively understandable, and is applied to the part of the body most directly involved in the act of walking: the foot. The communication metaphor is familiar to all; it's something everyone who's ever walked along a well-trodden path is aware of. Navigation systems in vehicles have gained wide currency over the last few years. The common output devices are based on visual information, presented as maps and arrows, and on audio instructions, and they work rather well in the environment of a car. The traffic infrastructure of streets, crossroads, and so on provides an underlying grid in which instructions like "turn right at the next crossing" are mostly target-oriented. Furthermore, the visual and acoustic setting of the car's cockpit supports comfortable and generally reliable perception of the information. Displays integrated into the dashboard are usually within the driver's field of view. The passenger cell, which dampens traffic noise, as well as the built-in sound system, provide a proper acoustic base.

Common navigational devices for pedestrians are usually based on the same hardware structure and communication metaphors. Very often the units are advertised as universal devices, usable for car navigation as well as for walkers in a special pedestrian mode. But perceiving the ambient sounds around them is important for pedestrians, for example for safety reasons, because dangerous traffic situations can be detected in advance. Earphones often cause an inconvenient isolation, and loudspeakers normally cannot compete with the surrounding noise level. Navigational devices also have to catch the user's attention at certain points: if the user doesn't hold the display within their field of view, there is no way to notice the information it is currently providing. These are the main reasons "CabBoots" follows the concept of a tangible interface.

Looking for the best part of the user's body to connect the interface to, the foot seemed a suitable position: it is the part of the body most directly involved in the act of walking. After that, the main challenge was to find a working tactile stimulation method to let the user know which direction to walk. The solution came from observing the following: paths on a natural surface usually have a concave cross-section. When you walk along such a well-trodden path, your feet come down on a flat surface only right in the middle of the trail. Veering over to the edge of the path, they land on a slight outward slope that angles the ankle slightly. While walking, the body registers this angulation and intuitively compensates by steering back toward the middle. This actually allows you to walk the path "blind".
This principle of "walking in a path" is translated into a virtual, augmented topography generated in the soles of the shoes: electromechanical elements in the soles of the CabBoots can produce an artificial angulation of the shoes and, accordingly, of the feet. The resulting oblique posture of the foot is difficult to distinguish from the real thing, so sensing the border of the virtual trail brings about the same counter-steering back onto the right track as observed on a well-trodden path. A virtual, and thereby individual, topography layer can thus be simulated and communicated through augmented reality.
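As a rough illustration of this counter-steering idea, here is a sketch of how a lateral offset from the virtual path's centerline might be mapped to a sole tilt. The path width, maximum angle, and gain are assumptions for this sketch, not values from Frey's prototype.

```python
PATH_HALF_WIDTH_CM = 20.0   # assumed flat middle of the virtual path
MAX_TILT_DEG = 8.0          # assumed maximum sole angulation

def sole_tilt(lateral_offset_cm: float) -> float:
    """Tilt angle for one foot; the sign steers the wearer back to center.

    Positive offsets mean the foot drifted right of the centerline, so the
    sole tilts to nudge the wearer left, and vice versa.
    """
    overshoot = abs(lateral_offset_cm) - PATH_HALF_WIDTH_CM
    if overshoot <= 0:
        return 0.0  # inside the flat middle: no angulation, walk normally
    # Ramp the tilt up with the overshoot, capped at the mechanical maximum.
    tilt = min(MAX_TILT_DEG, 0.5 * overshoot)
    return -tilt if lateral_offset_cm > 0 else tilt

print(sole_tilt(25.0))  # 5 cm past the right edge -> -2.5 degrees (steer left)
```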



"CabBoots" - Video (Frey, 2006)

The theory that one can walk a trail blind was easy to prove, as well-trodden paths to walk along with closed eyes were easy to find. Adapting this to a virtual trail, however, required developing and building a prototype with the behavior necessary to test the theory. The prototype consists of a pair of shoes equipped with sensors and mechanics, wirelessly connected to a computer running control software. Servo motors connected to wooden flaps in the shoes can set the angle of the sole when necessary. Several sensors, including light sensors, accelerometers, and distance sensors, deliver information about the actual state of the shoe, and thereby the foot. The software provides a visual control panel for monitoring the shoes' spatial state and for setting the path's direction, thereby triggering the shoes' actuators. My most extensive "user testing" was at the exhibition at the Ars Electronica Festival, where visitors could try the shoes. After a short introduction to the principle of the concept, several people of different ages were asked to walk with the shoes. Almost everybody got the idea after just a few footsteps. The test subjects directly translated the route set by an operator at the control computer by walking in the given direction.
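Based on that description, one step of the control software's loop might look like the following sketch. Here `read_sensor_packet` and `send_servo_angle` are hypothetical stand-ins for the wireless link, and the gain and limits are invented for illustration.

```python
MAX_TILT_DEG = 8.0  # assumed mechanical limit of the wooden flaps

def control_step(read_sensor_packet, send_servo_angle, path_heading_deg):
    """One cycle: read shoe state, compare with the set path, command servos."""
    state = read_sensor_packet()  # e.g. {"heading_deg": 95.0, ...}
    # Signed heading error, wrapped into [-180, 180) degrees.
    error = (state["heading_deg"] - path_heading_deg + 180.0) % 360.0 - 180.0
    # Tilt both soles against the drift; the 0.1 deg/deg gain is an assumption.
    tilt = max(-MAX_TILT_DEG, min(MAX_TILT_DEG, 0.1 * error))
    send_servo_angle(left=-tilt, right=-tilt)
```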

When designing pervasive interfaces for human-machine interaction, the user and their needs in specific situations have to be at the center of attention. In this case, small handheld displays are a smart solution for input tasks, like setting the destination, but they are impractical as output devices for a walking user. Finding communication metaphors based on the user's experience results in interfaces that are easy to use and feasible. For further development and evaluation, the mechanics in the shoes have to shrink, yielding a shoe that is virtually indistinguishable from an ordinary one in look, weight, and fit (several users walked rather cautiously because they were afraid of breaking the mechanics in the shoes). This could be done with pneumatic actuators. The future combination with a sufficiently accurate tracking system like Galileo and suitable map data is a solvable technical challenge, too.




"SnOil" (Frey, 2006)

GIZMODO: "SnOil" (short for Snake + Oil) is a tactile display that uses Ferrofluid (magnetically reactive liquid) and an array of electromagnets to control 144 individual "bumps" and integrated motion sensors that allows for the game of "Snake" to be played when the user tilts the tactile display back and forth. Why did you choose this type of interaction? What interests you in this type of material as a display mechanism?


MF: In contrast to my latest projects, "CabBoots" and "JITWatch", "SnOil" isn't the result of a conceptual design process. In my opinion, building "SnOil" didn't and doesn't make any sense, at least in terms of function or message. It is not even usable as a tactile output device (as your question suggests), because the ferrofluid is still liquid even when forming these little "bumps". Furthermore, ferrofluid is oil-based, and its black color stays on your skin for days after touching it. To be honest, the only motivation to build "SnOil" was this strange fluid itself, with its extremely strange behavior of moving towards magnets. I was fascinated by its magical appearance and the possibility of shaping it by triggering nearby magnetic fields. So I built this physical display using electromagnets, which let a magnetic field appear and disappear by switching an electric current on and off. "SnOil" therefore consists of two main parts: a basin of approx. 25 by 25 cm (ca. 10 by 10 inches) filled with ferrofluid to a height of a few millimeters, and, directly underneath, a grid of 144 (12 by 12) electromagnets arranged close to each other. The magnets are arranged in four structurally identical blocks of 36 pieces each. The electronics for triggering the individual magnets are located on several layers of printed circuit boards directly underneath the layer of magnets. This makes the system highly scalable in size and footprint.
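One way to picture the addressing of those 144 magnets is the sketch below, which maps a grid position to a (block, channel) pair. It assumes the four 36-magnet blocks correspond to the four 6 x 6 quadrants of the 12 x 12 grid; the real wiring may differ.

```python
GRID_SIDE = 12   # magnets per side of the full grid
BLOCK_SIDE = 6   # each block assumed to cover one 6 x 6 quadrant

def magnet_address(row: int, col: int) -> tuple[int, int]:
    """Map a grid position to a (block, channel) pair for the driver boards."""
    block = (row // BLOCK_SIDE) * 2 + (col // BLOCK_SIDE)            # 0..3
    channel = (row % BLOCK_SIDE) * BLOCK_SIDE + (col % BLOCK_SIDE)   # 0..35
    return block, channel

assert magnet_address(0, 0) == (0, 0)     # top-left corner, first block
assert magnet_address(11, 11) == (3, 35)  # bottom-right corner, last block
```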


"SnOil" - video (Frey, 2006)

This array enables the creation of 144 individually selectable "fluid bumps". By pulsing the magnetic field, intermediate states between maximum and zero height can be realized. The maximum elevation of a fluid accumulation can measure several millimeters. As described, the base material of ferrofluid is oil, which results in a deep black, glossy surface. In addition to the spatial change, the bumps stand out from the remaining surface through the changing reflections of the surrounding light sources. The appearance and disappearance of the fluid bumps, as well as the look of the surface itself, make a slightly mystical and extremely aesthetic impression. Images and animations can be produced in a pixel-graphic style, as can plain pixel typography.

There are several reasons why the classic game Snake makes an interesting application for the ferrofluid display: the food pieces are shaped out of the surrounding fluid and are instantly converted into the snake's body after consumption, so the growth in length of the snake's tail comes with a real swelling in the volume of the collected fluid. On a screen, the snake is steered with a joystick or keyboard, whereas SnOil's input interface relies on a more direct action: the player holds the whole ferrofluid basin in his hands and controls the snake's direction by slightly tilting it, which the controller measures with tilt sensors. I really like this input method, because the direction the snake moves in follows directly from the direction the basin is tilted. The snake moves exactly in the direction you expect it to, thanks to gravity. That makes the interaction very direct and the snake feel somewhat alive.
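A sketch of that tilt-to-direction mapping: the snake simply "flows downhill", so whichever axis is tilted most wins. The axis conventions and dead-zone threshold here are assumptions for illustration, not SnOil's actual firmware.

```python
DEAD_ZONE_DEG = 2.0  # small tilts ignored so the snake doesn't jitter

def snake_direction(tilt_x: float, tilt_y: float) -> str:
    """Pick the snake's next move from the basin tilt (degrees), or 'hold'."""
    if max(abs(tilt_x), abs(tilt_y)) < DEAD_ZONE_DEG:
        return "hold"
    # The dominant tilt axis decides; gravity pulls the snake that way.
    if abs(tilt_x) >= abs(tilt_y):
        return "right" if tilt_x > 0 else "left"
    return "up" if tilt_y > 0 else "down"

print(snake_direction(5.0, -1.0))  # -> "right": the basin leans right
```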




"First Contact" (Frey, 2006)

GIZMODO: "First Contact" was inspired by your first experience with a Sony AIBO where you imparted an emotional connection to the toy when interacting with it. In contrast, "First Contact" takes the shape of smooth, white, symmetric globes that react to users by becoming "frightened" at their presence and rolling away to a position as far away as possible. Over time, the spheres "gain confidence" and move slowly towards the user depending on their hand gestures. Why was adding a layer of "human emotion" and intuition to something as emotionally vapid as a white sphere important to you? What did you notice about how people connected to the object and did this surprise you or confirm your initial assumptions?


MF: A lot of effort has gone into the development of animal-like and humanoid robots in the past. Robots like AIBO or, nowadays, ASIMO try to simulate their living archetypes in different respects: appearance, movement possibilities and patterns, and sometimes behavior. As an adult, you definitely know that this thing is a piece of electronics and mechanics. But most people interacting with this "thing" will agree that there are moments loaded with emotion. That is what aroused my curiosity about the AIBO, and those situations are what interest me in "First Contact". The central point that produces that "spark of life" is the interaction between the user and the AIBO. The "emotion" would not be as strong if the "dog" just moved and acted by itself. Of course, the AIBO simulates a lot of animal-like gestures and movement patterns. But emotion can also be created purely through interaction, through behavior expressed in movement. I even think the easiest and most effective way to build "lively" robots or objects is to concentrate on their behavior in interaction with the user. To test this assumption, I arrived at the project's main aim: interacting with a "thing" purely through changing positions and movements, on both sides, the user's and the object's. To factor out visual appearance and any possible connotations, the object should have the simplest, most reduced shape possible. A white ball comes close to that requirement: a sphere is completely symmetric, with no front, back, top, or bottom. The reaction of people interacting and playing with the globes was much the same as in the AIBO setting. Utterances like "how cute" and the craving to touch and pet the spheres were ever-present.
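One could model the behavior described above with a single "confidence" state per sphere, as in this toy sketch (my reading of the piece, not Frey's code): confidence rises during calm interaction, collapses on abrupt gestures, and sets the distance the sphere keeps from the hand. All thresholds are invented.

```python
class Sphere:
    """Toy behavior model: trust builds slowly and is lost quickly."""

    def __init__(self) -> None:
        self.confidence = 0.0  # 0 = frightened, 1 = fully trusting

    def update(self, hand_speed_m_s: float, dt_s: float) -> float:
        """Return the distance (meters) the sphere tries to keep."""
        if hand_speed_m_s > 0.5:      # sudden gesture: scare it away
            self.confidence = max(0.0, self.confidence - 0.3)
        else:                          # calm presence: slowly build trust
            self.confidence = min(1.0, self.confidence + 0.05 * dt_s)
        return 2.0 * (1.0 - self.confidence)  # 2 m when scared, 0 when tame

sphere = Sphere()
print(sphere.update(hand_speed_m_s=0.1, dt_s=1.0))  # calm hand: closes in
```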



"Herman - the Unprepossessed" (Frey, 2005)

GIZMODO: "Herman - The Unprepossessed" consists of a chair with two speakers attached to the frame that are triggered to play sounds when someone is seated. The right and left speakers tell stories on subjects like the "death penalty" or "genetic engineering" in two different voices, but the stories are told from opposing viewpoints. Why did you choose a chair as an interface for this project? What were you attempting to show by playing opposing viewpoints and was it successful?


MF: "Herman - The Unprepossessed" is the result of a pressure project (one week working time) on the topic: "Design an interactive extension for the chair "Herman" (Ikea in white)" at Udk Berlin. So the question why I chose a chair as an interface should better be: "Why did I choose this interface for my chair?" As you mentioned, the two speakers on each side of the sitting person tell two different stories from opposing viewpoints. By sitting straight on the chair, you can hear both voices from left and right - soft but hearable. You can follow both contents (the speeches are slow and the sentences are short and simple). So it is quite possible that the person tries to listen to just one of the stories by moving his head/ear to one side near one speaker. By doing so the voice of the selected side turns down. So it is impossible to understand it. The voice from the other side gets louder to attract attention - but still not that loud that you can understand the content. The behavior of the volumes is the same on both sides. So it is only possible to follow the stories by hearing both voices, both opinions. Combining the active selection of one of the voices with the movement of the body into the direction of the sound source and the resulting unnatural effect (the sound softens) is an interesting experience. Furthermore the shifting of weight in this situation reminds me of "Lady Justices Scales".

GIZMODO: What projects are you currently working on? How are they similar to or different from your past projects?

MF: After finishing my studies last year, I continued developing some of the projects; for example, I built a second prototype of "CabBoots". At the moment I have the chance to travel around presenting projects like "CabBoots" at different events. This weekend, for example, I'll be in Baton Rouge, Louisiana, attending an international conference on Tangible and Embedded Interaction (TEI for short). My future projects will most likely be in the area of human-computer interaction again, as this is my main field of interest.
