Natural interactions with spatially aware devices

April 12th, 2012

This weekend I wanted to transfer some articles I had in my web browser to my Kindle, to read them later. Just thinking about all the steps involved made me rule out the idea.

Despite how well connected our devices are virtually (iCloud, Dropbox, …), they still lack a tangible connection. A (representation of a) physical connection between those devices would facilitate a more intuitive interaction, built on traditional mental models from the physical world. That’s one of the main reasons kids interact with the iPad so naturally: its interfaces are based on natural, tangible interactions.

I tried to imagine what a more intuitive interaction for transferring media between devices could look like, and sketched it in a short video*:

The interaction feels natural and provides a seamless transition while consuming media – in this case, listening to music while moving from sitting in front of the computer to going mobile. It’s a more intuitive way to synchronize media across devices, and the ‘cloud’ would take care of the data transfer in the background (high-res files, music not yet on your device, etc.).

 

Portable devices can locate other devices quite precisely at a large scale (using GPS, wifi triangulation, etc.), but in small spaces they only ‘sense’ the existence of other devices (bluetooth, local network) – neither the absolute nor the relative position of other devices is measured precisely enough to enable a physical connection beyond the cable.

Some platforms build a physical connection into the device itself to create more intuitive ways to interact with it. Sifteo cubes use IrDA transceivers to detect other cubes nearby (<1cm). Microsoft Surface uses near-infrared light and cameras to detect objects sitting on the table. The latest versions of Surface use PixelSense technology, detecting objects with micro-sensors embedded in the screen’s pixel array.

Sifteo

Microsoft Surface

It seems that desktops and tablets are converging into a personal touch-screen device. Incorporating the technologies mentioned above into these devices would create a new canvas for exploring more natural ways of engaging with media in the tangible realm, across multiple devices.

 

*The video sketch was done in a few hours using the following tools:

- Keynote for the animations.

- Screenium for screencasting.

- iMovie for video editing.

- Freesound.org for the sound fx.

Mixing analog and digital techniques in photography post-production

March 18th, 2012

Analog photography forced us to better understand light and the physics behind photography, to better select the right moment to shoot, to better accept flaws as part of the captured moment. Not being able to see the picture right after shooting, the ritual of bringing the roll of film in for development, having to wait some days and then, while browsing the photographs, making the connection to the moment they were shot… analog photography might not be convenient today, but it definitely has more magic than digital photography.

Likewise, digital post-processing is much easier and more convenient than the analog process, but it lacks the same magic. The process of dodging and burning, for example – isn’t it beautiful?

After watching this documentary (War Photographer, 2001, which I highly recommend) I wanted to try this process. Today it is not easy to get access to one of those labs, and each print, especially at this size, is quite expensive. I started thinking about how I could experience the process of manipulating exposure as in the analog darkroom, using more accessible means.

The analog dodging and burning process requires a light source, the negative of the picture, and photographic paper to capture the projected light. A projector can substitute for the light source plus the film to project a picture, and a digital camera on a long exposure can substitute for the photographic paper, capturing the projection. The exposure manipulation is the same in both methods, and it definitely has more magic than the digital one.

I set up a room with the setup above (the same one I used for the Generative Photography experiments). To stay aligned with the mix of analog and digital processes I was about to try (camera and projector are digital, manipulation is analog), I decided to play with some pictures I took with my film camera. The pictures were developed using an A/D process, so I had them on my computer. The workflow was the following:

First of all I made some tests to find the right settings for the camera and the projector, so the exposure was as neutral as possible – the long-exposure picture of the projection (without manipulation) had a similar exposure to the original.

I started playing with a picture of the sky, quite homogeneous, in order to see how sensitive the result is to the manipulation. I used a circular tool, and a square one for big areas.

Then I tried some gradients; the first one is not quite smooth, the other two are a bit better:

I was using 20-second exposures, and especially with the circular tool it was difficult to remember exactly which areas had already been manipulated.

Combining the circular and square tools:

Then I used another picture to try a smooth gradient or to darken an area. These are some of the results:

I made some tests with this other picture, slightly overexposed in some areas:

Similar to what happens with the analog burning & dodging method, I didn’t have completely direct feedback about the manipulation, just what I could see on the small LCD screen of the camera. I thought it would be interesting to see the result on screen and be able to work on it right away. This way it would become an iterative process, in which I could manipulate small details in each iteration, with the changes accumulating at each step. I used Processing to run the following steps (a sketch of the loop follows the list):

1. Project the picture A

2. Open the shutter of the camera

3. (me) Manipulate the exposure

4. Close the shutter of the camera (after 20 seconds)

5. Send the recently taken picture A’ to the computer

6. Project the picture A’

7. (me) See the changes and analyze which area needs manipulation

(back to 2 and repeat)
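A minimal Processing sketch of this loop might look like the following. It’s a sketch under assumptions: the camera control is reduced to hypothetical openShutter()/closeShutter() stubs and the file names are placeholders, since the post doesn’t describe the actual tethering setup.

```processing
PImage current;                    // picture currently projected (A, then A', A'', ...)
String nextFile = "A_prime.jpg";   // hypothetical path where the tethered camera drops each capture
boolean exposing = false;
int exposureStart;
final int EXPOSURE_MS = 20 * 1000; // 20-second exposures, as in the post

void setup() {
  fullScreen();
  current = loadImage("A.jpg");    // step 1: start by projecting picture A
}

void draw() {
  image(current, 0, 0, width, height);             // keep projecting the current picture
  if (exposing && millis() - exposureStart > EXPOSURE_MS) {
    exposing = false;
    closeShutter();                                // step 4: close the shutter after 20 s
    current = loadImage(nextFile);                 // steps 5-6: pull A' in and project it
  }
}

void keyPressed() {
  openShutter();                   // step 2: open the shutter; step 3 (manipulation) is manual
  exposureStart = millis();
  exposing = true;
}

// Hypothetical stubs: in practice these would talk to the camera,
// e.g. via a USB tethering tool or a hardware trigger.
void openShutter()  { }
void closeShutter() { }
```

Steps 3 and 7 stay manual by design – the hands do the dodging and burning in front of the lens, while the sketch only handles projection and timing.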

 

The result was good in terms of the experience, being able to make small modifications each time. The drawbacks were that there wasn’t a Ctrl+Z feature, and that it was extremely difficult to adjust the crop of the camera in order to keep the frame and aspect ratio of the picture. Actually, after many tests I didn’t succeed – the width of the picture diminished at each iteration, while the contrast increased. This resulted in some freaky images:

Wanting to do some other digital/analog post-processing with accessible tools, I tried applying a texture to a picture. I printed a picture on plain white paper, and I used a photocopier and my camera to capture the texture of the paper.

This is the original picture:

This is the result, texturized:

Another test with another picture:

It was fun but I still want to spend some time in the photography lab :)

I made a short video of two of the tests I made:

A little letterpress

November 26th, 2011

Some weeks ago Elena brought me a set of cast metal sorts from Italy, a gift from an old typographer. She knows I have a soft spot for typography, analog processes and old machinery: letterpress printing is a good example. I love all that stuff.

I decided to build a small letterpress so I could use the type. I checked which materials I had in my materials box and found some scraps from past experiments: three blocks of wood (I’m pretty sure it’s mahogany), some little brass rods and some copper. These random pieces defined the shape of the little letterpress.

 

I made a little ink brayer with some parts of an old radio cassette player I found in the studio.

Although typography and letterpress printing are all about accuracy, I must say this is not the most precise letterpress ever. This type of wood is really hard, and the thin drill bits were bending. Also, I couldn’t work with a precise router this time (I used a Dremel with the router table).

I made a little video of one of the first print tests:

These last weeks at CIID I’ve been involved mainly in research projects and in mentoring the students on their final projects, without much hands-on prototyping. This little press project has balanced the thinking with some making.

There are some more pictures in my Flickr set.


I have a vision

November 13th, 2011

A couple of weeks ago I watched Microsoft’s future vision video, showing ‘How people will get things done at work, at home, and on the go, in 5-10 years’. Well, I hope they’re wrong, because I don’t quite like what this future looks like. The whole video is an extrapolation of the power of technology into the future, brutally forced into everyday moments.

I’ve been working on projects that required making a video to show how a product or service would integrate into a future context. Sometimes it is challenging to convey ideas using just audio and video without bending some of the features to make them understandable for everybody. But there are ways to do it. In Microsoft’s video, I feel that neither the concepts nor their representation are exactly on track.

I picked three moments that caught my attention:

1. Is she using her glasses to translate the audio? I don’t think glasses are the best product to integrate such a feature into. Solving an audio problem by putting a piece of glass in front of your eyes when you don’t need it doesn’t sound appropriate. Wouldn’t the phone be a better option?

If the reason behind it is to make it discreet for other people and integrated into an object people usually wear, then I don’t understand the shining “translating” label on the arm of the glasses, which is seen only by the people surrounding the user.

2. The number of screens that will be surrounding people scares me. Or people surrounding screens… But it feels like the relationship between users, screens and context (which is what I’d like to know about the future) remains unclear. It’s even more confusing. Is all the information (work-related, personal, confidential) available from every screen? How is it filtered? By location, by the people around the screen? Maybe it’s too risky to answer some of these questions in a visionary video.

Also, I’m not sure why information is no longer confined within the screen frame, and expands onto walls and tables.

3. Do you really need to check the content of your fridge on a screen? Is it too much effort to just open the fridge?

 

Anyway. This July I spent some days in California. One afternoon I was sitting with my friend Eric at a campsite near Yosemite. It was a place with great views, nice weather, and the right time to open a bottle of wine. The cork-popping sound is kind of iconic, and always helps celebrate a memorable event.

Then we imagined the cork was able to capture that moment. *Of course* technology would make it possible – this technology that will be everywhere, that will be very small, and very intelligent. With a pinch of irony, we imagined another example of what we might find around us in the future, especially if we follow certain future visions.

Take it lightly :)

Presenting… the e-Cork.

Creating music samples with vinyl records

August 18th, 2011

Music sampling has been done for years using different techniques. Currently, samplers (either as a piece of hardware or as software) are the most widespread tool for playing samples, which can come from digitally formatted music, live recordings, vinyl or tape. One of the oldest sampling techniques was cutting and pasting audio tape. I love this video of Delia Derbyshire using reel-to-reel recorders, creating loops by cutting and pasting the audio tape, and syncing the samples to create music.

Driven by my devotion to vinyl and analog processes (perhaps a bit of DJ wannabe too), and emulating the audio-tape cut & paste technique, I tried to make vinyl sampling a bit more analog – literally cutting and pasting pieces of vinyl to create samples.

I bought some second-hand vinyl records in different music styles: Supertramp, Wagner, Paul Anka, Chicago, Lil Jon and some random ones for the first tests. I spent a couple of hours browsing and listening to old records – I remember thinking “all projects should start like this”.

Back at the studio, I considered different options for cutting the vinyl – it had to be a clean cut in order to minimize the resulting groove and therefore the stress on the stylus.

I first used a hot wire cutter – it took some time to set the right temperature so the wire cut but didn’t deform the vinyl. It was quite important to keep a constant speed to avoid undesired melting too. I cut a small sector with the idea of reversing it afterwards, so a song from Side A would have a sample from a song from Side B.

The piece fitted quite well in its natural position but not in its reversed position. I had to smooth it out with a file, but there was already a serious gap and a V-shaped groove that was pretty difficult to resolve.

So I jumped into the second attempt, using a blade. It took around 50 passes to cut one straight line.

I cut a radial sector; it was slightly better than the first trial (no melted material), but I had to remove a burr with a file and, again, it created a tiny gap, big enough to scare the stylus.

Then I tried the laser cutter and things went better.

I made many tests to find the right laser power in order to get the cleanest cut possible. The best setting was to let the laser go *almost* through the vinyl, and then crack the last thin layer manually (1). If the laser goes all the way through, it melts too much material and leaves a gap (2). If the laser doesn’t go deep enough, it’s pretty much impossible to take the piece out without creating an undesired crack (3).

Even if the laser is well calibrated, it always produces a cone-shaped cut. With the first option, the crack doesn’t remove any material or create burrs on the bottom surface, so that is the surface I used for playing the record afterwards. The top one always has a gap where the stylus would fall in.

I made some tests with different sectors to analyze repeatability, and the cut wasn’t totally consistent across different positions on the disc, or even across different positions within a sector. I think it’s due to the difference in resolution of the laser head depending on the combination of X- and Y-axis speeds.

These are sectors from the same record, already exchanged, seen from the laser cut side:

And this is seen from the bottom side, where the final layer is cracked. With the surfaces properly aligned, the gap is almost imperceptible to the finger:

The first time I placed the record on the turntable for testing, I noticed that the sectors were too small and it was difficult to tell which sample was which. The transition wasn’t clean – when the stylus found the groove it created a low sound (similar to a bass drum).

I decided to cut larger sectors on different records and exchange them to create loops or tunes using samples from different albums. I cut these patterns:

I cut the same angle in the label area so that, after the sectors were exchanged, I could remember which samples each record contains.

 

I exchanged the sectors from 4 different records: Paul Anka, Supertramp, Lil Jon and Chicago. I selected these four from the ones I bought because they have the same thickness (1.2 mm). The pieces snapped pretty well into their new positions, but I secured them temporarily with tape so I could adjust the height and make the surface as even as possible before playing the record.

 

These are some of the resulting albums:

I made a video containing part of the process and the result, playing the records on a turntable.

It’s possible to hear (and see) the stylus jumping a little bit – that’s not good for the needle. However, these bumps create a new beat over the mismatched beats of the two samples, and that helps define a new rhythm. I thought about selecting specific samples and making them match perfectly, but that would work only for one rotation, so it might be good for scratching but not for continuous listening – it’s quite difficult to find records where the beat corresponds to a revolution.

It’s been an interesting experiment with a really fun process. I knew it would be, with vinyl, music and lasers involved :)


Dynamic typeface

July 25th, 2011

For the Generative Photography project I used basic patterns created with Processing. I thought it would be interesting to capture words with the same technique, so they would appear ‘broken’ depending on the framerate and thus be difficult to read. I started seeking this effect by projecting words with Processing – the glitches didn’t appear when using regular typography, probably because it is rendered differently than basic shapes (rectangles, ellipses, etc.).

I created a typeface based on squares, so each character fits in a 3×5 matrix, inspired by the MiniML fonts by Craig Kroeger. I adapted some characters to make them fit into the 3×5 grid – the ‘M’ and ‘W’ look a bit weird in such a small container but are still recognizable.

After coding the typeface in Processing so it could be generated parametrically, I was drawn to the aesthetics of the characters overlapping with some transparency. The resulting symbols, produced with negative tracking, define a visual code or identity for each word (a minimal sketch of the technique is below).
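Roughly, the parametric drawing works like this minimal Processing sketch – the glyph bitmaps, cell size and tracking value here are hypothetical placeholders, not the original font data:

```processing
final int CELL = 40;          // pixel size of one grid cell
final float TRACKING = -2.0;  // negative tracking, in cells, so glyphs overlap

// 3x5 bitmaps, one string per row ('#' = filled cell) -- placeholder glyphs
String[][] glyphs = {
  { "###", "#.#", "#.#", "#.#", "###" },   // an 'O'-like glyph
  { "#.#", "#.#", "###", "#.#", "#.#" },   // an 'H'-like glyph
  { "###", "#..", "###", "..#", "###" },   // an 'S'-like glyph
};

void setup() {
  size(600, 400);
  noStroke();
  background(255);
  fill(0, 60);                    // low alpha: overlaps accumulate into greys
  float x = 120;
  for (String[] g : glyphs) {
    drawGlyph(g, x, 100);
    x += (3 + TRACKING) * CELL;   // advance less than the glyph width
  }
}

void drawGlyph(String[] rows, float x, float y) {
  for (int r = 0; r < rows.length; r++)
    for (int c = 0; c < 3; c++)
      if (rows[r].charAt(c) == '#')
        rect(x + c * CELL, y + r * CELL, CELL, CELL);
}
```

Because each glyph is drawn with a low alpha, every overlap adds another layer of grey, which is where the shades in the symbols come from.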

It’s also interesting to think about these symbols as a way to encrypt information. I haven’t spent time figuring out how many words (from the English dictionary, say) a single symbol could represent.

It might be relatively easy to extract which characters are inside each symbol, although the symbol carries no information about their sequence, so anagrams are not distinguishable – ‘LISTEN’ and ‘SILENT’ look the same in the most compressed symbol (see the toy sketch below).
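A toy sketch of why that happens: with maximal negative tracking every glyph lands on the same 3×5 cells, so the pixel union depends only on the set of letters in the word. The symbolOf helper below is hypothetical, abstracting the symbol as that letter set:

```processing
// Abstract a fully-overlapped word as the sorted set of its letters,
// which is what the pixel union of stacked glyphs reduces to.
String symbolOf(String word) {
  char[] letters = word.toCharArray();
  java.util.Arrays.sort(letters);        // order is lost in the overlap
  String out = "";
  for (char c : letters)
    if (out.indexOf(c) == -1) out += c;  // duplicates collapse too
  return out;
}

void setup() {
  println(symbolOf("LISTEN"));   // EILNST
  println(symbolOf("SILENT"));   // EILNST -- same symbol
}
```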

It starts getting more complicated (and beautiful) when the letter-spacing is not a multiple of the pixel size, creating interesting patterns and shades of grey.

Using this idea I took the poem “Ma Bohème” by Arthur Rimbaud and distributed each line vertically. These are two different layouts using different spacing.

 

These are two close-ups of the images above; I really like to think that this apparent randomness has a meaning behind it.

I wanted to capture the dynamism of this typeface while morphing from one position to another – I did this small experiment using the same poem, this time with a horizontal arrangement so it is easier to distinguish the text. The poem is recited by a French virtual lady (i.e. text-to-speech):

Separately, I thought about using the typeface’s ability to shrink to visualize the population density of the world’s 16 most populated cities. The denser a city (population / km²), the more compressed its characters.

I’m not completely convinced by it, since the appearance of the symbols depends not only on the population density but also on how long the name of the city is. What is true (taking the meaning of density literally) is that the density of the symbols changes according to the population density – amount of black / cm², for example. It is not strictly comparable from one city to another since the name lengths differ, but the grey intensity, together with the level of legibility, gives a sense of density.

When comparing different cities, the population density lacks some meaning without the population number. In the following poster, each city symbol encodes the population (in the font size, a linear relationship) and the population density (in the letter-spacing, a Dens²+Dens+A relationship, with the letter-spacing relative to the font size). A sketch of these two mappings follows.
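As a sketch of those two mappings (the constant A, the normalization and the ranges below are hypothetical placeholders – the post doesn’t give the actual values used in the posters):

```processing
final float A = 0.5;   // hypothetical constant of the quadratic

// Font size grows linearly with population (here in millions of people).
float fontSizeFor(float populationMillions) {
  return map(populationMillions, 0, 40, 12, 120);   // hypothetical range
}

// Letter-spacing follows Dens^2 + Dens + A on a normalized density,
// made negative so denser cities compress more; relative to font size.
float trackingFor(float peoplePerKm2, float fontSize) {
  float d = peoplePerKm2 / 30000.0;   // hypothetical normalization
  float t = d * d + d + A;
  return -t * fontSize * 0.5;         // hypothetical scale factor
}

void setup() {
  // Purely illustrative numbers, in the ballpark of a large dense city.
  float fs = fontSizeFor(20.0);
  println("font size: " + fs + "  tracking: " + trackingFor(12000, fs));
}
```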

White version:

Black version:

In parallel I made some tests adding color. These are samples using different ‘densities’ and color patterns:

Using the population parameter, I created some posters for other cities. Here are the ones for the two cities I was visiting at the time of these experiments:

The experiments shown above drifted slightly from the original purpose of the dynamic font – I’ll come back to the photography path some day.


Future

July 16th, 2011

Filter the filter

May 16th, 2011

I’m not a heavy user of Instagram, but I enjoy its simplicity – take a picture, make it look better, and share it. Thanks to Statigram I know that I use Instagram mostly on Fridays, that I publish 88% of my photos with a filter, and that the filter I use most is X-Pro II.

Recently I discovered that it is possible to use more than one filter on the same picture:

1. Take a picture with Instagram (or retrieve it from your Camera Roll).

2. Apply a filter and tap Next.

3. Wait a couple of seconds on this screen. Your picture will be saved to your Camera Roll without the need to publish it.

4. Press Back twice and retrieve your picture (already filtered) using the bottom-left icon. Adjust the crop, and press Next.

5. Apply the new filter and publish.

This is an example of a picture taken from Waterloo Bridge (London), after applying two filters (Apollo and then Hefe) to combine a vignette and the color levels:

This series shows the original picture, the picture with only the Apollo filter, and with only the Hefe filter:

I wondered what a picture would look like after applying all the available filters sequentially, so I tried it with this picture taken at the Copenhagen lakes:

The picture is progressively degraded, but I like how it looks in the third row, for example, where it is still possible to guess what’s in it, and some contours are still well defined.

This is the same process applying the filters in reverse order:

The last frame reminds me of a burnt frame of film in a projector.

Conclusion: the order of the filters DOES affect the product :)