Moiré in z-axis

April 22nd, 2013

I often shoot video and take pictures of screen-based interfaces, and Moiré patterns are very present despite anti-aliasing filters. I found out that fashion photographers encounter the same issue when taking close-ups of garments. The Moiré effect happens when two or more grids are superposed. A grid can be the weave of a fabric, the sensor array of a digital camera, the pixels of a screen, etc.

The relative movement between the grids creates dynamic Moiré patterns. For instance, this effect is apparent when zooming in and out of pictures of a screen taken with a phone camera.

(it seems that the Moiré is trying to mimic the wood pattern :)
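As a side note, the basic planar effect is easy to reproduce in code. Below is a minimal Processing sketch (my own illustration, not part of the original experiment) that overlays two line grids with slightly different spacing; animating the spacing of one of them makes the Moiré bands drift:

    // Two superposed line grids; the spacing of the second one oscillates,
    // so the beat (Moiré) bands move across the canvas.
    void setup() {
      size(600, 600);
      noSmooth();
      stroke(0);
    }

    void draw() {
      background(255);
      // first grid: fixed spacing of 4 px
      for (int x = 0; x < width; x += 4) {
        line(x, 0, x, height);
      }
      // second grid: spacing oscillating around 4.4 px
      float spacing = 4.4 + 0.3 * sin(frameCount * 0.02);
      for (float x = 0; x < width; x += spacing) {
        line(x, 0, x, height);
      }
    }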

I tried to simulate those Moiré patterns with grids moving in depth relative to each other, as in the example above, as opposed to the traditional patterns generated by grids moving on the same plane. I used the pixels of my laptop screen as the first grid, and an image I created as the second grid, consisting of 1-pixel black lines on a white background (or, equivalently, a matrix of white pixels on a black background):
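For reference, such a grid image can be generated in a few lines of Processing. The sketch below is my reconstruction, assuming the lines repeat every 2 pixels (which matches the "matrix of white pixels on a black background" description); the file name grid.png and the canvas size are placeholders:

    // Draws 1 px black lines every 2 px in both directions on white,
    // leaving isolated white pixels on black, then saves the image.
    void setup() {
      size(1000, 1000);
      noSmooth();
      background(255);
      stroke(0);
      for (int x = 0; x < width; x += 2) {
        line(x, 0, x, height);
      }
      for (int y = 0; y < height; y += 2) {
        line(0, y, width, y);
      }
      save("grid.png");
    }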

Using Processing I created a sequence of images zooming the pattern above from 0% to 200%, in 960 steps. The superposed grids (the pattern from the image and the pixels of the screen) produced a series of Moiré patterns that repeat periodically through the sequence:
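The original sketch isn't published, but the sequence could be generated roughly as below (a minimal sketch; the frame size, file names and centring are my assumptions):

    // Zooms the grid image from ~0% to 200% in 960 steps, saving each frame.
    PImage grid;

    void setup() {
      size(960, 540);
      noSmooth();               // keep hard pixel edges so the screen grid interferes
      imageMode(CENTER);
      grid = loadImage("grid.png");
    }

    void draw() {
      background(255);
      float s = 2.0 * frameCount / 960.0;   // scale factor, ~0% .. 200%
      image(grid, width / 2, height / 2, grid.width * s, grid.height * s);
      saveFrame("zoom-####.png");
      if (frameCount >= 960) {
        noLoop();
      }
    }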

 

- 30%

 

- 25% (same as 50%, 16.67%, 12.5%, 10%, …)

 

- 33% (same as 100%, 20%, …)

 

- 40% (same as 200%, 66.67%, 13%, …)

 

- 75% (same as 150%, 18.75%, 7.5%, …)

 

And some interesting in-betweens:

(you can perceive another dynamic Moiré effect in some of the patterns above when scrolling up and down this page)

These are some of the patterns above at 32x:

 

- 100%:

 

- 30%:

 

- 40%:

 

- 75%:

 

- 100.62%:

 

I made a video of the sequence. Due to the video compression the patterns are not shown properly – I suggest downloading it to appreciate their sharpness.

 

Taking any of the images above as the seed for the zoom-in sequence generates similar patterns.

I’ll try to post more experiments on Moiré patterns – I’m especially interested in the variations of colour when one of the grids has a coloured structure, such as a screen’s RGB subpixels. Here are three pictures of the same image on a screen: (1) unfocused (to avoid Moiré patterns; the natural perceived tone, grey), and near optimal focus at two different focal distances, with (2) green and (3) red prevalence:

 

Some references about Moiré patterns:

- Illustrations and maths regarding the Moiré effect.

- A book containing an exhaustive study of Moiré patterns.

- Another book about grids from the same author.

Slow light

March 5th, 2013

A couple of years ago in a Q&A for The Creators Project they asked me:

“What fantasy piece of technology would you like to see invented?”

I said:

“A simple knob that connects to any source of light, so it slows down the speed of the light. Even down to zero, like a lightsaber.”

I always dreamed about the endless possibilities if such a magical thing were ever feasible. And I recently discovered that it is happening! In the last story of this podcast from Radiolab (min. 45), Danish physicist Lene Hau explains how she has been able to slow down a beam of light by passing it through an ultra-cold cloud of sodium atoms. She has also been able to transform the form of light into matter, recording the shape of the light pulse with a laser. This light copy (or light metadata) can be stored, and the light form re-created in another place and time.

Collection of sunsets

Cold-cloud photo camera

 

An article and a video about Lene’s research.

Data forecast

November 28th, 2012

Our data today:

 

The forecast for the time to come:

_ manipulation

 

_ monopoly

 

_ hacks

 

_ leaks

 

_ data not found

 

_ restart

 

( s o u r c e s )

Flaps: fast and contextual browsing

November 11th, 2012

Web browsing is probably the main activity we use computers for today. Tabs have become a universal standard in web browsers, helping a wide range of users navigate the web. Tabs provide a visual representation of the active webpages waiting to be processed: read them, discard them, keep them for later, or archive them. Lately I’ve been questioning whether tabs are the best way to navigate web content.

The way tabs are sorted is a combination of time (new tabs open at the right end of the tab bar) and source (new tabs opened from links on a specific webpage sit next to that webpage’s tab). These two sorting strategies, combined with the flat visual representation of the tabs, don’t help in navigating them, especially when the tab bar is cluttered and web titles and/or icons are hidden.

If we take the tab metaphor back to its origin, organising paper documents, we used to write names on folders, use coloured folders, or use different drawers. Tabs in the browser are ethereal and don’t require such a structure, although they could potentially organise themselves by understanding the type of webpage they host. Tabs could be organised by type of content (media, personal, social, etc.) or use (in focus / in background, one-time / frequent access, etc.)

Tabs are designed for ‘point and click’ navigation. Pointing at things requires shifting the focus of attention, which usually slows down the interaction.

In order to understand how browsing could be improved, I analysed some of the behaviours I have developed while browsing (I use Chrome):

- I usually don’t look at the address bar when launching websites or performing searches. The shortcut to open a new page (Cmd+T), Chrome’s omnibox, and autocomplete have turned the bar into an invisible interface. I think about reading the news and the news website appears on screen, with zero effort spent on thinking about how to get there.

- I’ve also noticed that I normally don’t keep open the webpages I visit frequently. I normally create a new tab and launch the webpage, do what I need to do, and close it right after to come back to where I was. One of the reasons I unconsciously developed this behaviour is that it’s faster and less disruptive to create a new tab (Cmd+T, type the first character, Enter – 0.25 s max) than to activate a tab that is already open (find the tab, point, click, refresh – 1 s + a change of visual focus + potential procrastination). Switching between tabs follows the same reasoning.

- I pause/resume the music streaming many times a day, and it still takes two or three steps each time. Likewise, it takes many steps to save an image to my local or remote repository, or to browse webpages I previously bookmarked. There are many frequent operations that are performed using a generic interface.

- While looking for a specific tab on the tab bar, I sometimes end up checking the news on the way, or my email, or articles that I left open. Having all the tabs visible at all times can distract me, although I embrace it and somehow like it.

Based on the behaviours described above, there are some principles I’d like a browser’s interface to follow:

- Maximise the possibilities for psychomotor automation.

- Adapt mutually with the user and disappear over time.

- Minimise the ‘point & click’ and encourage fast navigation with the keyboard.

- Provide dedicated interactions for operations I perform frequently.

- Provide an adaptive interface that helps to focus, but doesn’t kill procrastination.

Trying to imagine a browser that follows those principles, I sketched Flaps, a full-screen browser with minimal visual infrastructure, an interface for contextual navigation, and dedicated interactions that optimise frequent actions. (play full screen)

 

Bits of the interface:

- main interface:

- contextual interface, after opening links from a webpage. When possible, webpage titles are formatted to increase their meaning:

- extended interface, with automatic grouping:

- repository / bookmarks / ‘read it later’ interface:

- example of predefined searches:

- examples of actions over the active website:

 

Flaps is just a video prototype for now – I’d be curious to let people try it and see how their browsing behaviours would evolve. A few aspects haven’t been tackled in this prototype and should be taken into account when implementing an interactive one (loading progress feedback, history overview, the need for full-length URLs, optimising the position of the interface for different webpages and screen ratios/resolutions, compatibility with existing keyboard shortcuts, etc.)

Any feedback on the concept, as well as input about personal behaviours and workflows while browsing, is very welcome!

Interaction Design Awards 2013, enter your work!

September 15th, 2012

Seven months have passed since I was invited to the Interaction Awards ceremony to receive an award for the project Pas a Pas. It was a true honour to participate in the event and to be recognised alongside all of the great designers I had the chance to share the experience with.

The award ceremony was part of Interaction’12, a conference fueled by a community of passionate designers that represents where and how Interaction Design is practiced today – from well-established design companies to emergent studios, from large technology firms to research centres, from professors to students. The various levels of experience and wealth of knowledge were acknowledged by the first edition of the Interaction Awards, which recognise work in numerous categories that represent how broad our discipline is.

Spending those three days in Dublin was a great opportunity to learn and be inspired by outstanding keynotes, to connect and debate with designers from around the globe, and to contribute to a very active community. It was lots of fun too!

With just a few days left to submit new work for the next edition of the Interaction Awards, I would encourage all students who have been part of an Interaction Design education to submit their best work and take part in this great experience. For those planning to submit projects, I’d like to share a couple of aspects that I feel are important when creating a strong project profile.

1. Frame your project.

A school project differs in many aspects from a professional project. While clients, budgets, technology roadmaps and deadlines are the constraints for design companies or departments, student projects are often driven by other aspects – a theme or topic as a brief, personal motivations or interests, the pursuit of a specific skill, the opportunity to collaborate with a company or social collective, etc. It’s important that those constraints, motivations and aspirations are reflected in the application to help the jury understand your initial playground.

2. Describe your journey.

Besides experiencing a new product or service first-hand, there is nothing more exciting for us designers than understanding what happens behind the scenes. Walk people through the steps of your project, describe the key moments of your process and how they had an impact on the outcome. This is where the jury can sense your passion, recognise your ability to make the right decisions, and discover the intention behind each of your prototype’s iterations.

3. Evaluate the outcome.

In contrast to the previous point, it can also be valuable to detach ourselves from the process and the passion we’ve put into the project – that’s important when evaluating where we are in the process and how far we are from the initial expectations.

Whether it’s a ready-to-market product, a concept for a large-scale service or a stepping stone that opens new opportunities, there is always a way to validate the concept, a scale to evaluate its impact, and a path to pursue its highest potential.

Besides writing about it, there is nothing more powerful and honest than a video showing people trying out your concept in a real environment. Show enough to let the concept shine by itself, let the audience identify with the people in the video and envision the potential of your idea.

I’m really looking forward to seeing this year’s entries for the Interaction Awards. Good luck with your submissions, and I hope to see everybody next January in Toronto!

Robot readable brontosaurus

June 2nd, 2012

Natural interactions with spatially aware devices

April 12th, 2012

This weekend I wanted to transfer some articles I had in my web browser to my Kindle, to read them later. Just thinking about all the necessary procedures made me rule out the idea.

Despite how well our devices are virtually connected (iCloud, Dropbox, …), they still lack a tangible connection. A (representation of a) physical connection between those devices would facilitate a more intuitive interaction built on traditional mental models from the physical world. That’s one of the main reasons kids interact with the iPad so naturally: its interfaces are based on natural, tangible interactions.

I tried to imagine what a more intuitive interaction for transferring media between devices could look like, and sketched it in a short video*:

The interaction feels natural and provides a seamless transition while consuming media – in this case, listening to music while moving from being in front of the computer to going mobile. It’s a more intuitive way to synchronise media across devices, and the ‘cloud’ would take care of the data transfer in the background (high-res files, music not yet on your device, etc.)

 

Portable devices can locate other devices quite precisely at a large scale (using GPS, wifi triangulation, etc.), but in small spaces they only ‘sense’ the existence of other devices (bluetooth, local network) – neither the absolute nor the relative position of other devices is measured precisely enough to enable a physical connection beyond the cable.

Some platforms already use the device itself as a physical connection to create more intuitive ways of interacting. Sifteo cubes use IrDA transceivers to detect other cubes nearby (<1 cm). Microsoft Surface uses near-infrared light and cameras to detect objects sitting on the table. The latest versions of Surface use PixelSense technology, detecting objects with micro-sensors embedded in the screen’s pixel array.

Sifteo

Microsoft Surface

It seems that desktops and tablets are converging into a personal touch-screen device. Incorporating the technologies mentioned above into these devices would create a new canvas for exploring more natural ways of engaging with media in the tangible realm, across multiple devices.

 

*The video sketch was done in a few hours using the following tools:

- Keynote for the animations.

- Screenium for screencasting.

- iMovie for video editing.

- Freesound.org for the sound fx.

Mixing analog and digital techniques in photography post-production

March 18th, 2012

Analog photography forced us to better understand light and the physics behind photography, to better choose the right moment to shoot, to better accept flaws as part of the captured moment. Not being able to see the picture right after shooting, the ritual of bringing the roll of film for development, having to wait some days and then, while browsing the photographs, making the connection to the moment they were shot… analog photography might not be convenient today, but it definitely has more magic than digital photography.

Likewise, digital post-processing is much easier and more convenient than the analog process, but it lacks the same magic. The process of burning and dodging, for example – isn’t it beautiful?

After watching this documentary (War Photographer, 2001, which I highly recommend) I wanted to try this process. Today it is not easy to get access to one of those labs, and each print, especially at this size, is quite expensive. I started thinking about how I could experience the process of manipulating the exposure as in the analog process, using more accessible means.

The analog dodging and burning process requires a light source, the negative of the picture, and the photographic paper that captures the projected light. A projector can substitute for the light source plus the film to project a picture, and a digital camera on a long exposure can substitute for the photographic paper to capture the projection. The exposure manipulation is the same in both methods, and it definitely has more magic than the digital one.

I set up a room with the setup above (the same one I used for the Generative Photography experiments). To stay aligned with the mix of analog and digital processes I was about to try (camera and projector are digital, manipulation is analog), I decided to play with some pictures I took with my film camera. The pictures were developed using an A/D process, so I had them on my computer. The workflow then is the following:

First of all I made some tests to find the right settings for the camera and the projector, so that the exposure was as neutral as possible – the long-exposure picture of the projection (without manipulation) had a similar exposure to the original.

I started playing with a picture of the sky, quite homogeneous, in order to see how sensitive the result is to the manipulation. I used a circular tool, and a square one for big areas.

Trying to do some gradients – the first one is not quite smooth, the other two are a bit better:

I was using 20-second exposures, and especially with the circular tool it was difficult to remember exactly which areas had already been manipulated.

Combining the circular and square tools:

Then I used another picture to try a smooth gradient or to darken an area. These are some of the results:

I made some tests with this other picture, slightly overexposed in some areas:

Similar to what happens with the analog burning & dodging method, I didn’t have completely direct feedback about the manipulation, just what I could see on the small LCD screen of the camera. I thought it would be interesting to see the result on the projected screen, and to be able to work on it right after. This way it would become an iterative process: I could manipulate small details in each iteration, and the changes would accumulate step by step. I used Processing to run the following steps (sketched in code after the list):

1. Project the picture A

2. Open the shutter of the camera

3. (me) Manipulate the exposure

4. Close the shutter of the camera (after 20 seconds)

5. Send the recently taken picture A’ to the computer

6. Project the picture A’

7. (me) See the changes and analyse which areas need manipulation

(back to 2 and repeat)
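A skeleton of that loop in Processing could look like the sketch below, assuming the projector is set up as the sketch’s display. The camera hooks (openShutter(), closeShutter(), fetchLastPicture()) are hypothetical placeholders for whatever tethering library or hardware trigger is actually used; the rest mirrors the steps above:

    PImage current;

    void setup() {
      fullScreen();                   // the projector displays the sketch window
      imageMode(CENTER);
      current = loadImage("A.jpg");   // step 1: the seed picture A
    }

    void draw() {
      background(0);
      image(current, width / 2, height / 2);   // project the current picture
    }

    void keyPressed() {
      // one iteration per key press
      openShutter();                  // step 2: start the exposure
      delay(20 * 1000);               // step 3: 20 s to manipulate the light by hand
                                      // (draw() pauses; the projected frame stays on screen)
      closeShutter();                 // step 4: end the exposure
      current = fetchLastPicture();   // steps 5-6: transfer A' and project it
                                      // step 7: inspect the changes, then repeat
    }

    // Hypothetical camera hooks – replace with a real tethering mechanism.
    void openShutter()  { /* begin a bulb-mode exposure */ }
    void closeShutter() { /* end the exposure */ }
    PImage fetchLastPicture() {
      return loadImage("latest.jpg"); // placeholder: pull the last shot from the camera
    }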

 

The result was good in terms of the experience, being able to make small modifications each time. The drawbacks were that there wasn’t a Ctrl+Z feature, and that it was extremely difficult to adjust the crop of the camera in order to keep the frame and aspect ratio of the picture. Actually, after many tests I didn’t succeed – the width of the picture diminished at each iteration, while the contrast increased. This resulted in some freaky images:

Wanting to do some other digital/analog post-processing with accessible tools, I tried to apply a texture to a picture. I printed a picture on plain white paper, and used a photocopier and my camera to capture the texture of the paper.

This is the original picture:

This is the result, texturized:

Another test with another picture:

It was fun but I still want to spend some time in the photography lab :)

I made a short video of two of the tests I made: