Some weeks ago Elena brought me a set of cast metal sorts from Italy, a gift from an old typographer. She knows I have a soft spot for typography, analog processes and old machinery: letterpress printing is a good example. I love all that stuff.
I decided to build a small letterpress so I could use the types. I checked my materials box and found some scraps from past experiments: three blocks of wood (I’m pretty sure it’s mahogany), some little brass rods and some copper. These random pieces defined the shape of the little letterpress.
I made a little ink brayer with some parts of an old radio cassette player I found in the studio.
Although typography and letterpress printing are all about accuracy, I must say this is not the most precise letterpress ever. This type of wood is really hard and the thin drill bits kept bending. I also couldn’t use a precise router this time (I used a Dremel with a router table).
I made a little video of one of the first print tests:
In recent weeks at CIID I’ve been involved mainly in research projects and in mentoring students on their final projects, without much hands-on prototyping. This little press project has balanced the thinking with some making.
A couple of weeks ago I watched Microsoft’s future vision video, showing ‘how people will get things done at work, at home, and on the go, in 5-10 years’. Well, I hope they’re wrong, because I don’t much like what this future looks like. The whole video is an extrapolation of the power of technology into the future, brutally forced into everyday moments.

I’ve worked on projects that required making a video to show how a product or service would fit into a future context. It’s sometimes challenging to convey ideas using just audio and video without bending some of the features to make them understandable to everybody. But there are ways to do it. In Microsoft’s video, I feel that neither the concepts nor their representation are exactly on track.

I picked three moments that caught my attention:
1. Is she using her glasses to translate the audio? I don’t think glasses are the best product in which to integrate such a feature. Solving an audio problem by putting a piece of glass in front of your eyes when you don’t need one doesn’t seem appropriate. Wouldn’t the phone be a better option?

If the reason behind it is to make it discreet, integrated into an object people usually wear, then I don’t understand the shining “translating” label on the arm of the glasses, which is seen only by the people surrounding the user.

2. The number of screens that will be surrounding people scares me. Or people surrounding screens… But the relationship between users, screens and context (which is what I’d like to know about the future) remains unclear. It’s even more confusing. Is all the information (work-related, personal, confidential) available from every screen? How is it filtered? By location? By the people around the screen? Maybe it’s too risky to answer some of these questions in a visionary video.

Also, I’m not sure why information is no longer confined within the screen frame, and expands onto walls and tables.

3. Do you really need to check the contents of your fridge on a screen? Is it too much effort to just open the fridge?
Anyway. This July I spent some days in California. One afternoon I was sitting with my friend Eric at a campsite near Yosemite. It was a place with great views and nice weather, and the right time to open a bottle of wine. The cork-popping sound is kind of iconic, and it always helps celebrate a memorable event.

Then we imagined a cork that was able to capture that moment. *Of course* technology would make it possible – this technology that will be everywhere, that will be very small, and very intelligent. With a pinch of irony, we imagined another example of what we might find around us in the future, especially if we follow certain future visions.
Music sampling has been done for years using different techniques. Currently, samplers (either hardware or software) are the most widespread tool for playing samples, which can come from digital audio, live recordings, vinyl or tape. One of the oldest sampling techniques was cutting and splicing audio tape. I love this video of Delia Derbyshire using reel-to-reel recorders, creating loops by cutting and splicing the tape, and syncing the samples to create music.

Driven by my devotion to vinyl and analog processes (and perhaps a bit of DJ wannabe too), and emulating the tape cut-and-splice technique, I tried to make vinyl sampling a bit more analog – literally cutting and pasting pieces of vinyl to create samples.

I bought some second-hand vinyl records in different music styles: Supertramp, Wagner, Paul Anka, Chicago, Lil Jon and some random ones for the first tests. I spent a couple of hours browsing and listening to old records – I remember thinking “all projects should start like this”.

Back in the studio, I considered different options for cutting the vinyl – it had to be a clean cut in order to minimize the resulting groove and therefore the stress on the stylus.
I first used a hot wire cutter – it took some time to find the right temperature so that the wire cut without deforming the vinyl. Keeping a constant speed was also important to avoid unwanted melting. I cut a small sector with the idea of reversing it afterwards, so a song from side A would have a sample from a song on side B.

The piece fitted quite well in its natural position, but not reversed. I had to smooth it out with a file, but by then there was already a serious gap and a V-shaped groove that was pretty difficult to fix.
So I jumped into the second attempt, using a blade. It took around 50 passes to cut one straight line.
I cut a radial sector; it was slightly better than the first trial (no melted material), but I had to remove a burr with a file and, again, it left a tiny gap – big enough to scare the stylus.
Then I tried the laser cutter and things went better.
I ran many tests to find the right laser power for the cleanest possible cut. The best setting lets the laser go *almost* all the way through the vinyl, leaving a last thin layer to crack manually (1). If the laser goes all the way through, it melts too much material and leaves a gap (2). If it doesn’t go deep enough, it’s pretty much impossible to take the piece out without creating an unwanted crack (3).

Even when the laser is well calibrated, it always produces a cone-shaped cut. With the first option, the crack doesn’t remove any material or create burrs on the bottom surface, so that is the surface I used for playing the record afterwards. The top one always has a gap where the stylus would fall in.

I ran tests with different sectors to check repeatability, and the cut wasn’t totally consistent across different positions on the disc, or even along a single sector. I think this is due to the laser head’s resolution varying with the combination of X- and Y-axis speeds.

These are sectors from the same record, already exchanged, seen from the laser-cut side:

And this is the bottom side, where the final layer is cracked. With the surfaces properly aligned, the gap is almost imperceptible to the finger:

The first time I placed a record on the turntable for testing, I noticed that the sectors were too small and it was difficult to tell which sample was which. The transition wasn’t clean either – when the stylus found the gap it produced a low sound (similar to a bass drum).
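To put numbers on how short these samples are: at 33⅓ rpm one revolution takes 1.8 seconds, so the angle of a sector maps directly to playing time. A quick Python sketch (the angles below are illustrative examples, not my actual cuts):

```python
def sample_duration(angle_deg, rpm=100 / 3):
    """Seconds of audio contained in a vinyl sector spanning `angle_deg` degrees."""
    seconds_per_revolution = 60.0 / rpm  # 1.8 s at 33 1/3 rpm
    return seconds_per_revolution * angle_deg / 360.0

for angle in (30, 90, 180):
    print(f"{angle:3d} deg -> {sample_duration(angle):.2f} s of audio")
```

A 30° slice holds only 0.15 s of music – barely one drum hit.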
I decided to cut larger sectors from different records and exchange them to create loops or tunes using samples from different albums. I cut these patterns:

I cut the same angle in the label area so that, after the sectors were exchanged, I could remember which samples each record contains.

I exchanged sectors from four different records: Paul Anka, Supertramp, Lil Jon and Chicago. I selected these four from the ones I bought because they have the same thickness (1.2 mm). The pieces snapped pretty well into their new positions, but I secured them temporarily with tape so I could adjust the height and make the surface as even as possible before playing the record.

These are some of the resulting albums:

I made a video showing part of the process and the result, playing the records on a turntable.

It’s possible to hear (and see) the stylus jumping a little bit – that’s not good for the needle. However, these bumps create a new beat over the unmatched beats of the two samples, and that helps define a new rhythm. I thought about selecting specific samples and making them match perfectly, but that would only work for one rotation, so it might be good for scratching but not for continuous listening – it’s quite difficult to find records where the beat lines up with a revolution.
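A quick back-of-the-envelope check shows why: a loop lines up with the rotation only when a whole number of beats fits in one 1.8-second revolution, i.e. when the track’s BPM is a multiple of 33⅓. The tempos below are arbitrary examples:

```python
def beats_per_revolution(bpm, rpm=100 / 3):
    """How many beats of a track at `bpm` fit in one revolution of the record."""
    return bpm / rpm

for bpm in (100, 120, 128):
    beats = beats_per_revolution(bpm)
    aligned = abs(beats - round(beats)) < 1e-6
    print(f"{bpm} BPM -> {beats:.2f} beats/rev, loops cleanly: {aligned}")
```

Only tempos like 100 BPM (exactly 3 beats per revolution) would stay in phase – most records don’t oblige.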
It’s been an interesting experiment with a really fun process. I knew it would be, with vinyl, music and lasers involved :)
For the Generative Photography project I used basic patterns created with Processing. I thought it would be interesting to capture words with the same technique, so they would appear ‘broken’ depending on the frame rate and thus difficult to read. I started chasing this effect by projecting words with Processing – glitches didn’t appear when using a regular typeface, probably because text is rendered differently from basic shapes (rectangles, ellipses, etc.).

I created a typeface based on squares, so each character fits in a 3×5 matrix, inspired by the MiniML fonts by Craig Kroeger. I adapted some characters to make them fit into the 3×5 grid – ‘M’ and ‘W’ look a bit weird in such a small container but are still recognizable.
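For anyone curious about the mechanics, here is a minimal sketch of the idea in Python (the actual font was coded in Processing, and the ‘A’ bitmap below is a made-up example, not one of my glyphs):

```python
GLYPHS = {
    # A made-up 3x5 bitmap for one letter -- not one of the actual glyphs.
    "A": ["111",
          "101",
          "111",
          "101",
          "101"],
}

def render(word, on="#", off=" ", tracking=1):
    """Draw a word in the 3x5 grid font, with `tracking` blank columns between letters."""
    return "\n".join(
        (off * tracking).join(
            "".join(on if bit == "1" else off for bit in GLYPHS[ch][r])
            for ch in word)
        for r in range(5))

print(render("AA"))
```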
After coding the typeface in Processing so it could be generated parametrically, I was drawn to the aesthetics of the characters overlapping with some transparency. The resulting symbols, produced with negative tracking, define a visual code or identity for each word.

It’s also interesting to think of these symbols as a way to encrypt information. I haven’t spent time working out how many words (from the English dictionary, say) a single symbol could represent.

It might be relatively easy to extract which characters are inside each symbol, but the symbol holds no information about their sequence, so anagrams are indistinguishable – ‘LISTEN’ and ‘SILENT’ look the same in the most compressed symbol.
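This is easy to verify with a toy model in Python: if the fully compressed symbol is just the union of the letters’ pixels, the order of the letters cannot matter. The 3×5 bitmaps below are made-up stand-ins, not the real glyphs:

```python
GLYPHS = {  # made-up 3x5 bitmaps standing in for the real glyphs
    "L": ["100", "100", "100", "100", "111"],
    "I": ["111", "010", "010", "010", "111"],
    "S": ["111", "100", "111", "001", "111"],
    "T": ["111", "010", "010", "010", "010"],
    "E": ["111", "100", "111", "100", "111"],
    "N": ["101", "111", "111", "111", "101"],
}

def symbol(word):
    """Fully compressed symbol: a pixel is on if it is on in any letter of the word."""
    return tuple(
        "".join("1" if any(GLYPHS[ch][r][c] == "1" for ch in word) else "0"
                for c in range(3))
        for r in range(5))

print(symbol("LISTEN") == symbol("SILENT"))  # True: the letter order is lost
```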
It gets more complicated (and beautiful) when the letter-spacing is not a multiple of the pixel size, creating interesting patterns and shades of grey.
Using this idea I took the poem “Ma Bohème” by Arthur Rimbaud and distributed each line vertically. These are two different layouts using different spacing.

These are two close-ups of the images above – I really like to think that this apparent randomness has a meaning behind it.

I wanted to capture the dynamism of this typeface while morphing from one position to another, so I did a small experiment using the same poem, this time arranged horizontally so the text is easier to distinguish. The poem is recited by a French virtual lady (i.e. text-to-speech):

Separately, I thought about using the typeface’s ability to shrink to visualize the population density of the world’s 16 most populated cities. The denser a city (population / km²), the more compressed its characters.

I’m not completely convinced by it, since the appearance of a symbol depends not only on the population density but also on the length of the city’s name. What is true (taking the meaning of density literally) is that the density of the symbols changes according to the population density – amount of black / cm², for example. It’s not strictly comparable from one city to another, since name lengths differ, but the grey intensity together with the level of legibility gives a sense of density.

When comparing different cities, population density loses some meaning without the population number. In the following poster each city symbol encodes the population (via font size, a linear relationship) and the population density (via letter-spacing, a Dens² + Dens + A relationship, with letter-spacing relative to the font size).
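As a sketch of that mapping (in Python rather than Processing, with made-up constants and made-up city data – the actual coefficients of the Dens² + Dens + A relation are not reproduced here):

```python
def font_size(population, k=2e-6):
    """Font size, linear in the population (k is an arbitrary scale factor)."""
    return k * population

def letter_spacing(density, size, a=1.0, scale=1e-8):
    """Negative tracking following the Dens^2 + Dens + A relation,
    expressed relative to the font size. `a` and `scale` are made-up constants."""
    return -scale * (density ** 2 + density + a) * size

# Hypothetical city data: (name, population, density in people/km2)
for name, pop, dens in [("City A", 9_000_000, 5_000), ("City B", 20_000_000, 20_000)]:
    size = font_size(pop)
    print(f"{name}: {size:.0f} pt, tracking {letter_spacing(dens, size):+.1f}")
```

The quadratic term makes dense cities compress much faster than the font size grows, which is what gives the posters their range of grey.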
In parallel I made some tests adding color. These are samples using different ‘densities’ and color patterns:
Using the population parameter, I created posters for other cities. Here are the ones for the two cities I was visiting at the time of these experiments:

There has been a slight deviation from the original purpose of this dynamic font and the experiments shown above – I’ll come back to the photography path some day.

I’m not a heavy user of Instagram, but I enjoy its simplicity – take a picture, make it look better, and share it. Thanks to Statigram I know that I use Instagram mostly on Fridays, that I publish 88% of my photos with a filter, and that the filter I use most is X-Pro II.

Recently I discovered that it’s possible to use more than one filter on the same picture:

1. Take a picture with Instagram (or retrieve it from your Camera Roll).
2. Apply a filter and tap Next.
3. Wait a couple of seconds on this screen. Your picture will be saved to your Camera Roll without you needing to publish it.
4. Press Back twice and retrieve your picture (already filtered) using the bottom-left icon. Adjust the crop, and press Next.
5. Apply the new filter and publish.
This is an example of a picture taken from Waterloo Bridge (London) after applying two filters (Apollo, then Hefe) to combine the vignetting of one with the color levels of the other:

This series shows the original picture, the picture with only the Apollo filter, and with only the Hefe filter:

I wondered what a picture would look like after applying all the available filters sequentially, so I tried it on this picture taken at the Copenhagen lakes:

The picture is progressively degraded, but I like how it looks in the third row, for example, where it’s still possible to guess what’s in it and some contours are still well defined.

This is the same process using the filters in reverse order:
Recently I made some more experiments in Generative Photography. I went a bit deeper into analyzing the glitches caused by the rendering and by the asynchrony between the frame rate of the video signal and the refresh rate of the projector. The experiments pursue an artistic exploration towards a certain aesthetic outcome rather than research in computer engineering, so I haven’t analyzed how these glitches are generated or why they behave as they do. Still, a few observations follow.

First I tried repeatability. These are three pictures taken consecutively at 10 fps:

The glitches never look the same, but they are always distributed along a line at the same height in the image. If I restart the sketch in Processing, the position of the glitches changes – in this case, the glitches are at the very top of the picture.

The pictures taken after this one also have the glitches at the top. So it seems there is a relation between the position and the moment the sketch is started. I don’t know much about computer engineering, so suggestions on why this happens are welcome. Maybe it depends on the state of the computer’s or graphics card’s clock at the moment the sketch starts?
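One speculative way to model it: if each frame of the sketch swaps the buffer partway through the projector’s scanout, the glitch line sits at the fractional phase between the two clocks, and that phase is fixed by the moment the sketch starts. A toy Python model (the refresh rate, line count and starting phase are all assumptions, not measurements):

```python
def tear_positions(fps, refresh_hz=60.0, start_phase=0.0, frames=5, lines=768):
    """Scanline where each frame swap lands: a toy model of the tearing.
    `start_phase` (0..1) stands for the moment the sketch starts relative
    to the projector's scanout -- pure speculation about the cause."""
    cycles_per_frame = refresh_hz / fps  # refresh cycles per sketch frame
    positions = []
    for i in range(frames):
        phase = (start_phase + i * cycles_per_frame) % 1.0
        positions.append(int(phase * lines))
    return positions

print(tear_positions(10))  # 60/10 is an integer: the line never moves
print(tear_positions(11))  # 60/11 is not: the line drifts every frame
```

In this model the line stays put whenever the refresh rate divides evenly by the frame rate, and drifts otherwise – which would also fit how much the patterns change when the frame rate shifts by just 1 fps.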
Afterwards I decided to play a movie in QuickTime while running the sketch, to see how processor activity affects the glitches. It does, quite a lot – in these three images taken consecutively it seems to stabilize the rendering:

In the next experiment I used the same sketch with frame rates ranging from 10 fps to 100 fps. Here are some examples, keeping the aperture and ISO constant:

An example of how much the patterns change by adding just 1 frame per second.

I wanted to know whether the exposure times of the different grey shades in a given picture are multiples of each other (so the brightest grey has been projected n times as long as the darkest grey). After slightly blurring an image in Photoshop to reduce variability, I measured the four grey levels using the color picker.

I repeated the measurement in four different photographs, centered the values and plotted them on a graph.

The curve roughly fits the logarithmic formula of exposure value. It seems, then, that the different shades of grey come from multiples of a unit of exposure time (e.g. 0.2 s, 0.4 s, 0.6 s, etc.).
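That reading is consistent with the maths: if the shades come from exposure times in a simple arithmetic progression, the brightness steps between consecutive shades shrink logarithmically. A quick check in Python (the 0.2 s unit is an illustrative value, not a measurement):

```python
import math

def stops_between(t1, t2):
    """Difference in exposure, in photographic stops, between two exposure times."""
    return math.log2(t2 / t1)

# Exposure times as simple multiples of a 0.2 s unit (illustrative values):
times = [0.2, 0.4, 0.6, 0.8]
for t1, t2 in zip(times, times[1:]):
    print(f"{t1} s -> {t2} s: {stops_between(t1, t2):.2f} stops")
```

The first step is a full stop (0.2 s to 0.4 s doubles the light) and the following ones get smaller and smaller – the flattening curve seen in the graph.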
Another interesting detail: sometimes the glitches are not clean rectangular cuts but have some teeth, as shown in the picture:

The following are a few pictures created with this technique.

Colored rectangles with overexposed and underexposed bits. Probably neither of the two colors that differ in luminance is the color that was projected, since the result depends on the exposure time.
Vertical and horizontal rectangles projected sequentially:

Circles with huge radii and a random component that makes them overlap, creating overexposed areas (in yellow):
Stripes with color variation following the projected sequence, using a white surface as the canvas to create relief:
A combination of different patterns triggered manually, projected at high frame rates, with a certain degree of randomness added to specific variables:

The same principle as the rectangles at the top of the post, but with concentric circles:

To conclude this post, a screenshot I took while designing Generative Photography’s website. Speaking of digital glitches: