ISHAC BERTRAN

Creating music samples with vinyl records

August 18th, 2011 by ishback

Music sampling has been done for years using different techniques. Currently, samplers (either hardware or software) are the most widespread tool for playing samples, which can come from digitally formatted music, live recordings, vinyl or tape. One of the oldest sampling techniques was cutting and pasting audio tape. I love this video of Delia Derbyshire using reel-to-reel recorders, creating loops by cutting and pasting the tape, and syncing the samples to create music.

Driven by my devotion to vinyl and analog processes (perhaps a bit of DJ wannabe too), and emulating the tape cut-and-paste technique, I tried to make vinyl sampling a bit more analog – literally cutting and pasting pieces of vinyl to create samples.

I bought some second-hand vinyl records in different music styles: Supertramp, Wagner, Paul Anka, Chicago, Lil Jon and some random ones for the first tests. I spent a couple of hours browsing and listening to old records – I remember thinking “all projects should start like this”.

Back in the studio, I considered different options for cutting the vinyls – it had to be a clean cut in order to minimize the resulting seam and therefore the stress on the stylus.

I first used a hot wire cutter – it took some time to find the right temperature, so that the wire cut without deforming the vinyl. It was also quite important to keep a constant speed to avoid unwanted melting. I cut a small sector with the idea of reversing it afterwards, so a song from Side A would have a sample from a song from Side B.

The piece fitted quite well in its natural position but not in its reversed position. I had to smooth it out with a file, but there was already a serious gap and a V-shaped groove that was pretty difficult to fix.

So I moved on to the second attempt, using a blade. It took around 50 passes to cut one straight line.

I cut a radial sector. It was slightly better than the first trial (no melted material), but I had to remove a burr with a file and, again, it created a tiny gap – big enough to scare the stylus.

Then I tried the laser cutter and things went better.

I made many tests to find the right laser power for the cleanest cut possible. The best setting was to let the laser cut *almost* all the way through the vinyl, and then crack the last thin layer manually (1). If the laser goes all the way through, it melts too much material and leaves a gap (2). If it doesn’t go deep enough, it’s pretty much impossible to take the piece out without creating an unwanted crack (3).

Even when the laser is well calibrated, it always produces a cone-shaped cut. With the first option, the crack doesn’t remove any material or create burrs on the bottom surface, so that is the surface I used for playing the record afterwards. The top one always has a gap where the stylus would fall in.

I made some tests with different sectors to check repeatability, and the cut wasn’t totally consistent across different positions on the disc, or even along a single sector. I think this is due to differences in the resolution of the laser head depending on the combination of X- and Y-axis speeds.

These are sectors from the same record, already exchanged, seen from the laser-cut side:

And this is the bottom side, where the final layer is cracked. With the surfaces aligned properly, the gap is almost imperceptible to the finger:

The first time I placed the record on the turntable for testing, I noticed that the sectors were too small and it was difficult to tell which sample was playing. The transition wasn’t clean either – when the stylus hit the seam it produced a low thud (similar to a bass drum).

I decided to cut larger sectors on different records and exchange them to create loops or tunes using samples from different albums. I cut these patterns:

I cut the same angle in the label area so that, after the sectors were exchanged, I could remember which samples each record contains.

I exchanged the sectors of 4 different records: Paul Anka, Supertramp, Lil Jon and Chicago. I selected these four from the ones I bought because they have the same thickness (1.2 mm). The pieces snapped pretty well into their new positions, but I secured them temporarily with tape so I could adjust the height and make the surface as even as possible before playing the record.

These are some of the resulting albums:

I made a video showing part of the process and the result, playing the records on a record player.

It’s possible to hear (and see) the stylus jumping a little bit – that’s not good for the needle. However, these bumps create a new beat over the mismatched beats of the two samples, and that helps define a new rhythm. I thought about selecting specific samples and making them match perfectly, but that would work only for one rotation, so it might be good for scratching but not for continuous listening – it’s quite difficult to find records where the beat lines up with a revolution (at 33⅓ rpm a revolution lasts 1.8 seconds, so the tempo would have to be a multiple of 33⅓ bpm – 100 bpm, for example, gives exactly 3 beats per revolution).

It’s been an interesting experiment with a really fun process. I knew it would be, having vinyl, music and lasers involved :)

Dynamic typeface

July 25th, 2011 by ishback

For the Generative Photography project I used basic patterns created with Processing. I thought it would be interesting to capture words with the same technique, so they would appear ‘broken’ depending on the frame rate and thus difficult to read. I started chasing this effect by projecting words with Processing – the glitches didn’t appear when using regular typography, probably because it’s rendered differently than basic shapes (rectangles, ellipses, etc.)

I created a typeface based on squares, so each character fits in a 3×5 matrix, inspired by the MiniML fonts by Craig Kroeger. I adapted some characters to make them fit into the 3×5 grid – ‘M’ and ‘W’ look a bit weird in such a small container but are still recognizable.

After coding the typeface in Processing so it could be generated parametrically, I was drawn to the aesthetics of the characters overlapping with some transparency. The resulting symbols, using negative tracking, define a visual code or identity for each word.
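The core of the idea fits in a few lines of Processing. This is a minimal sketch, not the original code – the glyph patterns and sizes here are invented for illustration:

    // Each character is a 3x5 grid drawn as squares;
    // negative tracking makes the letters overlap.
    int px = 20;           // size of one grid square, in screen pixels
    float tracking = -1.5; // advance between letters, in grid units (negative = overlap)

    // Hypothetical 3x5 patterns; the real font covers the full alphabet
    String[][] glyphs = {
      {"010", "101", "111", "101", "101"},  // A
      {"110", "101", "110", "101", "110"},  // B
      {"011", "100", "100", "100", "011"}   // C
    };

    void setup() {
      size(400, 200);
      noStroke();
      background(255);
      fill(0, 60);  // translucent black, so overlapping squares darken
      String word = "ABC";
      for (int i = 0; i < word.length(); i++) {
        String[] g = glyphs[word.charAt(i) - 'A'];
        float x0 = 40 + i * (3 + tracking) * px;  // advance shrinks with tracking
        for (int row = 0; row < 5; row++) {
          for (int col = 0; col < 3; col++) {
            if (g[row].charAt(col) == '1') {
              rect(x0 + col * px, 40 + row * px, px, px);
            }
          }
        }
      }
    }

With a translucent fill, every overlap darkens, which is what gives the compressed words their shades of grey.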

It’s also interesting to think about these symbols as a way to encrypt information. I haven’t spent time figuring out how many words (from the English dictionary, say) a single symbol could represent.

It might be relatively easy to extract which characters are inside each symbol, although it carries no information about their sequence, so anagrams are not distinguishable – ‘LISTEN’ and ‘SILENT’ look the same in the most compressed symbol.

It gets more complicated (and beautiful) when the letter-spacing is not a multiple of the pixel size, creating interesting patterns and shades of grey.

Using this idea, I took the poem “Ma Bohème” by Arthur Rimbaud and distributed each line vertically. These are two different layouts using different spacing.

These are two close-ups of the images above. I really like to think that this apparent randomness has a meaning behind it.

I wanted to capture the dynamism of this typeface while morphing from one position to another, so I did this small experiment using the same poem, this time with a horizontal arrangement so it’s easier to distinguish the text. The poem is recited by a French virtual lady (i.e. text-to-speech):

Separately, I thought about using the typeface’s ability to shrink to visualize the population density of the world’s 16 most populated cities. The denser a city (population / km²), the more compressed its characters.

I’m not completely convinced by it, since the appearance of a symbol depends not only on the population density but also on how long the city’s name is. What is true (taking the meaning of density literally) is that the density of the symbols changes with the population density – the amount of black per cm², for example. It’s not strictly comparable from one city to another, since the name lengths differ, but the grey intensity together with the level of legibility gives a sense of density.

When comparing different cities, the population density lacks meaning without the population number. In the following poster, each city symbol encodes the population (through the font size, a linear relationship) and the population density (through the letter-spacing, a Dens²+Dens+A relationship, with letter-spacing relative to the font size).
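In code, the two mappings could look something like this – a sketch of the idea only, since the constants here are illustrative, not the ones used in the poster:

    // population -> font size, linearly; density -> letter-spacing, quadratically
    float fontSizeFor(float population) {
      // linear: 0..40M people -> 10..120 pt (hypothetical range)
      return map(population, 0, 40000000, 10, 120);
    }

    float trackingFor(float density, float fontSize) {
      float d = density / 30000.0;            // normalize to a rough maximum density
      float rel = 0.3 - 0.25 * (d * d + d);   // the Dens^2 + Dens + A shape
      return rel * fontSize;                  // spacing is relative to the font size
    }

    void setup() {
      // e.g. a denser city gets a smaller (eventually negative) tracking
      println(fontSizeFor(12000000) + " pt, tracking " + trackingFor(20000, fontSizeFor(12000000)));
    }

The quadratic term makes the spacing collapse faster as the density grows, which is what squeezes the densest cities into near-solid blocks.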

White version:

Black version:

In parallel I made some tests adding color. These are samples using different ‘densities’ and color patterns:

Using the population parameter, I created posters for other cities. Here are the ones for the two cities I was visiting at the time of these experiments:

There has been a slight deviation from the original purpose of this dynamic font in the experiments shown above – I’ll come back to the photography path some day.

Future

July 16th, 2011 by ishback

Filter the filter

May 16th, 2011 by ishback

I’m not a heavy user of Instagram, but I enjoy its easiness – take a picture, make it look better, and share it. Thanks to Statigram I know that I use Instagram mostly on Fridays, that I publish 88% of my photos with a filter, and that the filter I use most is X-Pro II.

Recently I discovered that it’s possible to apply more than one filter to the same picture:

1. Take a picture with Instagram (or retrieve it from your Camera Roll).

2. Apply a filter and tap Next.

3. Wait a couple of seconds on this screen. Your picture will be saved to your Camera Roll without the need to publish it.

4. Press Back twice and retrieve your picture (already filtered) using the bottom-left icon. Adjust the crop, and press Next.

5. Apply the new filter and publish.

This is an example of a picture taken from Waterloo Bridge (London) after applying two filters (Apollo, then Hefe) to combine vignetting and color levels:

This series shows the original picture, the picture with only the Apollo filter, and with only the Hefe filter:

I wondered what a picture would look like after applying all the available filters sequentially, so I tried it with this picture taken at the Copenhagen lakes:

The picture is progressively degraded, but I like how it looks in the third row, for example, where it’s still possible to guess what’s in it and some contours are still well defined.

This is the same process using the filters in reverse order:

The last frame reminds me of a burnt frame of film in a projector.

Conclusion: the order of the filters DOES affect the product :)

Think, Plan, Execute

May 5th, 2011 by ishback

Project phases from Bézier’s point of view.

Hunting glitches

March 16th, 2011 by ishback

Recently I made some more experiments in Generative Photography. I went a bit deeper into analyzing the glitches caused by the rendering and the asynchrony between the frame rate of the video signal and the refresh rate of the projector. The experiments pursue an artistic exploration towards a certain aesthetic outcome rather than research in computer engineering, so I haven’t analyzed how these glitches are generated or why they behave as they do. Still, there are some observations in the following lines.

First I tested repeatability. These are three pictures taken consecutively at 10 fps:

The glitches never look the same, but they are always distributed along a line at the same height of the image. If I restart the sketch in Processing, the position of the glitches changes – in this case, the glitches are at the very top of the picture.

And the pictures taken after this one also have their glitches at the top. So it seems there is a relation between the position and the moment the sketch is started. I don’t know much about computer engineering, so suggestions on why this happens are welcome. Maybe it depends on the state of the computer’s or graphics card’s clock at the moment the sketch starts?

Afterwards I decided to play a movie in QuickTime while running the sketch, to see how the processor’s activity affects the glitches. It does, quite a lot – in these three images taken consecutively, it seems to stabilize the rendering:

In the next experiment I used the same sketch with frame rates ranging from 10 fps to 100 fps. Here are some examples, keeping the aperture and ISO constant (a reconstruction of the sketch follows the examples):

24fps:

25fps:

An example of how much the patterns change when adding just 1 frame per second.

30fps:

31fps:

32fps:

33fps:

34fps:
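For reference, the kind of sketch behind these photos was along these lines – a minimal reconstruction, not the original code, with only the frameRate() value changing between shots:

    // A white vertical stripe sweeping left to right
    // while the camera's shutter stays open.
    int x = 0;
    int w = 40;  // stripe width in pixels (illustrative)

    void setup() {
      size(800, 600);
      frameRate(24);  // the variable under test: 24, 25, 30, 31, ... fps
      noStroke();
    }

    void draw() {
      background(0);          // everything else stays dark
      fill(255);
      rect(x, 0, w, height);  // one stripe per frame
      x = (x + w) % width;    // next frame, the adjacent stripe
    }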

I wanted to know if the exposure times of the different grey shades in a given picture are multiples of each other (so the brightest grey was projected n times as long as the darkest one). After slightly blurring an image in Photoshop to reduce variability, I measured the four grey levels using the color picker.

I repeated the measurement in four different photographs, centered the values, and put them on a graph.

The curve roughly fits the logarithmic formula of exposure value. It seems, then, that the different shades of grey come from multiples of a unit of exposure time (e.g. 0.2 s, 0.4 s, 0.6 s, etc.).
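As a side note on that fit (this is standard photography, not something measured here): exposure value is defined as EV = log₂(N²/t), with N the aperture and t the exposure time. With N fixed, shades exposed for t, 2t, 3t, … sit at log₂(1), log₂(2), log₂(3), … stops from the darkest one, so plotting the measured grey levels against the multiple should give exactly this kind of logarithmic curve.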

Another interesting detail: sometimes the glitches are not clean rectangular cuts but have some teeth, as shown in the picture:

The following are a few examples of pictures created with this technique.

Colored rectangles with overexposed and underexposed bits. Probably neither of the two colors that differ in luminance is the color that was projected, since that depends on the exposure time.

Vertical and horizontal rectangles projected sequentially:

Circles with huge radii and a random component that makes them overlap, creating overexposed areas (in yellow):

Stripes with color variation, following the projected sequence, painted over a white surface to create relief:

Combinations of different patterns triggered manually, projected at high frame rates, adding a certain degree of randomness to specific variables:

The same principle as the rectangles at the top of the post, in concentric circles:

To conclude this post, a screenshot I took while designing Generative Photography’s website. Speaking of digital glitches:

More pictures on the project’s website: www.generativephotography.com

Generative photography

February 20th, 2011 by ishback

The picture above was generated by projecting white vertical rectangles, from left to right, at 25 fps, onto a projection screen. A camera, set to long exposure, captured the projection over 5 seconds. The rectangles aren’t homogeneous due to the rendering and the asynchrony between the frame rate of the video signal and the refresh rate of the projector.

The light grey rectangles were projected (and thus exposed) twice as long as the dark grey ones. The brightest stripe was probably projected three times as long as the dark grey ones, and there is one rectangle that wasn’t projected at all.

I’ve been doing some experiments using Processing to generate different patterns and sequences, a projector, and a camera pointing at the projection screen. Some of them use a technique called procedural light painting, others combine slit-scan with projected patterns. I’m also very interested in the low repeatability of some of these experiments, like the picture above, due to the noise introduced by the asynchrony of the generation, communication and output means. Maybe we can call it Generative Photography.

The following pictures were generated by projecting vertical lines, one after the other, and then the same with horizontal lines (25 fps). The lines have a 3-pixel stroke and move 4 pixels each time, creating a double exposure every two lines – plus the error introduced by the asynchrony.
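A minimal reconstruction of the vertical pass – my guess at the sketch rather than the original code (the horizontal pass would be the same with the coordinates swapped):

    // One vertical line per frame, advancing a few pixels each time;
    // the relation between strokeWeight and the step controls the
    // double exposures and gaps in the long-exposure photograph.
    int x = 0;

    void setup() {
      size(800, 600);
      frameRate(25);
      stroke(255);
      strokeWeight(3);  // 3-pixel stroke...
    }

    void draw() {
      background(0);
      line(x, 0, x, height);  // one line per frame
      x = (x + 4) % width;    // ...moving 4 pixels each time
    }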

At 18 fps:

Projecting vertical rectangles instead of lines:

The following pictures were generated by projecting white squares sequentially, from left to right and top to bottom. The tiled-wall effect is due to a small movement of the projection screen – the edges of the squares don’t match perfectly, creating overexposed areas (the white ones) and non-exposed areas (the black ones):

At 32 fps, lots of squares are missed because of the asynchrony:

It’s interesting to play with two technologies that each have a specific refresh and communication rate (sketch and projector), creating this asynchrony, while capturing the whole result with the camera’s long exposure. Most of the time the results are unpredictable, even after seeing the projection on the screen. There are squares that are generated but not projected, and squares that are projected but the naked eye doesn’t see.

Adding some perspective to capture more volume:

Using color squares:

Squares with stroke, no fill:

Vertical and horizontal lines, sequentially:

The following pictures are 1-second exposures – 0.5 seconds projecting white concentric circles and 0.5 seconds projecting the complementary image, so overall the whole surface is covered with light. In the first one I’m moving after the first projected image, so the second set of concentric circles doesn’t catch me in the same position – it feels like I’m behind them.

In the next experiment I’m projecting one circle each frame (at 25 fps), increasing the radius by 1 px each frame. Different objects are placed on the line of light.
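In Processing, the idea is as simple as this – a minimal reconstruction with illustrative sizes, not the original code:

    // One circle per frame at 25 fps, the radius growing 1 px each frame.
    float r = 0;

    void setup() {
      size(800, 600);
      frameRate(25);
      noFill();
      stroke(255);
      strokeWeight(3);  // a 1-pixel stroke didn't give enough light (see below)
    }

    void draw() {
      background(0);
      ellipse(width/2, height/2, 2*r, 2*r);  // ellipse() takes diameters
      r += 1;                                // 1 px more radius per frame
    }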

Changing the position of the projection screen makes the projected circles overlap and overexpose some areas, or leave other areas without exposure, creating an amazing effect. I used a 3-pixel stroke weight – if the movement of the canvas is fast, it gets a bit blurry, since areas are exposed 3 times in slightly different positions. I did this because a 1-pixel line didn’t give enough light for the desired contrast. The next photos are 20 to 30 second exposures.

The detail of the pattern, created by the projector’s pixelated light source, is quite interesting.

I also used a piece of lycra as the projection screen, to be able to move the canvas in a more flexible way.

That makes it easier to pull or push the fabric, or move one of the corners. It’s also interesting to pull and release, capturing the fabric waving back to its resting position.

I made many other tests with this configuration, and all of them turned out quite beautiful:

I’ll continue experimenting with this kind of photography. I’m interested in introducing feedback into the projection, so it reacts to what the camera is capturing, in real time.

For more pictures and information about exposure times, aperture, etc., you can check my Flickr.

Imprints and memories

December 20th, 2010 by ishback

Some scans from my old notebook, 2010-07.