### Image Texture Processing

[I've updated this post with some example code; you can find it on GitHub here.]

Recently I've been developing various drawing and painting machines.  I'm especially interested in how gesture in drawing and painting, when used to make marks, changes the way we see or read an image.  In order to test the machines, I've needed to supply large amounts of data.  As far as I know, no one has a data file of all the gestures used to create a painted or drawn image.  So instead I have developed some code to take a photograph and process it to generate texture and directional information.  It turns out this process can be quite simple.

In summary, we can do two convolutions on an image to produce two maps.  One uses a horizontal Sobel filter, and the other a vertical Sobel filter.  These filters respond strongly where pixel values change relative to their neighbours along the horizontal or vertical axis respectively.  As an output of the convolution, each pixel then has a 'strength' value associated with either a horizontal or a vertical component (depending on the Sobel filter used).  We then use these strength values as the x and y components of a vector, and from this we can get a general heading for the texture's directionality.

Image convolution is shockingly simple and quite magic.  Victor Powell has a great interactive explanation here.

We can look at each of these steps individually.  I've taken the below image from Pexels.com and it is marked as free to use without attribution.  I picked this image because it has a solid background, some furry bits, and some hard edges - all good components to test the algorithm.

Next we run the horizontal Sobel filter.  You can read more about these on Wikipedia.  The horizontal filter looks like:
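The kernel illustration hasn't come through here, but for reference, here are the two Sobel kernels in one common convention, along with a minimal convolution, sketched in Python.  The names and the edge-handling choice are my own, and note that which kernel gets called 'horizontal' versus 'vertical' varies between sources:

```python
# One common form of the Sobel kernels.  Here "horizontal" means the
# kernel that responds to horizontal edges (vertical intensity change),
# matching how the post's output images are described.
SOBEL_H = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

SOBEL_V = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Convolve a 2D list of grey values with a 3x3 kernel.

    Edge pixels are skipped for simplicity, so the output is two rows
    and two columns smaller than the input.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            row.append(acc)
        out.append(row)
    return out
```

On a flat region every kernel weight cancels and the response is zero; only where neighbouring pixel values differ across the kernel's axis does a strong positive or negative value appear.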

Doing a convolution with the above image, I get minimum and maximum pixel values of -286 and +278.  These aren't suitable values for a colour or tone.  However, if we map this range between 0 and 255, we can produce the following representation:
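The mapping is a simple linear remap of the filter's output range onto the 0:255 tonal range; a sketch in Python (the function name is my own):

```python
def remap(value, lo, hi):
    """Linearly map value from the range [lo, hi] to the 0-255 tonal range."""
    return round((value - lo) / (hi - lo) * 255)
```

With the range quoted above, the minimum -286 lands on black (0), the maximum +278 on white (255), and a zero filter response falls near mid-grey.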

In the above image, white and black values indicate strong alignment to the horizontal axis, and we see them in the image as shadows and highlights.  Notice that we get a highlight across the top of the antennae, and a shadow across the bottom.  Remember, these were originally values at the extremes, between -286 and +278, and they have only been mapped to 0:255 (black:white).  If we think of these values as a vector, then the mid-tone region, here grey (~127), is actually a vector with no magnitude.  It is the extreme values that contribute to a vector with magnitude.  Hopefully the below illustration will help:

Next we can use the vertical Sobel filter on the original image:

And following the same procedure to map the filter values to 0:255 tonal values, we get the following output - notice the highlights and shadows are now associated with parts of the image along vertical lines:

With these two maps, we can now combine each pixel's strength in the horizontal and vertical orientations to get a vector for each pixel.  From this vector we can derive a resultant angle: the relative contributions of the horizontal and vertical components determine that angle.  Again, an illustration to help:
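Getting the angle from the two components is one atan2 call; a Python sketch, assuming (as described above) the horizontal map supplies x and the vertical map supplies y:

```python
import math

def pixel_angle(h_strength, v_strength):
    """Heading, in degrees, of the texture vector built from the two
    filter responses.  atan2 handles all four quadrants and copes with
    a zero horizontal component, unlike a plain atan(y/x)."""
    return math.degrees(math.atan2(v_strength, h_strength))
```

Equal horizontal and vertical strengths give a 45-degree heading; a purely horizontal response gives 0 degrees, matching the flat background described below.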

With this directional information, we can produce an image to display all these pixels as vectors.  Naturally, to see it, we have to make the image very big, to space out all the information.  This is a close-up of a section of the right antenna:

Above you can see a good outline following diagonally down the antenna's length and segments, as well as some noise and swirls within the antenna body.  Also notice the absolutely horizontal lines of the background.  The background of the image is solid black, so it has no texture or direction, which means it ends up with an angle of 0 (horizontal).

Next is a section of the fur on top of the head:

This section is noisier, which you can see as vectors in some disarray.  This may have something to do with the resolution of the original image, and any noise from compression.  However, there are still some strong contours following the fur direction.

Below is another shot, this time a close-up of the lower right, where the body of the bee is blurred because of the shallow depth of field in the photograph; in the top right-hand corner of this crop is the fur of the bee's chin:

In the above you can see that the shallow depth of field creates a smooth texture directionality, probably due to the blurred colour gradient in the image.  The fur boundary creates lots of interesting swirls.  The trick here would be to interpret these regions and the statistical noise with some intelligence.

With this vector field associated to the image, the next step is to then use a path finding algorithm to create pencil or brush stroke movements.  I'll document that another day!

Example code:
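The actual example code is on GitHub, linked above.  As a rough end-to-end sketch of the pipeline this post describes - two Sobel convolutions, then a per-pixel angle and magnitude - here is a Python version under my own naming, using plain lists of grey values:

```python
import math

# Sobel kernels in one common convention; "horizontal" here is the
# kernel that responds to horizontal edges.
SOBEL_H = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
SOBEL_V = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def convolve(image, kernel):
    """3x3 convolution over a 2D list, skipping the one-pixel border."""
    h, w = len(image), len(image[0])
    return [[sum(kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
                 for ky in range(3) for kx in range(3))
             for x in range(1, w - 1)]
            for y in range(1, h - 1)]

def direction_field(image):
    """Per-pixel (heading in degrees, magnitude) of the texture vector."""
    gh = convolve(image, SOBEL_H)
    gv = convolve(image, SOBEL_V)
    field = []
    for row_h, row_v in zip(gh, gv):
        field.append([(math.degrees(math.atan2(v, h)), math.hypot(h, v))
                      for h, v in zip(row_h, row_v)])
    return field
```

A featureless region produces zero-magnitude vectors with an angle of 0, which is exactly the horizontal background behaviour noted above.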

### DIY Planar Magnetic drivers, part 1

I've been putting together some planar magnetic loudspeaker drivers.  There is a wealth of information available at diyaudio.com, particularly in this thread.  Inner Fidelity has a nice write-up on planar magnetic drivers.  Euwemax documents his DIY planar magnetic drivers for headphones, and he uses etching as his principal method.  I thought I'd attempt to build some using enamelled wire and embroidery hoops.  This post documents my progress so far.  I've made working prototypes, but at the moment the sound is clipping.

### Arduino, Smooth RC Servo Motor Control

When using the Arduino Servo.h library, you normally use myServo.write(), but you can also use myServo.writeMicroseconds().  This lets you set the pulse width of the controlling square wave in microseconds.  The RC servo standard is typically 1000us <-> 2000us, with 1500us as the neutral position.  This means you have a resolution of ~1000 individual steps over the standard range (more on servos with an extended range), rather than the 180 (degree) steps the normal write() would provide - and therefore the potential for significantly smoother servo motor movements.

Check out: http://arduino.cc/en/Reference/ServoWriteMicroseconds

Here is the test code I have used to good effect so far; I have been using servo motors with an extended range of microseconds.  Currently one pot sets the target position, and a second pot changes the ramping/smoothing fraction.  Ramping/smoothing is done with a technique called easing (see here), which means continually adding a fraction of the difference between the target and the current position.  See below, very simple…
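The test code itself is an Arduino sketch; the easing rule at its core is just repeatedly moving a fraction of the remaining distance.  Sketched here in Python with hypothetical values, and a comment marking where the servo write would go on the Arduino:

```python
def ease_toward(current, target, fraction):
    """One easing step: move a fraction of the remaining distance."""
    return current + (target - current) * fraction

# Hypothetical demo: ramp a servo pulse width from 1000us toward 2000us.
pos = 1000.0
for _ in range(50):
    pos = ease_toward(pos, 2000.0, 0.1)
    # on the Arduino, myServo.writeMicroseconds(round(pos)) would go here
```

The position never quite reaches the target, but closes in on it exponentially; a larger fraction gives a snappier ramp, a smaller one a smoother, slower glide.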