Image Texture Processing

[I've updated this post with some example code, you can find it on github here ]

Recently I've been developing various drawing and painting machines.  I'm especially interested in how gesture in drawing and painting, when used to make marks, changes the way we see or read an image.  In order to test the machines, I've needed to supply large amounts of data.  As far as I know, no one has a data file of all the gestures used to create a painted or drawn image.  So instead I have developed some code to take a photograph and process it to generate texture and directional information.  It turns out this process can be quite simple.

In summary, we can do two convolutions on an image to produce two maps: one with a horizontal Sobel filter, and the other with a vertical Sobel filter.  These filters amplify or diminish pixel values if they are similar in value to a neighbouring pixel on the horizontal or vertical axis respectively.  As an output of the convolution, each pixel then has a 'strength' value associated with either a horizontal or a vertical component (depending on the Sobel filter used).  We then use these strength values as the x and y components of a vector, and from this we can get a general heading for texture directionality.
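To make the convolution step concrete, here is a minimal C++ sketch of the per-pixel operation (my own illustration, not the code from the github repo; the row-major greyscale image layout and the function name are assumptions):

```cpp
#include <algorithm>
#include <vector>

// Convolve the pixel at (x, y) with a 3x3 kernel.
// The image is assumed to be greyscale, stored row-major.
// Border pixels are handled by clamping coordinates to the edge.
int convolve3x3(const std::vector<int>& img, int w, int h,
                int x, int y, const int k[3][3]) {
    int sum = 0;
    for (int j = -1; j <= 1; j++) {
        for (int i = -1; i <= 1; i++) {
            int px = std::min(std::max(x + i, 0), w - 1);
            int py = std::min(std::max(y + j, 0), h - 1);
            sum += img[py * w + px] * k[j + 1][i + 1];
        }
    }
    return sum;
}
```

Running this once over every pixel with each of the two Sobel kernels produces the two strength maps described above.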

Image convolution is shockingly simple and quite magic.  Victor Powell has a great interactive explanation here.

We can look at each of these steps individually.  I've taken the below image from a source that marks it as free to use without attribution.  I picked this image because it has a solid background, some furry bits, and some hard edges - all good components to test the algorithm.

Next we run the horizontal Sobel filter.  You can read more about these on Wikipedia.  The horizontal filter looks like:
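For reference, the horizontal Sobel kernel is conventionally written as the 3x3 matrix below (the vertical kernel is its transpose):

```
-1  0  +1
-2  0  +2
-1  0  +1
```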

Doing a convolution with the above image, I get minimum and maximum pixel values of -286 and +278.  These aren't suitable values for a colour or tone.  However, if we map this range between 0 and 255, we can produce the following representation:
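The mapping is a straightforward linear rescale.  A small sketch (the helper name is mine, not from the post's code):

```cpp
// Linearly remap a filter response from the observed [lo, hi] range
// to the 0:255 tonal range, so it can be displayed as a grey value.
int remapToTone(int v, int lo, int hi) {
    return (int)((long)(v - lo) * 255 / (hi - lo));
}
```

With this, the extreme filter responses become black and white, and a zero response lands near mid-grey.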

In the above image, white and black values indicate a strong alignment to the horizontal axis, and we read them in the image as shadow and highlight.  Notice that we get a highlight across the top of the antennae, and a shadow across the bottom.  Remember, these tones represent values at the extremes, between -286 and +278, which have only been mapped to 0:255 (black:white).  If we think of these values as a vector component, then the mid-tone region, here grey (125), is actually a vector with no magnitude.  It is the extreme values that contribute to a vector with magnitude.  Hopefully the below illustration will help:

Next we can use the vertical Sobel filter on the original image:

And following the same procedure to map the filter values to 0:255 tonal values, we get the following output - notice the highlights and shadows are now associated to parts of the image along vertical lines:

With these two maps, we can now combine each pixel's strength in the horizontal and vertical orientations to get a vector for each pixel.  From this vector, we can derive a resultant angle.  Depending on the relative contribution of the horizontal and vertical components, we will determine a different angle.  Again, an illustration to help:
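The combination step can be sketched with atan2 and hypot (a minimal illustration; the function names are my own, not from the post's code):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Heading of the texture direction in degrees, from the horizontal (gx)
// and vertical (gy) Sobel responses at one pixel.
double textureAngleDeg(double gx, double gy) {
    return std::atan2(gy, gx) * 180.0 / PI;
}

// Magnitude: how strongly directional the pixel is.
// A pixel whose responses are both zero (mid-grey) has no direction at all.
double textureMagnitude(double gx, double gy) {
    return std::hypot(gx, gy);
}
```

Equal horizontal and vertical contributions give a 45 degree heading; a purely horizontal contribution gives 0 degrees.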

With this directional information, we can produce an image to display all these pixels as vectors.  Naturally, to see it, we have to make the image very big, to space out all the information.  This is a close up of a section of the right antenna:

Above you can see a good outline following diagonally down the antenna's length and segments, as well as some noise and swirls within the antenna body.  Also notice the absolutely horizontal lines of the background.  The background of the image is solid black, so it has no texture or direction, which means it ends up with an angle of 0 (horizontal).

Next is a section of the fur on top of the head:

This section is a bit noisier, which you can see in the disarray of the vectors.  This may have something to do with the resolution of the original image, and any noise from compression.  However, there are still some strong contours following the fur direction.

Below is another shot, this time a close up of the lower right-hand region, where the body of the bee is blurred because of the shallow depth of field in the photograph; in the top right-hand corner of this crop is the fur of the bee's chin:

In the above you can see that the shallow depth of field creates a smooth texture directionality, probably due to the blurred colour gradient in the image.  The fur boundary creates lots of interesting swirls.  The trick here would be to interpret these regions and the statistical noise with some intelligence.

With this vector field associated to the image, the next step is to then use a path finding algorithm to create pencil or brush stroke movements.  I'll document that another day!


Plamondon's Handwriting Model, Print Making and

[I've translated Plamondon's work into a processing sketch.  You can find this on github.  ]

In my research at CFPR I am interested in how advanced robotics technologies can be used to challenge our idea of what a print is.  Perhaps the most common type of printing is InkJet, where tiny dots of ink are sprayed in quite precise patterns onto paper to create machine printed images.

InkJet prints are typically flat images in terms of surface relief.  The ink is also sprayed in a very controlled way so that a low number of colours (like cyan, magenta, yellow and black) can interpolate to create the illusion of full colour.  This is a hard thing to do, but it is also very suited to images that are stored as a grid (or table) of pixels, and to a machine that is cheap and reliable to make.

From these sort of assumptions, I am asking, what if we could enhance a print machine to include an understanding of the substrate (like paper), the medium (like ink or paint) and the implement (the way the substrate and medium are brought together), and the dexterity and sensing to work with all that information?

You might imagine this as a painting machine, or a sculpting machine, or any machine which requires quite impressive dexterity, intelligence and knowledge.  To me, this is a fascinating robotics problem, requiring embodied sensing, actuation and AI.  It might not be a driverless car navigating a hazardous street, but it is a machine attempting to navigate an unreliable world of materials, tools and visual representation.

For me, a key part of this puzzle is the way a print machine moves.  It is simply not good enough to scan an image on to a page.  Even though traditional print methods typically press a print on to a page (one simple action, repeated), a great deal of dexterous and sensitive work is done creating the original plate or matrix, as in wood engraving.  I imagine the digital file in digital printing is analogous to the traditional plate or matrix.  A wood plate beautifully captures the marks made by hand, and I think a digital file could be enhanced to capture something similar.  If the machine were to paint, then again, scanning an image on to a page is not good enough; the paint needs to be manipulated.

Is machine-painting printing?  I don't know.

In any case, I've been looking for a way to represent movement digitally that quite easily translates to machine movement.  My first thoughts were to look at Bezier Curves.  These are really appealing because for a few input numbers, you can calculate any point on a complex curve with infinite resolution.
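To show why so few numbers go such a long way, here is a cubic Bezier evaluated from four control values per axis (my own minimal sketch, not the author's code):

```cpp
// Evaluate one axis of a cubic Bezier curve at parameter t in [0, 1],
// using the Bernstein polynomial form.
// p0 and p3 are the endpoints; p1 and p2 are the control anchors.
double bezierCubic(double p0, double p1, double p2, double p3, double t) {
    double u = 1.0 - t;
    return u * u * u * p0
         + 3.0 * u * u * t * p1
         + 3.0 * u * t * t * p2
         + t * t * t * p3;
}
```

Calling it once for x and once for y gives a 2D point at any resolution you like; note that t parameterises position along the curve, not speed.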

I'm interested in an algorithmic approach because, to me, voxels are a data-expensive way to represent 3D objects, and you also lose any understanding of how the elements of the object relate to one another.  Any relationship information has to be extrapolated.  A pixel image is a 2D table of colour information, and in a similar way, you can't read from an image file anything about how the image was composed.  To me, an image made of algorithmically defined curves seems data-cheap, and you could also get some stuff for 'free', such as observing where curves overlap and using a colour model to look at how these coloured paths might combine.

However, Bezier curves have some immediate drawbacks for this application.  First, the start and the end of the curve have to be defined, so if you need to link lots of curves into something like the trail of a word, then there are lots of numbers to update and check.  Secondly, there are more numbers (anchors) that do not relate to the character of the curve in an obvious way.  The third aspect killed Bezier for me: there is no real representation of how the curve progresses over time - or in other words, how speed changes as the mark is made.  Really, Bezier curves are about deriving points, not motion.

My current source of material has been models of handwriting and signature recognition.  In fact, I've found this area quite fascinating.  Something as unique and as representative of a person as a signature can in fact be modelled quite reliably numerically.  Furthermore, the models necessarily include the changes of speed involved in the gesture of making the mark on the page.  Apparently, when we look at a signature, at the variation of the ink as it travels across the page, we can intuitively discern the type of motion that was required to create the mark.  In hindsight this stuff is not surprising, it is (or has been) essential for banking.

I've found the following on handwriting models to be good reading:

Of these, the last, Plamondon's model, was the most interesting because of its relative simplicity and because it appears thoroughly investigated empirically.  That paper also specifically describes the maths involved, provides a table of input data, and provides a graphic of the result - the perfect combination for recreating someone's work.  I love the idea that something as intuitive, meaningful and ancient as mark making can be investigated with some numbers and computation.

The 'simple' bit that I like is the representation.  It has these characteristics:

  • The representation looks like a small set of numbers.
  • These numbers create a curve, not a single ambiguous 'pixel'.
  • Marks are represented with a sense of direction and strength.
  • Marks are given a time at which they occur.
  • The change in velocity for each mark is generalised by a graph (that appears to have good grounding in studies of real people writing and their muscle activation).
  • You can create as many of these marks as you like, and they are simply summed up to produce one fluid 'word' in joined up writing. 
  • The movement points are generated with a spacing along the curve that is proportional to the velocity change.
  • The whole thing is algorithmic, meaning we can trivially increase the resolution when generating coordinates to operate a machine.
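The velocity graph in the list above is a lognormal profile, which sits at the heart of Plamondon's model.  A minimal sketch of one stroke's speed over time (my own illustration; parameter names follow my reading of the model, and the values are not from the paper's table):

```cpp
#include <cmath>

const double TWO_PI = 6.283185307179586;

// Lognormal speed profile for one stroke.
// t0: time the stroke command is issued; mu and sigma shape the
// log-time delay and spread of the response; D scales the amplitude.
double strokeSpeed(double t, double t0, double mu, double sigma, double D) {
    if (t <= t0) return 0.0;  // the stroke hasn't started yet
    double dt = t - t0;
    double z = (std::log(dt) - mu) / sigma;
    return D / (sigma * std::sqrt(TWO_PI) * dt) * std::exp(-0.5 * z * z);
}
```

Each stroke rises smoothly from zero, peaks at t0 + exp(mu - sigma^2), and dies away; summing many of these profiles, each with its own timing and direction, is what produces one fluid 'word'.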
The most appealing bit for me is that you can generate as many marks as you like, and they are simply summed together.  This means scribbles can be made quite easily by random values.  In previous research I have used evolutionary algorithms and one of the problems with these is in translating input to output and ensuring it is sensible.

I've translated Plamondon's work into a processing sketch.  You can find this on github.  I've done my best to comment the workings.  It would be good to look at the lognormal distribution.  And it would be good to also take a look at Plamondon's publications.

Compliant Motion Control on a DC Motor

If you just want the code you can find it on github.


Creating a compliant DC motor or compliant motion controller could mean many things depending on the application scenario.  In this case I would like an electric motor driven mechanism to resist having its position disturbed, but also to accept a new position if it is pushed and held in place (i.e. overriding the holding force of the motor).  I'd also like to be able to set the resistance digitally (from code) so that at different times the motor has different strengths of resistance.  Finally, I'd also like to be able to digitally command (from code) a new position, despite what is happening through physical interaction; whether or not the motor moves depends on the strength setting of the motor.

These things together might seem like a bit of a contradiction.  However imagine a mechanism that has to move through many set positions as part of an animation sequence.  This mechanism will also be exposed to the public, so you might want to have the mechanism soft enough to prevent a finger being trapped, or you might want the mechanism to be pushed and pulled around interactively without causing any damage to the equipment.  This kind of physical interaction could also be useful for a more complex robot experience with people.  In fact, this kind of compliant motion control is the future for safe human-robot interaction.  

For instance, Rusty Squid produced a swarm of books which responded to the visual detection of movement in an audience.  These books were autonomous and exposed to the public, but in reality they could not be physically interacted with.  The motors used in the installation were high grade hobby RC servo motors, which include reduction gear-sets that do not like to be back-driven (it can entirely break the motor unit).  Check out this video of the Book Hive:

In this post I document my first attempt at creating a low-cost compliant motion controller.  I've made it as simple as possible and as well commented as possible.


To create such a compliant motor and motion control we need the following ingredients:

  • An Arduino or similar to read sensors and send movement commands.
  • A strong DC motor with no gearing, I'm using a 12V motor with 4.7 oz-in torque
  • A position encoder, the motor I am using has an integrated optical encoder with 1000 clicks per revolution.  The code is written for a quadrature encoder.  This one is good.
  • A DC motor controller, I'm using a DRV8801 from Pololu.
  • A potentiometer to provide some external input to the system.
  • A 12V power supply for your motor, I'm using a lab power pack with 2 amps of current available.
  • A computer to do some coding.
When you add it all together it looks a bit like this:

The Circuit 

The circuit is very simple.  You'll want to connect the DRV8801 as it is specified on this webpage:

In my setup I've connected DIR to pin 7 and PWM to pin 6.  I've also connected a potentiometer across Analog 14, Ground and Pin 53 (set high).  If you want to use the example code without modification you'll have to do the same.  I've also wired the quadrature encoder data into pins 2 and 3 - you may need to reverse these or your motor polarity.

Overview of the Controller Operation

In the above diagram we have the following elements that work together in a constant loop of operation:
  • Motor position: This is read from the rotary encoder and should be quite a reliable indication of where the motor actually is in reality.
  • Desired position: This is a position set in software, which is relative to the units read from the encoder.  
  • PID Algorithm: This algorithm takes both the Motor Position and Desired Position and creates an error signal, which describes whether the motor should rotate clockwise or anti-clockwise, and with how much magnitude, in order to reach the desired position.
  • Error Signal: As above, but note that the same value is fed into the Position Controller and the Backdrive Monitor.
  • Position Controller: This element takes the Error Signal and decides how to operate the motor.  There are two important characteristics.  First, the direction, set through Pin 7, which corresponds to whether the Error Signal is positive or negative; the DRV8801 needs a high or low signal to determine direction.  Second, the power of the response, which is taken as the magnitude of the error and sent out of Pin 6 as a PWM signal.  Using these two things combined, the motor should move toward the Desired Position if it is away from it.  When the Error Signal is zero, the PWM signal will also be zero, and so there should be no movement from the motor.
  • Backdrive Monitor: This is the crucial element for providing the motion compliance.  Without it we have just a standard motor controller that will fight to keep the desired position.  The Backdrive Monitor therefore watches the Error Signal for the sign that the motor is being held against moving toward the desired position.  If the motor is being held away from the desired position, then the Backdrive Monitor adjusts the Desired Position to reflect this.  This has the knock-on effect of changing the output of the PID Algorithm - the Error Signal - which changes the Position Controller and also itself, the Backdrive Monitor.  The result is that if the motor is held away from the Desired Position and is held steady, the resistance will be slowly reduced and the motor will 'relax' into its new physically set position.
  • Position Sequence Controller: This last component simply sets a new desired position, and in this example we will use a potentiometer to set this externally and test the performance.  In application, this might be a series of positions in an animation or a signal from a more complex behavioural controller.  As it stands, the Position Sequence Controller is very simple, and could do with more advanced features (discussed later).

More on the Backdrive Monitor

The Backdrive monitor needs more explanation.  The Backdrive Monitor watches the Error Signal over a period of time for the sign that the motor is being held against moving toward the Desired Position.  

This is achieved by storing the Error Signal values across a period of time, and then looking at how much variation there is across the whole set.  If the Error Signal values are all different, then the motor is either moving by itself to a new position, or it is being moved externally.  If the error values are all the same, then we can assume that the motor has been moved to a new position and is being held there, or it has met an obstruction and can't move.  The desired result is that if the motor is held away from the Desired Position and is held steady, the resistance of the motor will be gradually reduced and the motor will 'relax' into its new physically set position.  If the microcontroller is fast enough, this realisation of movement can happen very fast, and so the compliance of the motion can feel very natural (not stuttered) whilst simultaneously offering some resistance.

This observation of variance in the Error Signal is nicely achieved with Standard Deviation.  Standard Deviation gives a value from 0 upward to quantify the variance in a set of numbers.  Therefore, even if the Error Signal values from the PID algorithm are arbitrary signed non-zero values (e.g. +22 or -31), if the values are consistent the standard deviation will be close to 0.  This consistent measure of variance from 0 upward is useful: it allows us to observe the characteristic of change over time.

Using Standard Deviation we can decide the likelihood of the motor being held, and also use it to ignore any jitter in the motor movement which will also cause the encoder value to jump around.  Jitter would occur if the motor is trying to move but not succeeding, and it makes it hard to tell if the motor is being held still.  Jitter can be removed by thresholding the returned Standard Deviation (SD) value.  That means we set a value to check the SD against, and use this to decide the correct course of action.  

For instance, if we passed our compliance check only on a SD value of 0, the motor would need to be held exactly in the same position for a period of time (i.e., stored Error Signal values all identical).  If we passed on an SD value of 1 or less, we could ignore a small amount of movement (less than 1).  If we passed on an SD value of 10 or less, the motor can be held quite roughly in the same position, and the compliance will kick in.  Of course, these values are dependent on what types of values your encoder returns and any jitter from your motor fighting against the held position.  This motor jitter is partly defined by the PID algorithm and the position controller.

You can probably see that through all of this there are quite a few variables that will need to be tuned to your specific system and application.  You might be asking, what units are these values all in?  Quite bluntly, most of it is arbitrary.  The encoder count can be converted into an angle, but it is not necessarily useful to do so, and it will consume computing time.  Any definition of angle will be chewed up by the PID algorithm and converted into an Error Signal that cannot be directly sent to the motor controller.  So in reality, a PID system will always need to be tuned, and this is best done with a physical system and through iterative testing.  Units of measurement are not especially useful in a system like this.  Hopefully you will discover that tuning the system is not too tricky, and it is actually quite fun to see how it responds and what the limits are.


I've put loads of comments in code to try to make it as understandable as possible.  You can also find this on github.

// Arduino Pin definitions
// Check your wiring.
#define DRV_DIR_PIN 7
#define DRV_PWM_PIN 6
#define POT_IN_PIN     A14
#define POT_5V_PIN     53

// Encoder Variables.  Note volatile, an interrupt changes them.
static int pinA = 2; // Our first hardware interrupt pin is digital pin 2
static int pinB = 3; // Our second hardware interrupt pin is digital pin 3
volatile byte aFlag = 0; // lets us know when we're expecting a rising edge on pinA to signal that the encoder has arrived at a detent
volatile byte bFlag = 0; // lets us know when we're expecting a rising edge on pinB to signal that the encoder has arrived at a detent (opposite direction to when aFlag is set)
volatile int encoderPos = 0; // this variable stores our current value of encoder position
volatile int oldEncPos = 0; // stores the last encoder position value so we can compare to the current reading and see if it has changed (so we know when to print to the serial monitor)
volatile byte reading = 0; // somewhere to store the direct values we read from our interrupt pins before checking to see if we have moved a whole detent
volatile int count;

// PID variables.
// Lots of resource for PID algorithms online
long lastPIDTime;
float pid_setpoint = 0;
float errSum = 0;
float lastErr = 0;
float kp = 0.1;
float ki = 0.0;
float kd = 10;
float error = 0;

// The setpoint from the sequence monitor
// in this example set from a potentiometer.
float sequence_setpoint = 0;

// Window Average variables
#define SAMPLE_SIZE 20
int sampleIndex;
float samples[SAMPLE_SIZE];

// Used to intermittently calculate
// standard deviation and check if
// the motor is being held in a new
// position.
long heldUpdateTime;

// Being held time threshold value (ms)
#define HELD_TIME_THRESHOLD 25

// Standard Deviation threshold 
// value
#define SD_THRESHOLD 0.2

// Easing for Backdrive position adjustment
// Larger values, bigger jump / faster response
// when the motor is held in a position.
#define BACKDRIVE_EASING 0.3

void setup() {
    // Rotary Encoder Setup
  pinMode(pinA, INPUT); // set pinA as an input, pulled HIGH to the logic voltage (5V or 3.3V for most cases)
  pinMode(pinB, INPUT); // set pinB as an input, pulled HIGH to the logic voltage (5V or 3.3V for most cases)
  attachInterrupt(0,PinA,RISING); // set an interrupt on PinA, looking for a rising edge signal and executing the "PinA" Interrupt Service Routine (below)
  attachInterrupt(1,PinB,RISING); // set an interrupt on PinB, looking for a rising edge signal and executing the "PinB" Interrupt Service Routine (below)

  // These are the pins for the DC motor controller
  // At this point DIR is arbitrary but we will set it
  // to a known value anyway.
  pinMode( DRV_PWM_PIN, OUTPUT );
  pinMode( DRV_DIR_PIN, OUTPUT );

  // Enable pins to attach a potentiometer,
  // we'll use this to set a target position
  // as if it were part of a movement instruction.
  pinMode( POT_5V_PIN, OUTPUT );
  digitalWrite( POT_5V_PIN, HIGH );
  pinMode( POT_IN_PIN, INPUT );

  // Initialise Windowed Average buffer.
  initSamples();

  // Time flow variables.
  // Take an initial time stamp.
  heldUpdateTime = millis();
  lastPIDTime = micros();

  // Show a reset so we know when something has gone wrong.
  Serial.begin( 9600 );
  Serial.println("*** R E S E T ***");
}

void loop() {

   // Functions as per elements explained in
   // blog post.  See functions for comments.
   pidAlgorithm();
   positionController();
   backdriveMonitor();
   sequenceController();
}

void pidAlgorithm() {
  // Update our PID algorithm
  // The algorithm takes care of time interval
  error = getPID( encoderPos );
}

void positionController() {
  // We have to tell the motor which way to rotate
  // based on the error value.  
  // You might need to switch this depending on how
  // you wire your DC motor.
  if( error < 0 ) {
    digitalWrite( DRV_DIR_PIN, HIGH );
  } else {
    digitalWrite( DRV_DIR_PIN, LOW );
  }

  // Set power based on error.
  // We use abs() because we only want the magnitude
  // of the error not the sign (+/-).
  // Because the error is set by the PID algorithm,
  // power does not map linearly to 0:255 range.
  // We cap it at 255 as that is the maximum value for
  // analogWrite.
  // Therefore, to get a different resistance from the
  // motor you should tweak the kp, ki and kd variables.
  // To compensate for the weight of an object, or gravity,
  // you'd probably need to bias power depending on direction
  // etc.
  float power = abs(error);
  power = constrain( power, 0, 255 );
  analogWrite( DRV_PWM_PIN, power );
}

void backdriveMonitor() {
  // Every loop we update our windowed average of the
  // error signal (error is global, set by the PID algorithm).
  addSample( error );

  // We periodically check to see if the error position
  // is actually being held in a stable position.  This
  // is periodic because calculating a standard deviation
  // is expensive.  Doing this routine more often will
  // make the compliance more responsive.
  // Scenario, someone holding the mechanism in a fixed
  // new position.  In which case, the standard deviation of
  // the error over a period time will be close to 0, even
  // though the error value itself may be non-zero.
  // When this occurs, we slowly alter the pid_setpoint to
  // match the new position. 
  // We do this slowly because otherwise the setpoint jumps
  // and this can be felt in the mechanism, it feels a bit
  // like a stepper motor.  Slowly adjusting the position
  // makes a smooth transition.
  // There are a couple of variables here worth playing with:
  //   updateTime > 25(?)
  //   sd < 2(?)
  //   error * -0.3(?)
  if( millis() - heldUpdateTime > HELD_TIME_THRESHOLD ) {
    heldUpdateTime = millis();

    // Grab SD value, decide if the motor is
    // being held by comparing against a 
    // fixed value.
    float sd = getStdDev();
    if( sd < SD_THRESHOLD ) {

      // Note, -= because we need to invert the error
      pid_setpoint -= ( error * BACKDRIVE_EASING );
    }
  }
}

// Because we don't actually have an animation sequence
// we instead use an external POT to simulate a moving
// target.  This code therefore watches for a change of
// value on the POT to set a new sequence position.
void sequenceController() {
  // Read in a target position set by the potentiometer.
  // We are going to use this to simulate a movement 
  // routine.
  // The pot only updates the pid setpoint if it is
  // moved to a new position.
  // Otherwise the setpoint is managed by backdriving
  // the motor.
  // We threshold the target value to ignore jitter
  // from the pot reading.
  float target = analogRead( POT_IN_PIN );

  if( abs(target - sequence_setpoint) > 5 ) {
    pid_setpoint = target;
    sequence_setpoint = target;
  }
}

// Very simple PID routine.
// Registers the time it was called.
// Would be better to run it from an
// interupt timer.
// Set global kp,ki and kd variables.
// Tune these based on observed performance.
float getPID( float input ) {
  long now = micros();
  long timeChange = now - lastPIDTime;

  float error = pid_setpoint - input;
  errSum += error * timeChange;

  float dErr = 0;
  if( timeChange != 0 ){
   dErr = ( error - lastErr ) / timeChange;
  }

  float output = (kp * error);// + (ki * errSum ) + (kd * dErr );

  lastErr = error;
  lastPIDTime = now;

  return output;
}

// Ensure the buffer (storage) of values is
// initially set to 0
void initSamples() {
  for( int i = 0; i < SAMPLE_SIZE; i++ ) {
    samples[i] = 0;
  }
  sampleIndex = 0;
}

// Maintains a buffer (storage) of values from
// which to draw an average and standard deviation
// This function handles the wrap around on the 
// index value.
void addSample( float s ) {
  samples[sampleIndex] = s;
  if( sampleIndex < SAMPLE_SIZE - 1 ) {
    sampleIndex++;
  } else {
    sampleIndex = 0;
  }
}

// Calculates the mean (average) value
// from the current values in the buffer
float getMean(){
  int i;
  float ave = 0;
  for( i = 0; i < SAMPLE_SIZE; i++ ) {
    ave += samples[i];
  }
  return ( ave / SAMPLE_SIZE );
}

// Calculates the Standard Deviation
// from the current values in the buffer
float getStdDev() {
  float mean = getMean();
  float sqDevSum = 0.0;
  for( int i = 0; i < SAMPLE_SIZE; i++ ) sqDevSum += ( pow( mean - samples[i], 2 ));
  if( sqDevSum == 0 ) return 0;
  return sqrt( sqDevSum / SAMPLE_SIZE );
}

// Interupt based encoder update.
// Note(!): 
// Because I am using an arduino mega I have altered the example
// to use the PORT definition and bit operation to PINE & B00110000
// You'll need to adjust this depending on your arduino device.
void PinA(){
  cli(); //stop interrupts happening before we read pin values
  reading = PINE & B00110000; // read all eight pin values then strip away all but pinA and pinB's values
  if(reading == B00110000 && aFlag) {
    encoderPos--; //decrement the encoder's position count
    bFlag = 0; //reset flags for the next turn
    aFlag = 0; //reset flags for the next turn
  } else if ( reading == B00010000 ) bFlag = 1; //signal that we're expecting pinB to signal the transition to detent from free rotation
  sei(); //restart interrupts
}

void PinB(){
  cli(); //stop interrupts happening before we read pin values
  reading = PINE & B00110000;
  //readingA = digitalRead(pinA); // read all eight pin values then strip away all but pinA and pinB's values
  //readingB = digitalRead(pinB); //read all eight pin values then strip away all but pinA and pinB's values
  if(reading == B00110000 && bFlag) {
    encoderPos++; //increment the encoder's position count
    bFlag = 0; //reset flags for the next turn
    aFlag = 0; //reset flags for the next turn

  } else if ( reading == B00100000) aFlag = 1; //signal that we're expecting pinA to signal the transition to detent from free rotation
  sei(); //restart interrupts
}



There is plenty of room for improvement in this code, such as:

  •  The PID routine would be better if it ran reliably on a timer interrupt routine. 
  • The backdrive monitor currently adjusts the Desired Position (pid_setpoint) using the easing technique. This could itself be replaced with a PID algorithm to provide a more resilient or characterful response to the motor being backdriven. 
  • The Position Controller simply maps the PID error signal to the range 0:255. In order to compensate for gravity or an unbalanced load, this mapping would need to be biased depending on the direction (sign) of the error. 
  •  In this example, the motor power is mapped from the error signal, which means that the resistance the motor offers increases as the motor is moved away from the setpoint (desired position). To improve this, the DRV8801 provides an analogue output signal that indicates the current draw of the motor. To provide a constant resistance, the output power could be limited by reading the current consumption of the motor.

Blogger Landing Page without Losing your Blog, using Cookies

I followed the tutorials to set a custom redirect in Blogger using a static page and discovered that you can no longer navigate back to the ordinary blog.  In fact, all the existing search engine references to your blog posts get lost.  This isn't a great solution.

To address this, I've put together the javascript code below which uses a cookie to recognise a new user.  A Blogger blog already uses a cookie to create the cookie notice that comes up.  One more cookie shouldn't be a problem.

The following javascript works like this: If a new user comes to your page, they are first sent to the landing (front) page.  However, if a new user is trying to get to a specific page, such as a blog post, then the javascript does not redirect them, it just lets them through to where they want to go.  This is achieved by checking if a cookie exists.   If the cookie does not exist, we assume they have not visited the page before, and the cookie is created for the next time they visit.

Importantly, if you simply redirect from the main address of your blog to your landing page, visitors will never be able to navigate to the blog itself.  So this cookie creates a once-only redirect unique to each visitor.

You can use the following piece of code to redirect new users of your Blogger blog to a customised static page.  You need to copy and paste this code into the <head> section of your blogger template.  You can do this by navigating through the Blogger control panel, Template, Customize HTML (guide here).  When you are looking at the HTML, search for </head>, and paste the below code just before </head>.

Be sure to back up your custom html / template first, just in case something goes wrong.

Important: you need to change a few lines in the below code.

  • var homepage_url = "";
    • Change the web address here to the web address of your blog.  This might be "", or something similar.
  • window.location = "";
    • You need to change the web address here to the address where you want your user to end up.  To create a landing page, use the Blogger control panel to set up a static page (tutorial here), and then use the URL you find in the address bar when you are on that static page.

<script type='text/javascript'>

// Blogger Landing Page Redirect by
// Paul O'Dowd
// 23 May 2016

// Get current cookies for this page.
var existing_cookies = document.cookie;

// Get URL string from address bar
var incoming_url = window.location.href;

// Add your homepage url here:
var homepage_url = "";

// Check to see if the user is addressing the webpage directly.
// If they want to go to a specific blog post we let them through.
if( incoming_url.toLowerCase() == homepage_url.toLowerCase() ) {

  // If the Second Visit cookie does not exist, we send the user
  // to the landing page.
  if( existing_cookies.indexOf("Second Visit") == -1 ) {

    // Create the cookie so if the user visits again
    // they don't go to the landing page.
    document.cookie = "Second Visit";

    // Send user to landing page.
    window.location = "";
  }
}
</script>

You can test if this is working quite easily by opening up a Private Window in Firefox, or Incognito Window in Chrome, and navigating to your page.  Cookies are automatically deleted when you close a private/incognito window.

The following pages might be useful resources if you want to develop the above code further:

Send GCode to 3D Printer from Processing

I've written a minimal example Processing sketch showing how to read an ASCII GCode file and send the commands to a 3D printer.  The SerialGCodeSender_c class handles all serial port transactions, reads from an ASCII GCode file, and services resend requests.

This has been developed against the Marlin 1.1.0-RC4 3D printer firmware available here:
It is installed on an Arduino Mega 2560 with a RAMPS 1.4 board, running a serial interface at 115200 baud.

It has been written so that the transmission process is handled by a Processing thread, which means it happens outside the synchronisation of the Processing draw() loop.  This means that you can do some other jobs in the draw() loop and main window.

One thing to note is that the openPort() routine checks for a specific string to be received from the 3D printer.  This should be the last string your printer transmits after it has booted up.
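
The wait-for-boot-string idea looks something like this in plain Java.  This is a sketch of the pattern only: it reads from a generic BufferedReader rather than the actual Processing Serial object, and the marker string "echo:ready" is a placeholder, not necessarily what your firmware sends.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch of the openPort() idea: read incoming lines until the printer's
// final boot-up string appears.  The exact string is firmware specific.
public class BootStringWait {

    // Returns true once a line containing the marker arrives,
    // false if the stream ends (or errors) without seeing it.
    static boolean waitForBootString(BufferedReader in, String marker) {
        try {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains(marker)) return true;
            }
        } catch (IOException e) {
            // Treat read errors as "marker not found".
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulated boot chatter; the lines here are placeholders.
        String chatter = "start\nMarlin 1.1.0-RC4\necho:ready\n";
        BufferedReader in = new BufferedReader(new StringReader(chatter));
        System.out.println(waitForBootString(in, "echo:ready")); // true
    }
}
```

A real version would also want a timeout, so the sketch doesn't hang forever if the printer never sends the expected string.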

Other firmwares will probably require some modification.

The source code is available on GitHub.

Write GCode files from Processing

I've written an example program and a class to handle writing ASCII GCode files from Processing.  It is minimal.  It does not have any start up routines for specific machines.  It does have line numbers and CRC checksums.
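
For reference, the line-number and checksum scheme used by RepRap-style firmware such as Marlin is simple: each line is sent as "N<number> <command>*<checksum>", where the checksum is the XOR of every character before the '*'.  A plain-Java sketch of that calculation:

```java
// RepRap / Marlin style GCode line checksum: XOR of every character
// in the line (including the line number) before the '*' separator.
public class GCodeChecksum {

    static int checksum(String lineWithoutChecksum) {
        int cs = 0;
        for (int i = 0; i < lineWithoutChecksum.length(); i++) {
            cs ^= lineWithoutChecksum.charAt(i);
        }
        return cs & 0xFF;
    }

    // Prepend the line number and append the checksum, ready to transmit.
    static String withChecksum(int lineNumber, String command) {
        String body = "N" + lineNumber + " " + command;
        return body + "*" + checksum(body);
    }

    public static void main(String[] args) {
        System.out.println(withChecksum(3, "T0")); // N3 T0*57 (the RepRap wiki example)
        System.out.println(withChecksum(1, "G28"));
    }
}
```

The firmware recomputes the same XOR on receipt; on a mismatch it requests a resend of that line number, which is what the resend handling in the sender class deals with.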

The code is available on GitHub. 

Simple Particle Simulation (KD-Tree)

I modified the Processing 2.0 example CircleCollision to include some mouse interaction and friction.  The code is up on

Edit: I've now tweaked it to include a KD-Tree data structure.  This reduces the number of collision detections per frame down to 1 per particle.  It is computationally cheaper to build the KD-Tree on each iteration than it is to check every particle against every other particle.  The KD-Tree algorithm has been modified from an example by Thomas Diewald.
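
A minimal plain-Java sketch of the KD-Tree idea (my own illustration, not Diewald's implementation): build the tree each frame by median splits on alternating axes, then answer nearest-neighbour queries without comparing every particle against every other.

```java
import java.util.Arrays;
import java.util.Comparator;

// Minimal 2D KD-Tree: O(n log n) build by median split, then
// nearest-neighbour queries that prune whole subtrees.
public class KdSketch {

    static class Node {
        double x, y; Node left, right;
        Node(double x, double y) { this.x = x; this.y = y; }
    }

    // Recursively split the points on alternating axes at the median.
    static Node build(double[][] pts, int lo, int hi, int axis) {
        if (lo >= hi) return null;
        Arrays.sort(pts, lo, hi, Comparator.comparingDouble((double[] p) -> p[axis]));
        int mid = (lo + hi) / 2;
        Node n = new Node(pts[mid][0], pts[mid][1]);
        n.left = build(pts, lo, mid, 1 - axis);
        n.right = build(pts, mid + 1, hi, 1 - axis);
        return n;
    }

    static double bestDist; static Node bestNode;

    static void nearest(Node n, double x, double y, int axis) {
        if (n == null) return;
        double d2 = (n.x - x) * (n.x - x) + (n.y - y) * (n.y - y);
        if (d2 < bestDist) { bestDist = d2; bestNode = n; }
        double delta = (axis == 0 ? x - n.x : y - n.y);
        nearest(delta < 0 ? n.left : n.right, x, y, 1 - axis);
        // Only descend the far side if the splitting plane is close enough.
        if (delta * delta < bestDist)
            nearest(delta < 0 ? n.right : n.left, x, y, 1 - axis);
    }

    static double[] nearestTo(Node root, double x, double y) {
        bestDist = Double.MAX_VALUE; bestNode = null;
        nearest(root, x, y, 0);
        return new double[]{ bestNode.x, bestNode.y };
    }

    public static void main(String[] args) {
        double[][] pts = {{0, 0}, {5, 5}, {9, 1}, {2, 8}};
        Node root = build(pts, 0, pts.length, 0);
        System.out.println(Arrays.toString(nearestTo(root, 8, 0))); // [9.0, 1.0]
    }
}
```

For collision detection you would query each particle for neighbours within its collision radius rather than the single nearest point, but the pruning logic is the same.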

Edit: I've now tweaked it to include Probabilistic collision detection.

You can interact with this one :)

Pasta Machine Prints

Some notes on this process:
  • Wet the paper and leave it to dry until most of the surface water has gone but the paper itself is still damp and flexible.  Use a heavy (thicker) paper that can take some abuse.
  • The plates I used were some scrap copper.  Any hard surface will do. 
  • The pasta machine I have can vary the gap between the lasagne sheet rollers. Even so, I had it set to the widest gap to take the paper, plate, plastic and felt.
  • Acrylic paint dries very fast.  Using an oil paint would give you more time to work.  However, you get very nice clean prints using acrylic as there is no wet residue left on the plate between each print - meaning you can get areas of the print which are just the paper showing through with no other marks.  Oil tends to leave a subtle staining unless your plates are well cleaned between each print.
  • I sandwiched the paper and plate between a piece of polypropylene plastic (the type used for stationery boxes) and felt.  The plastic guides the paper through the rollers.  The felt helps the rollers climb over the edge of the plate.  Without the felt the roller just collides with the lip of the plate edge and drags.
An example print:

Kalman Centroid Algorithm, Swarm Robotics

I'm using this post to share some Processing code I wrote a while ago.  I was asked by David Glowacki of the Danceroom Spectroscopy project if I knew of an algorithm to determine the centre of a group of electronic handheld devices without using an external infrastructure.  I wrote a decentralised algorithm inspired by the Kalman filter but it fell short against other alternatives for the project.  I think the algorithm is interesting though, so here it is, and I'll try to explain it.  I've not checked if a similar algorithm already exists, so I'd welcome any commentary.  If you're really interested, you can play with the Processing sketch, code at the end, or via GitHub.
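
To give a flavour of the problem, here is a sketch of one standard approach to decentralised centroid estimation: consensus averaging.  This is not the Kalman-inspired algorithm from the post, just an illustration of the general idea that devices can converge on the group centre using only neighbour communication.  Positions are 1D for brevity (run the same update per axis for 2D), and the ring neighbourhood is an assumption.

```java
import java.util.Arrays;

// Consensus averaging: each device starts with its own position as its
// estimate of the group centre, then repeatedly averages its estimate
// with its neighbours' estimates.  Equal weights keep the mean invariant,
// so all estimates converge to the true centroid.
public class ConsensusCentroid {

    static double[] estimateCentre(double[] positions, int iterations) {
        double[] est = positions.clone();
        int n = est.length;
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                // Average with the two ring neighbours and self.
                next[i] = (est[(i + n - 1) % n] + est[i] + est[(i + 1) % n]) / 3.0;
            }
            est = next;
        }
        return est;
    }

    public static void main(String[] args) {
        double[] xs = { 0, 2, 4, 10 };   // device positions; true mean = 4
        double[] est = estimateCentre(xs, 200);
        System.out.println(Arrays.toString(est)); // every estimate close to 4.0
    }
}
```

The appeal of this family of algorithms is that no device needs global knowledge or external infrastructure, which matches the constraint of the original brief.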

Textured 3D Printing: How to explore it yourself.

This blog post aims to give the essential details for other people to recreate some research I have done at CFPR.  When I started my employment as a Research Fellow I was given unlimited access to a Rostock Max FDM 3D printer to cut my teeth.  I developed an intimate understanding of the machine, software and materials, and with a playful attitude, I developed unusual surface textures such as this:

DIY Pass F6 Amplifier

Finished Amplifier

General Purpose Float Calculations via GPU / GLSL / Shaders in Processing

Update: These encoding methods look promising but I've still not got a 100% solution.

I've had some success using the GPU to do general purpose calculations in Processing.  Processing now supports shaders, although the documentation is sparse.  This tutorial is worth looking at, and I found the Book of Shaders very useful too, although it is still a work in progress.
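
One widely used encoding trick for this kind of work (an illustration, not necessarily the exact method referred to above) is to pack a float in [0,1) into the four 8-bit RGBA channels of a texture as base-256 digits, and unpack it on the way back.  A plain-Java sketch of the encode/decode pair, which mirrors what you would write in GLSL:

```java
// Pack a float in [0,1) into four 8-bit channels (base-256 digits),
// and decode it back.  Precision is limited to 1/256^4, which is one
// reason a "100% solution" is hard to reach with 8-bit textures.
public class FloatPack {

    static int[] encode(double v) {
        int[] rgba = new int[4];
        for (int i = 0; i < 4; i++) {
            v *= 256.0;
            rgba[i] = (int) v;   // take the next base-256 digit
            v -= rgba[i];        // keep the remainder for the next channel
        }
        return rgba;
    }

    static double decode(int[] rgba) {
        double v = 0, scale = 1.0 / 256.0;
        for (int i = 0; i < 4; i++) {
            v += rgba[i] * scale;
            scale /= 256.0;
        }
        return v;
    }

    public static void main(String[] args) {
        double original = 0.123456789;
        double roundTrip = decode(encode(original));
        System.out.println(Math.abs(original - roundTrip)); // error below 1/256^4
    }
}
```

On the GPU side the same arithmetic is done with fract() and floor(), and the channel values are divided by 255 when written to gl_FragColor, which introduces its own rounding subtleties.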

DIY Planar Magnetic drivers, part 1

I've been putting together some planar magnetic loudspeaker drivers.  There is a wealth of information available at, particularly in this thread.   Inner Fidelity have a nice write up on planar magnetic drivers.  Euwemax documents his DIY planar magnetic drivers for headphones, and he uses etching as his principal method.  I thought I'd attempt to build some using enamelled wire and embroidery hoops.  This post documents my progress so far.  I've made working prototypes, but at the moment the sound is clipping.

DS1307 woes, I2C freezes and locks arduino

I've come across a problem with the DS1307 real time clock module and I thought I'd share my work around solution.

Write SVG files with Processing

It is possible to use Processing's XML functionality to write SVG files.  Here is a simple function that will save a bezier curve in SVG format with the call writeXML("mySVG.svg");  It will create an SVG file with the bare minimum of data.  Hopefully you can extrapolate how to draw multiple shapes, and it would also be of benefit to look at a specification for SVG paths and other elements (such as this one):
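
To show the shape of the output, here is a plain-Java sketch that builds the same bare-minimum SVG (Processing's XML class wraps equivalent string building).  The curve coordinates are arbitrary example values; in SVG path syntax "M" moves to the start point and "C" draws a cubic bezier through two control points.

```java
import java.io.FileWriter;
import java.io.IOException;

// Write a bare-minimum SVG file containing a single cubic bezier curve.
public class MiniSvg {

    static String svgWithBezier(double x1, double y1, double cx1, double cy1,
                                double cx2, double cy2, double x2, double y2) {
        // "M x y" = move to start; "C cx1 cy1, cx2 cy2, x y" = cubic bezier to end.
        String path = "M " + x1 + " " + y1 + " C " + cx1 + " " + cy1 + ", "
                    + cx2 + " " + cy2 + ", " + x2 + " " + y2;
        return "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"100\" height=\"100\">\n"
             + "  <path d=\"" + path + "\" fill=\"none\" stroke=\"black\"/>\n"
             + "</svg>\n";
    }

    public static void main(String[] args) throws IOException {
        try (FileWriter out = new FileWriter("mySVG.svg")) {
            out.write(svgWithBezier(10, 80, 40, 10, 65, 10, 95, 80));
        }
    }
}
```

The resulting file opens in any browser or vector editor; adding more shapes is just a matter of emitting more path (or rect, circle, etc.) elements before the closing svg tag.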

Threads in Processing

Threads in Processing are very simple and easy to use.  So simple, in fact, that there are a few things to make note of.  Threads are useful if you want to do some background work, or if you want to display something like a loading screen whilst you initially sort out some data.  Using threads helps you to do some computation outside of the draw() loop that otherwise dominates a Processing sketch.  Take a look at the quick sketch below:
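
A plain-Java version of the pattern (my own illustration, not the original Processing sketch): a background thread does the slow setup work while the main loop, standing in for draw(), keeps running and polls a flag.

```java
// Background-loading pattern: a worker thread sets a flag when done,
// while the main loop stays responsive (e.g. drawing a loading screen).
public class BackgroundLoad {

    static volatile boolean dataReady = false;  // volatile so the main loop sees the update

    static void loadData() {
        try {
            Thread.sleep(100);                  // stand-in for slow work (file parsing etc.)
        } catch (InterruptedException ignored) {}
        dataReady = true;
    }

    public static void main(String[] args) {
        // Processing's thread("loadData") call does essentially this:
        new Thread(BackgroundLoad::loadData).start();
        while (!dataReady) {
            // draw() equivalent: show a loading screen, stay responsive.
            try { Thread.sleep(10); } catch (InterruptedException ignored) {}
        }
        System.out.println("data ready");
    }
}
```

The things to make note of are the usual ones: mark shared flags volatile (or synchronise), and never have the background thread touch the drawing surface directly.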


Miranda showed me how to make mono-prints using an old clothes mangle.  Good fun, and messy!  We applied oil inks with a roller, as well as using rags, card, fingers, pencils, a toothbrush, a pin, and splashed on some turpentine from time to time.

uFonken and Amp Camp Amp

Over the summer of 2014 Elisabeth came and I helped her to build a set of uFonken bass reflex speakers (the design is available here) and another Amp Camp Amp, this time using printed circuit boards.

Amp Camp Amp Version 1

I decided to have a go at building my first amplifier.  The Amp Camp Amp by Nelson Pass has a low part count, and is designed with highly sensitive drivers in mind, like the Fostex 206EN I own.  The schematic and theory are freely available here.

I ordered the components, a mix from RS Online and eBay.  I built the circuits on copper stripboard.  I had planned all along to dissipate the 50W of heat using a CPU heatsink.  This one is a Zalman heatsink, I think.  It is rated to 90W dissipation with the fan active.  Conveniently it has a metal bracket with holes to attach to a motherboard, which means I could bolt it down to a case with ease.  I used a DC-DC step down circuit from eBay to step down the 19VDC switch mode supply to 5VDC to (under)drive the cooling fan.

I find the heatsink aesthetically pleasing:

Making a box:

Clamping, gluing:

I spent a while deciding on a hole arrangement for the various connectors:

Found some scrap metal to act as a heat spreader beneath the MOSFETS, which are to be sandwiched between the box and the CPU heatsink:

A more finished box:

Cutting a hole in the top for the CPU heatsink to peek out of:


Stained and varnished the box with a rustic look (intentional, honest):

Placing the MOSFETS, prior to clamping down with the heatsink:

Used kapton tape to electrically insulate the heatsink from the gate of each MOSFET:

The finished prototype, eek spaghetti wiring, was too excited to tidy:

A satisfying connector arrangement I think:

Very impressed with this amplifier.  Only 5W of audio power per channel, but maximum volume is unsocial.   Completely destroyed my understanding of audio power: