In my research at CFPR I am interested in how advanced robotics technologies can be used to challenge our idea of what a print is. Perhaps the most common type of printing is InkJet, where tiny dots of ink are sprayed in quite precise patterns onto paper to create machine printed images.
InkJet prints are typically flat images in terms of surface relief. The ink is also sprayed in a very controlled way so that a small number of colours (like cyan, magenta, yellow and black) can interpolate to create the illusion of full colour. This is a hard thing to do, but it is also very well suited to images that are stored as a grid (or table) of pixels, and to a machine that is cheap and reliable to make.
From these sorts of assumptions, I am asking: what if we could enhance a print machine to include an understanding of the substrate (like paper), the medium (like ink or paint), the implement (the way the substrate and medium are brought together), and the dexterity and sensing to work with all that information?
You might imagine this as a painting machine, or a sculpting machine, or any machine which requires quite impressive dexterity, intelligence and knowledge. To me, this is a fascinating robotics problem, requiring embodied sensing, actuation and AI. It might not be a driverless car navigating a hazardous street, but it is a machine attempting to navigate an unreliable world of materials, tools and visual representation.
For me, a key part of this puzzle is the way a print machine moves. It is simply not good enough to scan an image onto a page. Even though traditional print methods typically press a print onto a page (one simple action, repeated), a great deal of dexterous and sensitive work goes into creating the original plate or matrix, as in wood engraving. I imagine the digital file in digital printing as analogous to the traditional plate or matrix. A wood plate beautifully captures the marks made by hand, and I think a digital file could be enhanced to capture something similar. If the machine were to paint, then again, scanning an image onto a page is not good enough; the paint needs to be manipulated.
Is machine-painting printing? I don't know.
In any case, I've been looking for a way to represent movement digitally that translates easily to machine movement. My first thoughts were to look at Bezier curves. These are really appealing because, from a few input numbers, you can calculate any point on a complex curve with infinite resolution.
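To illustrate that appeal, here is a minimal Python sketch of the standard cubic Bezier formula (the control point values are arbitrary examples, not anything from my project): eight numbers define the whole curve, and any value of t in [0, 1] yields a point on it.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Return the (x, y) point on a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    # Standard Bernstein-basis weights for a cubic curve.
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Four control points (eight numbers) define the curve; any t gives a point,
# so the sampling resolution is limited only by how finely you step t.
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))  # → (2.0, 1.5)
```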
I'm interested in an algorithmic approach because, to me, voxels are a data-expensive way to represent 3D objects, and you also lose any understanding of how the elements of the object relate to one another; any relationship information has to be extrapolated. A pixel image is a 2D table of colour information, and in a similar way, you can't read from an image file anything about how the image was composed. To me, an image made of algorithmically defined curves seems data-cheap, and you could also get some things for 'free', such as observing where curves overlap and using a colour model to look at how those coloured paths might combine.
However, Bezier curves have some immediate drawbacks for this application. First, the start and the end of the curve have to be defined, so if you need to link lots of curves into something like the trail of a word, there are lots of numbers to update and check. Secondly, there are extra numbers (the anchors) that do not relate to the character of the curve in an obvious way. The third aspect killed Bezier for me: there is no real representation of how the curve progresses over time - or in other words, how speed changes as the mark is made. Really, Bezier curves are about deriving points, not motion.
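That last drawback is easy to demonstrate: stepping the parameter t in equal increments does not carry you along the curve in equal distances, so t tells you nothing about the speed of the mark. A small Python sketch, with arbitrary example control points:

```python
import math

def bezier_point(t):
    """Point on an example cubic Bezier with fixed, arbitrary control points."""
    p = [(0, 0), (0, 3), (4, 3), (4, 0)]
    weights = ((1 - t)**3, 3 * (1 - t)**2 * t, 3 * (1 - t) * t**2, t**3)
    x = sum(w * q[0] for w, q in zip(weights, p))
    y = sum(w * q[1] for w, q in zip(weights, p))
    return (x, y)

# Distance covered by each of ten equal steps in t:
steps = []
for i in range(10):
    (x0, y0) = bezier_point(i / 10)
    (x1, y1) = bezier_point((i + 1) / 10)
    steps.append(math.hypot(x1 - x0, y1 - y0))

# The step lengths differ noticeably, so equal increments of t
# do not correspond to equal (or even meaningful) speeds.
```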
My current source of material has been models of handwriting and signature recognition. In fact, I've found this area quite fascinating. Something as unique and as representative of a person as a signature can in fact be modelled quite reliably numerically. Furthermore, the models necessarily include the changes of speed involved in the gesture of making the mark on the page. Apparently, when we look at a signature, at the variation of the ink as it travels across the page, we can intuitively discern the type of motion that was required to create the mark. In hindsight, none of this is surprising; it is (or has been) essential for banking.
I've found the following on handwriting models to be good reading:
- An Oscillation Theory of Handwriting
- The Relationship Between Linear Extent and Velocity in Drawing Movements
- The Law Relating the Kinematic and Figural Aspects of Drawing Movements
- A Multi-Level Representation Paradigm for Handwriting Stroke Generation
- The Generation of Handwriting with Delta-Lognormal Synergies
Of these, the last, Plamondon's model, was the most interesting because of its relative simplicity and because it appears thoroughly empirically investigated. That paper also specifically describes the math involved, provides a table of input data, and provides a graphic of the result - the perfect combination for recreating someone's work. I love the idea that something as intuitive, meaningful and ancient as mark making can be investigated with some numbers and computation.
The 'simple' bit that I like is the representation. It has these characteristics:
- The representation looks like a small set of numbers.
- These numbers create a curve, not a single ambiguous 'pixel'.
- Marks are represented with a sense of direction and strength.
- Marks are given a time at which they occur.
- The change in velocity for each mark is generalised by a graph (that appears to have good grounding in studies of real people writing and their muscle activation).
- You can create as many of these marks as you like, and they are simply summed to produce one fluid 'word' in joined-up writing.
- The movement points are generated with a spacing along the curve that is proportional to the velocity change.
- The whole thing is algorithmic, meaning we can trivially increase the resolution when generating coordinates to operate a machine.
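To give a flavour of the core idea, here is a minimal Python sketch of a single stroke's speed profile in the delta-lognormal style: an agonist lognormal impulse minus an antagonist one, both launched at time t0. Note that the parameter names and values here are my own illustrative guesses, not figures from Plamondon's published table.

```python
import math

def lognormal(t, t0, mu, sigma):
    """Lognormal impulse response starting at time t0 (zero before t0)."""
    if t <= t0:
        return 0.0
    x = t - t0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def stroke_speed(t, t0, D1, mu1, sigma1, D2, mu2, sigma2):
    """Delta-lognormal speed: an agonist burst (D1) minus an antagonist burst (D2)."""
    return D1 * lognormal(t, t0, mu1, sigma1) - D2 * lognormal(t, t0, mu2, sigma2)

# Illustrative parameters (assumed, not from the paper): a strong agonist
# command and a weaker, slower antagonist command.
params = dict(t0=0.0, D1=1.0, mu1=-1.5, sigma1=0.3, D2=0.2, mu2=-1.0, sigma2=0.3)

# Sample the speed profile over one second; because the model is algorithmic,
# raising the resolution is just a matter of taking more samples.
profile = [stroke_speed(t / 100.0, **params) for t in range(1, 101)]
```

Summing several such strokes, each with its own launch time and direction, is what produces the fluid trajectory of a whole word.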
I've translated Plamondon's work into a Processing sketch. You can find this on GitHub. I've done my best to comment the workings. It would also be worth taking a look at the lognormal distribution, and at Plamondon's publications.