Friday, June 26, 2009

 

reliably repeatable positioning

First off, I'm writing from my olpc, which does not have the most typist-friendly of keyboards. Please excuse any typos.

I've been fascinated for some time by the idea of using cheap optical mouse innards to augment or replace RepRaps' existing methods of determining tool location. The main problem with existing designs doesn't seem to be accuracy (a 1.8 degree stepper, half-stepping, with a 10 turn-per-inch lead screw achieves 1/4000th of an inch per step granularity), but repeatability. After hundreds of movements in various directions, how can we ensure we are in _exactly_ the same place as before?
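If you want to check that figure, the arithmetic is simple enough (a quick Python scrap, purely illustrative):

full_steps_per_rev = 360 / 1.8          # a 1.8 degree stepper: 200 full steps/rev
half_steps_per_rev = full_steps_per_rev * 2   # half-stepping: 400 positions/rev
turns_per_inch = 10                     # 10 TPI lead screw
steps_per_inch = half_steps_per_rev * turns_per_inch   # 4000 steps per inch
print(1 / steps_per_inch)               # 0.00025 inch, i.e. 1/4000th of an inch per step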

Steppers and servos go a long way toward solving the repeatability problem, but both are expensive.

A few days ago, I read a charming article about a simple binary pattern used by some pediatricians (via stadiometers) to measure height. (1) The pattern is called the Gray code. It's essentially a highly accurate, fault-tolerant way of encoding position into a black-and-white pattern, easily printed on an axis (or a sheet of paper later glued to an axis), and it only requires an optical sensor with a few pixels of resolution to read - simple enough for even our lowly ATmega microcontrollers to do the required image processing in real time.
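To make the encoding concrete, here's a tiny sketch of how a position maps to and from Gray code (plain Python for illustration, not the microcontroller code itself):

def to_gray(n):
    # adjacent positions differ in exactly one bit
    return n ^ (n >> 1)

def from_gray(g):
    # undo the XOR cascade to recover the position index
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# a sensor that straddles a transition can only ever be off by one position,
# never wildly wrong, because neighbouring codes differ in a single bit
for pos in range(8):
    print(pos, format(to_gray(pos), '03b'), from_gray(to_gray(pos)))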

In short, it's a printable pattern perfectly suited to granting an optical mouse repeatability. Numerous people have extracted the graphical data from an optical mouse's image sensor... see: http://www.contrib.andrew.cmu.edu/~ttrutna/16-264/Vision_Project/

A 600 dpi laser printer could print a Gray-code pattern with 0.01 inch granularity, with each binary digit of the pattern encompassing a 6x6 dot square - 36 dots. Just an educated guess, but that seems like it should be granular enough for an optical mouse image sensor (which can see about 32x32 pixels at a time) to resolve.
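Working those numbers through (my arithmetic; the magnification figure is an assumption, not a measurement):

dpi = 600                       # laser printer resolution
dots_per_bit = 6                # each Gray-code bit cell printed 6 dots wide
bit_cell_inch = dots_per_bit / dpi        # = 0.01 inch per bit cell

sensor_pixels = 32              # a mouse sensor frame is roughly 32x32 pixels
pixels_per_cell = 4             # assumed optics/magnification, illustrative only
cells_in_view = sensor_pixels // pixels_per_cell
print(bit_cell_inch, cells_in_view)       # 0.01 inch cells, ~8 cells visible per frame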

Thoughts?

1: http://blog.plover.com/

Comments:
The particular article which footnote 1 refers to is: Gray code at the pediatrician's office

Blogs aren't static!
 
Sorry about that, a consequence of me wanting to get pen to paper, so to speak, while not having slept for 20+ hours. Thank you for spotting my mistake!
 
are you suggesting that we use this as a supplement to steppers, or use normal DC motors and this instead of steppers? I was under the impression that normal motors wouldn't be able to start and stop accurately enough.
 
Most modern inkjet printers use an optical strip encoder just like that. It's interesting that the older printers used steppers, and now they've moved to optical servo designs.

Having messed with feedback loops on my extruder, I can say that programming a servo motion controller is a bit tougher than an open-loop stepper controller. I'm moving towards all open-loop steppers for now, but in the future I could see something like this being useful, especially for cost reductions.

Right now, it could be used to detect problems - the only time my steppers get out of sync is when I've done something stupid and run the extruder head into something I've already built. A closed-loop system with an optical encoder would throw an error right away and prevent further damage. That said, my open-loop system (stepper-driven Darwin) often runs for 5 million steps or more without losing a single step, as long as I don't do something stupid.

Also, keep in mind that the build speed is important - 1.8 deg steppers on leadscrews will give you great resolution, but your build speed will be very, very slow.
 
About 2 years ago, when I first started reading about RepRap, I looked at using optical servo positioning and bought 4 Lexmark printers at ASDA/Walmart in a clearout sale at £16 each (it was an ink-cartridge-with-a-free-printer deal). Lexmark use ARM processors, and I was working with ARM processors at the time, so I stripped 3 for the bits; the other one my youngest daughter is using.
Shortly after, the Arduino appeared on RepRap, offering much less R&D with the promise of instant gratification, so they were put to one side for a future project.

Having read the forums, there is also the possibility of not re-inventing the wheel and using a printer positioning system as-is. Someone in the forums used an HP printer with water-filled cartridges and plaster of Paris to print simple 3D blocks. Following this idea for a proof of concept would make the driver software considerably more complex, as you would be driving 3 USB printers, i.e. one for each axis, however it could avoid vast amounts of R&D on optical servo control.
 
The pain of the servo control design is a one-off thing.

Thereafter, the same design should be reusable, only needing loop tuning.

Vik Olliver has already done a PIC servo controller design that was on the RepRap site, and this could be used as a stepping stone for a next iteration.
 
You reminded me of Anoto dots http://www.anoto.com/?id=906. They are ridiculously simple to make and can be printed on a piece of paper. Using a cheap camera (mouser.com has them for $1.56-$3.14) on white paper, you could get accurate tracking for less than $10.

Furthermore, you can use a free program like Context Free Art http://www.contextfreeart.org and this source code,

startshape dotpage

// tile the page: 100 rows of 100 dot groups, stepping 0.2 units each time
rule dotpage {
  100* {y .2} {
    100* {x .2} dotgroup {}
  }
}

rule dotgroup {
  DOT {x .2 y .2}
  DOT {x .2 y .3}
  DOT {x .3 y .2}
  DOT {x .3 y .3}
}

// four equally weighted DOT rules: Context Free picks one at random each time,
// so every dot ends up nudged in one of four directions (Anoto-style)
rule DOT {
  CIRCLE {size 0.01 x 0.01}
}

rule DOT {
  CIRCLE {size 0.01 x -0.01}
}

rule DOT {
  CIRCLE {size 0.01 y 0.01}
}

rule DOT {
  CIRCLE {size 0.01 y -0.01}
}

right now. (work is a bit slow at the moment)
 
Gray code has been used for many, many years with a great deal of success. To say it is fault tolerant is not actually correct; it is transition tolerant, because only one bit changes at a time. Unfortunately, Gray code reads data in parallel, and the number of bits soon starts adding up, resulting in a wide strip of data.

The strips used in HP printers are relative, and only need 1 bit. In reality, though, you need 2 bits, one slightly out of phase with the other in a quadrature-encoded signal; this gives you both direction and count. The second bit can be created by using 2 masks 90 degrees out of phase and a pair of optical pickups. Masks and rulers are printable on transparency using a laser printer.
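For illustration, the usual quadrature decode is just a small state table (a Python sketch of the logic, not firmware):

# Two pickups A and B, 90 degrees out of phase. Each valid (previous, current)
# transition of the pair bumps the count by +1 or -1; anything else counts 0.
STEP = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

def count(samples):
    samples = list(samples)            # sequence of (A, B) readings
    total = 0
    for (pa, pb), (a, b) in zip(samples, samples[1:]):
        total += STEP.get((pa, pb, a, b), 0)
    return total

# one full forward cycle gives +4 counts, the same cycle reversed gives -4:
forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(count(forward), count(list(reversed(forward))))   # 4 -4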

Now, for accurate encoding, I have been toying with an idea for a number of years: use a simple camera to read barcode-like data on a strip.
The barcode (it can be any binary representation) stores the absolute position of the barcode. The position of the barcode, in pixels on the sensor, gives you the relative position that provides the resolution.
Now you have a system that gives you high resolution and absolute positioning using a serial bit stream. The challenge will be reading it quickly enough to control moving systems. Analogue cameras are cheap and come with lenses attached, with a so-so horizontal scan rate of roughly 15 kHz, but they have dead time during vertical refresh. Alternatively, linear sensors, like those used in scanners and faxes, are fast and read the same line over and over, but don't come with small and cheap lens assemblies, as the sensors are rather large.
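Roughly, combining the coarse and fine readings would look like this (an illustrative Python sketch; the barcode decoding itself - the hard part - is assumed to have already produced the inputs):

def absolute_position(coarse_index, edge_px, cell_width_px, cell_pitch_mm):
    # coarse_index  -- absolute cell number decoded from the barcode's bit pattern
    # edge_px       -- pixel column where that cell's leading edge falls on the sensor
    # cell_width_px -- how many sensor pixels one printed cell spans (assumed known)
    # cell_pitch_mm -- physical width of one printed cell on the strip
    coarse_mm = coarse_index * cell_pitch_mm              # low-resolution, absolute
    fine_mm = (edge_px / cell_width_px) * cell_pitch_mm   # sub-cell interpolation
    return coarse_mm + fine_mm   # sign of the fine term depends on mounting direction

# e.g. cell 1234 with its edge 3 pixels into a 12-pixel-wide cell, 0.5 mm cells:
print(absolute_position(1234, 3, 12, 0.5))   # 617.125 mm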

I hope this idea will inspire someone out there to go play, and share the results.
 
Gray code would be good for linear cameras (for instance, a camera with a resolution of 1x32). For a square camera (with a resolution of 32x32), I suspect that a different pattern would work better.

If you use large dots, so that the camera can see an 8x8 pattern of dots at any time, then you can use an 8-bit Gray code, which means you can distinguish between 256 different positions. On the other hand, if you just used a totally random dot pattern, then each 8x8 sub-block will be unique with very high probability along a strip as long as you want, so that you can distinguish far more than 256 positions.
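A quick experiment bears that out (illustrative Python, a random strip rather than a real print):

import random

random.seed(1)
height, length = 8, 4096          # an 8-dot-tall random strip, 4096 dots long
strip = [[random.randint(0, 1) for _ in range(length)] for _ in range(height)]

seen = set()
collisions = 0
for x in range(length - 8 + 1):
    window = tuple(tuple(row[x:x + 8]) for row in strip)   # the 8x8 view, 64 bits
    if window in seen:
        collisions += 1
    seen.add(window)

# with 2^64 possible windows and only ~4000 of them on the strip, a repeat is
# essentially impossible, so every 8x8 view pins down a unique absolute position
print(collisions)                 # expected: 0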
 
You know, the read heads used with the grey encoder strips in laser printers (5 stripes per mm) do have two sensors spaced to be nicely in quadrature. This means 20 counts per mm, which would do nicely for RepRap.
 
Good to see someone else working on this.

Gray code is the standard output for rotary and linear encoders. I don't think they put Gray code on the scale; they just output it in Gray code.

They have two detectors in two positions on the scale, which naturally produce Gray code. A mouse chip has a bunch of detectors, so you could simply use 2 pixels to receive Gray code.

Microstepping a stepper motor does nothing to improve the accuracy of a stepper motor. If a motor has 1.8 degree steps, its accuracy is 1.8 degrees, regardless of whether you use microstepping or not. Microstepping increases resolution, but with no increase in accuracy.

You can improve accuracy by having a fairly powerful stepper motor and low friction, which Repraps have. For 3D printing it works, sort of, but that is not usable in other applications. Anyway, it is extremely wasteful of electricity.

With a little work, someone could create a new driver for the stepper motors which makes them act like regular servo motors, which might allow the old design to be updated, if there is a way to mount encoders on them. I don't know if that is worth the effort, though.

I'm working on the mouse encoder idea as well, although I may not get to it for quite a while. I am designing a whole new machine.

Our mouse encoder will have around 0.0001 inch resolution (0.0025 mm, I believe) and operate at over 200 inches per minute (if I remember right). These are the target goals for my machine, and the mouse encoder, as well as everything else on the machine, is barely able to do it.

This is way out of the range of what an Arduino can do, though an Arduino could test the idea at a much lower speed.

A fairly cheap FPGA could easily handle all the encoders on a machine, but a very fast MCU might not even handle one encoder.
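Putting rough numbers on that (my arithmetic from the figures above):

resolution_inch = 0.0001          # target encoder resolution
speed_inch_per_min = 200          # target travel speed
speed_inch_per_sec = speed_inch_per_min / 60

updates_per_sec = speed_inch_per_sec / resolution_inch
print(round(updates_per_sec))     # ~33333 position increments per second, per axis

Tens of thousands of increments per second per axis, with the image correlation on top of that, is comfortable territory for an FPGA but a real stretch for an 8-bit MCU.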

If you'd like to help out on my project, e-mail me at tony at conceivia period com

Tony
 
My apologies Timothy.
On re-reading your blog it seems I missed the plot. You are suggesting using a high speed camera to read the Gray code, all bits in parallel.
It seems the only value my post might add is using the pixel location of the binary identifier as a relative fine adjust.
 
I've managed to do 2D positioning with a cheap webcam. It was a bit slow with the image processing done in Python, but accurate to ~0.002 mm over a 15 cm x 15 cm area. I'll write up a description and take some photos when I have a bit of time.
 