Shooting in All Directions

Chet and I get together for the first time in 2007, to some useful effect. Also I try to provide a bit more insight into what we’re doing, and not doing, and why.

A Customer Test

Chet was very hot for a particular customer test, and it took a long time to get it through my head and to figure out something that might conceivably work. He was thinking that as customer, he wanted to look at an actual target with holes in it, and say where the center of the pattern was, and report to the client that the center of pattern is so many inches above and so many to the right of the aiming point.

So he wanted an acceptance test where he would input where the center of the pattern was, and the test would see whether the program’s center of pattern was “close enough”. I couldn’t figure out how it was that he was going to provide an exact point for the center, and I was wondering how he was going to provide the data and how the test would work, and so on. So we were confused for much of an hour, at least the part where we weren’t talking about cars or people in the coffee shop.

But finally I understood what he wanted. In my mind it came down to a simple FitNesse test, with inputs being the picture of the target dots, and the X and Y coordinates of the center of the area where he thought the center was. This was made a bit harder because he didn’t bring the target paper, so we worked from the pictures we had, mostly that nice output pic with the red dots and green oval. So we wrote a very simple FitNesse test based on the pretty JPG:

|!-com.hendricksonxp.patterning.fitnesse.CenterOfPatternFixture-!|
|inputFileName|x() | y() |
|PB270011.bmp|5|8|

We based our x and y estimates on eyeballing the picture and seeing where the green oval was, since we trusted our code for the center. Then we implemented the fixture:

package com.hendricksonxp.patterning.fitnesse;

import fit.ColumnFixture;

// Column fixture: for each table row, FitNesse sets inputFileName,
// then calls x() and y() and compares the results to the expected cells.
public class CenterOfPatternFixture extends ColumnFixture {
    final String folder = "..\\Data\\";

    public String inputFileName;

    public int x() {
        ShotPattern pattern = new ShotPattern(folder + inputFileName);
        return pattern.centerOfMassXinches();
    }

    public int y() {
        ShotPattern pattern = new ShotPattern(folder + inputFileName);
        return pattern.centerOfMassYinches();
    }
}

… and the supporting code for X and Y inches, in ShotPattern:

    public int centerOfMassXinches() {
        int rawX = centerOfMass().getX();
        return (int) Math.round(rawX / 51.2);   // 51.2 pixels per inch in X
    }

    public int centerOfMassYinches() {
        int rawY = centerOfMass().getY();
        return (int) Math.round(rawY / 38.4);   // 38.4 pixels per inch in Y
    }

Whence the magic numbers, you’re wondering? Well, the BMP is 2048×1536, and the paper is 40 inches wide, so the X dimension works out to 51.2 pixels per inch and the Y, taking the height as 40 inches as well, to 38.4. We figured we’d scale the real photos more carefully later, but this would get us in the ballpark. So we ran the test:

Whoa! The test came back red. What’s wrong with that?? We did the math by hand, knowing that the true center was at 115,297 according to our JUnit test:

   @Test
    public void useRasterToCreateShotPatternBiggerFile() {
        ShotPattern  shotPattern = new ShotPattern(folder + "PB270011.bmp");
        assertEquals(new Hit(115,297), shotPattern.centerOfMass());
    }

Sure enough, 2,8 is correct, not 5,8 as our eyeballs would have it: 115 / 51.2 rounds to 2, and 297 / 38.4 rounds to 8. But the point of the test was to compare the computer to the customer’s eyeballs. What was up???

Finally Chet realized what had happened. We had this code to draw the green oval shown in the earlier article:

    public RenderedImage patternImage() {
        int width = 2048;
        int height = 1536;

        BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);    
        Graphics2D g2d = bufferedImage.createGraphics();
        g2d.setColor(Color.white);
        g2d.fillRect(0, 0, width, height);
        g2d.translate(1024, 768);
        g2d.scale(1, -1);

        ShotPattern pattern = new ShotPattern(folder+"PB270011.bmp");

        g2d.setColor(Color.GREEN);
        g2d.fillOval(pattern.centerOfMass().getX(), pattern.centerOfMass().getY(), 150, 74);

        g2d.setColor(Color.BLACK);
        g2d.setStroke(new BasicStroke(3));
        g2d.drawLine(-25, 0, 25, 0);
        g2d.drawLine(0,-25,0,25);
        g2d.setColor(Color.RED);
        for (Hit hit: pattern.hits) {
            g2d.fillOval(hit.getX(), hit.getY(), 10, 10);
        }

        g2d.dispose();    
        return bufferedImage;
    }

Yeah, well. fillOval draws an oval whose containing rectangle’s top-left corner is at the given coordinates, not its center. So the oval in the picture was off to the right and low. The height was close enough, but the offset to the right made us estimate the X incorrectly. We redrew the picture and agreed with the computer that the center was at about 2,8. We adjusted the test, and it’s green.

And we adjusted the offending code in the drawing method:

        g2d.setColor(Color.GREEN);
        g2d.fillOval(pattern.centerOfMass().getX() - 75, pattern.centerOfMass().getY() - 37, 150, 74);
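Those -75 and -37 offsets are just half the oval’s width and height, which is exactly the job for a fillOvalCentered helper. We haven’t written it yet; a rough sketch of what it might look like:

    // Sketch only: draw an oval centered on (x, y) instead of anchored at the
    // top-left corner of its bounding box. Width and height are the full oval size.
    private void fillOvalCentered(Graphics2D g2d, int x, int y, int width, int height) {
        g2d.fillOval(x - width / 2, y - height / 2, width, height);
    }

With that in place, the drawing code could just say fillOvalCentered(g2d, pattern.centerOfMass().getX(), pattern.centerOfMass().getY(), 150, 74).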

It’s all running, if not all good. And it was lunch time. There are things to think about, and to clean up.

Retrospecting …

We’re happy that we pushed to get a FitNesse test that pleased Chet as a customer, and happy to have a first-cut conversion from pixels to inches.

Along the way, we experimented with displaying the various pictures right in the FitNesse page, and it turns out that FitNesse just doesn’t want to cooperate on that. We could gin up some fake pictures, perhaps according to some naming convention, but since you can’t control their size, it just didn’t seem to help much. We’re not sure where that will leave us when it comes to customer testing other pictures, but we’ll deal with that when it comes up.

There are a lot of magic numbers in the code just now, and we clearly need a fillOvalCentered method. But we’ll keep our eye on the code and clean up the parts that matter, as we get closer to production pictures instead of the ones we’ve been doing to keep our customer senses informed of what our programmer senses were seeing.
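For the pixel-to-inch conversion, that cleanup might be no more than deriving the scale from the image and paper sizes instead of hard-coding 51.2 and 38.4. A sketch, assuming the 40-inch figure holds for both dimensions:

    // Sketch only: pixels-per-inch derived from the image size (2048x1536)
    // and our working assumption that the target paper spans 40 inches each way.
    private static final double PAPER_INCHES = 40.0;
    private static final double X_PIXELS_PER_INCH = 2048 / PAPER_INCHES; // 51.2
    private static final double Y_PIXELS_PER_INCH = 1536 / PAPER_INCHES; // 38.4

    public int centerOfMassXinches() {
        return (int) Math.round(centerOfMass().getX() / X_PIXELS_PER_INCH);
    }

    public int centerOfMassYinches() {
        return (int) Math.round(centerOfMass().getY() / Y_PIXELS_PER_INCH);
    }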

In addition, we’d better note that our pretty picture that made us feel so good was misleading us a bit as to where the center of the pattern was. That tells us two important things:

  1. We’d better not get too complacent about looking at a displayed picture and pronouncing it OK. It’s possible to accept some pretty big mistakes.
  2. The detailed accuracy of the picture can probably be pretty flexible and still get the message across clearly to the viewer.

So this is kind of a good news / bad news situation. We do need to be careful, but we can also be confident that if we miss a dot or two, everything will probably be OK.

We’ll do more today — it’s 0635 as I write this — and we’ll keep you posted on what happens next. For now, a little discussion of why it’s happening.

The Old Mail Bag …

Kelly Anderson, on the TDD list, came roaring back in the new year with an exhortation that we really need to solve the problem of the paper having 1500 holes in it and the BMP Chet made only having 773. Kelly said:

There is one more step that you must take to know whether or not what you’ve done is sufficient. You need to take the picture that resulted from your fiddling in Photoshop and make it into a binary image suitable for input into the rest of your algorithms. Without the final binary picture, it’s hard to know if what you did really worked.

I replied at length, and we went back and forth a few times. You can check the list archive for the full discussion if you’re that kind of masochist. For the kind who prefers to read it here, here’s the summary:

We do not find “business value” right now in the processing step from paper to BMP file. We have a very solid indication that, given a BMP file with all the holes visible as black dots, we can count them, clump them, pat them and prick them and mark them with a B and put them in the oven for Baby and me. Mostly for me, if they’re tasty.

Kelly got a new Acme Convolver for Christmas a few years back and he is hot to convolve the living daylights out of our JPG, so as to get more like 1500 pixels into the BMP file. Or actually, he wants us to build our own Acme Convolver and do it ourselves. A bit of web searching finds no evidence that convolution will in fact kill vampires and werewolves, but apparently it will.

But we think we don’t need that. We do need a pretty good BMP for processing, but we see that we have spiked through two parts of the problem:

  1. Get a black and white BMP from a piece of paper;
  2. Given a black and white BMP, process it, identifying hits, centers of mass, regional densities, and the like.

It’s true that our process for getting a pretty good BMP is still pretty poor, but we see lots of ways out of that. In rough order of increasing difficulty, we could:

  1. use better paper;
  2. set up the physical picture taking situation better;
  3. use a better camera;
  4. run some existing Photoshop filters before creating the BMP;
  5. switch to processing the JPG directly using a threshold or even a high/low filter, sketched after this list (what is the opposite of a bandpass filter?);
  6. borrow or buy some filtering software;
  7. write some filtering software.
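To give a feel for option 5, here is a rough sketch of the naive sort of threshold filter we have in mind, assuming we read the camera’s JPG with ImageIO; the file names are made up for the example:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch only: turn a photograph into a black-and-white image by calling
// every pixel darker than a cutoff "black" and everything else "white".
public class ThresholdSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage source = ImageIO.read(new File("..\\Data\\PB270011.jpg")); // hypothetical JPG
        BufferedImage binary = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
        int cutoff = 128; // brightness threshold, 0..255; we'd tune this by eye
        for (int x = 0; x < source.getWidth(); x++) {
            for (int y = 0; y < source.getHeight(); y++) {
                int rgb = source.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                int brightness = (r + g + b) / 3;
                binary.setRGB(x, y, brightness < cutoff ? 0x000000 : 0xFFFFFF);
            }
        }
        ImageIO.write(binary, "bmp", new File("..\\Data\\PB270011-threshold.bmp"));
    }
}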

The first few of these clearly need to be done. Holes in the cheap paper heal over, and folds are definitely right out. Chet just went down to his basement, turned on the lights, and snapped a picture of the target hanging on the wall. And he wants a better camera anyway.

Once those steps are in place, we imagine that the simple BMP conversion, however he did it, will get a lot closer to showing all the holes as dots. If it doesn’t, we’ll dig deeper into the bag above.

After all, our current process is picking up half the dots. That might actually be enough, statistically, to produce perfectly good reports. If the process were biased to pick up only the ones on the left or something, I’d worry. But if it’s picking up just the dark ones, it’s quite likely that the distributions we’re interested in are actually OK now.

(We could test that by deep-processing our JPG down to another BMP with more like 1500 pixels on it, running all our tests on that, and comparing the results. Perhaps we’ll do that, if the customer cares. But if the better camera and setup pick up most of the dots without more effort, there will be no need for that step at all.)
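Should we ever do that, the comparison itself could be a test about this small (a sketch; the denser BMP doesn’t exist, so its file name is invented):

    // Sketch only: compare the sparse scan we have against a hypothetical
    // denser scan of the same target. "PB270011-dense.bmp" is a made-up name.
    @Test
    public void sparseAndDenseScansAgreeOnCenter() {
        ShotPattern sparse = new ShotPattern(folder + "PB270011.bmp");
        ShotPattern dense = new ShotPattern(folder + "PB270011-dense.bmp");
        assertEquals(sparse.centerOfMassXinches(), dense.centerOfMassXinches());
        assertEquals(sparse.centerOfMassYinches(), dense.centerOfMassYinches());
    }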

Meanwhile, what really matters to our customer is whether we can produce, from our BMP file, reports and graphs that will appeal enough to the high-end shooting enthusiast that he will spend time and money to get them. If we can show some reports and graphs that shooters express interest in, then setting up the process to create good pictures will be worthwhile … and if shooters won’t spend money for this information, cleaning up the image processing would be wasteful.

Therefore our customer is insisting that we drop the image play, and move into producing results from our existing BMP, which looks enough to him like a real shot pattern.

Kelly suggested that we were really just working on whatever interests us and would make good articles. That’s really not what’s happening, though we do hope that all this makes good theater. What we’re doing is building a product in what we consider a well-balanced style, addressing customer needs and technical needs as well as we can, focusing always on getting as quickly and inexpensively as we can to what the customer wants.

This is a bit tricky because Chet is acting as both customer and developer and I’m acting as both developer and a person with some customer understanding. But that happens on real projects as well, as everyone learns more about all sides of the application and tries to pitch in as best they can. As I put it to Kelly:

The job, as we see it, is to do an adequate job for a reasonable price, not to do a perfect job for a much larger price. It’s a product, not a research program funded by someone with an infinite supply of time and money. Trading off labor and software development is a perfectly reasonable thing to do.

These questions are what make this exercise already interesting in the sense of understanding Agile …

As I understand Agile, one takes stories in value order, with cost in mind, and chooses the ones to do. Our customer does not see that spending time and money on processing a poor snapshot of a wadded up target addresses the product need, which is to process a good photograph of a clean target.

So that’s all the news that’s fit to print for now. Stay tuned and find out what we code up today. I’m excited to find out, myself.
