Tuesday, October 16, 2012

SOCIS - part 10

In the previous post I mentioned that my mentor had updated the POT so that the discontinuity on the t variable is moved away from the initial -1/1 point to -sqrt(2)/2. I therefore added a new metric to the metrics-computing program that counts the number of crossings over this new discontinuity. What I noticed was that the number of discontinuities of this kind is almost the same as the number of -1/1 jumps, and that average smoothing takes care of both. The results can be found here [1, 2].
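
In essence, the check behind the new metric is the following (a Python sketch, not the actual metrics program; the function name and the min_jump threshold are illustrative):

import numpy as np

def count_crossings(t, boundary=-np.sqrt(2) / 2, min_jump=0.5):
    # t: 2-D array of t values, one row per image line.
    # A crossing is counted where two consecutive lines sit on opposite
    # sides of `boundary` and the jump is large enough to look like a
    # wrap rather than smooth drift (min_jump is an assumed threshold).
    straddles = (t[:-1] - boundary) * (t[1:] - boundary) < 0
    big_jump = np.abs(t[1:] - t[:-1]) > min_jump
    return int(np.count_nonzero(straddles & big_jump))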

Into this new POT I incorporated average smoothing and tested whether the new results were indeed better than the old ones. They were more or less the same: while it is not noticeable on the SNR plots below, the new implementation is slightly better (the SNR values are larger, but not by much).

[SNR plots comparing the new implementation with the previous one]

My latest attempt at incorporating resets into the average smoothing implementation is showing promise. I no longer get discontinuities as in previous attempts, but the image does get blurry again.
The idea of deprecating previous covariance values led me to think about deprecating previous average values. First of all, why do we need the resets? Consider an image in which the upper half is water and the lower half is land: this averaging method will specialize in smoothing the upper part and perform badly on the lower part. This is because, as the procedure progresses line by line, the average is built from similar values of a certain region (in this case, the upper half, which is water). But when we switch to a different region (the lower half, which is land), the average constructed from the 'water values' is no longer relevant. For this reason, previous values must be deprecated over time.
My idea was to keep two averages, which we shall name currentAverage and newAverage. The actual value used for smoothing is a weighted sum of the two, namely (1 - w) * currentAverage + w * newAverage, where 0 <= w <= 1. If we divide the image into blocks of height NUM_LINES, then w = (currentLine mod NUM_LINES) / NUM_LINES. Thus w increases as we approach the block's last line, meaning that the bias towards currentAverage decreases while the bias towards newAverage increases. currentAverage is the average of all values from the current block and the previous block, while newAverage is the average of only the values in the current block. The 'algorithm' is as follows:

for each line
    if currentLine mod NUM_LINES == 0
        switch(currentAverage, newAverage) // interchange the two averages
        reset(newAverage)
    update both currentAverage and newAverage with the current line
    averageToUse = (1 - w) * currentAverage + w * newAverage

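To make this concrete, here is a minimal Python sketch of the scheme (illustrative only, not the project code), assuming the t values arrive one line at a time as NumPy vectors:

import numpy as np

NUM_LINES = 64  # block height; an assumed value, to be tuned per image

def two_average_smooth(t_lines):
    # t_lines: iterable of 1-D arrays, one vector of t values per line.
    # Yields the average to use for each line. currentAverage covers the
    # current and previous blocks, newAverage only the current block.
    current_sum, current_n = None, 0
    new_sum, new_n = 0, 0
    for i, t in enumerate(t_lines):
        if i % NUM_LINES == 0:
            # interchange the two averages, then reset newAverage
            current_sum, current_n = new_sum, new_n
            if current_n == 0:                 # very first block
                current_sum = np.zeros_like(t)
            new_sum, new_n = 0, 0
        # update both averages with the current line
        current_sum = current_sum + t
        current_n += 1
        new_sum = new_sum + t
        new_n += 1
        w = (i % NUM_LINES) / NUM_LINES
        yield (1 - w) * (current_sum / current_n) + w * (new_sum / new_n)
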
For the image I've been using in my tests (the Mount St. Helens image), I've noticed that increasing NUM_LINES leads to better results, in the sense that we obtain smoother rotation and mean values. This makes sense for this image, as increasing NUM_LINES makes the method resemble 'traditional' average smoothing more and more. However, this image is roughly the same everywhere (in keeping with the previous analogy, we have land everywhere). The images I obtained after smoothing are blurry, but discontinuity-free. You can see one here [3], along with the corresponding means and rotations here [4, 5].

The problem of taking the sign change into account in the lossless case still remains. I still have implementation issues here: while my attempt seems correct to me, the results disagree (discontinuities). I'm still in the process of understanding why this happens.

Tuesday, October 9, 2012

SOCIS - part 9

Not much progress to report this week, unfortunately. College has started, and this has severely affected my work. Apart from that, I've hit a few roadblocks.

I've tried repeatedly to somehow mix average smoothing and simple smoothing, but I either end up with discontinuities or reduce the method to just simple smoothing. The problem is that after a reset all information about previous lines is gone, and average smoothing has to start over as if it were a new image. I tried replacing the tCurrent values in the average computation with 0.984 * tPrev + 0.016 * tCurrent, but the results are pretty much the same. I also tried the following: obtain tNew from average smoothing and, if tNew differs from tPrev by more than 10%, smooth this tNew using simple smoothing (with resets after N lines, of course). This has shown slightly better results, but discontinuities still appear, and the result is blurry, like the simple smoothing approach. We see this here:

[resulting image - blurry, with remaining discontinuities]
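
For reference, the 10% fallback rule described above could be written like this (a sketch; I reuse the 0.984/0.016 weights mentioned earlier as stand-in simple-smoothing weights):

import numpy as np

def hybrid_smooth(t_avg, t_prev, rel_threshold=0.10, alpha=0.984):
    # t_avg:  this line's t values after average smoothing.
    # t_prev: the previous line's (already smoothed) t values.
    # Where the average-smoothed value strays more than rel_threshold
    # from the previous line, fall back to simple smoothing; elsewhere
    # the average-smoothed value is kept as is.
    rel_diff = np.abs(t_avg - t_prev) / np.maximum(np.abs(t_prev), 1e-12)
    fallback = alpha * t_prev + (1 - alpha) * t_avg
    return np.where(rel_diff > rel_threshold, fallback, t_avg)
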
I'm still exploring ways of getting a better result once resets are taken into account. My mentor suggested a reset on the covariance by "deprecating" the old values so that they weigh less than the new ones; I shall try this approach. I was also thinking about saving the averages to disk each time. This doesn't add to the overall complexity, since it's a simple matter of writing a line of averages after each line of the image, but it does add to the running time, since I/O operations are time-consuming.
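
One way to read the "deprecating" suggestion is exponential forgetting: scale the accumulated covariance by a factor slightly below 1 at every line, so that old contributions decay instead of needing a hard reset. A sketch (the DECAY value and the names are placeholders, not my mentor's exact proposal):

import numpy as np

DECAY = 0.95  # assumed per-line deprecation factor, slightly below 1

def update_covariance(cov_acc, weight_acc, x_line, y_line):
    # cov_acc, weight_acc: decayed per-column accumulators.
    # x_line, y_line: the two components' samples for the current line
    # (assumed zero-mean here, for brevity).
    # Each new line shrinks the old contributions by DECAY, so stale
    # history dies out within a few multiples of 1 / (1 - DECAY) lines.
    cov_acc = DECAY * cov_acc + x_line * y_line
    weight_acc = DECAY * weight_acc + 1.0
    return cov_acc, weight_acc, cov_acc / weight_acc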

My mentor has updated the POT so that the discontinuity on the t variable is moved away from the initial -1/1 wrap-around point to -sqrt(2)/2. This, in combination with average smoothing, should yield better results, though probably not noticeably so on this image.

Other than that, there have been some implementation issues with the lossless case. The idea was to add a multiplication by s (which is either -1 or 1) to the lifting network for the second component (see the POT article mentioned in my first post). There have been some problems with this, but hopefully I will solve them and explain in more detail in my next post.
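
For context, a generic reversible lifting rotation with the sign multiplication slotted in might look like the sketch below (the POT paper's exact lifting factorization may differ; this is only to show where s enters and why it is trivially invertible):

import math

def lifting_rotate(x, y, theta, s):
    # Three reversible integer lifting steps approximating a rotation
    # by theta, with the multiplication by s (-1 or +1) applied to the
    # second component at the end. Theta near 0 would need
    # special-casing, since sin(theta) appears as a denominator.
    p = (math.cos(theta) - 1.0) / math.sin(theta)
    u = math.sin(theta)
    x += int(round(p * y))
    y += int(round(u * x))
    x += int(round(p * y))
    return x, s * y

def lifting_rotate_inverse(x, y, theta, s):
    y *= s  # undo the sign flip first (s * s == 1)
    p = (math.cos(theta) - 1.0) / math.sin(theta)
    u = math.sin(theta)
    x -= int(round(p * y))
    y -= int(round(u * x))
    x -= int(round(p * y))
    return x, y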

Tuesday, October 2, 2012

SOCIS - part 8

As mentioned in the previous post, I will start with the correct SNR plots comparing average smoothing with simple smoothing, the original POT, and the single-line POT:

[SNR plot: BIFR]

[SNR plot: Reverse waterfill]

The results seem quite good. We shall see soon if this holds for other images as well.

Early on, I mentioned that the smoothing solution would have to be simple and resilient to resets (which tend to happen on satellites). Average smoothing is vulnerable to resets because it relies on information from all previous lines, in contrast with the simple smoothing approach, which only requires information from the previous line. In terms of memory used, the two techniques behave similarly:

For simple POT smoothing, for each tCurrent (mCurrent) I have to retain tPrev (mPrev) in order to compute tNew (mNew). So, for each line, I have to store the previous line => O(z) memory.
In the average smoothing approach, for each tCurrent (mCurrent) I have to retain the average of the previous t values (mean values), which is updated at each line in constant time, in order to compute tNew (mNew). That is, for each line I retain a vector of averages of the previous values => O(z) memory.
So in terms of memory the two behave the same. However, if there is a reset, simple smoothing isn't affected much, since after the next line it will again have a vector of previous values. The average smoothing approach, on the other hand, loses all information about the previous lines and has to start over from the line where the reset happened.
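
To make the average smoothing side concrete: the average can be kept as a per-column sum plus a line counter, so each update is constant time per value and memory stays O(z); the reset method makes explicit what is lost when the satellite resets (a sketch, with illustrative names):

import numpy as np

class RunningAverage:
    # Per-column running average over all lines seen so far:
    # O(z) memory for z columns, constant-time update per value.
    def __init__(self, z):
        self.sum = np.zeros(z)
        self.n = 0

    def update(self, line):        # line: 1-D array of length z
        self.sum += line
        self.n += 1
        return self.sum / self.n   # the averages used for smoothing

    def reset(self):
        # after an on-board reset, every bit of history is gone
        self.sum[:] = 0.0
        self.n = 0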

It would thus be ideal if we could combine the two. This is what I've been working on this week. Unfortunately, it's not an easy task: after trying many variations, discontinuities appear whenever we switch from simple smoothing to average smoothing. I think this happens because average smoothing uses the averages of the tCurrent values when smoothing, and these don't 'fit' after a stretch of simple smoothing. Here is how the t parameter looks:
[plot of the t parameter]
In this example I used a block size of 400 lines, applying average smoothing for the first 256 lines and simple smoothing for the rest; after 400 lines the procedure repeats. The discontinuities appear around multiples of 400, indicating that the problem arises when starting a new block.
I will try to improve this, using something other than tCurrent in the average computation.

In parallel, I will also be working on the lossless case, which has been ignored so far; everything we've talked about until now has been related to the lossy implementation. So far, I've rendered the image after a simple application of the lossless POT, which can be found here [1]. More on this topic will be covered in the next blog post.