Category Archives: Programming

eFail!

This is kind of hilarious…

That brings us back to last week, and the release of Efail. The hack is simple and brilliant: It uses the fact that your email client thinks it’s a web browser. An attacker sending mail can steal the content of secret messages you may have sent or received. It works like this: An email client running OpenPGP (the current standard of PGP) or S/MIME decrypts messages when it receives them, and since the clients are also web browsers, they fetch things from the web for displaying them to you in the email you open at the same time. So what if you happened to open an email, which decrypts whatever message it may have inside, even a hidden one, while the same email also tells your email client to fetch an image off the web whose name is now the entire contents of a message it just decrypted? It would just do it, invisibly, sending the now easily readable message anywhere on the net without you ever knowing it happened. Sure, an image named “Meet me at the park on Sunday at 3 a.m. and we’ll make plans from there come alone.jpg” would never load on your screen, but you’ll have invisibly asked for it, and that ask will now be recorded in whatever computer out there the person who sent the mail wanted it recorded on. And that mail could have just as easily said it was from your spouse or boss as God or Santa Claus.

Email is fundamentally broken.

Alas. I like email.

Source: Email Hackers Are Winning – The Atlantic

Throw Them All Away

Schneier on Spectre and Meltdown:

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution — which of course is not a solution — is to throw them all away and buy new ones.

“Throw it away and buy a new one” is ridiculous security advice, but it’s what US-CERT recommends. It is also unworkable. The problem is that there isn’t anything to buy that isn’t vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there’s no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here’s a running list of who’s patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

Schneier is rarely optimistic when it comes to security issues. But I’d never bet against him being right, either.

Generating fantasy maps

Here is an interesting article about the algorithms used to generate ‘fantasy’ maps like this one:

This is far beyond the programs I wrote to generate terrain on my Amiga 500, way back in ’87. I was content with square grids, no ‘erosion’, no labels, etc. But I did draw them in color, with nice isometric 3D projection!

VBScript!

I’ve been fighting with a little script I wrote to massage some text for me. It’s a very simple task, so I thought VBS would be adequate (read each line of text; if there’s a special bit of text in the line, capitalize everything to its right – no big deal). I know I can use C# and JavaScript if I use Windows’ PowerShell, but I didn’t think VBS would have any trouble with it. And it didn’t. But I learned something ridiculous about VBS in the process.

Here is a very simple VBScript program.

Function timesTwo(ByRef inParam)
    inParam = inParam * 2
    timesTwo = 1
End Function

i = 10

WScript.Echo "i = " & i
timesTwo(i)
WScript.Echo "i = " & i
z = timesTwo(i)
WScript.Echo "i = " & i

Here I have a simple function called ‘timesTwo’ that multiplies its input parameter by two and returns a value of 1. Since I declared the input of the function “ByRef”, the input is passed “by reference”, which basically means the function can change the value of the parameter. If I pass it a variable with a value of 5, the variable should have a value of ten when the function ends. Magic.

If the parameter was declared “ByVal” (“by value”), the function would receive a copy of the input parameter, which it could modify as much as it wanted, but changing the copy wouldn’t change the value of the original parameter. But I didn’t declare it that way…

The code below that function tests the function. It…

  1. initializes a variable called ‘i’ to a value of 10 (i = 10)
  2. prints the value of i (WScript.Echo "i = " & i )
  3. calls the function, ignoring its return value. (timesTwo(i))
  4. prints the value of i
  5. calls the function again, putting its return value in a variable ‘z’. (z = timesTwo(i))
  6. prints the value of i

So, what do you think we should get?

i = 10
i = 20
i = 40

That seems reasonable. Start with i = 10, print i, call the function to double i, print i, call the function to double i again, print i.

But if that worked, this would be a truly boring blog post!

No, what we actually get is:

i = 10
i = 10
i = 20

Why? Well, I don’t know exactly why. But, the effect is: if you ignore the return value of a function, the parameter you pass is actually passed by value (ByVal), not by reference.

Pass by value:

timesTwo(i)

Pass by reference:

z = timesTwo(i)

What. The. Fuck.

Why would that difference in behavior ever be useful, let alone expected?
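As best I can tell, the parentheses are to blame: in a bare statement call, VBScript treats (i) as a parenthesized expression rather than an argument list, and the result of an expression gets passed ByVal no matter how the parameter was declared. So the workaround, sketched here in the same little script, is to drop the parentheses (or use Call) when you don’t care about the return value:

' Both of these keep the ByRef behavior, because i isn't wrapped
' in "expression" parentheses:
timesTwo i          ' plain statement call, no parentheses
Call timesTwo(i)    ' with Call, the parentheses are the argument list

' This wraps i in parentheses as an expression, so a copy gets passed:
timesTwo(i)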

I should’ve used JavaScript.

Dynamic Camouflage

There’s a specific technique for finding the optimal solution to complex problems (typically mathematical or computer programming problems) by breaking them into simpler sub-problems – each of which only needs to be solved once, optimally – and then combining the results of those sub-problems to arrive at the final solution. It’s called “Dynamic Programming”. Sounds sexy! But when you get into it, you notice that there’s really nothing dynamic about it. When you start down the DP path for a problem, you don’t find yourself in an exciting world of fast and forceful – dynamic! – things to think about; rather, it’s more about carefully and deliberately rethinking the problem at hand until you find a way to flip it on its head, or turn it inside out, or maybe even solve it backwards.

And the name doesn’t actually refer to computer programming. It’s about mathematical programming: finding the best possible solution from the set of all possible solutions, given some definition of ‘best’. That DP is often used in computer programming today is just a coincidence.
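A tiny, hedged example of that flavor of thinking (nothing to do with Bellman’s original problems, just the shape of it): making change with the fewest coins. Each sub-problem – “fewest coins for amount a” – gets solved once, stored, and reused by every larger amount that needs it.

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// Fewest coins that add up to 'amount', or -1 if it can't be done.
// best[a] holds the answer for the sub-problem "make amount a";
// each entry is computed once and reused.
int fewestCoins(const std::vector<int>& coins, int amount)
{
    const int kImpossible = INT_MAX - 1;
    std::vector<int> best(amount + 1, kImpossible);
    best[0] = 0;  // zero coins make zero

    for (int a = 1; a <= amount; ++a)
        for (int c : coins)
            if (c <= a)
                best[a] = std::min(best[a], best[a - c] + 1);

    return best[amount] >= kImpossible ? -1 : best[amount];
}

int main()
{
    // 4 (5+5+5+1); a greedy approach that grabs the 12 first would need 5.
    std::cout << fewestCoins({1, 5, 12}, 16) << "\n";
}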

So where does the name come from?

The man who came up with the name, Richard Bellman, explains:

“I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word “programming”. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that it’s impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.”

Rain girl

This is one of my favorite pics – a girl walking in the rain outside our first house. I’ve posted it previously, a couple of times.

I came up with an implementation of Aaron Hertzmann’s “painterly” non-photorealistic rendering algorithm. This recreates the source image by using simulated brush strokes of varying size, color and shape. It’s not a blurring or filtering of the source; it’s a recreation made in much the same fashion that a human painter would copy a picture: look at the source, choose a color, make a stroke which follows that color. Repeat until you’re done. As with a human painter, it starts with large brushes to get the background and large areas filled, then it refines the image with progressively smaller brushes.
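Here’s a rough sketch of the structure, just to show the shape of the loop. The Image and Color types, the disc-shaped “stroke”, and the parameter names are all stand-ins I made up; the real algorithm uses curved strokes that follow the color, and compares against a blurred copy of the source at each brush size.

#include <algorithm>
#include <cmath>
#include <vector>

// Placeholder image types, just enough to show the structure.
struct Color { float r = 0, g = 0, b = 0; };

struct Image {
    int w = 0, h = 0;
    std::vector<Color> px;
    Image(int width, int height) : w(width), h(height), px(width * height) {}
    Color&       at(int x, int y)       { return px[y * w + x]; }
    const Color& at(int x, int y) const { return px[y * w + x]; }
};

// How different two colors are (plain distance in RGB).
float colorDiff(const Color& a, const Color& b)
{
    return std::sqrt((a.r - b.r) * (a.r - b.r) +
                     (a.g - b.g) * (a.g - b.g) +
                     (a.b - b.b) * (a.b - b.b));
}

// Stand-in "stroke": a filled disc. Enough to show the loop.
void drawStroke(Image& canvas, int cx, int cy, int radius, const Color& c)
{
    for (int y = std::max(0, cy - radius); y <= std::min(canvas.h - 1, cy + radius); ++y)
        for (int x = std::max(0, cx - radius); x <= std::min(canvas.w - 1, cx + radius); ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                canvas.at(x, y) = c;
}

// Coarse to fine: one pass per brush radius, largest first. In each pass,
// put a stroke wherever the canvas still differs too much from the source.
void paint(const Image& source, Image& canvas,
           const std::vector<int>& brushRadii,   // sorted large -> small
           float errorThreshold)
{
    for (int radius : brushRadii)
        for (int y = 0; y < canvas.h; y += radius)
            for (int x = 0; x < canvas.w; x += radius)
            {
                const Color& want = source.at(x, y);
                if (colorDiff(canvas.at(x, y), want) > errorThreshold)
                    drawStroke(canvas, x, y, radius, want);
            }
}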

This video is the algorithm in action, drawing the strokes (20 per frame, to save time). I told it to use short strokes.

The music is called “One Bird”. That’s me.

Here’s one made from a pic of Adrian Belew:

Longer, thinner brush strokes.

And, Tricksey:

Tricksey, Painterly

Drat Of The Day

Which is the minimum of the two:

A: -3.4028235e+038
B: 1.1754944e-038

That depends on how you define minimum. If you want the smallest magnitude, it’s B. B is so close to 0 it might as well be 0 for most practical uses (0.000000000000000000000000000000000000011754944). But A is smaller if you think ‘minimum’ means ‘leftmost on the number line’. A is so far to the left of B there aren’t enough pixels on all the monitors in the world to draw that number line.

In C++ the result of std::numeric_limits<float>::min() is B: the smallest positive (normalized) value a ‘float’ can store is B.

But the most-negative number a float can store is A. And A is -std::numeric_limits<float>::max() – the largest value, negated.

So, if you are trying to limit the results of a function to the range of values that a float can hold, and you do this:

double result = doSomething();
double t = max(std::numeric_limits<float>::min(), result);
t = min(std::numeric_limits<float>::max(), t);
return (float)t;

You’re going to be very surprised when you discover that you never see a value smaller than 1.1754944e-038. You’ll never even get 0.

What you really needed to write is this:

double result = doSomething();
double t = max(-std::numeric_limits<float>::max(), result);
t = min(std::numeric_limits<float>::max(), t);
return (float)t;
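Side note: if you can count on C++11, std::numeric_limits<float>::lowest() gives you A directly (it’s defined as the most-negative finite value), so you can skip the negation trick:

double result = doSomething();
double t = max(std::numeric_limits<float>::lowest(), result);
t = min(std::numeric_limits<float>::max(), t);
return (float)t;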

But, that trickery only applies to floating point types. The smallest magnitude you can store in an int is 0. But std::numeric_limits<int>::min() is -2147483648. That’s the most-negative value, not the value with least magnitude.

And now you know.

So, Filling…

Continuing with this… (busy week at work! whew!)

Instead of filling with boring old rectangles, what about filling with ellipses?

First problem: it’s just too much work to do the dividing and conquering with actual ellipses. There’s no good way to recursively chop a rectangular image into successively smaller ellipses. So, I decided to just stick with the rectangles right up until the point where I go to fill them in. Then, I’ll just draw an ellipse that covers what the rectangle covers. This is just an experiment. I can do what I want.

So, given that we need each ellipse to cover, at least, the same area as the rectangle would have, is there a way to determine the optimal ellipse? That is: given a rectangle, how big is its proportional enclosing ellipse? Beats me. Sounds like a tough problem. But, StackExchange knew:

A=RectWidth / √2
B=RectHeight / √2

So much simpler than I’d expected.
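In code it’s a one-liner each way. A minimal sketch (the Rect and Ellipse structs are stand-ins, and A and B here are the semi-axes – half the ellipse’s total width and height):

#include <cmath>

struct Rect    { double x, y, w, h; };     // top-left corner + size
struct Ellipse { double cx, cy, a, b; };   // center + semi-axes

// Smallest ellipse with the rectangle's proportions that still passes
// through all four corners, so it covers everything the rectangle covered.
Ellipse enclosingEllipse(const Rect& r)
{
    const double root2 = std::sqrt(2.0);
    return { r.x + r.w / 2.0,   // same center as the rectangle
             r.y + r.h / 2.0,
             r.w / root2,       // A = RectWidth  / sqrt(2)
             r.h / root2 };     // B = RectHeight / sqrt(2)
}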

So, here we go. Everything else is the same, except that I’m drawing ellipses instead of rectangles.

ml_sz2_t100_ell

Hmm… Interesting. Looks like a lot of eggs. I don’t like that the tops and lefts of the ellipses are all visible, but the bottoms and rights aren’t. That’s a consequence of the fact that the overall drawing and processing basically goes left to right, top to bottom.

Can I change the order in which they’re drawn? Yes.

Instead of drawing ellipses as soon as they’re calculated, I put them in a list. At the end I have a list of all the ellipses that would’ve been drawn. Then, sort that list by the area of each ellipse. And then, draw the sorted ellipses – biggest first. That way, the smaller ellipses will overlay the bigger ones. And that gives:

ml_sz2_t100_ellSt

Better. There’s variety in which parts of the ellipses are obscured. And, the ends of the really big ellipses along the left and right sides are covered up.
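The sorting itself is trivial – something like this, reusing the Ellipse struct from the sketch above (the drawing callback is assumed, not shown):

#include <algorithm>
#include <functional>
#include <vector>

double area(const Ellipse& e) { return 3.14159265358979 * e.a * e.b; }

// Sort the saved ellipses so the biggest get drawn first and the
// smaller ones end up on top of them.
void drawLargestFirst(std::vector<Ellipse> ellipses,
                      const std::function<void(const Ellipse&)>& drawEllipse)
{
    std::sort(ellipses.begin(), ellipses.end(),
              [](const Ellipse& lhs, const Ellipse& rhs)
              { return area(lhs) > area(rhs); });

    for (const Ellipse& e : ellipses)
        drawEllipse(e);
}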

With borders? Sure!

ml_sz2_t100_ellStO

No fill, borders only?

ml_sz2_t100_ellStNo

Sweeeeeeet.

Divide And Conquer

There’s an old-school technique for rendering the classic fractal, the Mandelbrot set, called Divide and Conquer. This technique takes advantage of the fact that all of the points in the Mandelbrot set are ‘connected’ – there are no islands of Mandelbrot-set points which aren’t connected to the main set, and there are no islands of non-Mandelbrot-set points enclosed by points within the Mandelbrot set. Think of it as a single blob of water. Given this, you can speed up rendering any region of the Mandelbrot set by:

  1. Divide the image into four rectangles.
  2. For each of the four rectangles:
  3. If the rectangle has a non-zero width and height:
  4. Calculate all the points along the edges.
  5. If all the points along the edges have the same calculated value, all points within the rectangle must be the same, too. No ‘islands’, remember. So, fill the rectangle with the value you calculated for the edges.
  6. If the points along the edge are not the same, divide that rectangle into four rectangles and, for each sub-rectangle, go to step 3. (Keep in mind that since you’ve already calculated the points on the edges, you can reduce the size of the rectangle by 1 on each edge).

So, it’s a recursive space-filling technique. It divides the work area into smaller and smaller rectangles until it finds something it can fill or it can no longer sub-divide. And in this way it gobbles up large identically-valued sections of the Set very quickly, so it spends most of its time working on details. In general, that’s a performance gain. And, even better, this technique is far more fun to watch than just seeing pixels calculated one at a time, one row after another. It’s kind of mesmerizing. Or, it was back in the day. These days, computers are fast enough that you can’t see it in action.

Anyway, I thought it would be fun to put a variation of this technique to use on non-fractal images. So, I used color differences in the source image to determine where the edges of the rectangles go, with a selectable threshold to control color matching. I added a way to limit the minimum rectangle size, and taught it to optionally draw tastefully-colored borders around each rectangle.
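The recursion has the same shape as the Mandelbrot version. Here’s a hedged sketch of one reasonable way to do the image variation – the Image and Color types, the “average color plus threshold” uniformity test, and the parameter names are my stand-ins, not the actual code:

#include <algorithm>
#include <cmath>
#include <vector>

// Placeholder image types, just enough to show the recursion.
struct Color { float r = 0, g = 0, b = 0; };

struct Image {
    int w = 0, h = 0;
    std::vector<Color> px;
    Image(int width, int height) : w(width), h(height), px(width * height) {}
    Color&       at(int x, int y)       { return px[y * w + x]; }
    const Color& at(int x, int y) const { return px[y * w + x]; }
};

// How different two colors are (plain distance in RGB).
float colorDiff(const Color& a, const Color& b)
{
    return std::sqrt((a.r - b.r) * (a.r - b.r) +
                     (a.g - b.g) * (a.g - b.g) +
                     (a.b - b.b) * (a.b - b.b));
}

// Average color of a rect in the source image.
Color averageColor(const Image& img, int x0, int y0, int w, int h)
{
    double r = 0, g = 0, b = 0;
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
        { r += img.at(x, y).r; g += img.at(x, y).g; b += img.at(x, y).b; }
    const double n = double(w) * h;
    return { float(r / n), float(g / n), float(b / n) };
}

// Worst-case distance of any pixel in the rect from the average color.
float spread(const Image& img, int x0, int y0, int w, int h, const Color& avg)
{
    float worst = 0;
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            worst = std::max(worst, colorDiff(img.at(x, y), avg));
    return worst;
}

void fillRect(Image& out, int x0, int y0, int w, int h, const Color& c)
{
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            out.at(x, y) = c;
}

// Fill the rect with its average color if it's "uniform enough" (or too
// small to split further); otherwise cut it in four and recurse.
void subdivide(const Image& src, Image& out,
               int x0, int y0, int w, int h,
               float threshold, int minSize)
{
    const Color avg   = averageColor(src, x0, y0, w, h);
    const int   limit = std::max(1, minSize);

    if (spread(src, x0, y0, w, h, avg) <= threshold || w <= limit || h <= limit)
    {
        fillRect(out, x0, y0, w, h, avg);   // (borders would get drawn here too)
        return;
    }

    const int hw = w / 2, hh = h / 2;
    subdivide(src, out, x0,      y0,      hw,     hh,     threshold, minSize);
    subdivide(src, out, x0 + hw, y0,      w - hw, hh,     threshold, minSize);
    subdivide(src, out, x0,      y0 + hh, hw,     h - hh, threshold, minSize);
    subdivide(src, out, x0 + hw, y0 + hh, w - hw, h - hh, threshold, minSize);
}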

And away we go!

Start with this:

monalisa_sm

Set the minimum rectangle dimension to 25 pixels, set the threshold to “100” and…

ml_sz25_t100

Not amazing.

The rectangles are too big. So, reduce the min rect size to 10:

ml_sz10_t100

Now you can start to see what’s going on. Variety in rectangle sizes is what we’re after – otherwise it’s just pixelation. And this would be a most inefficient way to do pixelation…

Set min rect size to 4, and then to 2:

ml_sz4_t100

ml_sz2_t100

There we go.

Now, that’s fun.

But there’s one thing I don’t love about this: the rectangles follow a perfectly even grid, regardless of min rect size. Look at the four rectangles in the top-left of this one, then look at the previous images. They are the same in all of the images except for minsize=25 – and in that case, those four rects fit perfectly into its top-left rectangle. This is completely expected, since we’re just dividing the image into 1/2 then 1/4 then 1/8 then 1/16, etc., and so we’re going to keep hitting those same edges over and over.

What we need is to vary the sizes of the sub-rects, to get off of that power-of-two thing. To do that, I added something called ‘jitter’. Instead of dividing the rectangle evenly into four equal sub-rects, jitter divides the rectangle by 2, 3 or 4 vertically, then 2, 3 or 4 horizontally. The divisor is chosen at random.
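A hedged sketch of that split (the function names and random-number plumbing are mine, not the real code):

#include <random>
#include <vector>

// With jitter on, each rectangle gets cut into a random 2, 3 or 4 pieces
// along each axis instead of always being halved.
int divisor(bool jitter)
{
    static std::mt19937 rng{std::random_device{}()};
    static std::uniform_int_distribution<int> pick(2, 4);
    return jitter ? pick(rng) : 2;
}

// Cut the span [start, start + size) into 'parts' nearly equal pieces and
// return each piece's starting coordinate, plus the end for convenience.
std::vector<int> cuts(int start, int size, int parts)
{
    std::vector<int> edges;
    for (int i = 0; i <= parts; ++i)
        edges.push_back(start + size * i / parts);
    return edges;
}

// Inside the subdivide step, instead of the fixed four-way split:
//   auto xs = cuts(x0, w, divisor(true));
//   auto ys = cuts(y0, h, divisor(true));
//   ...then recurse on each cell from (xs[i], ys[j]) to (xs[i+1], ys[j+1]).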

That gives:

ml_sz2_t100_j

And because the jitter is random, the output image will be different every time.

Other variations:

No outlines:
ml_sz2_t100_no
Looks like hyper-aggressive JPEG compression.

Or, don’t fill the rectangles, just draw the outlines:

ml_sz2_t100_je
I really like this effect. And this one shows another variation: instead of randomizing the sub-rect proportions, it just randomly nudges the edges of the sub-rects by a pixel or two. This maintains the even grid, but adds just enough noise to break the monotony if you look closely.

Or, render the same image twenty times with jitter on, and put the results into a GIF!

bart

Far out, man!